Sample records for absolute time scale

  1. Effect of helicity on the correlation time of large scales in turbulent flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2017-11-01

    Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^(-1/2) k^(-1) scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time as τ(k) ~ H^(-1/2) k^(-1/2). This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^(-1/2) k^(-1) scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
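    The correlation time above is, in practice, estimated as the integral of the normalized autocorrelation function of an individual Fourier mode. As a minimal sketch of that generic estimator (not the authors' analysis code), the following recovers a known correlation time from a synthetic AR(1) surrogate signal:

```python
import numpy as np

def correlation_time(x, dt):
    """Estimate tau as the integral of the normalized autocorrelation C(t)/C(0)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                  # zero-padded FFT
    acf = np.fft.irfft(f * np.conj(f))[:n]     # autocovariance at lags 0..n-1
    acf /= acf[0]                              # normalize so C(0) = 1
    neg = np.nonzero(acf < 0)[0]               # cut at the first zero crossing
    cut = int(neg[0]) if neg.size else n       # ...to avoid the noisy tail
    return dt * (np.sum(acf[:cut]) - 0.5)      # trapezoid-style sum

# AR(1) surrogate for a single mode with known correlation time tau_true
rng = np.random.default_rng(0)
tau_true, dt, n = 2.0, 0.01, 200_000
a = np.exp(-dt / tau_true)
noise = rng.standard_normal(n) * np.sqrt(1.0 - a * a)
x = np.empty(n)
x[0] = noise[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + noise[i]

tau_est = correlation_time(x, dt)              # lands near tau_true
```

    Predicted scalings such as τ(k) ~ H^(-1/2) k^(-1/2) can then be checked by applying such an estimator mode by mode.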

  2. AMS radiocarbon dating and varve chronology of Lake Soppensee: 6000 to 12000 14C years BP

    NASA Astrophysics Data System (ADS)

    Hajdas, Irena; Ivy, Susan D.; Beer, Jürg; Bonani, Georges; Imboden, Dieter; Lotter, André F.; Sturm, Michael; Suter, Martin

    1993-12-01

    For the extension of the radiocarbon calibration curve beyond 10000 14C y BP, laminated sediment from Lake Soppensee (central Switzerland) was dated. The radiocarbon time scale was obtained using accelerator mass spectrometry (AMS) dating of terrestrial macrofossils selected from the Soppensee sediment. Because of an unlaminated sediment section during the Younger Dryas (10000-11000 14C y BP), the absolute time scale, based on counting annual layers (varves), had to be corrected for missing varves. The Soppensee radiocarbon-varve chronology covers the time period from 6000 to 12000 14C y BP on the radiocarbon time scale and 7000 to 13000 calendar y BP on the absolute time scale. The good agreement with the tree-ring curve in the interval from 7000 to 11450 cal y BP (cal y indicates calendar year) proves the annual character of the laminations. The ash layer of the Vasset/Kilian Tephra (Massif Central, France) is dated at 8230±140 14C y BP and 9407±44 cal y BP. The boundaries of the Younger Dryas biozone are placed at 10986±69 cal y BP (Younger Dryas/Preboreal) and 1212±86 cal y BP (Allerød/Younger Dryas) on the absolute time scale. The absolute age of the Laacher See Tephra layer, dated with the radiocarbon method at 10800 to 11200 14C y BP, is estimated at 12350±135 cal y BP. The oldest radiocarbon age of 14190±120 14C y BP was obtained on macrofossils of pioneer vegetation which were found in the lowermost part of the sediment profile. For the Late Glacial, the offset between the radiocarbon (10000-12000 14C y BP) and the absolute time scale (11400-13000 cal y BP) in the Soppensee chronology is not greater than 1000 years, which differs from the trend of the U/Th-radiocarbon curve derived from corals.
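    Conversion between the radiocarbon and absolute (calendar) time scales is, computationally, an interpolation against a calibration table. The sketch below uses hypothetical tie points (illustrative numbers only, not the Soppensee or tree-ring datasets) to show the idea:

```python
import numpy as np

# Hypothetical calibration tie points: 14C age BP -> calendar age BP.
# Illustrative values only; real work uses a published calibration curve.
c14 = np.array([6000.0, 8000.0, 10000.0, 11000.0, 12000.0])
cal = np.array([6850.0, 8900.0, 11400.0, 12900.0, 13900.0])

def calibrate(age_14c):
    """Linear interpolation between tie points (valid only inside the table)."""
    return float(np.interp(age_14c, c14, cal))

offset_at_10k = calibrate(10000.0) - 10000.0   # gap between the two time scales
```

    Note that np.interp requires the 14C ages to increase monotonically, which real calibration curves violate in wiggle sections; proper calibration software handles those multi-valued stretches explicitly.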

  3. The Assessment of Protective Behavioral Strategies: Comparing the Absolute Frequency and Contingent Frequency Response Scales

    PubMed Central

    Kite, Benjamin A.; Pearson, Matthew R.; Henson, James M.

    2016-01-01

    The purpose of the present studies was to examine the effects of response scale on the observed relationships between protective behavioral strategies (PBS) measures and alcohol-related outcomes. We reasoned that an ‘absolute frequency’ scale (stem: “how many times…”; response scale: 0 times to 11+ times) conflates the frequency of using PBS with the frequency of consuming alcohol; thus, we hypothesized that the use of an absolute frequency response scale would result in positive relationships between types of PBS and alcohol-related outcomes. Alternatively, a ‘contingent frequency’ scale (stem: “When drinking…how often…”; response scale: never to always) does not conflate frequency of alcohol use with use of PBS; therefore, we hypothesized that use of a contingent frequency scale would result in negative relationships between use of PBS and alcohol-related outcomes. Two published measures of PBS were used across studies: the Protective Behavioral Strategies Survey (PBSS) and the Strategy Questionnaire (SQ). Across three studies, we demonstrate that when measured using a contingent frequency response scale, PBS measures relate negatively to alcohol-related outcomes in a theoretically consistent manner; however, when PBS measures were measured on an absolute frequency response scale, they were non-significantly or positively related to alcohol-related outcomes. We discuss the implications of these findings for the assessment of PBS. PMID:23438243
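    The conflation argument can be made concrete with a toy simulation (a hypothetical generative model, not the studies' data): if the absolute count of PBS use is the product of drinking frequency and the contingent rate of PBS use, the count inherits a positive association with drinking-driven harms even when the rate itself is protective.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

d = rng.uniform(0.0, 1.0, n)      # drinking frequency
r = rng.uniform(0.0, 1.0, n)      # contingent PBS rate ("when drinking, how often")
absolute_pbs = d * r              # "how many times" count conflates d and r
# Toy harm model: harms scale with drinking, attenuated by PBS use, plus noise
harms = d * (1.0 - 0.5 * r) + 0.05 * rng.standard_normal(n)

corr_absolute = np.corrcoef(absolute_pbs, harms)[0, 1]   # comes out positive
corr_contingent = np.corrcoef(r, harms)[0, 1]            # comes out negative
```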

  4. The recalibration of the IUE scientific instrument

    NASA Technical Reports Server (NTRS)

    Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Cassatella, Angelo; Lloyd, Christopher

    1988-01-01

    The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise ratio, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.

  5. Exponential bound in the quest for absolute zero

    NASA Astrophysics Data System (ADS)

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  6. Exponential bound in the quest for absolute zero.

    PubMed

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  7. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    PubMed

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10^-8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
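    The non-ambiguous range extension described above rests on the synthetic-wavelength relation Λ = λ1·λ2/|λ1 − λ2|: two close wavelengths beat to a much longer effective wavelength whose phase is unambiguous over a correspondingly longer span. A small numeric sketch (illustrative wavelengths and readings, not the paper's values):

```python
# Synthetic-wavelength arithmetic with illustrative (hypothetical) wavelengths.
lam1 = 1550.00e-9   # m
lam2 = 1550.80e-9   # m
synthetic = lam1 * lam2 / abs(lam1 - lam2)   # ~3 mm beat wavelength

# The coarse distance from the synthetic phase selects the integer fringe
# order N of the fine wavelength; the fine fractional phase then refines it.
coarse = 1.23456e-3    # m, hypothetical synthetic-wavelength reading
frac = 0.37            # hypothetical fine-wavelength fractional fringe
N = round(coarse / (lam1 / 2.0) - frac)
fine = (N + frac) * (lam1 / 2.0)   # refined distance, consistent with coarse
```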

  8. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    PubMed Central

    Tan, Lilong; Yan, Shuhua

    2018-01-01

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10^−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions. PMID:29414897

  9. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    PubMed

    Rutledge, Robert G

    2011-03-02

    Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
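    The LRE idea itself is compact: within the sigmoidal model, cycle efficiency declines linearly with fluorescence, so a straight-line fit of observed efficiencies against fluorescence recovers the model parameters without a standard curve. A toy reconstruction (a sketch of the principle, not the LRE Analyzer's code):

```python
import numpy as np

# Simulate a sigmoidal amplification profile with known parameters
E_max, F_max, F0 = 0.95, 1000.0, 1e-3
F = [F0]
for _ in range(60):
    E_c = E_max * (1.0 - F[-1] / F_max)   # efficiency falls linearly with F
    F.append(F[-1] * (1.0 + E_c))
F = np.array(F)

# Observed per-cycle efficiencies from the profile's central region
E_obs = F[1:] / F[:-1] - 1.0
mask = (F[:-1] > 0.05 * F_max) & (F[:-1] < 0.8 * F_max)
slope, intercept = np.polyfit(F[:-1][mask], E_obs[mask], 1)

E_max_est = intercept            # recovered maximal efficiency
F_max_est = -intercept / slope   # recovered plateau fluorescence
```

    With E_max and F_max in hand, the target quantity F0 follows by back-extrapolating the sigmoidal model through the fitted region, which is what removes the need for standard curves.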

  10. A Java Program for LRE-Based Real-Time qPCR that Enables Large-Scale Absolute Quantification

    PubMed Central

    Rutledge, Robert G.

    2011-01-01

    Background Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Findings Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. Conclusions The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples. PMID:21407812

  11. Trends in Racial and Ethnic Disparities in Infant Mortality Rates in the United States, 1989–2006

    PubMed Central

    Rossen, Lauren M.; Schoendorf, Kenneth C.

    2014-01-01

    Objectives. We sought to measure overall disparities in pregnancy outcome, incorporating data from the many race and ethnic groups that compose the US population, to improve understanding of how disparities may have changed over time. Methods. We used Birth Cohort Linked Birth–Infant Death Data Files from US Vital Statistics from 1989–1990 and 2005–2006 to examine multigroup indices of racial and ethnic disparities in the overall infant mortality rate (IMR), preterm birth rate, and gestational age–specific IMRs. We calculated selected absolute and relative multigroup disparity metrics weighting subgroups equally and by population size. Results. Overall IMR decreased on the absolute scale, but increased on the population-weighted relative scale. Disparities in the preterm birth rate decreased on both the absolute and relative scales, and across equally weighted and population-weighted indices. Disparities in preterm IMR increased on both the absolute and relative scales. Conclusions. Infant mortality is a common bellwether of general and maternal and child health. Despite significant decreases in disparities in the preterm birth rate, relative disparities in overall and preterm IMRs increased significantly over the past 20 years. PMID:24028239
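    The divergence between absolute and relative trends reported here is a general arithmetic property of disparity metrics, easy to see with two hypothetical rates (illustrative numbers, not the study's data):

```python
# Hypothetical infant mortality rates per 1000 live births (not the study's data)
rate_a_t1, rate_b_t1 = 10.0, 5.0   # earlier period: group A vs group B
rate_a_t2, rate_b_t2 = 5.0, 2.0    # later period: both improved

abs_gap_t1 = rate_a_t1 - rate_b_t1   # 5.0 per 1000
abs_gap_t2 = rate_a_t2 - rate_b_t2   # 3.0 per 1000 -> absolute disparity narrowed

rel_gap_t1 = rate_a_t1 / rate_b_t1   # 2.0x
rel_gap_t2 = rate_a_t2 / rate_b_t2   # 2.5x -> relative disparity widened
```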

  12. The 1994 international transatlantic two-way satellite time and frequency transfer experiment: Preliminary results

    NASA Technical Reports Server (NTRS)

    Deyoung, James A.; Klepczynski, William J.; Mckinley, Angela Davis; Powell, William M.; Mai, Phu V.; Hetzel, P.; Bauch, A.; Davis, J. A.; Pearce, P. R.; Baumont, Francoise S.

    1995-01-01

    The international transatlantic time and frequency transfer experiment was designed by participating laboratories and has been implemented during 1994 to test the international communications path involving a large number of transmitting stations. This paper will present empirically determined clock and time scale differences, time and frequency domain instabilities, and a representative power spectral density analysis. Experiments using the method of co-location, which will allow absolute calibration of the participating laboratories, have been performed. Absolute time differences and accuracy levels of this experiment will be assessed in the near future.

  13. The Berg Balance Scale has high intra- and inter-rater reliability but absolute reliability varies across the scale: a systematic review.

    PubMed

    Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline

    2013-06-01

    What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intra-rater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. All rights reserved.
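    The "minimal detectable change with 95% confidence" used here follows from two standard formulas: SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A small worked example with hypothetical inputs:

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change (95% confidence) from score SD and test-retest ICC."""
    sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical inputs: SD of 5 points on the 56-point scale, ICC = 0.97
example = mdc95(5.0, 0.97)   # about 2.4 points
```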

  14. Native American Students' Understanding of Geologic Time Scale: 4th-8th Grade Ojibwe Students' Understanding of Earth's Geologic History

    ERIC Educational Resources Information Center

    Nam, Younkyeong; Karahan, Engin; Roehrig, Gillian

    2016-01-01

    Geologic time scale is a very important concept for understanding long-term earth system events such as climate change. This study examines forty-three 4th-8th grade Native American--particularly Ojibwe tribe--students' understanding of relative ordering and absolute time of Earth's significant geological and biological events. This study also…

  15. Portrait of socio-economic inequality in childhood morbidity and mortality over time, Québec, 1990-2005.

    PubMed

    Barry, Mamadou S; Auger, Nathalie; Burrows, Stephanie

    2012-06-01

    To determine the age and cause groups contributing to absolute and relative socio-economic inequalities in paediatric mortality, hospitalisation and tumour incidence over time. Deaths (n= 9559), hospitalisations (n= 834,932) and incident tumours (n= 4555) were obtained for five age groupings (<1, 1-4, 5-9, 10-14, 15-19 years) and four periods (1990-1993, 1994-1997, 1998-2001, 2002-2005) for Québec, Canada. Age- and cause-specific morbidity and mortality rates for males and females were calculated across socio-economic status decile based on a composite deprivation score for 89 urban communities. Absolute and relative measures of inequality were computed for each age and cause. Mortality and morbidity rates tended to decrease over time, as did absolute and relative socio-economic inequalities for most (but not all) causes and age groups, although precision was low. Socio-economic inequalities persisted in the last period and were greater on the absolute scale for mortality and hospitalisation in early childhood, and on the relative scale for mortality in adolescents. Four causes (respiratory, digestive, infectious, genito-urinary diseases) contributed to the majority of absolute inequality in hospitalisation (males 85%, females 98%). Inequalities were not pronounced for cause-specific mortality and not apparent for tumour incidence. Socio-economic inequalities in Québec tended to narrow for most but not all outcomes. Absolute socio-economic inequalities persisted for children <10 years, and several causes were responsible for the majority of inequality in hospitalisation. Public health policies and prevention programs aiming to reduce socio-economic inequalities in paediatric health should account for trends that differ across age and cause of disease. © 2011 The Authors. Journal of Paediatrics and Child Health © 2011 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  16. Measurement of optical to electrical and electrical to optical delays with ps-level uncertainty.

    PubMed

    Peek, H Z; Pinkert, T J; Jansweijer, P P M; Koelemeij, J C J

    2018-05-28

    We present a new measurement principle to determine the absolute time delay of a waveform from an optical reference plane to an electrical reference plane and vice versa. We demonstrate a method based on this principle with 2 ps uncertainty. This method can be used to perform accurate time delay determinations of optical transceivers used in fiber-optic time-dissemination equipment. As a result the time scales in optical and electrical domain can be related to each other with the same uncertainty. We expect this method will be a new breakthrough in high-accuracy time transfer and absolute calibration of time-transfer equipment.

  17. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
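    The accuracy definitions quoted above ("absolute mean plus k sigma") are straightforward to evaluate from Monte Carlo error samples. A minimal sketch with hypothetical error statistics (not the GOES-R error budget):

```python
import numpy as np

def accuracy(errors, k):
    """'Absolute mean plus k sigma' accuracy metric over error samples."""
    errors = np.asarray(errors, dtype=float)
    return abs(errors.mean()) + k * errors.std()

# Hypothetical magnetometer error samples in nT
rng = np.random.default_rng(1)
errors = rng.normal(loc=0.1, scale=0.5, size=100_000)

quiet_accuracy = accuracy(errors, 3)   # quiet-time definition (3 sigma)
storm_accuracy = accuracy(errors, 2)   # storm-time definition (2 sigma)
meets_req = quiet_accuracy < 1.7 and storm_accuracy < 1.7
```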

  18. Upper Limit of Weights in TAI Computation

    NASA Technical Reports Server (NTRS)

    Thomas, Claudine; Azoubib, Jacques

    1996-01-01

    The international reference time scale International Atomic Time (TAI) computed by the Bureau International des Poids et Mesures (BIPM) relies on a weighted average of data from a large number of atomic clocks. In it, the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing it to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computation. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improvement of the stability of the time scale requires revision from time to time of the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 5071A clocks now contributing to TAI.
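    The "maximum relative weight" scheme described above can be sketched as an iterative cap-and-renormalize procedure over inverse-variance weights (a simplified illustration, not the BIPM's TAI code):

```python
def capped_weights(variances, cap):
    """Normalized inverse-variance weights subject to a relative upper limit.

    Clocks whose normalized weight would exceed `cap` are pinned at `cap`,
    and the remaining weight budget is redistributed among the others.
    """
    w = [1.0 / v for v in variances]
    total = sum(w)
    w = [x / total for x in w]
    while max(w) > cap + 1e-12:
        fixed = {i for i, x in enumerate(w) if x >= cap}
        budget = 1.0 - cap * len(fixed)          # weight left for uncapped clocks
        free_total = sum(x for i, x in enumerate(w) if i not in fixed)
        w = [cap if i in fixed else x * budget / free_total
             for i, x in enumerate(w)]
    return w

# Two very stable clocks and two noisier ones, capped at 30% each
weights = capped_weights([1.0, 1.0, 100.0, 100.0], 0.3)
```

    Redistribution can push another clock over the cap, hence the loop; a cap of at least 1/N keeps the problem feasible.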

  19. Time and Space in Tzeltal: Is the Future Uphill?

    PubMed Central

    Brown, Penelope

    2012-01-01

    Linguistic expressions of time often draw on spatial language, which raises the question of whether cultural specificity in spatial language and cognition is reflected in thinking about time. In the Mayan language Tzeltal, spatial language relies heavily on an absolute frame of reference utilizing the overall slope of the land, distinguishing an “uphill/downhill” axis oriented from south to north, and an orthogonal “crossways” axis (sunrise-set) on the basis of which objects at all scales are located. Does this absolute system for calculating spatial relations carry over into construals of temporal relations? This question was explored in a study where Tzeltal consultants produced temporal expressions and performed two different non-linguistic temporal ordering tasks. The results show that at least five distinct schemata for conceptualizing time underlie Tzeltal linguistic expressions: (i) deictic ego-centered time, (ii) time as an ordered sequence (e.g., “first”/“later”), (iii) cyclic time (times of the day, seasons), (iv) time as spatial extension or location (e.g., “entering/exiting July”), and (v) a time vector extending uphillwards into the future. The non-linguistic task results showed that the “time moves uphillwards” metaphor, based on the absolute frame of reference prevalent in Tzeltal spatial language and thinking and important as well in the linguistic expressions for time, is not strongly reflected in responses on these tasks. It is argued that systematic and consistent use of spatial language in an absolute frame of reference does not necessarily transfer to consistent absolute time conceptualization in non-linguistic tasks; time appears to be more open to alternative construals. PMID:22787451

  20. Large-Scale Simulation of Multi-Asset Ising Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-03-01

    We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering and exhibits unstable periods indicated by the volatility index measured as the average of absolute-returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes at high volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change at high volatility periods.
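    The inverse participation ratio used here is a one-line computation on normalized eigenvectors: IPR = Σ v_i^4 (and IPR6 = Σ v_i^6), ranging from 1/N for a fully delocalized mode to near 1 for a localized one. A sketch on toy data (random returns, not the paper's 300 asset series):

```python
import numpy as np

def ipr(v, power=4):
    """Inverse participation ratio of an eigenvector (power=6 gives IPR6)."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return float(np.sum(np.abs(v) ** power))

# Toy absolute-return cross-correlation matrix from random data
rng = np.random.default_rng(0)
returns = rng.standard_normal((1000, 300))      # 1000 days x 300 assets
corr = np.corrcoef(np.abs(returns), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)         # ascending eigenvalues
top_ipr = ipr(eigvecs[:, -1])                   # largest-eigenvalue mode
top_ipr6 = ipr(eigvecs[:, -1], power=6)
```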

  1. Absolute Scale Quantitative Off-Axis Electron Holography at Atomic Resolution

    NASA Astrophysics Data System (ADS)

    Winkler, Florian; Barthel, Juri; Tavabi, Amir H.; Borghardt, Sven; Kardynal, Beata E.; Dunin-Borkowski, Rafal E.

    2018-04-01

    An absolute scale match between experiment and simulation in atomic-resolution off-axis electron holography is demonstrated, with unknown experimental parameters determined directly from the recorded electron wave function using an automated numerical algorithm. We show that the local thickness and tilt of a pristine thin WSe2 flake can be measured uniquely, whereas some electron optical aberrations cannot be determined unambiguously for a periodic object. The ability to determine local specimen and imaging parameters directly from electron wave functions is of great importance for quantitative studies of electrostatic potentials in nanoscale materials, in particular when performing in situ experiments and considering that aberrations change over time.

  2. The Theory of Intelligence and Its Measurement

    ERIC Educational Resources Information Center

    Jensen, A. R.

    2011-01-01

    Mental chronometry (MC) studies cognitive processes measured by time. It provides an absolute, ratio scale. The limitations of instrumentation and statistical analysis caused the early studies in MC to be eclipsed by the "paper-and-pencil" psychometric tests started by Binet. However, they use an age-normed, rather than a ratio scale, which…

  3. Absolute irradiance of the Moon for on-orbit calibration

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.

    2002-01-01

    The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ~500 pixels) and 9 SWIR (~250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.
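    The extinction correction described above follows Beer-Lambert attenuation, ln(S) = ln(S0) − τ·m in airmass m, so standard-star observations yield τ and the top-of-atmosphere signal by linear regression (the classic Langley method). A single-component sketch with hypothetical numbers; the actual ROLO algorithm fits band-coupled, multi-component abundances:

```python
import numpy as np

# Hypothetical standard-star observations over a range of airmasses
rng = np.random.default_rng(0)
airmass = np.linspace(1.0, 3.0, 20)
tau_true, s0_true = 0.15, 1000.0
signal = s0_true * np.exp(-tau_true * airmass) * (1.0 + 0.005 * rng.standard_normal(20))

# Langley regression: ln(signal) is linear in airmass
slope, intercept = np.polyfit(airmass, np.log(signal), 1)
tau_est = -slope              # recovered extinction optical depth
s0_est = np.exp(intercept)    # signal extrapolated to zero airmass
```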

  4. Eye height scaling of absolute size in immersive and nonimmersive displays

    NASA Technical Reports Server (NTRS)

    Dixon, M. W.; Wraga, M.; Proffitt, D. R.; Williams, G. C.; Kaiser, M. K. (Principal Investigator)

    2000-01-01

    Eye-height (EH) scaling of absolute height was investigated in three experiments. In Experiment 1, standing observers viewed cubes in an immersive virtual environment. Observers' center of projection was placed at actual EH and at 0.7 times actual EH. Observers' size judgments revealed that the EH manipulation was 76.8% effective. In Experiment 2, seated observers viewed the same cubes on an interactive desktop display; however, no effect of EH was found in response to the simulated EH manipulation. Experiment 3 tested standing observers in the immersive environment with the field of view reduced to match that of the desktop. Comparable to Experiment 1, the effect of EH was 77%. These results suggest that EH scaling is not generally used when people view an interactive desktop display because the altitude of the center of projection is indeterminate. EH scaling is spontaneously evoked, however, in immersive environments.

  5. VUV photoionization cross sections of HO2, H2O2, and H2CO.

    PubMed

    Dodson, Leah G; Shen, Linhan; Savee, John D; Eddingsaas, Nathan C; Welz, Oliver; Taatjes, Craig A; Osborn, David L; Sander, Stanley P; Okumura, Mitchio

    2015-02-26

    The absolute vacuum ultraviolet (VUV) photoionization spectra of the hydroperoxyl radical (HO2), hydrogen peroxide (H2O2), and formaldehyde (H2CO) have been measured from their first ionization thresholds to 12.008 eV. HO2, H2O2, and H2CO were generated from the oxidation of methanol initiated by pulsed-laser-photolysis of Cl2 in a low-pressure slow flow reactor. Reactants, intermediates, and products were detected by time-resolved multiplexed synchrotron photoionization mass spectrometry. Absolute concentrations were obtained from the time-dependent photoion signals by modeling the kinetics of the methanol oxidation chemistry. Photoionization cross sections were determined at several photon energies relative to the cross section of methanol, which was in turn determined relative to that of propene. These measurements were used to place relative photoionization spectra of HO2, H2O2, and H2CO on an absolute scale, resulting in absolute photoionization spectra.
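    The final step described here, pinning a relative photoionization spectrum to an absolute scale via a cross section known at a reference energy, can be sketched as follows (a simplified illustration; names are not from the paper):

```python
def scale_to_absolute(energies, relative_signal, e_ref, sigma_ref):
    """Scale a relative photoionization spectrum (arbitrary units) to
    absolute cross sections, given the absolute cross section
    sigma_ref at a single reference photon energy e_ref."""
    # nearest tabulated point to the reference energy
    i = min(range(len(energies)), key=lambda k: abs(energies[k] - e_ref))
    factor = sigma_ref / relative_signal[i]
    return [s * factor for s in relative_signal]
```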

  6. Evidence for the timing of sea-level events during MIS 3

    NASA Astrophysics Data System (ADS)

    Siddall, M.

    2005-12-01

    Four large sea-level peaks of millennial-scale duration occur during MIS 3. In addition, smaller peaks may exist close to the sensitivity limit of existing methods for deriving sea level during these periods. Millennial-scale changes in temperature during MIS 3 are well documented across much of the planet and are linked in some unknown yet fundamental way to changes in ice volume / sea level. It is therefore highly likely that the timing of the sea-level events during MIS 3 will prove to be a `Rosetta Stone' for understanding millennial-scale climate variability. I will review observational and mechanistic arguments for the variation of sea level on Antarctic, Greenland and absolute time scales.

  7. Self-interaction-corrected time-dependent density-functional-theory calculations of x-ray-absorption spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tu, Guangde; Rinkevicius, Zilvinas; Vahtras, Olav

    We outline an approach within time-dependent density functional theory that predicts x-ray spectra on an absolute scale. The approach rests on a recent formulation of the resonant-convergent first-order polarization propagator [P. Norman et al., J. Chem. Phys. 123, 194103 (2005)] and corrects for the self-interaction energy of the core orbital. This polarization propagator approach makes it possible to directly calculate the x-ray absorption cross section at a particular frequency without explicitly addressing the excited-state spectrum. The self-interaction correction for the employed density functional accounts for an energy shift of the spectrum, and fully correlated absolute-scale x-ray spectra are thereby obtained based solely on optimization of the electronic ground state. The procedure is benchmarked against experimental spectra of a set of small organic molecules at the carbon, nitrogen, and oxygen K edges.

  8. All-fibre photonic signal generator for attosecond timing and ultralow-noise microwave

    PubMed Central

    Jung, Kwangyun; Kim, Jungwon

    2015-01-01

    High-impact frequency comb applications that are critically dependent on precise pulse timing (i.e., repetition rate) have recently emerged and include the synchronization of X-ray free-electron lasers, photonic analogue-to-digital conversion and photonic radar systems. These applications have used attosecond-level timing jitter of free-running mode-locked lasers on a fast time scale within ~100 μs. Maintaining attosecond-level absolute jitter over a significantly longer time scale can dramatically improve many high-precision comb applications. To date, ultrahigh quality-factor (Q) optical resonators have been used to achieve the highest-level repetition-rate stabilization of mode-locked lasers. However, ultrahigh-Q optical-resonator-based methods are often fragile, alignment sensitive and complex, which limits their widespread use. Here we demonstrate a fibre-delay line-based repetition-rate stabilization method that enables the all-fibre photonic generation of optical pulse trains with 980-as (20-fs) absolute r.m.s. timing jitter accumulated over 0.01 s (1 s). This simple approach is based on standard off-the-shelf fibre components and can therefore be readily used in various comb applications that require ultra-stable microwave frequency and attosecond optical timing. PMID:26531777

  9. Low frequency ac waveform generator

    DOEpatents

    Bilharz, O.W.

    1983-11-22

    Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself, properly scaling the resultant waveform, and subtracting it from the triangular waveform itself. The cosine is synthesized by squaring the triangular waveform, raising the triangular waveform to a predetermined power, and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.

  10. Absolute dual-comb spectroscopy at 1.55 μm by free-running Er:fiber lasers

    NASA Astrophysics Data System (ADS)

    Cassinerio, Marco; Gambetta, Alessio; Coluccelli, Nicola; Laporta, Paolo; Galzerano, Gianluca

    2014-06-01

    We report on a compact scheme for absolute referencing and coherent averaging for dual-comb based spectrometers, exploiting a single continuous-wave (CW) laser in a transfer oscillator configuration. The same CW laser is used for both absolute calibration of the optical frequency axis and the generation of a correction signal which is used for real-time jitter compensation in a fully electrical feed-forward scheme. The technique is applied to a near-infrared spectrometer based on a pair of free-running mode-locked Er:fiber lasers, allowing real-time absolute-frequency measurements to be performed over an optical bandwidth of more than 25 nm, with coherent interferogram averaging over 1-s acquisition time, leading to a signal-to-noise ratio improvement of 29 dB over the 50-μs single-shot acquisition. Using a 10-cm single-pass cell, a value of 1.9 × 10-4 cm-1 Hz-0.5 noise-equivalent-absorption over 1 s integration time is obtained, which can be further scaled down with a multi-pass or resonant cavity. The adoption of a single CW laser, the absence of optical locks, and the full-fiber design make this spectrometer a robust and compact system to be employed in gas-sensing applications.

  11. Impact of stock market structure on intertrade time and price dynamics.

    PubMed

    Ivanov, Plamen Ch; Yuen, Ainslie; Perakakis, Pandelis

    2014-01-01

    We analyse times between consecutive transactions for a diverse group of stocks registered on the NYSE and NASDAQ markets, and we relate the dynamical properties of the intertrade times with those of the corresponding price fluctuations. We report that market structure strongly impacts the scale-invariant temporal organisation in the transaction timing of stocks, which we have observed to have long-range power-law correlations. Specifically, we find that, compared to NYSE stocks, stocks registered on the NASDAQ exhibit significantly stronger correlations in their transaction timing on scales within a trading day. Further, we find that companies that transfer from the NASDAQ to the NYSE show a reduction in the correlation strength of transaction timing on scales within a trading day, indicating influences of market structure. We also report a persistent decrease in correlation strength of intertrade times with increasing average intertrade time and with corresponding decrease in companies' market capitalization, a trend which is less pronounced for NASDAQ stocks. Surprisingly, we observe that stronger power-law correlations in intertrade times are coupled with stronger power-law correlations in absolute price returns and higher price volatility, suggesting a strong link between the dynamical properties of intertrade times and the corresponding price fluctuations over a broad range of time scales. Comparing the NYSE and NASDAQ markets, we demonstrate that the stronger correlations we find in intertrade times for NASDAQ stocks are associated with stronger correlations in absolute price returns and with higher volatility, suggesting that market structure may affect price behavior through information contained in transaction timing. These findings do not support the hypothesis of universal scaling behavior in stock dynamics that is independent of company characteristics and stock market structure.
Further, our results have implications for utilising transaction timing patterns in price prediction and risk management optimization on different stock markets.
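    Long-range power-law correlations of the kind reported here are commonly quantified with detrended fluctuation analysis (DFA); the abstract does not name its estimator, so the following pure-Python sketch is illustrative. Correlated signals give F(n) ~ n^alpha with alpha > 0.5 (alpha = 0.5 for uncorrelated data).

```python
def dfa(x, scales):
    """Detrended fluctuation analysis of a series x.

    Returns the fluctuation F(n) for each window size n in `scales`:
    integrate the mean-removed series, split the profile into
    non-overlapping windows of length n, linearly detrend each window,
    and take the r.m.s. of the residuals."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                          # integrated (profile) series
        s += v - mean
        profile.append(s)
    fluct = []
    for n in scales:
        sq_sum, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = range(n)
            tm = (n - 1) / 2.0           # mean of 0..n-1
            ym = sum(seg) / n
            var_t = sum((ti - tm) ** 2 for ti in t)
            b = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, seg)) / var_t
            a = ym - b * tm              # least-squares line a + b*t
            sq_sum += sum((yi - (a + b * ti)) ** 2 for ti, yi in zip(t, seg))
            count += n
        fluct.append((sq_sum / count) ** 0.5)
    return fluct
```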

  12. Impact of Stock Market Structure on Intertrade Time and Price Dynamics

    PubMed Central

    Ivanov, Plamen Ch.; Yuen, Ainslie; Perakakis, Pandelis

    2014-01-01

    We analyse times between consecutive transactions for a diverse group of stocks registered on the NYSE and NASDAQ markets, and we relate the dynamical properties of the intertrade times with those of the corresponding price fluctuations. We report that market structure strongly impacts the scale-invariant temporal organisation in the transaction timing of stocks, which we have observed to have long-range power-law correlations. Specifically, we find that, compared to NYSE stocks, stocks registered on the NASDAQ exhibit significantly stronger correlations in their transaction timing on scales within a trading day. Further, we find that companies that transfer from the NASDAQ to the NYSE show a reduction in the correlation strength of transaction timing on scales within a trading day, indicating influences of market structure. We also report a persistent decrease in correlation strength of intertrade times with increasing average intertrade time and with corresponding decrease in companies' market capitalization–a trend which is less pronounced for NASDAQ stocks. Surprisingly, we observe that stronger power-law correlations in intertrade times are coupled with stronger power-law correlations in absolute price returns and higher price volatility, suggesting a strong link between the dynamical properties of intertrade times and the corresponding price fluctuations over a broad range of time scales. Comparing the NYSE and NASDAQ markets, we demonstrate that the stronger correlations we find in intertrade times for NASDAQ stocks are associated with stronger correlations in absolute price returns and with higher volatility, suggesting that market structure may affect price behavior through information contained in transaction timing. These findings do not support the hypothesis of universal scaling behavior in stock dynamics that is independent of company characteristics and stock market structure. 
Further, our results have implications for utilising transaction timing patterns in price prediction and risk management optimization on different stock markets. PMID:24699376

  13. Method and apparatus for two-dimensional absolute optical encoding

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2004-01-01

    This invention presents a two-dimensional absolute optical encoder and a method for determining the position of an object in accordance with information from the encoder. The encoder of the present invention comprises a scale having a pattern predetermined to indicate an absolute location on the scale; means for illuminating the scale; means for forming an image of the pattern; detector means for outputting signals derived from the portion of the image of the pattern which lies within a field of view of the detector means, the field of view defining an image reference coordinate system; and analyzing means, receiving the signals from the detector means, for determining the absolute location of the object. There are two types of scale patterns presented in this invention: grid type and starfield type.

  14. Single-grain 40Ar-39Ar ages of glauconies: implications for the geologic time scale and global sea level variations

    PubMed

    Smith; Evensen; York; Odin

    1998-03-06

    The mineral series glaucony supplies 40% of the absolute-age database for the geologic time scale of the last 250 million years. However, glauconies have long been suspected of giving young potassium-argon ages on bulk samples. Laser-probe argon-argon dating shows that glaucony populations comprise grains with a wide range of ages, suggesting a period of genesis several times longer (approximately 5 million years) than previously thought. An estimate of the age of their enclosing sediments (and therefore of time scale boundaries) is given by the oldest nonrelict grains in the glaucony populations, whereas the formation times of the younger grains appear to be modulated by global sea level.

  15. Statistical properties of edge plasma turbulence in the Large Helical Device

    NASA Astrophysics Data System (ADS)

    Dewhurst, J. M.; Hnat, B.; Ohno, N.; Dendy, R. O.; Masuzaki, S.; Morisaki, T.; Komori, A.

    2008-09-01

    Ion saturation current (Isat) measurements made by three tips of a Langmuir probe array in the Large Helical Device are analysed for two plasma discharges. Absolute moment analysis is used to quantify properties on different temporal scales of the measured signals, which are bursty and intermittent. Strong coherent modes in some datasets are found to distort this analysis and are consequently removed from the time series by applying bandstop filters. Absolute moment analysis of the filtered data reveals two regions of power-law scaling, with the temporal scale τ ≈ 40 µs separating the two regimes. A comparison is made with similar results from the Mega-Amp Spherical Tokamak. The probability density function is studied and a monotonic relationship between connection length and skewness is found. Conditional averaging is used to characterize the average temporal shape of the largest intermittent bursts.
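    The absolute moment analysis used here computes, for each temporal scale tau, the mean of the absolute increments raised to a power p; power-law scaling of the moments with tau distinguishes the two regimes found on either side of tau ≈ 40 µs. A minimal sketch (illustrative, not the authors' code):

```python
def absolute_moment(x, tau, p=1):
    """p-th absolute moment of increments at temporal scale tau:
    m_p(tau) = < |x(t + tau) - x(t)|^p >.  Power-law scaling of
    m_p(tau) with tau identifies self-similar regimes in the signal."""
    diffs = (abs(x[i + tau] - x[i]) for i in range(len(x) - tau))
    return sum(d ** p for d in diffs) / (len(x) - tau)
```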

  16. Time-calibrated Milankovitch cycles for the late Permian.

    PubMed

    Wu, Huaichun; Zhang, Shihong; Hinnov, Linda A; Jiang, Ganqing; Feng, Qinglai; Li, Haiyan; Yang, Tianshui

    2013-01-01

    An important innovation in the geosciences is the astronomical time scale. The astronomical time scale is based on Milankovitch-forced stratigraphy that has been calibrated to astronomical models of paleoclimate forcing; it is defined for much of the Cenozoic and Mesozoic. For the Palaeozoic era, however, astronomical forcing has not been widely explored because of a lack of high-precision geochronology and astronomical modelling. Here we report Milankovitch cycles from late Permian (Lopingian) strata at Meishan and Shangsi, South China, time calibrated by recent high-precision U-Pb dating. The evidence extends empirical knowledge of Earth's astronomical parameters to before 250 million years ago. Observed obliquity and precession terms support a 22-h length-of-day. The reconstructed astronomical time scale indicates a 7.793-million-year duration for the Lopingian epoch, during which strong 405-kyr cycles constrain astronomical modelling. This is the first significant advance in defining the Palaeozoic astronomical time scale, anchored to absolute time, bridging the Palaeozoic-Mesozoic transition.

  17. Methods and apparatus for determining cardiac output

    NASA Technical Reports Server (NTRS)

    Cohen, Richard J. (Inventor); Sherman, Derin A. (Inventor); Mukkamala, Ramakrishna (Inventor)

    2010-01-01

    The present invention provides methods and apparatus for determining a dynamical property of the systemic or pulmonary arterial tree using long time scale information, i.e., information obtained from measurements over time scales greater than a single cardiac cycle. In one aspect, the invention provides a method and apparatus for monitoring cardiac output (CO) from a single blood pressure signal measurement obtained at any site in the systemic or pulmonary arterial tree or from any related measurement including, for example, fingertip photoplethysmography. According to the method, the time constant of the arterial tree, defined to be the product of the total peripheral resistance (TPR) and the nearly constant arterial compliance, is determined by analyzing the long time scale variations (greater than a single cardiac cycle) in any of these blood pressure signals. Then, according to Ohm's law, a value proportional to CO may be determined from the ratio of the blood pressure signal to the estimated time constant. The proportional CO values derived from this method may be calibrated to absolute CO, if desired, with a single, absolute measure of CO (e.g., thermodilution). The present invention may be applied to invasive radial arterial blood pressure or pulmonary arterial blood pressure signals which are routinely measured in intensive care units and surgical suites, or to noninvasively measured peripheral arterial blood pressure signals or related noninvasively measured signals, in order to facilitate the clinical monitoring of CO as well as TPR.
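    The Ohm's-law relation in this record can be sketched in a few lines: fit the long-time-scale pressure decay to an exponential to estimate the arterial time constant tau = TPR × compliance, then take CO proportional to ABP/tau, calibrated by one absolute measurement. All names and the clean mono-exponential decay below are illustrative simplifications, not the patented method:

```python
import math

def arterial_time_constant(times, pressures):
    """Fit p(t) = p0 * exp(-t / tau) by least squares on log p;
    return the decay time constant tau."""
    logs = [math.log(p) for p in pressures]
    n = len(times)
    tm = sum(times) / n
    lm = sum(logs) / n
    slope = (sum((t - tm) * (l - lm) for t, l in zip(times, logs))
             / sum((t - tm) ** 2 for t in times))
    return -1.0 / slope

def cardiac_output(mean_abp, tau, k=1.0):
    """Proportional CO = k * ABP / tau (Ohm's law); the constant k
    comes from one absolute CO measurement (e.g. thermodilution)."""
    return k * mean_abp / tau
```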

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klemt, M.

    Relative oscillator strengths of 139 Ti I lines were determined from emission measurements of a three-chamber electric arc burning in an argon atmosphere. Introducing a small admixture of titanium chloride into the center of the arc, spectra of titanium could be observed end-on with no self-absorption and no self-reversal of the measured lines. The relative oscillator strengths were obtained from the Ti I line intensities and the measured arc temperature. Using absolute oscillator strengths of three resonance lines which had been measured by Reinke (1967), and several lifetime measurements from Hese (1970), Witt et al. (1971) and Andersen and Sorensen (1972), the relative oscillator strengths were converted to an absolute scale. The accuracy of these absolute values is in the range of 20% to 40%.

  19. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
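    The quiet-time and storm-time accuracy definitions in this record amount to one small formula; a minimal sketch (the function name is illustrative, not from the GOES-R analysis):

```python
def mag_accuracy(errors_nt, n_sigma):
    """Accuracy as defined above: |mean error| + n_sigma * sample
    standard deviation, with errors in nT.  Use n_sigma = 3 for quiet
    times (100 nT fields) and n_sigma = 2 for storms (300 nT)."""
    n = len(errors_nt)
    mean = sum(errors_nt) / n
    std = (sum((e - mean) ** 2 for e in errors_nt) / (n - 1)) ** 0.5
    return abs(mean) + n_sigma * std
```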

  20. The protoelectric potential map (PPM): an absolute two-dimensional chemical potential scale for a global understanding of chemistry.

    PubMed

    Radtke, Valentin; Himmel, Daniel; Pütz, Katharina; Goll, Sascha K; Krossing, Ingo

    2014-04-07

    We introduce the protoelectric potential map (PPM) as a novel, two-dimensional plot of the absolute reduction potential (pe_abs scale) combined with the absolute protochemical potential (Brønsted acidity: pH_abs scale). The validity of this thermodynamically derived PPM is solvent-independent due to the scale zero points, which were chosen as the ideal electron gas and the ideal proton gas at standard conditions. To tie a chemical environment to these reference states, the standard Gibbs energies for the transfer of the gaseous electrons/protons to the medium are needed as anchor points. Thereby, the thermodynamics of any redox, acid-base or combined system in any medium can be related to any other, resulting in a predictability of reactions even over different media or phase boundaries. Instruction is given on how to construct the PPM from the anchor points derived and tabulated with this work. Since efforts to establish "absolute" reduction potential scales and also "absolute" pH scales already exist, a short review in this field is given and brought into relation to the PPM. Some comments on the electrochemical validation and realization conclude this concept article. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  2. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    USGS Publications Warehouse

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

    We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  3. Trends in educational inequalities in cause specific mortality in Norway from 1960 to 2010: a turning point for educational inequalities in cause specific mortality of Norwegian men after the millennium?

    PubMed

    Strand, Bjørn Heine; Steingrímsdóttir, Ólöf Anna; Grøholt, Else-Karin; Ariansen, Inger; Graff-Iversen, Sidsel; Næss, Øyvind

    2014-11-24

    Educational inequalities in total mortality in Norway widened during 1960-2000. We wanted to investigate whether inequalities continued to increase in the post-millennium decade, and which causes of death were the main drivers. All deaths (total and cause specific) in the adult Norwegian population aged 45-74 years over five decades, until 2010, were included; in all 708,449 deaths and over 62 million person-years. Two indices were used to measure inequality and changes in inequalities over time, on the relative scale (Relative Index of Inequality, RII) and on the absolute scale (Slope Index of Inequality, SII). Relative inequalities in total mortality increased over the five decades in both genders. Among men, absolute inequalities stabilized during 2000-2010, after steady, significant increases each decade back to the 1960s, while in women absolute inequalities continued to increase significantly during the last decade. The stabilization in absolute inequalities among men in the last decade was mostly due to a fall in inequalities in cardiovascular disease (CVD) mortality and in lung cancer and respiratory disease mortality. Still, in this last decade, the absolute inequalities in cause-specific mortality among men were mostly due to CVD (34% of the total mortality inequality) and to lung cancer and respiratory diseases (21%). Among women, the absolute inequalities in mortality were mostly due to lung cancer and chronic lower respiratory tract diseases (30%) and CVD (27%). In men, absolute inequalities in mortality have stopped increasing, seemingly due to a reduction in inequalities in CVD mortality. Absolute inequality in mortality continues to widen among women, mostly due to death from lung cancer and chronic lung disease. Relative educational inequalities in mortality are still on the rise for Norwegian men and women.
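    The two indices used in this record can be computed from grouped data by regressing group rates on cumulative population-rank midpoints (ridits). A minimal sketch, assuming groups ordered from lowest to highest education (sign conventions for the SII vary across studies):

```python
def sii_rii(group_sizes, rates):
    """Slope Index of Inequality (SII) and Relative Index of
    Inequality (RII) from grouped data.

    SII is the slope of a population-weighted least-squares regression
    of the group rate on the cumulative rank midpoint (0..1); RII here
    is SII divided by the population mean rate."""
    total = sum(group_sizes)
    ridits, cum = [], 0.0
    for g in group_sizes:                # cumulative rank midpoint per group
        ridits.append((cum + g / 2) / total)
        cum += g
    w = [g / total for g in group_sizes]
    xm = sum(wi * xi for wi, xi in zip(w, ridits))
    ym = sum(wi * yi for wi, yi in zip(w, rates))
    cov = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, ridits, rates))
    var = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, ridits))
    sii = cov / var
    return sii, sii / ym
```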

  4. Calibration strategies for the determination of stable carbon absolute isotope ratios in a glycine candidate reference material by elemental analyser-isotope ratio mass spectrometry.

    PubMed

    Dunn, Philip J H; Malinovsky, Dmitry; Goenaga-Infante, Heidi

    2015-04-01

    We report a methodology for the determination of the stable carbon absolute isotope ratio of a glycine candidate reference material with natural carbon isotopic composition using EA-IRMS. For the first time, stable carbon absolute isotope ratios have been reported using continuous flow rather than dual inlet isotope ratio mass spectrometry. Also for the first time, a calibration strategy based on the use of synthetic mixtures gravimetrically prepared from well characterised, highly (13)C-enriched and (13)C-depleted glycines was developed for EA-IRMS calibration and generation of absolute carbon isotope ratio values traceable to the SI through calibration standards of known purity. A second calibration strategy based on converting the more typically determined delta values on the Vienna PeeDee Belemnite (VPDB) scale using literature values for the absolute carbon isotope ratio of VPDB itself was used for comparison. Both calibration approaches provided results consistent with those previously reported for the same natural glycine using MC-ICP-MS; absolute carbon ratios of 10,649 × 10(-6) with an expanded uncertainty (k = 2) of 24 × 10(-6) and 10,646 × 10(-6) with an expanded uncertainty (k = 2) of 88 × 10(-6) were obtained, respectively. The absolute carbon isotope ratio of the VPDB standard was found to be 11,115 × 10(-6) with an expanded uncertainty (k = 2) of 27 × 10(-6), which is in excellent agreement with previously published values.
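    The delta-to-absolute conversion behind the second calibration strategy is a one-liner: R_sample = (delta/1000 + 1) × R_VPDB. A sketch using the absolute VPDB ratio reported in this record (11,115 × 10^-6); the function name is illustrative:

```python
def delta_to_absolute(delta13c_permil, r_vpdb=11115e-6):
    """Convert delta13C (per mil, VPDB scale) to an absolute 13C/12C
    ratio: R = (delta/1000 + 1) * R_VPDB.  The default R_VPDB is the
    value determined in the record above."""
    return (delta13c_permil / 1000.0 + 1.0) * r_vpdb
```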

  5. Low frequency AC waveform generator

    DOEpatents

    Bilharz, Oscar W.

    1986-01-01

    Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself and properly scaling the resultant waveform and subtracting it from the triangular waveform itself. The cosine is synthesized by squaring the triangular waveform, raising the triangular waveform to a predetermined power and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.
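    The triangle-to-sine shaping idea in this patent, subtracting a scaled power of the triangle's absolute value times the triangle from the triangle itself, can be illustrated with a crude quadratic shaper. The coefficients below are chosen only for endpoint and zero-crossing-slope matching and are not taken from the patent:

```python
import math

def triangle(phase):
    """Unit triangle wave; phase in cycles, output in [-1, 1]."""
    x = (phase + 0.25) % 1.0          # shift so triangle(0) == 0, rising
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

def square(phase):
    """Unit square wave tapped from the same phase accumulator."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def shaped_sine(phase):
    """Crude sine from the triangle wave, in the spirit of the patent:
    subtract a scaled |t|*t term from the scaled triangle t."""
    t = triangle(phase)
    k = math.pi / 2                   # matches slope at the zero crossing
    return k * t - (k - 1) * t * abs(t)
```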

  6. Absolute gravimetry as an operational tool for geodynamics research

    NASA Astrophysics Data System (ADS)

    Torge, W.

    Relative gravimetric techniques have been used for nearly 30 years for measuring non-tidal gravity variations with time, and thus have contributed to geodynamics research by monitoring vertical crustal movements and internal mass shifts. With today's accuracy of about ±0.05 µm s-2 (5 µGal), significant results have been obtained in numerous control nets of local extension, especially in connection with seismic and volcanic events. Nevertheless, the main drawbacks of relative gravimetry, which are deficiencies in absolute datum and calibration, set a limit to its application, especially with respect to large-scale networks and long-term investigations. These problems can now be successfully attacked by absolute gravimetry, with transportable gravimeters available for about 20 years. While the absolute technique during the first two centuries of gravimetry's history was based on the pendulum method, the free-fall method can now be employed, taking advantage of laser interferometry, electronic timing, vacuum and shock-absorbing techniques, and on-line computer control. The accuracy inherent in advanced instruments is about ±0.05 µm s-2. In field work, an accuracy of ±0.1 µm s-2 may generally be expected, depending strongly on local environmental conditions.
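    The free-fall method mentioned here reduces, at its core, to fitting timed interferometer positions to x(t) = x0 + v0*t + g*t^2/2. A self-contained least-squares sketch (illustrative; real instruments add gravity-gradient, refraction, and finite-speed-of-light corrections):

```python
def fit_gravity(times, positions):
    """Least-squares fit of x(t) = x0 + v0*t + 0.5*g*t**2 to timed
    free-fall positions; returns the estimated g.  Solves the 3x3
    normal equations by Gaussian elimination with partial pivoting."""
    cols = [[1.0] * len(times), list(times), [0.5 * t * t for t in times]]
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(ci * x for ci, x in zip(cols[i], positions)) for i in range(3)]
    for k in range(3):                       # forward elimination
        piv = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    coef = [0.0, 0.0, 0.0]                   # back substitution
    for k in (2, 1, 0):
        coef[k] = (b[k] - sum(A[k][j] * coef[j] for j in range(k + 1, 3))) / A[k][k]
    return coef[2]                           # [x0, v0, g] -> g
```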

  7. OARE flight maneuvers and calibration measurements on STS-58

    NASA Technical Reports Server (NTRS)

    Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.; Larman, Kevin T.

    1994-01-01

    The Orbital Acceleration Research Experiment (OARE), which has flown on STS-40, STS-50, and STS-58, contains a three axis accelerometer with a single, nonpendulous, electrostatically suspended proofmass which can resolve accelerations to the nano-g level. The experiment also contains a full calibration station to permit in situ bias and scale factor calibration. This on-orbit calibration capability eliminates the large uncertainty of ground-based calibrations encountered with accelerometers flown in the past on the orbiter, thus providing absolute acceleration measurement accuracy heretofore unachievable. This is the first time accelerometer scale factor measurements have been performed on orbit. A detailed analysis of the calibration process is given along with results of the calibration factors from the on-orbit OARE flight measurements on STS-58. In addition, the analysis of OARE flight maneuver data used to validate the scale factor measurements in the sensor's most sensitive range is also presented. Estimates on calibration uncertainties are discussed. This provides bounds on the STS-58 absolute acceleration measurements for future applications.

  8. Vacuum ultraviolet photoionization cross section of the hydroxyl radical.

    PubMed

    Dodson, Leah G; Savee, John D; Gozem, Samer; Shen, Linhan; Krylov, Anna I; Taatjes, Craig A; Osborn, David L; Okumura, Mitchio

    2018-05-14

    The absolute photoionization spectrum of the hydroxyl (OH) radical from 12.513 to 14.213 eV was measured by multiplexed photoionization mass spectrometry with time-resolved radical kinetics. Tunable vacuum ultraviolet (VUV) synchrotron radiation was generated at the Advanced Light Source. OH radicals were generated from the reaction of O(1D) + H2O in a flow reactor in He at 8 Torr. The initial O(1D) concentration, where the atom was formed by pulsed laser photolysis of ozone, was determined from the measured depletion of a known concentration of ozone. Concentrations of OH and O(3P) were obtained by fitting observed time traces with a kinetics model constructed with literature rate coefficients. The absolute cross section of OH was determined to be σ(13.436 eV) = 3.2 ± 1.0 Mb and σ(14.193 eV) = 4.7 ± 1.6 Mb relative to the known cross section for O(3P) at 14.193 eV. The absolute photoionization spectrum was obtained by recording a spectrum at a resolution of 8 meV (50 meV steps) and scaling to the single-energy cross sections. We computed the absolute VUV photoionization spectrum of OH and O(3P) using equation-of-motion coupled-cluster Dyson orbitals and a Coulomb photoelectron wave function and found good agreement with the observed absolute photoionization spectra.

  9. Vacuum ultraviolet photoionization cross section of the hydroxyl radical

    NASA Astrophysics Data System (ADS)

    Dodson, Leah G.; Savee, John D.; Gozem, Samer; Shen, Linhan; Krylov, Anna I.; Taatjes, Craig A.; Osborn, David L.; Okumura, Mitchio

    2018-05-01

    The absolute photoionization spectrum of the hydroxyl (OH) radical from 12.513 to 14.213 eV was measured by multiplexed photoionization mass spectrometry with time-resolved radical kinetics. Tunable vacuum ultraviolet (VUV) synchrotron radiation was generated at the Advanced Light Source. OH radicals were generated from the reaction of O(1D) + H2O in a flow reactor in He at 8 Torr. The initial O(1D) concentration, where the atom was formed by pulsed laser photolysis of ozone, was determined from the measured depletion of a known concentration of ozone. Concentrations of OH and O(3P) were obtained by fitting observed time traces with a kinetics model constructed with literature rate coefficients. The absolute cross section of OH was determined to be σ(13.436 eV) = 3.2 ± 1.0 Mb and σ(14.193 eV) = 4.7 ± 1.6 Mb relative to the known cross section for O(3P) at 14.193 eV. The absolute photoionization spectrum was obtained by recording a spectrum at a resolution of 8 meV (50 meV steps) and scaling to the single-energy cross sections. We computed the absolute VUV photoionization spectrum of OH and O(3P) using equation-of-motion coupled-cluster Dyson orbitals and a Coulomb photoelectron wave function and found good agreement with the observed absolute photoionization spectra.

  10. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly nonlinear forced vibration systems with strong damping. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solution (considered to be exact) and improve on other existing results. For weak nonlinearities with weak damping, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is, surprisingly, 28.81%. Furthermore, for strong nonlinearities with strong damping, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems but also gives better results for strongly nonlinear systems with both weak and strong damping.

  11. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  12. Absolute calibration of optical streak cameras on picosecond time scales using supercontinuum generation

    DOE PAGES

    Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...

    2017-08-17

    Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at the fastest sweep speeds. A stable, pulsed white-light source based on self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is less than the static by up to an order of magnitude. We attribute this to the dynamic response of the phosphor being lower.

  13. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method, …
    Acronyms (excerpt): … Manager; JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE, Mean Absolute Error; …, Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number

  14. A Typology for Charting Socioeconomic Mortality Gradients: "Go Southwest".

    PubMed

    Blakely, Tony; Disney, George; Atkinson, June; Teng, Andrea; Mackenbach, Johan P

    2017-07-01

    Holistic depiction of time-trends in average mortality rates, and absolute and relative inequalities, is challenging. We outline a typology for situations with falling average mortality rates (m↓; e.g., cardiovascular disease), rates stable over time (m-; e.g., some cancers), and increasing average mortality rates (m↑; e.g., suicide in some contexts). If we consider inequality trends on both the absolute (a) and relative (r) scales, there are 13 possible combinations of m, a, and r trends over time. They can be mapped to graphs with relative inequality (log relative index of inequality [RII]; r) on the y axis, log average mortality rate on the x axis (m), and absolute inequality (slope index of inequality [SII]; a) as contour lines. We illustrate this by plotting adult mortality trends: (1) by household income from 1981 to 2011 for New Zealand, and (2) by education for European countries. Types range from the "best" m↓a↓r↓ (average, absolute, and relative inequalities all decreasing; southwest movement in graphs) to the "worst" m↑a↑r↑ (northeast). Mortality typologies in New Zealand (all-cause, cardiovascular disease, nonlung cancer, and unintentional injury) were all m↓r↑ (northwest), but variable with respect to absolute inequality. Most European typologies were m↓r↑ types (northwest; e.g., Finland), but with notable exceptions of m-a↑r↑ (north; e.g., Hungary) and "best" or southwest m↓a↓r↓ for Spain (Barcelona) females. Our typology and corresponding graphs provide a convenient way to summarize and understand past trends in inequalities in mortality, and hold potential for projecting future trends and target setting.
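    The SII and RII referred to in this record are commonly computed by regressing group mortality rates on the midpoint of each group's cumulative population rank (the ridit score). The sketch below follows that convention, taking RII as the ratio of regression-predicted rates at the two extremes of the hierarchy; conventions vary, and the rates used here are hypothetical.

    ```python
    import numpy as np

    def sii_rii(rates, pop_shares):
        """Slope and relative index of inequality via the common
        ridit-score regression (a sketch; RII conventions vary).
        Groups must be ordered from most to least advantaged."""
        pop_shares = np.asarray(pop_shares, dtype=float)
        cum = np.cumsum(pop_shares)
        ridit = cum - pop_shares / 2.0         # midpoint rank in [0, 1]
        slope, intercept = np.polyfit(ridit, np.asarray(rates, float), 1)
        sii = slope                            # absolute gap across the hierarchy
        rii = (intercept + slope) / intercept  # ratio of predicted extremes
        return sii, rii

    # Hypothetical mortality rates (per 100,000) for five equal-sized groups,
    # ordered from highest to lowest socioeconomic position
    rates = np.array([200.0, 240.0, 280.0, 330.0, 400.0])
    sii, rii = sii_rii(rates, np.full(5, 0.2))
    ```

    With these illustrative numbers the hierarchy spans an absolute gap of 245 deaths per 100,000 (SII) and roughly a 2.5-fold relative gap (RII), i.e. a point in the "northeast" direction of the typology's axes if both were rising over time.
    
    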

  15. Sensation seeking, augmenting-reducing, and absolute auditory threshold: a strength-of-the-nervous-system perspective.

    PubMed

    Goldman, D; Kohn, P M; Hunt, R W

    1983-08-01

    The following measures were obtained from 42 student volunteers: the General and the Disinhibition subscales of the Sensation Seeking Scale (Form IV), the Reducer-Augmenter Scale, and the Absolute Auditory Threshold. General sensation seeking correlated significantly with the Reducer-Augmenter Scale, r(40) = .59, p less than .001, and the Absolute Auditory Threshold, r(40) = .45, p less than .005. Both results proved general across sex. These findings, that high-sensation seekers tend to be reducers and to lack sensitivity to weak stimulation, were interpreted as supporting strength-of-the-nervous-system theory more than the formulation of Zuckerman and his associates.

  16. Airborne Sea-Surface Topography in an Absolute Reference Frame

    NASA Astrophysics Data System (ADS)

    Brozena, J. M.; Childers, V. A.; Jacobs, G.; Blaha, J.

    2003-12-01

    Highly dynamic coastal ocean processes occur at temporal and spatial scales that cannot be captured by the present generation of satellite altimeters. Space-borne gravity missions such as GRACE also provide time-varying gravity and a geoidal msl reference surface at resolution that is too coarse for many coastal applications. The Naval Research Laboratory and the Naval Oceanographic Office have been testing the application of airborne measurement techniques, gravity and altimetry, to determine sea-surface height and height anomaly at the short scales required for littoral regions. We have developed a precise local gravimetric geoid over a test region in the northern Gulf of Mexico from historical gravity data and recent airborne gravity surveys. The local geoid provides a msl reference surface with a resolution of about 10-15 km and provides a means to connect airborne, satellite and tide-gage observations in an absolute (WGS-84) framework. A series of altimetry reflights over the region with time scales of 1 day to 1 year reveal a highly dynamic environment with coherent and rapidly varying sea-surface height anomalies. AXBT data collected at the same time show apparent correlation with wave-like temperature anomalies propagating up the continental slope of the Desoto Canyon. We present animations of the temporal evolution of the surface topography and water column temperature structure down to the 800 m depth of the AXBT sensors.

  17. An absolute scale for measuring the utility of money

    NASA Astrophysics Data System (ADS)

    Thomas, P. J.

    2010-07-01

    Measurement of the utility of money is essential in the insurance industry, for prioritising public spending schemes and for the evaluation of decisions on protection systems in high-hazard industries. Up to this time, however, there has been no universally agreed measure for the utility of money, with many utility functions being in common use. In this paper, we shall derive a single family of utility functions, which have risk-aversion as the only free parameter. The fact that they return a utility of zero at their low, reference datum, either the utility of no money or of one unit of money, irrespective of the value of risk-aversion used, qualifies them to be regarded as absolute scales for the utility of money. Evidence of validation for the concept will be offered based on inferential measurements of risk-aversion, using diverse measurement data.

  18. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.

  19. The influence of maturation, body size and physical self-perceptions on longitudinal changes in physical activity in adolescent girls.

    PubMed

    Fawkner, Samantha; Henretty, Joan; Knowles, Ann-Marie; Nevill, Alan; Niven, Ailsa

    2014-01-01

    The aim of this study was to adopt a longitudinal design to explore the direct effects of both absolute and relative maturation and changes in body size on physical activity, and explore if, and how, physical self-perceptions might mediate this effect. We recruited 208 girls (11.8 ± 0.4 years) at baseline. Data were collected at three subsequent time points, each 6 months apart. At 18 months, 119 girls remained in the study. At each time point, girls completed the Physical Activity Questionnaire for Children, the Pubertal Development Scale (from which, both a measure of relative and absolute maturation were defined) and the Physical Self-Perception Profile, and had physical size characteristics assessed. Multilevel modelling for physical activity indicated a significant negative effect of age, positive effect for physical condition and sport competence and positive association for relatively early maturers. Absolute maturation, body mass, waist circumference and sum of skinfolds did not significantly contribute to the model. Contrary to common hypotheses, relatively more mature girls may, in fact, be more active than their less mature peers. However, neither changes in absolute maturation nor physical size appear to directly influence changes in physical activity in adolescent girls.

  20. Asymptotic scaling properties and estimation of the generalized Hurst exponents in financial data

    NASA Astrophysics Data System (ADS)

    Buonocore, R. J.; Aste, T.; Di Matteo, T.

    2017-04-01

    We propose a method to measure the Hurst exponents of financial time series. The scaling of the absolute moments against the aggregation horizon of real financial processes, and of both uniscaling and multiscaling synthetic processes, converges asymptotically towards linearity in log-log scale. In light of this we found it appropriate to modify the usual scaling equation via the introduction of a filter function. We devised a measurement procedure which takes into account the presence of the filter function without the need to estimate it directly. We verified that the method is unbiased within the errors by applying it to synthetic time series with known scaling properties. Finally we show an application to empirical financial time series where we fit the measured scaling exponents via a second- or a fourth-degree polynomial, which, because of theoretical constraints, have respectively only one and two degrees of freedom. We found that on our data set there is no clear preference between the second- and fourth-degree polynomials. Moreover, the study of the filter functions of each time series shows common patterns of convergence depending on the moment order.
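    The moment-scaling idea behind this record can be sketched numerically. Below is a minimal estimator of the generalized Hurst exponent H(q) from the log-log regression of the q-th absolute moment of increments against the lag; it implements only the basic scaling relation E|x(t+τ)−x(t)|^q ∝ τ^(qH(q)), not the authors' filter-function modification, and the lag range and test series are arbitrary choices.

    ```python
    import numpy as np

    def generalized_hurst(x, q=2, taus=range(1, 20)):
        """Estimate H(q) from the scaling of the q-th absolute moment
        of increments: E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))."""
        x = np.asarray(x, dtype=float)
        log_taus, log_moments = [], []
        for tau in taus:
            inc = np.abs(x[tau:] - x[:-tau]) ** q
            log_taus.append(np.log(tau))
            log_moments.append(np.log(inc.mean()))
        slope = np.polyfit(log_taus, log_moments, 1)[0]  # slope = q * H(q)
        return slope / q

    # Brownian motion (cumulative sum of white noise) should give H close to 0.5
    rng = np.random.default_rng(0)
    bm = np.cumsum(rng.standard_normal(100_000))
    H = generalized_hurst(bm, q=2)
    ```

    A uniscaling process yields the same H(q) for all q; departures of H(q) from a constant across q are the multiscaling signature the paper examines.
    
    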

  1. Absolute and Relative Reliability of Percentage of Syllables Stuttered and Severity Rating Scales

    ERIC Educational Resources Information Center

    Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark

    2014-01-01

    Purpose: Percentage of syllables stuttered (%SS) and severity rating (SR) scales are measures in common use to quantify stuttering severity and its changes during basic and clinical research conditions. However, their reliability has not been assessed with indices measuring both relative and absolute reliability. This study was designed to provide…

  2. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480

  3. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
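    A minimal sketch of the UMBRAE construction described in these two records (following Chen, Twycross, and Garibaldi, 2017) may clarify how it avoids outlier and scale problems: each absolute error is bounded by dividing by the sum of the forecast and benchmark errors, the bounded terms are averaged, and the mean is then unscaled. The sample series below are invented for illustration.

    ```python
    import numpy as np

    def umbrae(actual, forecast, benchmark):
        """Unscaled Mean Bounded Relative Absolute Error (sketch).
        Values below 1 indicate the forecast beats the benchmark."""
        e = np.abs(np.asarray(actual) - np.asarray(forecast))
        e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
        brae = e / (e + e_star)        # bounded relative absolute error, in [0, 1]
        mbrae = brae.mean()
        return mbrae / (1.0 - mbrae)   # unscale back to a relative-error-like value

    actual    = np.array([10.0, 12.0, 11.0, 13.0, 12.5])
    forecast  = np.array([10.5, 11.5, 11.2, 12.8, 12.6])
    benchmark = np.array([10.0, 10.0, 12.0, 11.0, 13.0])  # e.g. a naive forecast
    score = umbrae(actual, forecast, benchmark)
    ```

    Because each term is bounded in [0, 1] before averaging, a single catastrophic error cannot dominate the measure, unlike mean-relative-error measures.
    
    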

  4. Absolute Distance Measurement with the MSTAR Sensor

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian

    2003-01-01

    The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beamlaunching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.

  5. Absolute photoionization cross-section of the methyl radical.

    PubMed

    Taatjes, Craig A; Osborn, David L; Selby, Talitha M; Meloni, Giovanni; Fan, Haiyan; Pratt, Stephen T

    2008-10-02

    The absolute photoionization cross-section of the methyl radical has been measured using two completely independent methods. The CH3 photoionization cross-section was determined relative to that of acetone and methyl vinyl ketone at photon energies of 10.2 and 11.0 eV by using a pulsed laser-photolysis/time-resolved synchrotron photoionization mass spectrometry method. The time-resolved depletion of the acetone or methyl vinyl ketone precursor and the production of methyl radicals following 193 nm photolysis are monitored simultaneously by using time-resolved synchrotron photoionization mass spectrometry. Comparison of the initial methyl signal with the decrease in precursor signal, in combination with previously measured absolute photoionization cross-sections of the precursors, yields the absolute photoionization cross-section of the methyl radical: σ(CH3)(10.2 eV) = (5.7 ± 0.9) × 10^-18 cm^2 and σ(CH3)(11.0 eV) = (6.0 ± 2.0) × 10^-18 cm^2. The photoionization cross-section for the vinyl radical determined by photolysis of methyl vinyl ketone is in good agreement with previous measurements. The methyl radical photoionization cross-section was also independently measured relative to that of the iodine atom by comparison of ionization signals from CH3 and I fragments following 266 nm photolysis of methyl iodide in a molecular-beam ion-imaging apparatus. These measurements gave a cross-section of (5.4 ± 2.0) × 10^-18 cm^2 at 10.460 eV, (5.5 ± 2.0) × 10^-18 cm^2 at 10.466 eV, and (4.9 ± 2.0) × 10^-18 cm^2 at 10.471 eV. The measurements allow relative photoionization efficiency spectra of the methyl radical to be placed on an absolute scale and will facilitate quantitative measurements of methyl concentrations by photoionization mass spectrometry.
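    The relative-normalization step in this record (comparing the initial methyl signal with the precursor depletion) reduces to simple arithmetic if ion signal is assumed proportional to cross-section times concentration with a common instrument constant and mass discrimination is ignored. The sketch below encodes only that assumption, with a nominal stoichiometry of two CH3 per photolyzed acetone; the signal levels and precursor cross-section are illustrative numbers, not values from the paper.

    ```python
    def sigma_from_depletion(s_radical, delta_s_precursor, sigma_precursor,
                             radicals_per_precursor=2):
        """Relative normalization: S ~ sigma * [X] for both species, so
        sigma_radical = sigma_precursor * (S_radical / dS_precursor)
                        / ([radical]/d[precursor]).
        Default stoichiometry: acetone + hv (193 nm) -> 2 CH3 + CO."""
        return sigma_precursor * (s_radical / delta_s_precursor) / radicals_per_precursor

    # Hypothetical signals (arbitrary units) and an assumed precursor cross-section
    sigma_ch3 = sigma_from_depletion(s_radical=1.4, delta_s_precursor=1.0,
                                     sigma_precursor=8.2e-18)
    ```

    In practice the measured time traces must first be extrapolated to the instant of photolysis so that secondary chemistry does not distort either signal; the sketch assumes that extrapolation has already been done.
    
    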

  6. Allan deviation analysis of financial return series

    NASA Astrophysics Data System (ADS)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in the time and frequency metrology to characterize quantitatively the stability of frequency standards since it has demonstrated to be a robust quantity to analyze fluctuations of non-stationary time series for different observation intervals. The data used are opening price daily series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for absolute return series for short scales (first one or two decades) decrease following approximately a scaling relation up to a point that is different for almost each asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drive the results for large observation intervals.
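    The Allan deviation used in this record is the standard two-sample statistic from time and frequency metrology: σ²(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩, where the ȳ_k are consecutive τ-length averages of the series. A minimal non-overlapping implementation, with white noise as a sanity check (for which ADEV falls roughly as τ^(-1/2)), might look like:

    ```python
    import numpy as np

    def allan_deviation(y, tau):
        """Non-overlapping Allan deviation at averaging interval tau (in samples):
        average the series in consecutive tau-sample bins, then take
        sqrt(0.5 * mean of squared first differences of the bin averages)."""
        y = np.asarray(y, dtype=float)
        n = len(y) // tau
        bars = y[: n * tau].reshape(n, tau).mean(axis=1)
        return np.sqrt(0.5 * np.mean(np.diff(bars) ** 2))

    # For unit-variance white noise, ADEV(1) ~ 1 and ADEV(100) ~ 0.1
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(100_000)
    a1, a100 = allan_deviation(noise, 1), allan_deviation(noise, 100)
    ```

    Plotting ADEV against τ on log-log axes and reading off the slope is what distinguishes uncorrelated returns from the clustered, long-range-dependent behavior the paper reports for absolute returns.
    
    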

  7. Updated Absolute Age Estimates for the Tolstoj and Caloris Basins, Mercury

    NASA Astrophysics Data System (ADS)

    Ernst, C. M.; Denevi, B. W.; Ostrach, L. R.

    2016-12-01

    Time-stratigraphic systems are developed to provide a framework to derive the relative ages of terrains across a given planet, estimate their absolute ages, and aid cross-planet comparisons. Mercury's time-stratigraphic system was modeled after that of the Moon, with five systems defined on the basis of geologic mapping using Mariner 10 images. From oldest to youngest, Mercury's time-stratigraphic system contains the pre-Tolstojan, Tolstojan, Calorian, Mansurian, and Kuiperian systems. The formations of the Tolstoj and Caloris basins mark the start of the Tolstojan and Calorian systems, respectively. The Mansurian and Kuiperian systems are defined by the type craters for which they are named. The completion of MESSENGER's global image dataset marks an appropriate time to re-assess the time-stratigraphic system of the innermost planet. Recent work suggests the Mansurian and Kuiperian systems may have begun as recently as 1.7 Ga and 280 Ma, respectively (Banks et al., 2016). We used MESSENGER data to re-evaluate the relative and absolute ages of the Tolstoj and Caloris basins in order to complete the reassessment of Mercury's time-stratigraphic system. We redefine basin rim units for Tolstoj and Caloris and determine the crater size-frequency distribution for craters larger than 10 km in diameter. Two models for crater production are used to derive absolute ages from the crater counts: Marchi et al., 2009 (M), using a main-belt-asteroid-like impactor size-frequency distribution, hard-rock crater scaling relations, a target strength of 2e7 dyne/cm2, and target and projectile densities of 3.4 g/cm3 and 2.6 g/cm3; and Le Feuvre and Wieczorek, 2011 (L&W), using non-porous scaling relations. We find N(20) values (the number of craters ≥ 20 km in diameter per million square km) of 37 ± 7 for the Caloris rim and 93 ± 15 for the Tolstoj rim. We derive model ages of 3.9 Ga (M) and 3.7 Ga (L&W) for Tolstoj, and 3.7 Ga (M) and 3.1 Ga (L&W) for Caloris. Analysis to refine the ages using new techniques (e.g., Michael et al., 2016) and to explore a wider set of model parameters is ongoing.

  8. Results of the Calibration of the Delays of Earth Stations for TWSTFT Using the VSL Satellite Simulator Method

    NASA Technical Reports Server (NTRS)

    deJong, Gerrit; Kirchner, Dieter; Ressler, Hubert; Hetzel, Peter; Davis, John; Pears, Peter; Powell, Bill; McKinley, Angela Davis; Klepczynski, Bill; DeYoung, James; et al.

    1996-01-01

    Two-way satellite time and frequency transfer (TWSTFT) is the most accurate and precise method of comparing two remote clocks or time scales. The accuracy obtained is dependent on the accuracy of the determination of the non-reciprocal delays of the transmit and the receive paths. When the same transponders in the satellite at the same frequencies are used, then the non-reciprocity in the Earth stations is the limiting factor for absolute time transfer.

  9. Accurate physical laws can permit new standard units: The two laws F→=ma→ and the proportionality of weight to mass

    NASA Astrophysics Data System (ADS)

    Saslow, Wayne M.

    2014-04-01

    Three common approaches to F→=ma→ are: (1) as an exactly true definition of force F→ in terms of measured inertial mass m and measured acceleration a→; (2) as an exactly true axiom relating measured values of a→, F→ and m; and (3) as an imperfect but accurately true physical law relating measured a→ to measured F→, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a→ and F→, where a→ is normally specified using distance and time as standard units, and F→ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate)—that balance-scale weight W is proportional to m—and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force—the newton—a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time—the second—a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.

  10. The effect of modeled absolute timing variability and relative timing variability on observational learning.

    PubMed

    Grierson, Lawrence E M; Roberts, James W; Welsher, Arthur M

    2017-05-01

    There is much evidence to suggest that skill learning is enhanced by skill observation. Recent research on this phenomenon indicates a benefit of observing variable/erred demonstrations. In this study, we explore whether it is variability within the relative organization or absolute parameterization of a movement that facilitates skill learning through observation. To do so, participants were randomly allocated into groups that observed a model with no variability, absolute timing variability, relative timing variability, or variability in both absolute and relative timing. All participants performed a four-segment movement pattern with specific absolute and relative timing goals prior to and following the observational intervention, as well as in a 24 h retention test and transfer tests that featured new relative and absolute timing goals. Absolute timing error indicated that all groups initially acquired the absolute timing, maintained their performance at 24 h retention, and exhibited performance deterioration in both transfer tests. Relative timing error revealed that the observation of no variability and of relative timing variability produced greater performance at the post-test, 24 h retention test, and relative timing transfer test, but performance for the no-variability group deteriorated at the absolute timing transfer test. The results suggest that the learning of absolute timing following observation unfolds irrespective of model variability. However, the learning of relative timing benefits from holding the absolute features constant, while the observation of no variability partially fails in transfer. We suggest learning by observing no-variability and variable/erred models unfolds via similar neural mechanisms, although the latter benefits from the additional coding of information pertaining to movements that require a correction. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan

    2018-04-01

    This Note presents a new absolute X-Y-Θ position sensor for measuring planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could resolve 25 nm linear and 0.001° angular displacements clearly, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.

  12. Articulated Multimedia Physics, Lesson 14, Gases, The Gas Laws, and Absolute Temperature.

    ERIC Educational Resources Information Center

    New York Inst. of Tech., Old Westbury.

    As the fourteenth lesson of the Articulated Multimedia Physics Course, instructional materials are presented in this study guide with relation to gases, gas laws, and absolute temperature. The topics are concerned with the kinetic theory of gases, thermometric scales, Charles' law, ideal gases, Boyle's law, absolute zero, and gas pressures. The…

  13. Formatting modifications in GRADE evidence profiles improved guideline panelists' comprehension and accessibility to information. A randomized trial.

    PubMed

    Vandvik, Per Olav; Santesso, Nancy; Akl, Elie A; You, John; Mulla, Sohail; Spencer, Frederick A; Johnston, Bradley C; Brozek, Jan; Kreis, Julia; Brandt, Linn; Zhou, Qi; Schünemann, Holger J; Guyatt, Gordon

    2012-07-01

    To determine the effects of formatting alternatives in Grading of Recommendations Assessment, Development, and Evaluation (GRADE) evidence profiles on guideline panelists' preferences, comprehension, and accessibility. We randomized 116 antithrombotic therapy guideline panelists to review one of two table formats with four formatting alternatives. After answering relevant questions, panelists reviewed the other format and reported their preferences for specific formatting alternatives. Panelists (88 of 116 invited [76%]) preferred presentation of study event rates over no study event rates (median 1 [interquartile range (IQR) 1] on a 1-7 scale), absolute risk differences over absolute risks (median 2 [IQR 3]), and additional information in table cells over footnotes (median 1 [IQR 2]). Panelists presented with time frame information in the tables, and not only in footnotes, were more likely to answer questions regarding time frame correctly (58% vs. 11%, P<0.0001), and those presented with risk differences rather than absolute risks were more likely to interpret confidence intervals for absolute effects correctly (95% vs. 54%, P<0.0001). Information was considered easy to find, easy to comprehend, and helpful in making recommendations regardless of table format (median 6, IQR 0-1). Panelists found information in GRADE evidence profiles accessible. Correct comprehension of some key information was improved by providing additional information in the table and by presenting risk differences. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Estimating Anesthesia Time Using the Medicare Claim: A Validation Study

    PubMed Central

    Silber, Jeffrey H.; Rosenbaum, Paul R.; Even-Shoshan, Orit; Mi, Lanyu; Kyle, Fabienne; Teng, Yun; Bratzler, Dale W.; Fleisher, Lee A.

    2012-01-01

    Introduction Procedure length is a fundamental variable associated with quality of care, though it is seldom studied on a large scale. We sought to estimate procedure length through information obtained in the anesthesia claim submitted to Medicare, to validate this method for future studies. Methods The Obesity and Surgical Outcomes Study enlisted 47 hospitals located across New York, Texas and Illinois to study patients undergoing hip, knee, colon and thoracotomy procedures. 15,914 charts were abstracted to determine body mass index and initial patient physiology. Included in this abstraction were induction, cut, close and recovery room times. This chart information was merged with Medicare claims, which included anesthesia Part B billing information. Correlations between chart times and claim times were analyzed, models developed, and median absolute differences in minutes calculated. Results Of the 15,914 eligible patients, there were 14,369 where both chart and claim times were available for analysis. In these 14,369, the Spearman correlation between chart and claim time was 0.94 (95% CI 0.94, 0.95) and the median absolute difference between chart and claim time was only 5 minutes (95% CI: 5.0, 5.5). The anesthesia claim can also be used to estimate surgical procedure length, with only a modest increase in error. Conclusion The anesthesia bill found in Medicare claims provides an excellent source of information for studying operative time on a vast scale throughout the United States. However, errors in both chart abstraction and anesthesia claims can occur. Care must be taken in the handling of outliers in these data. PMID:21720242
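
    The validation statistics above (Spearman correlation and median absolute difference between chart and claim times) are straightforward to compute; the sketch below uses invented times, not the study's data:

```python
from statistics import median

def spearman_rho(x, y):
    """Spearman rank correlation; no tie handling (illustration only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical anesthesia times in minutes: chart-abstracted vs. claim-derived
chart = [120, 95, 200, 60, 150]
claim = [125, 90, 205, 58, 155]
rho = spearman_rho(chart, claim)
mad = median(abs(c - k) for c, k in zip(chart, claim))
print(rho, mad)  # 1.0 5
```

    Because Spearman correlation works on ranks, it is robust to the occasional large outlier that the authors warn about, which is one reason it suits this kind of validation.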

  15. Global Fire Trends from Satellite ATSR Instrument Series

    NASA Astrophysics Data System (ADS)

    Arino, Olivier; Casadio, Stefano; Serpe, Danilo

    2010-12-01

    Global night-time fire counts for the years 1995 to 2009 have been obtained using the latest version of the Along Track Scanning Radiometer TOA radiance products (level 1), and related trends have been estimated. Possible biases due to cloud coverage variations have been assumed to be negligible. The sampling number (acquisition frequency) has also been analysed and shown not to influence our results. Global night-time fire trends have been evaluated by inspecting the time series of hot spots aggregated a) at the 2°x2° scale; b) at district/country/region/continent scales; and c) globally. The statistical significance of the estimated trend parameters has been verified by means of the Mann-Kendall test. Results indicate that no trends in the absolute number of spots can be identified at the global scale, that there has been no appreciable shift in the fire season during the last fourteen years, and that statistically significant positive and negative trends are only found when data are aggregated at smaller scales.
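
    The Mann-Kendall trend test used here reduces to counting concordant and discordant pairs in the time series; a minimal sketch on hypothetical yearly counts (not ATSR data, and without the tie correction) is:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and normal-approximation z.
    No tie correction; illustrative only."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# hypothetical yearly fire counts for one grid cell, 1995-2009
counts = [40, 42, 39, 45, 47, 44, 50, 52, 49, 55, 53, 58, 60, 57, 62]
s, z = mann_kendall(counts)
print(s, round(z, 2))  # 87 4.26 -- |z| > 1.96 flags a significant trend
```
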

  16. Multi-scaling modelling in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.

    2007-12-01

    In recent years, a new wave of interest in complexity has spread into finance, where it may provide a guide to understanding the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions introducing new techniques and methodologies. In this paper, Markov-switching multifractal (MSM) models are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for data simulated from MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
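
    A minimal sketch of estimating the generalized Hurst exponent H(q) from the scaling of the q-th moments of absolute increments, E|x(t+tau) - x(t)|^q ~ tau^(q*H(q)). It is applied here to a simulated Brownian walk (for which H(q) is about 0.5), not to real financial data:

```python
import math, random

def generalized_hurst(x, q=1, max_lag=19):
    """Estimate H(q) by a log-log least-squares fit of the q-th moment
    of absolute increments against the lag tau (illustration only)."""
    logt, logm = [], []
    for tau in range(1, max_lag + 1):
        m = sum(abs(x[i + tau] - x[i]) ** q
                for i in range(len(x) - tau)) / (len(x) - tau)
        logt.append(math.log(tau))
        logm.append(math.log(m))
    n = len(logt)
    tbar, mbar = sum(logt) / n, sum(logm) / n
    slope = (sum((t - tbar) * (m - mbar) for t, m in zip(logt, logm))
             / sum((t - tbar) ** 2 for t in logt))
    return slope / q

random.seed(0)
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.gauss(0, 1))
print(round(generalized_hurst(walk, q=2), 2))  # close to 0.5
```

    For multifractal series such as the MSM simulations, H(q) varies with q, which is exactly the signature the paper uses to distinguish them from simple scaling.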

  17. Absolute Calibration of Si iRMs used for Measurements of Si Paleo-nutrient proxies

    NASA Astrophysics Data System (ADS)

    Vocke, R. D., Jr.; Rabb, S. A.

    2016-12-01

    Silicon isotope variations (reported as δ30Si and δ29Si, relative to NBS28) in silicic acid dissolved in ocean waters, in biogenic silica, and in diatoms are extremely informative paleo-nutrient proxies. The resolution and comparability of such measurements depend on the quality of the isotopic Reference Materials (iRMs) defining the delta scale. We report new absolute Si isotopic measurements on the iRMs NBS28 (RM 8546 - Silica Sand), Diatomite, and Big Batch using the Avogadro measurement approach, and compare them with prior assessments of these iRMs. The Avogadro Si measurement technique was developed by the German Physikalisch-Technische Bundesanstalt (PTB) to provide a precise and highly accurate method for measuring absolute isotope ratios in highly enriched 28Si (99.996%) material. These measurements are part of an international effort to redefine the kg and mole based on the Planck constant h and the Avogadro constant NA, respectively (Vocke et al., 2014 Metrologia 51, 361; Azuma et al., 2015 Metrologia 52, 360). This approach produces absolute Si isotope ratio data with lower levels of uncertainty than the traditional "Atomic Weights" method of absolute isotope ratio measurement calibration. This is illustrated in Fig. 1, where absolute Si isotopic measurements on SRM 990, separated by 40+ years of advances in instrumentation, are compared. The availability of this new technique does not imply that absolute Si isotope ratios are, or ever will be, preferable for routine Si isotopic measurements of natural variations; they are not. However, by determining the absolute isotope ratios of all the Si iRM scale artifacts, such iRMs become traceable to the metric system (SI), thereby automatically conferring on all the artifact-based δ30Si and δ29Si measurements traceability to the base SI unit, the mole. Such traceability should help reduce the potential for bias between different iRMs and facilitate the replacement of delta-scale artifacts when they run out. Fig. 1 Comparison of absolute isotopic measurements of SRM 990 using two radically different approaches to absolute calibration and mass bias corrections.

  18. Quantification of Treatment Effect Modification on Both an Additive and Multiplicative Scale

    PubMed Central

    Girerd, Nicolas; Rabilloud, Muriel; Pibarot, Philippe; Mathieu, Patrick; Roy, Pascal

    2016-01-01

    Background In both observational and randomized studies, associations with overall survival are by and large assessed on a multiplicative scale using the Cox model. However, clinicians and clinical researchers have an ardent interest in assessing the absolute benefit associated with treatments. In older patients, some studies have reported lower relative treatment effects, which might translate into similar or even greater absolute treatment effects given their high baseline hazard for clinical events. Methods The effect of treatment and the effect modification of treatment were respectively assessed using a multiplicative and an additive hazard model in an analysis adjusted for propensity score in the context of coronary surgery. Results The multiplicative model yielded a lower relative hazard reduction with bilateral internal thoracic artery grafting in older patients (hazard ratio for interaction/year = 1.03, 95%CI: 1.00 to 1.06, p = 0.05) whereas the additive model reported a similar absolute hazard reduction with increasing age (delta for interaction/year = 0.10, 95%CI: -0.27 to 0.46, p = 0.61). The number needed to treat derived from the propensity score-adjusted multiplicative model was remarkably similar at the end of the follow-up in patients aged ≤60 and in patients aged >70. Conclusions The present example demonstrates that a lower treatment effect in older patients on a relative scale can nonetheless translate into a similar treatment effect on an additive scale owing to large baseline hazard differences. Importantly, absolute risk reduction, either crude or adjusted, can be calculated from multiplicative survival models. We advocate a wider use of the absolute scale, especially using additive hazard models, to assess treatment effect and treatment effect modification. PMID:27045168
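
    The paper's central point, that a fixed relative effect implies a larger absolute effect at a higher baseline risk, can be illustrated under a simple proportional-hazards assumption. The survival probabilities and hazard ratio below are hypothetical, not the study's estimates:

```python
def arr_and_nnt(s0, hr):
    """Absolute risk reduction and number needed to treat implied by a
    proportional-hazards effect: treated survival = s0 ** hr, where s0 is
    the control-group survival at the time point of interest."""
    arr = s0 ** hr - s0
    return arr, 1 / arr

# same relative effect (HR = 0.8) at two hypothetical baseline risks:
# low-risk (younger) patients vs. high-risk (older) patients
for s0 in (0.95, 0.70):
    arr, nnt = arr_and_nnt(s0, 0.8)
    print(round(arr, 3), round(nnt, 1))
```

    With identical hazard ratios, the high-baseline-hazard group gets a several-fold larger absolute risk reduction and a much smaller NNT, which is the translation from the multiplicative to the additive scale that the authors advocate reporting.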

  19. Anchoring effects in the judgment of confidence: semantic or numeric priming?

    PubMed

    Carroll, Steven R; Petrusic, William M; Leth-Steensen, Craig

    2009-02-01

    Over the last decade, researchers have debated whether anchoring effects are the result of semantic or numeric priming. The present study tested both hypotheses. In four experiments involving a sensory detection task, participants first made a relative confidence judgment by deciding whether they were more or less confident than an anchor value in the correctness of their decision. Subsequently, they expressed an absolute level of confidence. In two of these experiments, the relative confidence anchor values represented the midpoints between the absolute confidence scale values, which were either explicitly numeric or semantic, nonnumeric representations of magnitude. In two other experiments, the anchor values were drawn from a scale modally different from that used to express the absolute confidence (i.e., nonnumeric and numeric, respectively, or vice versa). Regardless of the nature of the anchors, the mean confidence ratings revealed anchoring effects only when the relative and absolute confidence values were drawn from identical scales. Together, the results of these four experiments limit the conditions under which both numeric and semantic priming would be expected to lead to anchoring effects.

  20. Mind and body therapy for fibromyalgia.

    PubMed

    Theadom, Alice; Cropley, Mark; Smith, Helen E; Feigin, Valery L; McPherson, Kathryn

    2015-04-09

    Mind-body interventions are based on the holistic principle that mind, body and behaviour are all interconnected. Mind-body interventions incorporate strategies that are thought to improve psychological and physical well-being, aim to allow patients to take an active role in their treatment, and promote people's ability to cope. Mind-body interventions are widely used by people with fibromyalgia to help manage their symptoms and improve well-being. Examples of mind-body therapies include psychological therapies, biofeedback, mindfulness, movement therapies and relaxation strategies. To review the benefits and harms of mind-body therapies in comparison to standard care and attention placebo control groups for adults with fibromyalgia, post-intervention and at three and six month follow-up. Electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid), EMBASE (Ovid), PsycINFO (Ovid), AMED (EBSCO) and CINAHL (Ovid) were conducted up to 30 October 2013. Searches of reference lists were conducted and authors in the field were contacted to identify additional relevant articles. All relevant randomised controlled trials (RCTs) of mind-body interventions for adults with fibromyalgia were included. Two authors independently selected studies, extracted the data and assessed trials for low, unclear or high risk of bias. Any discrepancy was resolved through discussion and consensus. Continuous outcomes were analysed using mean difference (MD) where the same outcome measure and scoring method was used and standardised mean difference (SMD) where different outcome measures were used. For binary data standard estimation of the risk ratio (RR) and its 95% confidence interval (CI) was used. Seventy-four papers describing 61 trials were identified, with 4234 predominantly female participants. The nature of fibromyalgia varied from mild to severe across the study populations. 
    Twenty-six studies were classified as having a low risk of bias for all domains assessed. The findings of mind-body therapies compared with usual care were prioritised. There is low quality evidence that in comparison to usual care controls psychological therapies have favourable effects on physical functioning (SMD -0.4, 95% CI -0.6 to -0.3, -7.5% absolute change, 2 point shift on a 0 to 100 scale), pain (SMD -0.3, 95% CI -0.5 to -0.2, -3.5% absolute change, 2 point shift on a 0 to 100 scale) and mood (SMD -0.5, 95% CI -0.6 to -0.3, -4.8% absolute change, 3 point shift on a 20 to 80 scale). There is very low quality evidence of more withdrawals in the psychological therapy group in comparison to usual care controls (RR 1.38, 95% CI 1.12 to 1.69, 6% absolute risk difference). There is a lack of evidence of a difference between the number of adverse events in the psychological therapy and control groups (RR 0.38, 95% CI 0.06 to 2.50, 4% absolute risk difference). There was very low quality evidence that biofeedback in comparison to usual care controls had an effect on physical functioning (SMD -0.1, 95% CI -0.4 to 0.3, -1.2% absolute change, 1 point shift on a 0 to 100 scale), pain (SMD -2.6, 95% CI -91.3 to 86.1, -2.6% absolute change) and mood (SMD 0.1, 95% CI -0.3 to 0.5, 1.9% absolute change, less than 1 point shift on a 0 to 90 scale) post-intervention. In view of the quality of evidence we cannot be certain that biofeedback has little or no effect on these outcomes. There was very low quality evidence that biofeedback led to more withdrawals from the study (RR 4.08, 95% CI 1.43 to 11.62, 20% absolute risk difference). 
No adverse events were reported.There was no advantage observed for mindfulness in comparison to usual care for physical functioning (SMD -0.3, 95% CI -0.6 to 0.1, -4.8% absolute change, 4 point shift on a scale 0 to 100), pain (SMD -0.1, CI -0.4 to 0.3, -1.3% absolute change, less than 1 point shift on a 0 to 10 scale), mood (SMD -0.2, 95% CI -0.5 to 0.0, -3.7% absolute change, 2 point shift on a 20 to 80 scale) or withdrawals (RR 1.07, 95% CI 0.67 to 1.72, 2% absolute risk difference) between the two groups post-intervention. However, the quality of the evidence was very low for pain and moderate for mood and number of withdrawals. No studies reported any adverse events.Very low quality evidence revealed that movement therapies in comparison to usual care controls improved pain (MD -2.3, CI -4.2 to -0.4, -23% absolute change) and mood (MD -9.8, 95% CI -18.5 to -1.2, -16.4% absolute change) post-intervention. There was no advantage for physical functioning (SMD -0.2, 95% CI -0.5 to 0.2, -3.4% absolute change, 2 point shift on a 0 to 100 scale), participant withdrawals (RR 1.95, 95% CI 1.13 to 3.38, 11% absolute difference) or adverse events (RR 4.62, 95% CI 0.23 to 93.92, 4% absolute risk difference) between the two groups, however rare adverse events may include worsening of pain.Low quality evidence revealed that relaxation based therapies in comparison to usual care controls showed an advantage for physical functioning (MD -8.3, 95% CI -10.1 to -6.5, -10.4% absolute change) and pain (SMD -1.0, 95% CI -1.6 to -0.5, -3.5% absolute change, 2 point shift on a 0 to 78 scale) but not for mood (SMD -4.4, CI -14.5 to 5.6, -7.4% absolute change) post-intervention. There was no difference between the groups for number of withdrawals (RR 4.40, 95% CI 0.59 to 33.07, 31% absolute risk difference) and no adverse events were reported. 
    Psychological therapies may be effective in improving physical functioning, pain and low mood for adults with fibromyalgia in comparison to usual care controls, but the quality of the evidence is low. Further research on the outcomes of therapies is needed to determine if positive effects identified post-intervention are sustained. The effectiveness of biofeedback, mindfulness, movement therapies and relaxation based therapies remains unclear as the quality of the evidence was very low or low. The small number of trials and inconsistency in the use of outcome measures across the trials restricted the analysis.
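
    The SMD figures quoted throughout this record are standardized mean differences; a simplified sketch (pooled-SD Cohen's d on invented scores, without the Hedges' small-sample correction that Cochrane reviews typically apply) is:

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: difference in group means divided by
    the pooled standard deviation (Cohen's d; illustration only)."""
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                   / (n_t + n_c - 2))
    return (mean_t - mean_c) / sp

# hypothetical pain scores (0-100 scale): therapy group vs. usual care
print(round(smd(42.0, 20.0, 50, 50.0, 20.0, 50), 2))  # -0.4
```

    Dividing by the pooled SD is what lets the review combine trials that measured the same outcome on different scales.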

  1. Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)

    NASA Technical Reports Server (NTRS)

    Geist, J.

    1972-01-01

    A description is given of work performed on a program to develop an electrically calibrated detector (also called an absolute radiometer, absolute detector, or electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.

  2. Energy dispersive X-ray analysis on an absolute scale in scanning transmission electron microscopy.

    PubMed

    Chen, Z; D'Alfonso, A J; Weyland, M; Taplin, D J; Allen, L J; Findlay, S D

    2015-10-01

    We demonstrate absolute scale agreement between the number of X-ray counts in energy dispersive X-ray spectroscopy using an atomic-scale coherent electron probe and first-principles simulations. Scan-averaged spectra were collected across a range of thicknesses with precisely determined and controlled microscope parameters. Ionization cross-sections were calculated using the quantum excitation of phonons model, incorporating dynamical (multiple) electron scattering, which is seen to be important even for very thin specimens. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Method of differential-phase/absolute-amplitude QAM

    DOEpatents

    Dimsdle, Jeffrey William [Overland Park, KS

    2007-07-03

    A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.

  4. Method of differential-phase/absolute-amplitude QAM

    DOEpatents

    Dimsdle, Jeffrey William [Overland Park, KS

    2008-10-21

    A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.

  5. Method of differential-phase/absolute-amplitude QAM

    DOEpatents

    Dimsdle, Jeffrey William [Overland Park, KS

    2009-09-01

    A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.

  6. Method of differential-phase/absolute-amplitude QAM

    DOEpatents

    Dimsdle, Jeffrey William [Overland Park, KS

    2007-07-17

    A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.

  7. Method of differential-phase/absolute-amplitude QAM

    DOEpatents

    Dimsdle, Jeffrey William

    2007-10-02

    A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.

  8. SyMRI of the Brain

    PubMed Central

    Hagiwara, Akifumi; Warntjes, Marcel; Hori, Masaaki; Andica, Christina; Nakazawa, Misaki; Kumamaru, Kanako Kunishima; Abe, Osamu; Aoki, Shigeki

    2017-01-01

    Abstract Conventional magnetic resonance images are usually evaluated using the image signal contrast between tissues and not based on their absolute signal intensities. Quantification of tissue parameters, such as relaxation rates and proton density, would provide an absolute scale; however, these methods have mainly been performed in a research setting. The development of rapid quantification, with scan times in the order of 6 minutes for full head coverage, has provided the prerequisites for clinical use. The aim of this review article was to introduce a specific quantification method and synthesis of contrast-weighted images based on the acquired absolute values, and to present automatic segmentation of brain tissues and measurement of myelin based on the quantitative values, along with application of these techniques to various brain diseases. The entire technique is referred to as “SyMRI” in this review. SyMRI has shown promising results in previous studies when used for multiple sclerosis, brain metastases, Sturge-Weber syndrome, idiopathic normal pressure hydrocephalus, meningitis, and postmortem imaging. PMID:28257339

  9. Calibration of the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Barnes, Robert; Baize, Rosemary; O'Connell, Joseph; Hair, Jason

    2010-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to measurements made while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration, including a link to absolute solar irradiance measurements.

  10. Interpretation of the COBE FIRAS CMBR spectrum

    NASA Technical Reports Server (NTRS)

    Wright, E. L.; Mather, J. C.; Fixsen, D. J.; Kogut, A.; Shafer, R. A.; Bennett, C. L.; Boggess, N. W.; Cheng, E. S.; Silverberg, R. F.; Smoot, G. F.

    1994-01-01

    The cosmic microwave background radiation (CMBR) spectrum measured by the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on NASA's Cosmic Background Explorer (COBE) is indistinguishable from a blackbody, implying stringent limits on energy release in the early universe later than the time t = 1 yr after the big bang. We compare the FIRAS data to previous precise measurements of the cosmic microwave background spectrum and find reasonable agreement. We discuss the implications of the |y| < 2.5 × 10^-5 and |μ| < 3.3 × 10^-4 95% confidence limits found by Mather et al. (1994) on many processes occurring after t = 1 yr, such as explosive structure formation, reionization, and dissipation of small-scale density perturbations. We place limits on models with dust plus Population III stars, or evolving populations of IR galaxies, by directly comparing the Mather et al. spectrum to the model predictions.

  11. Fire Suppression Properties of Very Fine Water Mist

    DTIC Science & Technology

    2005-01-01

    with the University of Heidelberg, developed an in situ oxygen sensor based on tunable diode laser absorption spectroscopy ( TDLAS ) to provide absolute... oxygen number densities in the presence of mist.3 Th e TDLAS oxygen sensor provides real-time, calibra- tion-free, quantitative oxygen ...Determination of Molecular Oxygen Concentrations in Full-Scale Fire Suppression Tests Using TDLAS ,” Proc. Combust. Inst. 29, 353-360 (2002).

  12. 242Pu absolute neutron-capture cross section measurement

    NASA Astrophysics Data System (ADS)

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; Bucher, B.; Chyzh, A.; Bredeweg, T. A.; Baramsai, B.; Couture, A.; Jandel, M.; Mosby, S.; O'Donnell, J. M.; Ullmann, J. L.

    2017-09-01

    The absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. During target fabrication, a small amount of 239Pu was added to the active target so that the absolute scale of the 242Pu(n,γ) cross section could be set according to the known 239Pu(n,f) resonance at En,R = 7.83 eV. The relative scale of the 242Pu(n,γ) cross section covers four orders of magnitude for incident neutron energies from thermal to ≈ 40 keV. The cross section reported in ENDF/B-VII.1 for the 242Pu(n,γ) En,R = 2.68 eV resonance was found to be 2.4% lower than the new absolute 242Pu(n,γ) cross section.
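
    The normalization step described above, setting the absolute scale of a relative cross-section curve from one point whose absolute value is known (here the reference resonance), amounts to a single scale factor. A toy sketch with arbitrary numbers, not the published 239Pu/242Pu values:

```python
def to_absolute(relative_yield, ref_relative, ref_sigma):
    """Rescale a relative yield curve to absolute units using one point
    (a reference resonance) with a known absolute cross section.
    All numbers here are invented for illustration."""
    k = ref_sigma / ref_relative
    return [k * y for y in relative_yield]

relative = [10.0, 250.0, 40.0, 2.0]   # arbitrary relative yields vs. energy
sigma_abs = to_absolute(relative, ref_relative=250.0, ref_sigma=500.0)
print(sigma_abs)  # [20.0, 500.0, 80.0, 4.0]
```

    In the experiment the admixed 239Pu plays the role of the reference: its well-known (n,f) resonance pins down the factor k, which then carries the whole relative 242Pu(n,γ) curve onto an absolute scale.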

  13. Changing ethnic inequalities in mortality in New Zealand over 30 years: linked cohort studies with 68.9 million person-years of follow-up.

    PubMed

    Disney, George; Teng, Andrea; Atkinson, June; Wilson, Nick; Blakely, Tony

    2017-01-01

    Internationally, ethnic inequalities in mortality within countries are increasingly recognized as a public health concern. But few countries have data to monitor such inequalities. We aimed to provide a detailed description of ethnic inequalities (Māori [indigenous], Pacific, and European/Other) in mortality for a country with high-quality ethnicity data, using both standard and novel visualization methods. Cohort studies of the entire New Zealand population were conducted, using probabilistically-linked Census and mortality data from 1981 to 2011 (68.9 million person years). Absolute (standardized rate difference) and relative (standardized rate ratio) inequalities were calculated, in 1-74-year-olds, for Māori and Pacific peoples in comparison to European/Other. All-cause mortality rates were highest for Māori, followed by Pacific peoples then European/Other, and declined in all three ethnic groups over time. Pacific peoples experienced the slowest annual percentage fall in mortality rates, then Māori, with European/Other having the highest percentage falls - resulting in widening relative inequalities. Absolute inequalities, however, for both Māori and Pacific males compared to European/Other have been falling since 1996. But for females, only Māori absolute inequalities (compared with European/Other) have been falling. Regarding cause of death, cancer is becoming a more important contributor than cardiovascular disease (CVD) to absolute inequalities, especially for Māori females. We found declines in all-cause mortality rates, over time, for each ethnic group of interest. Ethnic mortality inequalities are generally stable or even falling in absolute terms, but have increased on a relative scale. The drivers of these inequalities in mortality are transitioning over time, away from CVD to cancer and diabetes; such transitions are likely in other countries, and warrant further research. 
To address these inequalities, policymakers need to enhance prevention activities and health care delivery, but also support wider improvements in educational achievement and socioeconomic position for highest need populations.

  15. Pharmacometric Analysis of the Relationship Between Absolute Lymphocyte Count and Expanded Disability Status Scale and Relapse Rate, Efficacy End Points, in Multiple Sclerosis Trials.

    PubMed

    Novakovic, A M; Thorsted, A; Schindler, E; Jönsson, S; Munafo, A; Karlsson, M O

    2018-05-10

    The aim of this work was to assess the relationship between the absolute lymphocyte count (ALC) and two efficacy endpoints, disability (as measured by the Expanded Disability Status Scale [EDSS]) and the occurrence of relapses, in patients with relapsing-remitting multiple sclerosis. Data for ALC, EDSS, and relapse rate were available from 1319 patients receiving placebo and/or cladribine tablets. Pharmacodynamic models were developed to characterize the time course of the endpoints. ALC-related measures were then evaluated as predictors of the efficacy endpoints. EDSS data were best fitted by a model in which the logit-linear disease progression is affected by the dynamics of ALC change from baseline. Relapse rate data were best described by the Weibull hazard function, and the ALC change from baseline was also found to be a significant predictor of time to relapse. The presented models show that once cladribine-exposure-driven, ALC-derived measures are included in the model, the need for drug-effect components diminishes (EDSS) or disappears (relapse rate). This simplifies the models and, in principle, makes them mechanism specific rather than drug specific. Having a reliable mechanism-specific model would allow leveraging historical data across compounds, to support decision making in drug development and possibly shorten the time to market. © 2018, The American College of Clinical Pharmacology.
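    As a rough illustration of the Weibull time-to-relapse idea described above, the sketch below samples relapse times from a proportional-hazards Weibull model in which a covariate (here, ALC change from baseline) rescales the hazard. This is a hypothetical sketch, not the authors' model; the function names and all parameter values (lam, k, beta) are illustrative.

```python
import math
import random

def weibull_hazard(t, lam, k):
    """Baseline Weibull hazard h0(t) = (k/lam) * (t/lam)**(k-1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def sample_relapse_time(lam, k, beta, alc_change, rng):
    """Sample a time to relapse by inverse-transform sampling under a
    proportional-hazards Weibull model: h(t|x) = h0(t) * exp(beta * x),
    where x is the ALC change from baseline. Solving S(t) = u gives
    t = lam * exp(-beta * x / k) * (-ln u)**(1/k)."""
    u = rng.random()
    return lam * math.exp(-beta * alc_change / k) * (-math.log(u)) ** (1.0 / k)
</imports>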

  16. Universal scaling and nonlinearity of aggregate price impact in financial markets.

    PubMed

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2018-01-01

    How and why stock prices move is a centuries-old question still not answered conclusively. More recently, attention shifted to higher frequencies, where trades are processed piecewise across different time scales. Here we reveal that price impact has a universal nonlinear shape for trades aggregated on any intraday scale. Its shape varies little across instruments, but drastically different master curves are obtained for order-volume and -sign impact. The scaling is largely determined by the relevant Hurst exponents. We further show that extreme order-flow imbalance is not associated with large returns. To the contrary, it is observed when the price is pinned to a particular level. Prices move only when there is sufficient balance in the local order flow. In fact, the probability that a trade changes the midprice falls to zero with increasing (absolute) order-sign bias along an arc-shaped curve for all intraday scales. Our findings challenge the widespread assumption of linear aggregate impact. They imply that market dynamics on all intraday time scales are shaped by correlations and bilateral adaptation in the flows of liquidity provision and taking.
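    The aggregate impact curve discussed above can be estimated with a simple binning procedure: aggregate trades over windows, compute the order-sign imbalance and the aggregate return per window, then average returns within imbalance bins. The sketch below is purely illustrative (not the authors' code), and the function name and parameters are hypothetical.

```python
def aggregate_impact_curve(signs, tick_returns, window, n_bins):
    """Bin aggregation windows by order-sign imbalance and average the
    aggregate price return in each bin. signs are +/-1 trade signs;
    the resulting curve is the 'aggregate impact' whose nonlinear,
    saturating shape is reported in the abstract."""
    pts = []
    for i in range(0, len(signs) - window + 1, window):
        imb = sum(signs[i:i + window]) / window          # imbalance in [-1, 1]
        ret = sum(tick_returns[i:i + window])            # aggregate return
        pts.append((imb, ret))
    bins = [[] for _ in range(n_bins)]
    for imb, ret in pts:
        idx = min(int((imb + 1.0) / 2.0 * n_bins), n_bins - 1)
        bins[idx].append(ret)
    return [sum(b) / len(b) if b else None for b in bins]
```

    On real data the interesting finding is precisely that this curve is *not* linear in the imbalance; with the toy linear input below it is, which is the "widespread assumption" the paper challenges.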

  18. Teaching the Double Layer.

    ERIC Educational Resources Information Center

    Bockris, J. O'M.

    1983-01-01

    Suggests various methods for teaching the double layer in electrochemistry courses. Topics addressed include measuring change in absolute potential difference (PD) at interphase, conventional electrode potential scale, analyzing absolute PD, metal-metal and overlap electron PDs, accumulation of material at interphase, thermodynamics of electrified…

  19. Advances in the Metrology of Absolute Value Assignments to Isotopic Reference Materials: Consequences from the Avogadro Project

    NASA Astrophysics Data System (ADS)

    Vocke, Robert; Rabb, Savelas

    2015-04-01

    All isotope amount ratios (hereafter referred to as isotope ratios) produced and measured on any mass spectrometer are biased. This unfortunate situation results mainly from the physical processes in the source area where ions are produced. Because the ionized atoms in poly-isotopic elements have different masses, such processes are typically mass dependent and lead to what is commonly referred to as mass fractionation (for thermal ionization and electron impact sources) and mass bias (for inductively coupled plasma sources). This biasing process produces a measured isotope ratio that is either larger or smaller than the "true" ratio in the sample. This has led to the development of numerous fractionation "laws" that seek to correct for these effects, many of which are not based on the physical processes giving rise to the biases. The search for tighter and reproducible precisions has led to two isotope ratio measurement systems that exist side-by-side. One still seeks to measure "absolute" isotope ratios while the other utilizes an artifact-based measurement system called a delta-scale. The common element between these two measurement systems is the utilization of isotope reference materials (iRMs). These iRMs are used to validate a fractionation "law" in the former case and function as a scale anchor in the latter. Many value assignments of iRMs are based on "best measurements" by the original groups producing the reference material, a not entirely satisfactory approach. Other iRMs, with absolute isotope ratio values, have been produced by calibrated measurements following the Atomic Weight (AW) approach pioneered by NBS nearly 50 years ago. Unfortunately, the AW approach is not capable of calibrating the new generation of iRMs to sufficient precision. So how do we get iRMs with isotope ratios of sufficient precision and without bias? 
Such a focus is not to denigrate the extremely precise delta-scale measurements presently being made on non-traditional and traditional stable isotope systems. But even absolute isotope ratio measurements have an important role to play in delta-scale schemes. Highly precise and unbiased measurements of the artifact anchor for any scale facilitate the replacement of that scale's anchor once the initial supply of the iRM is exhausted. Absolute isotope ratio measurements of artifacts at the positive and negative extremes of a delta-scale will allow the appropriate assignment of delta-values to these normalizing iRMs, thereby minimizing any scale contractions or expansions to either side of the anchor artifact. And finally, absolute values for critical iRMs will also allow delta-scale results to be used in other scientific disciplines that employ other units of measure. Precise absolute isotope ratios of Si have been one of the consequences of the Avogadro Project (an international effort to replace the original kilogram artifact with a natural constant, the Planck constant). We will present the results of applying such measurements to the principal iRMs for the Si isotope system (SRM 990, Big Batch and Diatomite) and its consequences for their delta-29Si and delta-30Si values.

  20. An abrupt centennial-scale drought event and mid-holocene climate change patterns in monsoon marginal zones of East Asia.

    PubMed

    Li, Yu; Wang, Nai'ang; Zhang, Chengqi

    2014-01-01

    The mid-latitudes of East Asia are characterized by the interaction between the Asian summer monsoon and the westerly winds. Understanding long-term climate change in the marginal regions of the Asian monsoon is critical for understanding the millennial-scale interactions between the Asian monsoon and the westerly winds. Abrupt climate events are always associated with changes in large-scale circulation patterns; therefore, investigations into abrupt climate changes provide clues for responses of circulation patterns to extreme climate events. In this paper, we examined the time scale and mid-Holocene climatic background of an abrupt dry mid-Holocene event in the Shiyang River drainage basin in the northwest margin of the Asian monsoon. Mid-Holocene lacustrine records were collected from the middle reaches and the terminal lake of the basin. Using radiocarbon and OSL ages, a centennial-scale drought event, which is characterized by a sand layer in lacustrine sediments from both the middle and lower reaches of the basin, was absolutely dated between 8.0-7.0 cal kyr BP. Grain size data suggest an abrupt decline in lake level and a dry environment in the middle reaches of the basin during the dry interval. Previous studies have shown mid-Holocene drought events in other parts of the monsoon marginal zones; however, their chronologies are not strong enough to investigate the mechanism. According to the absolutely dated records, we proposed a new hypothesis that the mid-Holocene dry interval can be related to the weakening Asian summer monsoon and the relatively arid environment in arid Central Asia. Furthermore, abrupt dry climatic events are directly linked to the basin-wide effective moisture change in semi-arid and arid regions. Effective moisture is affected by basin-wide precipitation, evapotranspiration, lake surface evaporation and other geographical settings. 
As a result, the time scale of the dry interval may vary with location due to different geographical features.

  2. Calibration of historical geomagnetic observations from Prague-Klementinum

    NASA Astrophysics Data System (ADS)

    Hejda, Pavel

    2015-04-01

    The long tradition of geomagnetic observations on Czech territory dates back to 1839, when regular observations were started by Karl Kreil at the Astronomical Observatory Prague-Klementinum. Observations were carried out manually, at the beginning more than ten times per day; the frequency later decreased to five daily observations. Around the turn of the century the observations began to be disturbed by increasing urban magnetic noise, and the observatory was closed down in 1926. The variation measurements were complemented by absolute measurements carried out several times per year. Thanks to the diligence and carefulness of Karl Kreil and his followers, all results were printed in the yearbooks Magnetische und meteorologische Beobachtungen zu Prag and have thus been preserved to the present day. The entire collection is kept at the Central Library of the Czech Academy of Sciences. As the oldest geomagnetic data have recently been recognized as an important source of information for space weather studies, digitization and analysis of the data have now begun. Although all volumes have been scanned with the OCR option, the low quality of the original books does not allow an automatic transformation to digital form. The data were typed by hand into Excel files with a primary check and further processed. Variation data from 1839 to 1871 were published in measured units (scale divisions). Their reduction to physical units was not as straightforward as in modern observatories, for several reasons: (i) the large, heavy magnetic rods were not as stable as modern systems; (ii) the absolute measurements of the horizontal component were carried out by the ingenious but rather complicated Gauss method; (iii) the intervals between absolute measurements were on the scale of months, so occasional errors were not recognized in a timely manner. The presentation will discuss several methods and give examples of how to cope with these problems.

  3. Characterizing the Severe Turbulence Environments Associated With Commercial Aviation Accidents. Part 1; 44 Case Study Synoptic Observational Analyses

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Huffman, Allan W.; Lux, Kevin M.; Charney, Joseph J.; Riordan, Allan J.; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)

    2002-01-01

    An analysis of 44 case studies of the large-scale atmospheric structure associated with the development of accident-producing aircraft turbulence is described. Categorization is a function of the accident location, altitude, time of year, time of day, and the turbulence category used to classify the disturbances. National Centers for Environmental Prediction reanalysis data sets and satellite imagery are employed to diagnose synoptic-scale predictor fields associated with the large-scale environment preceding severe turbulence. These analyses indicate a predominance of severe accident-producing turbulence within the entrance region of a jet stream at the synoptic scale. Typically, a flow-curvature region is just upstream within the jet entrance region, convection is within 100 km of the accident, vertical motion is upward, absolute vorticity is low, vertical wind shear is increasing, and horizontal cold advection is substantial. The most consistent predictor is upstream flow curvature; nearby convection is the second most frequent predictor.

  4. Temporal acceleration of spatially distributed kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment in simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
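    The core binomial τ-leap idea can be sketched for a single first-order channel A → B (this is a minimal, hypothetical illustration, not the paper's spatially adaptive scheme; function and parameter names are invented):

```python
import math
import random

def binomial_tau_leap_step(n_a, rate_const, tau, rng):
    """One binomial tau-leap update for the first-order channel A -> B.
    Each A molecule reacts during tau with probability p = 1 - exp(-c*tau),
    so the number of firings is Binomial(n_a, p) and can never exceed the
    available population (unlike Poisson tau-leaping, which can overshoot)."""
    p = 1.0 - math.exp(-rate_const * tau)
    fired = sum(1 for _ in range(n_a) if rng.random() < p)
    return n_a - fired, fired
```

    Bounding the firings by the current population is what gives the binomial variant its stability advantage over Poisson leaping when propensities are large relative to the populations.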

  5. Impact cratering and regolith dynamics. [on moon

    NASA Technical Reports Server (NTRS)

    Hoerz, F.

    1977-01-01

    The most recent models concerning mechanical aspects of lunar regolith dynamics related to impact cratering use probabilistic approaches to account for the randomness of the meteorite environment in both space and time. Accordingly, the absolute regolith thickness is strictly a function of total bombardment intensity, and the absolute regolith growth rate is nonlinear through geologic time. Regoliths of increasing median thickness will have larger and larger proportions of more and more deep-seated materials. An especially active zone of reworking, about 1 mm deep, has been established on the lunar surface. With increasing depth, the probability of excavation and regolith turnover decreases very rapidly. Thus small-scale stratigraphy - observable in lunar core materials - is perfectly compatible with regolith gardening, though any such stratigraphy does not necessarily present a complete record of the regolith's depositional history. At present, the lifetimes of exposed lunar rocks against comminution by impact processes can be modeled; it appears that catastrophic rupture dominates over single-particle abrasion.

  6. Impact of Stock Market Structure on Intertrade Time and Price Dynamics

    NASA Astrophysics Data System (ADS)

    Yuen, Ainslie; Ivanov, Plamen Ch.

    2005-08-01

    The NYSE and NASDAQ stock markets have very different structures and there is continuing controversy over whether differences in stock price behaviour are due to market structure or company characteristics. As the influence of market structure on stock prices may be obscured by exogenous factors such as demand and supply, we hypothesize that modulation of the flow of transactions due to market operations may carry a stronger imprint of the internal market mechanism. We analyse times between consecutive transactions (ITT) for NYSE and NASDAQ stocks, and we relate the dynamical properties of the ITT with those of the corresponding price fluctuations. We find a robust scale-invariant temporal organisation in the ITT of stocks which is independent of individual company characteristics and industry sector, but which depends on market structure. We find that stocks registered on the NASDAQ exhibit stronger correlations in their transaction timing within a trading day, compared with NYSE stocks. Further, we find that companies that transfer from the NASDAQ to the NYSE show a reduction in the correlation strength of transaction timing within a trading day, after the move, suggesting influences of market structure. Surprisingly, we also observe that stronger power-law correlations in the ITT are coupled with stronger power-law correlations in absolute price returns and higher price volatility, suggesting a strong link between the dynamical properties of ITT and the corresponding price fluctuations over a broad range of time scales. Comparing the NYSE and NASDAQ, we demonstrate that the higher correlations we find in ITT for NASDAQ stocks are matched by higher correlations in absolute price returns and by higher volatility, suggesting that market structure may affect price behaviour through information contained in transaction timing.

  7. Population Growth, Energy Use, and Pollution: Understanding the Driving Forces of Global Change. Hands-On! Developing Active Learning Modules on the Human Dimensions of Global Change.

    ERIC Educational Resources Information Center

    Kuby, Michael

    Since the beginning of the scientific revolution in the 1700s, the absolute scale of the human economy has increased many times over, and, with it, the impact on the natural environment. This learning module's activities introduce the student to linkages among population growth, energy use, level of economic and technological development and their…

  8. Aircraft noise-induced awakenings are more reasonably predicted from relative than from absolute sound exposure levels.

    PubMed

    Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda

    2013-11-01

    Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening.
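    The study's central move, rescaling absolute SELs into standard deviates of the local airport SEL distribution, amounts to a z-score transform. A minimal sketch (hypothetical data and function name):

```python
import statistics

def relative_sels(sels):
    """Express absolute sound exposure levels (dB) as standard deviates
    of the local SEL distribution: z = (SEL - mean) / sd. Awakening
    probability is argued to track z rather than the absolute SEL."""
    mu = statistics.fmean(sels)
    sd = statistics.stdev(sels)
    return [(x - mu) / sd for x in sels]
```

    On this scale, an event one standard deviation above its local distribution gets the same relative level at a quiet airport and a busy one, even though the absolute SELs differ, which is how the transform absorbs airport-to-airport differences in habituation.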

  9. Synthetic isotope mixtures for the calibration of isotope amount ratio measurements of carbon

    NASA Astrophysics Data System (ADS)

    Russe, K.; Valkiers, S.; Taylor, P. D. P.

    2004-07-01

    Synthetic isotope mixtures for the calibration of carbon isotope amount ratio measurements have been prepared by mixing carbon tetrafluoride highly enriched in 13C with carbon tetrafluoride depleted in 13C. Mixing procedures based on volumetry and gravimetry are described. The mixtures served as primary measurement standards for the calibration of isotope amount ratio measurements of the Isotopic Reference Materials PEF1, NBS22 and USGS24. Thus SI-traceable measurements of absolute carbon isotope amount ratios have been performed for the first time without any hypothesis needed for a correction of oxygen isotope abundances, such as is the case for measurements on carbon dioxide. As a result, "absolute" carbon isotope amount ratios determined via carbon tetrafluoride have smaller uncertainties than those published for carbon dioxide. From the measurements of the Reference Materials concerned, the absolute carbon isotope amount ratio of Vienna Pee Dee Belemnite (VPDB), the hypothetical material upon which the scale for relative carbon isotope ratio measurements is based, was calculated to be R13(VPDB) = (11 101 ± 16) × 10⁻⁶.
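    The link between such an absolute ratio and the delta-scale used in routine work is the standard per-mil definition, δ13C = (R_sample/R_VPDB − 1) × 1000. A small sketch using the R13(VPDB) value quoted in the abstract (taken here as-is; the function names are illustrative):

```python
R13_VPDB = 11101e-6  # absolute 13C/12C ratio of VPDB as quoted in the abstract

def delta13c_permil(r_sample, r_ref=R13_VPDB):
    """Convert an absolute 13C/12C isotope amount ratio to the delta
    scale (per mil) relative to VPDB: delta = (R/R_ref - 1) * 1000."""
    return (r_sample / r_ref - 1.0) * 1000.0

def ratio_from_delta(delta, r_ref=R13_VPDB):
    """Inverse mapping: recover the absolute ratio from a delta value."""
    return r_ref * (1.0 + delta / 1000.0)
```

    An anchor value with a smaller uncertainty tightens every absolute ratio recovered from delta-scale results through this mapping, which is the practical payoff the paper claims.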

  10. Measuring Growth with Vertical Scales

    ERIC Educational Resources Information Center

    Briggs, Derek C.

    2013-01-01

    A vertical score scale is needed to measure growth across multiple tests in terms of absolute changes in magnitude. Since the warrant for subsequent growth interpretations depends upon the assumption that the scale has interval properties, the validation of a vertical scale would seem to require methods for distinguishing interval scales from…

  11. Absolute measurement of subnanometer scale vibration of cochlear partition of an excised guinea pig cochlea using spectral-domain phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Choudhury, Niloy; Jacques, Steven L.; Wang, Ruikang K.; Chen, Fangyi; Zha, Dingjun; Nuttall, Alfred L.

    2012-01-01

    Direct measurement of absolute vibration parameters from different locations within the mammalian organ of Corti is crucial for understanding hearing mechanics, such as how sound propagates through the cochlea and how sound stimulates the vibration of various structures of the cochlea, namely the basilar membrane (BM), reticular lamina, outer hair cells, and tectorial membrane (TM). In this study we demonstrate the feasibility of a modified phase-sensitive spectral-domain optical coherence tomography system to provide subnanometer-scale vibration information from multiple angles within the imaging beam. The system has the potential to provide depth-resolved absolute vibration measurement of tissue microstructures from each of the delay-encoded vibration images, with a noise floor of ~0.3 nm at 200 Hz.

  12. Relative and absolute reliability of measures of linoleic acid-derived oxylipins in human plasma.

    PubMed

    Gouveia-Figueira, Sandra; Bosson, Jenny A; Unosson, Jon; Behndig, Annelie F; Nording, Malin L; Fowler, Christopher J

    2015-09-01

    Modern analytical techniques allow for the measurement of oxylipins derived from linoleic acid in biological samples. Most validatory work has concerned extraction techniques, repeated analysis of aliquots from the same biological sample, and the influence of external factors such as diet and heparin treatment upon their levels, whereas less is known about the relative and absolute reliability of measurements undertaken on different days. A cohort of nineteen healthy males was used, with samples taken at the same time of day on two occasions, at least 7 days apart. Relative reliability was assessed using Lin's concordance correlation coefficients (CCC) and intraclass correlation coefficients (ICC). Absolute reliability was assessed by Bland-Altman analyses. Nine linoleic acid oxylipins were investigated. ICC and CCC values ranged from acceptable (0.56 [13-HODE]) to poor (near zero [9(10)- and 12(13)-EpOME]). Bland-Altman limits of agreement were in general quite wide, ranging from ±0.5 (12,13-DiHOME) to ±2 (9(10)-EpOME; log10 scale). It is concluded that the relative reliability of linoleic acid-derived oxylipins varies between lipids, with compounds such as the HODEs showing better relative reliability than compounds such as the EpOMEs. These differences should be kept in mind when designing and interpreting experiments correlating plasma levels of these lipids with factors such as age, body mass index, rating scales, etc. Copyright © 2015 Elsevier Inc. All rights reserved.
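    The Bland-Altman analysis used above for absolute reliability reduces to the mean of the paired day-to-day differences (bias) and 95% limits of agreement at bias ± 1.96 × SD of those differences. A minimal sketch with hypothetical data:

```python
import statistics

def bland_altman_limits(day1, day2):
    """Bland-Altman analysis for paired repeated measurements:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias +/- 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(day1, day2)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    For log10-transformed oxylipin levels, limits of ±2 (as reported for 9(10)-EpOME) mean a repeat measurement can differ by up to a factor of 100, which is why the authors caution against over-interpreting single-day values.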

  13. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and Earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.

  14. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts' law.

    PubMed

    Willett, Francis R; Murphy, Brian A; Memberg, William D; Blabe, Christine H; Pandarinath, Chethan; Walter, Benjamin L; Sweet, Jennifer A; Miller, Jonathan P; Henderson, Jaimie M; Shenoy, Krishna V; Hochberg, Leigh R; Kirsch, Robert F; Ajiboye, A Bolu

    2017-04-01

    Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: [Formula: see text] (where MT is movement time, D is target distance, R is target radius, and [Formula: see text] are parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio [Formula: see text]) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to [Formula: see text]). Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. We found that movement times were better described by the equation [Formula: see text], which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the [Formula: see text] ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. 
The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
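The scale-invariance argument can be illustrated numerically. The sketch below uses the standard Shannon form of Fitts' law and a hypothetical radius-sensitive stand-in for the paper's alternative model; the actual fitted equation and parameter values appear only in the article:

```python
import math

def fitts_mt(D, R, a=0.2, b=0.3):
    """Standard Fitts' law (Shannon form): MT depends only on the ratio D/R.
    a and b are illustrative values, not parameters fitted in the paper."""
    return a + b * math.log2(D / R + 1)

def ibci_mt(D, R, a=0.2, b=0.3, c=1.0):
    """Hypothetical stand-in for the paper's alternative model: movement time
    grows as the absolute target radius R shrinks, independently of D."""
    return a + b * math.log2(D / R + 1) + c / R

# Doubling both D and R leaves Fitts' law unchanged (scale invariance)...
print(fitts_mt(10, 2), fitts_mt(20, 4))   # equal
# ...but not the radius-sensitive model: small absolute targets stay slow.
print(ibci_mt(10, 2), ibci_mt(20, 4))     # differ
```

This is exactly the property the abstract describes: with signal-independent noise, targets below a certain absolute size are hard to acquire regardless of the D/R ratio.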

  15. Beta Pic-like Circumstellar Gas Disk Around 2 And

    NASA Technical Reports Server (NTRS)

    Cheng, Patricia

    2003-01-01

    This grant was awarded to support the data analysis and publication of results from our project entitled "Beta Pic-like Circumstellar Gas Disk Around 2 And". We proposed to obtain FUSE observations of 2 And and study the characteristics and origin of its circumstellar gas. We observed 2 Andromedae with FUSE on 3-4 July 2001 in 11 exposures with a total exposure time of 21,289 seconds through the LWRS aperture. Our data were calibrated with Version 1.8.7 of the CALFUSE pipeline processing software. We corrected the wavelength scale for the heliocentric velocity error in this version of the CALFUSE software. The relative accuracy of the calibrated wavelength scale is +/- 9 km/s. We produced a co-added spectrum in the LiF 1B and LiF 2A channels (covering the 1100 to 1180 Å region) by cross-correlating the 11 individual exposures and computing an exposure-time-weighted average flux. The final co-added spectra have a signal-to-noise ratio in the stellar continuum near 1150 Å of about 20. To obtain an absolute wavelength calibration, we cross-correlated our observed spectra with a model spectrum to obtain the best fit for the photospheric C I lines. Because the photospheric lines are very broad, this yields an absolute accuracy for the wavelength scale of approximately +/- 15 km/s. We then rebinned 5 original pixels to yield the optimal sampling of 0.033 Å for each new pixel, because the calibrated spectra oversample the spectral resolution for FUSE+LWRS (R = 20,000 +/- 2,000).

  16. Quantifying the Behavior of Stock Correlations Under Market Stress

    PubMed Central

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242

  17. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping, and its color contours should correspond better to current corneal topography standards to improve clinical interpretation. This was a retrospective analysis of wavefront error data from historic ophthalmic medical records. Topographic Modeling System examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. The outcomes were higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 µm with a contour interval of 0.5 µm. All aberrations in the categorical database were plotted with no loss of the clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
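A minimal sketch of such an absolute (fixed-range) scale, assuming only the ±6.5 µm range and 0.5 µm contour interval reported above; the color assignment itself is omitted:

```python
# Fixed-range scale parameters taken from the abstract above.
LIMIT, STEP = 6.5, 0.5   # µm

def contour_bin(error_um):
    """Snap a wavefront error (µm) to its contour level; values beyond the
    range are clipped, so extreme aberrations stay legible at the scale ends
    rather than rescaling the whole map (the floating-scale behavior)."""
    clipped = max(-LIMIT, min(LIMIT, error_um))
    return round(clipped / STEP) * STEP

print(contour_bin(0.26))   # -> 0.5
print(contour_bin(-9.0))   # -> -6.5 (clipped to the scale end)
```

Because the mapping from error to contour never changes between patients, two maps rendered this way are directly comparable, which is the clinical point of an absolute scale.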

  18. African Cenozoic hotspot tectonism: new insights from continent-scale body-wave tomography

    NASA Astrophysics Data System (ADS)

    Bastow, I. D.; Boyce, A.; Caunt, E.; Guilloud De Courbeville, J.; Desai, S.; Kounoudis, R.; Golos, E. M.; Burdick, S.; van der Hilst, R. D.

    2017-12-01

    The African plate is an ideal study locale for mantle plumes and Cenozoic hotspot tectonism. On the eastern side of the continent, the uplifted East African and Ethiopian plateaus, and the 30 Ma Ethiopian Traps, are widely considered to be the result of the African Superplume: a broad thermochemical anomaly that originates below southern Africa. Precisely where and how the superplume traverses the mantle transition zone is debated, however. On the western side of the continent, the Cameroon Volcanic Line is a hotspot track with no age progression; it is less easily attributed to the effects of a mantle plume. Central to our understanding of these issues is an improved picture of mantle seismic structure. Body-wave studies of African mantle wave-speed structure are typically limited to regional relative arrival-time studies that utilize data from temporary seismograph networks of aperture less than 1000 km. The resulting tomographic images are of higher resolution than continent-scale surface-wave models, but anomaly amplitudes cannot be compared from region to region using the relative arrival-time approach: the 0% contour in each region refers to the regional, not the global, mean. The challenge is thus to incorporate the often-noisy body-wave data from temporary seismograph networks into a continent-scale absolute delay-time model. We achieve this using the new Absolute Arrival-time Recovery Method (AARM) of Boyce et al. (2017) and the tomographic inversion approach described by Li et al. (2008). We invert for mantle wavespeed structure using data recorded since 1990 by temporary networks in the Atlas Mountains, Cameroon, South Africa, the East African Rift system, Ethiopia, and Madagascar. Our model is well resolved to lower-mantle depths beneath these temporary networks and offers the most detailed picture yet of mantle wavespeed structure beneath Africa. 
The contrast between East African and Cameroon mantle structure suggests multiple development mechanisms for hotspot tectonism across the African continent.
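Why relative arrival-time residuals cannot be compared across regions can be shown in a few lines; the delay values below are invented for illustration, not measured data:

```python
import statistics

# Illustrative (invented) absolute P-wave delay times in seconds for two
# hypothetical regional networks; positive = slow (hot) mantle.
east_africa = [1.8, 2.1, 2.4, 2.0]    # broadly slow region
cameroon = [0.1, 0.3, -0.2, 0.2]      # near the global mean

def relative(residuals):
    # Relative arrival-time processing subtracts the regional mean, so the
    # zero level refers to the regional, not the global, reference.
    m = statistics.fmean(residuals)
    return [r - m for r in residuals]

# After demeaning, the two regions look similar in amplitude: the large
# absolute offset of the first region is no longer recoverable.
print(relative(east_africa))
print(relative(cameroon))
```

Recovering that lost inter-region offset from noisy temporary-network data is exactly what an absolute arrival-time method such as AARM is designed to do.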

  19. Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    1992-08-01

    A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.

  20. Developments in the realization of diffuse reflectance scales at NPL

    NASA Astrophysics Data System (ADS)

    Chunnilall, Christopher J.; Clarke, Frank J. J.; Shaw, Michael J.

    2005-08-01

    The United Kingdom scales for diffuse reflectance are realized using two primary instruments. In the 360 nm to 2.5 μm spectral region the National Reference Reflectometer (NRR) realizes absolute measurement of reflectance and radiance factor by goniometric measurements. Hemispherical reflectance scales are obtained through the spatial integration of these goniometric measurements. In the mid-infrared region (2.5-55 μm) the hemispherical reflectance scale is realized by the Absolute Hemispherical Reflectometer (AHR). This paper describes some of the uncertainties resulting from errors in aligning the NRR and from non-ideality in sample topography, together with its use to carry out measurements in the 1-1.6 μm region. The AHR has previously been used with grating spectrometers, and has now been coupled to a Fourier transform spectrometer.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewitte, Steven; Nevens, Stijn

    We present the composite measurements of total solar irradiance (TSI) as measured by an ensemble of space instruments. The measurements of the individual instruments are put on a common absolute scale, and their quality is assessed by intercomparison. The composite time series is the average of all available measurements. From 1984 April to the present the TSI shows a variation in phase with the 11 yr solar cycle and no significant changes of the quiet-Sun level in between the three covered solar minima.
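A toy sketch of compositing instruments onto a common absolute scale; the series, offsets, and overlap handling here are invented for illustration, and the paper's actual procedure may differ:

```python
import statistics

# Invented annual-mean TSI series (W/m^2) from two hypothetical instruments
# on different absolute scales.
inst_a = {2000: 1366.1, 2001: 1366.4, 2002: 1366.3}
inst_b = {2001: 1361.2, 2002: 1361.1, 2003: 1360.9}

# Shift instrument B onto A's absolute scale using the mean difference
# over their overlapping years.
overlap = inst_a.keys() & inst_b.keys()
offset = statistics.fmean(inst_a[y] - inst_b[y] for y in overlap)
inst_b_adj = {y: v + offset for y, v in inst_b.items()}

# Composite = average of all available (rescaled) measurements per year.
years = sorted(inst_a.keys() | inst_b_adj.keys())
composite = {y: statistics.fmean(v for v in (inst_a.get(y), inst_b_adj.get(y))
                                 if v is not None)
             for y in years}
print(composite)
```

The intercomparison over the overlap period is also where the quality assessment mentioned above would happen: the scatter of the per-year differences, not just their mean, indicates how well the instruments agree.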

  2. The impact of the resolution of meteorological datasets on catchment-scale drought studies

    NASA Astrophysics Data System (ADS)

    Hellwig, Jost; Stahl, Kerstin

    2017-04-01

    Gridded meteorological datasets provide the basis to study drought at a range of scales, including catchment-scale drought studies in hydrology. They are readily available to study past weather conditions and often serve real-time monitoring as well. As these datasets differ in spatial/temporal coverage and spatial/temporal resolution, most studies face a tradeoff between these features. Our investigation examines whether biases occur when studying drought at the catchment scale with low-resolution input data. To that end, we compare the datasets HYRAS (covering Central Europe, 1x1 km grid, daily data, 1951-2005), E-OBS (Europe, 0.25° grid, daily data, 1950-2015), and GPCC (global, 0.5° grid, monthly data, 1901-2013). Generally, biases in precipitation increase with decreasing resolution, and the most important deviations occur during summer. In the low mountain ranges of Central Europe, the coarser-resolution datasets (E-OBS, GPCC) overestimate dry days and underestimate total precipitation because they cannot resolve the high spatial variability. However, relative measures like the correlation coefficient reveal good consistency of dry and wet periods, both for absolute precipitation values and for standardized indices like the Standardized Precipitation Index (SPI) or the Standardized Precipitation Evapotranspiration Index (SPEI). In particular, the most severe droughts derived from the different datasets match very well. These results indicate that absolute values from coarse-resolution datasets can be critical to use for assessing hydrological drought at the catchment scale, whereas relative measures for identifying periods of drought are more trustworthy. Studies on drought that downscale meteorological data should therefore carefully consider their data needs and focus on relative measures for dry periods if these are sufficient for the task.
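The standardized indices mentioned above illustrate why relative measures are robust to resolution-dependent biases. Below is a simplified, distribution-free SPI sketch: operational SPI fits a gamma distribution to the precipitation record, whereas this rank-based variant (empirical CDF mapped through the inverse standard normal) is only illustrative:

```python
from statistics import NormalDist

def spi_empirical(precip):
    # Rank-based SPI sketch: each value is mapped through the empirical CDF
    # (Weibull plotting position) and the inverse standard normal. A uniform
    # multiplicative bias in precipitation leaves the ranks, and hence the
    # index, unchanged.
    n = len(precip)
    ordered = sorted(precip)
    nd = NormalDist()
    return [nd.inv_cdf((ordered.index(x) + 1) / (n + 1)) for x in precip]

# Dry months map to negative SPI, wet months to positive SPI.
monthly = [30, 55, 80, 20, 65, 90, 10, 50]
print([round(z, 2) for z in spi_empirical(monthly)])
```

Scaling all values by a constant bias factor yields identical SPI values, which mirrors the finding that standardized indices agree across datasets even when absolute precipitation does not.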

  3. Application of AXUV diode detectors at ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Bernert, M.; Eich, T.; Burckhart, A.; Fuchs, J. C.; Giannone, L.; Kallenbach, A.; McDermott, R. M.; Sieglin, B.

    2014-03-01

    In the ASDEX Upgrade tokamak, a radiation measurement for a wide spectral range, based on semiconductor detectors, with 256 lines of sight and a time resolution of 5 μs was recently installed. In combination with the foil-based bolometry, it is now possible to estimate the absolutely calibrated radiated power of the plasma on fast timescales. This work introduces this diagnostic, based on AXUV (Absolute eXtended UltraViolet) n-on-p diodes made by International Radiation Detectors, Inc. The measurement and the degradation of the diodes in a tokamak environment are shown. Even though the AXUV diodes are developed to have a constant sensitivity for all photon energies (1 eV-8 keV), degradation leads to a photon-energy dependence of the sensitivity. The foil bolometry, which is restricted to a time resolution of less than 1 kHz, offers a basis for a time-dependent calibration of the diodes. The measurements of the quasi-calibrated diodes are compared with the foil bolometry and found to be accurate on the kHz time scale. Therefore, it is assumed that the corrected values are also valid for the highest time resolution (200 kHz). With this improved diagnostic setup, the radiation induced by edge-localized modes is analyzed on fast timescales.
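The idea of calibrating a fast, drifting detector against a slower absolute reference can be sketched as follows, with invented signals; the actual diagnostic processing is more involved:

```python
import statistics

# Invented traces: a fast diode signal whose gain has drifted (arbitrary
# units) and a slow, absolutely calibrated bolometer signal (physical units).
diode = [2.0, 2.2, 1.9, 2.1, 4.0, 4.4, 3.8, 4.2]     # fast, uncalibrated
bolometer = [1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0]  # slow, calibrated

WINDOW = 4  # samples per calibration window (illustrative choice)
calibrated = []
for start in range(0, len(diode), WINDOW):
    d = diode[start:start + WINDOW]
    b = bolometer[start:start + WINDOW]
    # Per-window gain: force the slow (window-averaged) component of the
    # diode signal to match the absolute reference; the fast structure
    # within the window is preserved.
    gain = statistics.fmean(b) / statistics.fmean(d)
    calibrated.extend(x * gain for x in d)

print(calibrated)
```

This reproduces the logic in the abstract: the slow reference fixes the absolute level window by window (a time-dependent calibration), while the validity of the fast structure between calibration points has to be assumed.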

  4. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model-scale aeroacoustic data are frequently used to predict the results of flight tests while saving time and money. The value of a model-scale test therefore depends on how well the data can be transformed to full-scale conditions. In the spring of 2000, a model-scale test was conducted to prove the value of chevron nozzles as a noise-reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared with the standard conic nozzle. This result led to a full-scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model-scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, on both an OASPL and a PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  5. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  6. Time scale controversy: Accurate orbital calibration of the early Paleogene

    NASA Astrophysics Data System (ADS)

    Roehl, U.; Westerhold, T.; Laskar, J.

    2012-12-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to 54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash-17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  7. Time scale controversy: Accurate orbital calibration of the early Paleogene

    NASA Astrophysics Data System (ADS)

    Westerhold, Thomas; Röhl, Ursula; Laskar, Jacques

    2012-06-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to ˜54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash-17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  8. Phantom motor execution facilitated by machine learning and augmented reality as treatment for phantom limb pain: a single group, clinical trial in patients with chronic intractable phantom limb pain.

    PubMed

    Ortiz-Catalan, Max; Guðmundsdóttir, Rannveig A; Kristoffersen, Morten B; Zepeda-Echavarria, Alejandra; Caine-Winterberger, Kerstin; Kulbacka-Ortiz, Katarzyna; Widehammar, Cathrine; Eriksson, Karin; Stockselius, Anita; Ragnö, Christina; Pihlar, Zdenka; Burger, Helena; Hermansson, Liselotte

    2016-12-10

    Phantom limb pain is a debilitating condition for which no effective treatment has been found. We hypothesised that re-engagement of central and peripheral circuitry involved in motor execution could reduce phantom limb pain via competitive plasticity and reversal of cortical reorganisation. Patients with upper limb amputation and known chronic intractable phantom limb pain were recruited at three clinics in Sweden and one in Slovenia. Patients received 12 sessions of phantom motor execution using machine learning, augmented and virtual reality, and serious gaming. Changes in intensity, frequency, duration, quality, and intrusion of phantom limb pain were assessed by the use of the numeric rating scale, the pain rating index, the weighted pain distribution scale, and a study-specific frequency scale before each session and at follow-up interviews 1, 3, and 6 months after the last session. Changes in medication and prostheses were also monitored. Results are reported using descriptive statistics and analysed by non-parametric tests. The trial is registered at ClinicalTrials.gov, number NCT02281539. Between Sept 15, 2014, and April 10, 2015, 14 patients with intractable chronic phantom limb pain, for whom conventional treatments failed, were enrolled. After 12 sessions, patients showed statistically and clinically significant improvements in all metrics of phantom limb pain. Phantom limb pain decreased from pre-treatment to the last treatment session by 47% (SD 39; absolute mean change 1·0 [0·8]; p=0·001) for weighted pain distribution, 32% (38; absolute mean change 1·6 [1·8]; p=0·007) for the numeric rating scale, and 51% (33; absolute mean change 9·6 [8·1]; p=0·0001) for the pain rating index. The numeric rating scale score for intrusion of phantom limb pain in activities of daily living and sleep was reduced by 43% (SD 37; absolute mean change 2·4 [2·3]; p=0·004) and 61% (39; absolute mean change 2·3 [1·8]; p=0·001), respectively. 
Two of four patients who were on medication reduced their intake by 81% (absolute reduction 1300 mg, gabapentin) and 33% (absolute reduction 75 mg, pregabalin). Improvements remained 6 months after the last treatment. Our findings suggest potential value in motor execution of the phantom limb as a treatment for phantom limb pain. Promotion of phantom motor execution aided by machine learning, augmented and virtual reality, and gaming is a non-invasive, non-pharmacological, and engaging treatment with no identified side-effects at present. Promobilia Foundation, VINNOVA, Jimmy Dahlstens Fond, PicoSolve, and Innovationskontor Väst. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Automation of a Nile red staining assay enables high throughput quantification of microalgal lipid production.

    PubMed

    Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco

    2016-02-09

    Within the context of microalgal lipid production for biofuel and bulk chemical applications, specialized higher-throughput devices for small-scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliable measurement of large sets of samples within a short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye-based assay was established using a liquid-handling robot to provide reproducible high-throughput quantification of lipids with minimized hands-on time. Lipid production was monitored using the fluorescent dye Nile red, with dimethyl sulfoxide as the solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96-well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids, improving precision from ±8% to ±2% on average. Implementation on an automated liquid-handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on time to a third compared with manual operation. Moreover, it was shown that automation enhances accuracy and precision compared with manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes in the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure of biomass concentration, so that errors from morphological changes can be excluded. 
The newly established assay proved to be applicable for absolute quantification of algal lipids, avoiding the limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability as well as experimental throughput, while minimizing the needed hands-on time to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher-throughput phototrophic cultivation and contributes to boosting the time efficiency of setting up algal lipid production processes.
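The gravimetric calibration step amounts to a linear fit of the fluorescence readout against the extractive reference method. A sketch with invented values (the assay's actual calibration data and fit are in the paper):

```python
# Toy calibration data: Nile red fluorescence (arbitrary units) versus lipid
# amounts measured by a gravimetric, extractive reference method.
fluorescence = [120.0, 260.0, 390.0, 530.0]   # a.u. (invented)
lipid_mg = [1.0, 2.1, 3.0, 4.1]               # gravimetric reference (invented)

# Ordinary least-squares line through the calibration points.
n = len(fluorescence)
mx = sum(fluorescence) / n
my = sum(lipid_mg) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(fluorescence, lipid_mg))
         / sum((x - mx) ** 2 for x in fluorescence))
intercept = my - slope * mx

def to_lipid(signal):
    # Convert a fluorescence reading into an absolute lipid amount (mg).
    return slope * signal + intercept

print(round(to_lipid(325.0), 2))  # -> 2.55
```

Once the line is fixed, every plate reading converts directly to an absolute lipid amount, which is what distinguishes this assay from purely relative fluorescence protocols.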

  10. Trends in income-related inequality in untreated caries among children in the United States: findings from NHANES I, NHANES III, and NHANES 1999-2004.

    PubMed

    Capurro, Diego Alberto; Iafolla, Timothy; Kingman, Albert; Chattopadhyay, Amit; Garcia, Isabel

    2015-12-01

    The goal of this analysis was to describe income-related inequality in untreated caries among children in the United States over time. The analysis focuses on children ages 2-12 years in three nationally representative U.S. surveys: the National Health and Nutrition Examination Survey (NHANES) 1971-1974, NHANES 1988-1994, and NHANES 1999-2004. The outcome of interest is untreated dental caries. Various methods are employed to measure absolute and relative inequality within each survey such as pair-wise comparisons, measures of association (odds ratios), and three summary measures of overall inequality: the slope index of inequality, the relative index of inequality, and the concentration index. Inequality trends are then assessed by comparing these estimates across the three surveys. Inequality was present in each of the three surveys analyzed. Whether measured on an absolute or relative scale, untreated caries disproportionately affected those with lower income. Trend analysis shows that, despite population-wide reductions in untreated caries between NHANES I and NHANES III, overall absolute inequality slightly increased, while overall relative inequality significantly increased. Between NHANES III and NHANES 1999-2004, both absolute and relative inequality tended to decrease; however, these changes were not statistically significant. Socioeconomic inequality in oral health is an important measure of progress in overall population health and a key input to inform health policies. This analysis shows the presence of socioeconomic inequality in oral health in the American child population, as well as changes in its magnitude over time. Further research is needed to determine the factors related to these changes and their relative contribution to inequality trends. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
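Of the summary measures named above, the concentration index is the most compact to illustrate. The sketch below uses the standard covariance formula with invented toy data, not the NHANES samples:

```python
def concentration_index(health, income):
    """Concentration index: CI = 2*cov(h, fractional income rank)/mean(h).
    Negative when the health burden concentrates among the poor."""
    n = len(health)
    order = sorted(range(n), key=lambda i: income[i])
    h = [health[i] for i in order]            # health ordered by income
    ranks = [(i + 0.5) / n for i in range(n)]  # fractional income ranks
    mu = sum(h) / n
    cov = sum((hi - mu) * (r - 0.5) for hi, r in zip(h, ranks)) / n
    return 2 * cov / mu

# Toy data: untreated caries count falls as income rises, so the burden is
# concentrated among the poor and CI is negative.
caries = [4, 3, 2, 1]
income = [10, 20, 30, 40]
print(concentration_index(caries, income))
```

Unlike a simple rich-vs-poor pairwise comparison, the index uses the full income ranking, which is why it serves as an overall summary of relative inequality.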

  11. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
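The time-spreading step described in the embodiment can be sketched as follows; the chain-size distribution, probabilities, and die-away time below are invented placeholders, not the patented assay model:

```python
import random

random.seed(0)

# Hypothetical fission-chain multiplicity distribution (neutrons per chain)
# standing in for the analytically computed distributions of the patent.
CHAIN_SIZES = [1, 2, 3, 5, 8]
CHAIN_PROBS = [0.5, 0.25, 0.15, 0.07, 0.03]
DIE_AWAY_S = 50e-6   # assumed detector die-away time constant

def event_times(n_chains, duration_s=1.0):
    """Spread sampled fission chains in time: each chain starts at a random
    time, and each of its neutrons is detected after an exponential delay,
    producing a continuous time-evolving sequence of event counts."""
    times = []
    for _ in range(n_chains):
        t0 = random.uniform(0, duration_s)
        size = random.choices(CHAIN_SIZES, CHAIN_PROBS)[0]
        times.extend(t0 + random.expovariate(1 / DIE_AWAY_S)
                     for _ in range(size))
    return sorted(times)

counts = event_times(1000)
print(len(counts), "neutron events in the simulated sequence")
```

Comparing count distributions derived from such simulated sequences against the measured distributions is the model-fitting step the claim describes.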

  12. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  13. The learning effect of intraoperative video-enhanced surgical procedure training.

    PubMed

    van Det, M J; Meijerink, W J H J; Hoff, C; Middel, L J; Koopal, S A; Pierie, J P E N

    2011-07-01

    The transition from basic skills training in a skills lab to procedure training in the operating theater using the traditional master-apprentice model (MAM) lacks uniformity and efficiency. When the supervising surgeon performs parts of a procedure, training opportunities are lost. To minimize this intervention by the supervisor and maximize the actual operating time for the trainee, we created a new training method called INtraoperative Video-Enhanced Surgical Training (INVEST). Ten surgical residents were trained in laparoscopic cholecystectomy either by the MAM or with INVEST. Each trainee performed six cholecystectomies that were objectively evaluated on an Objective Structured Assessment of Technical Skills (OSATS) global rating scale. Absolute and relative improvements during the training curriculum were compared between the groups. A questionnaire evaluated the trainee's opinion on this new training method. Skill improvement on the OSATS global rating scale was significantly greater for the trainees in the INVEST curriculum compared to the MAM, with mean absolute improvement 32.6 versus 14.0 points and mean relative improvement 59.1 versus 34.6% (P=0.02). INVEST significantly enhances technical and procedural skill development during the early learning curve for laparoscopic cholecystectomy. Trainees were positive about the content and the idea of the curriculum.

  14. Radiometric calibration updates to the Landsat collection

    USGS Publications Warehouse

    Micijevic, Esad; Haque, Md. Obaidul; Mishra, Nischal

    2016-01-01

    The Landsat Project is planning to implement a new collection management strategy for Landsat products generated at the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center. The goal of the initiative is to identify a collection of consistently geolocated and radiometrically calibrated images across the entire Landsat archive that is readily suitable for time-series analyses. In order to perform an accurate land change analysis, the data from all Landsat sensors must be on the same radiometric scale. Landsat 7 Enhanced Thematic Mapper Plus (ETM+) is calibrated to a radiance standard and all previous sensors are cross-calibrated to its radiometric scale. Landsat 8 Operational Land Imager (OLI) is calibrated to both radiance and reflectance standards independently. The Landsat 8 OLI reflectance calibration is considered to be most accurate. To improve radiometric calibration accuracy of historical data, Landsat 1-7 sensors also need to be cross-calibrated to the OLI reflectance scale. Results of that effort, as well as other calibration updates including the absolute and relative radiometric calibration and saturated pixel replacement for Landsat 8 OLI and absolute calibration for Landsat 4 and 5 Thematic Mappers (TM), will be implemented into Landsat products during the archive reprocessing campaign planned within the new collection management strategy. This paper reports on the planned radiometric calibration updates to the solar reflective bands of the new Landsat collection.

  15. Risk of intracerebral haemorrhage with alteplase after acute ischaemic stroke: a secondary analysis of an individual patient data meta-analysis.

    PubMed

    Whiteley, William N; Emberson, Jonathan; Lees, Kennedy R; Blackwell, Lisa; Albers, Gregory; Bluhmki, Erich; Brott, Thomas; Cohen, Geoff; Davis, Stephen; Donnan, Geoffrey; Grotta, James; Howard, George; Kaste, Markku; Koga, Masatoshi; von Kummer, Rüdiger; Lansberg, Maarten G; Lindley, Richard I; Lyden, Patrick; Olivot, Jean Marc; Parsons, Mark; Toni, Danilo; Toyoda, Kazunori; Wahlgren, Nils; Wardlaw, Joanna; Del Zoppo, Gregory J; Sandercock, Peter; Hacke, Werner; Baigent, Colin

    2016-08-01

    Randomised trials have shown that alteplase improves the odds of a good outcome when delivered within 4·5 h of acute ischaemic stroke. However, alteplase also increases the risk of intracerebral haemorrhage; we aimed to determine the proportional and absolute effects of alteplase on the risks of intracerebral haemorrhage, mortality, and functional impairment in different types of patients. We used individual patient data from the Stroke Thrombolysis Trialists' (STT) meta-analysis of randomised trials of alteplase versus placebo (or untreated control) in patients with acute ischaemic stroke. We prespecified assessment of three classifications of intracerebral haemorrhage: type 2 parenchymal haemorrhage within 7 days; Safe Implementation of Thrombolysis in Stroke Monitoring Study's (SITS-MOST) haemorrhage within 24-36 h (type 2 parenchymal haemorrhage with a deterioration of at least 4 points on National Institutes of Health Stroke Scale [NIHSS]); and fatal intracerebral haemorrhage within 7 days. We used logistic regression, stratified by trial, to model the log odds of intracerebral haemorrhage on allocation to alteplase, treatment delay, age, and stroke severity. We did exploratory analyses to assess mortality after intracerebral haemorrhage and examine the absolute risks of intracerebral haemorrhage in the context of functional outcome at 90-180 days. Data were available from 6756 participants in the nine trials of intravenous alteplase versus control. Alteplase increased the odds of type 2 parenchymal haemorrhage (occurring in 231 [6·8%] of 3391 patients allocated alteplase vs 44 [1·3%] of 3365 patients allocated control; odds ratio [OR] 5·55 [95% CI 4·01-7·70]; absolute excess 5·5% [4·6-6·4]); of SITS-MOST haemorrhage (124 [3·7%] of 3391 vs 19 [0·6%] of 3365; OR 6·67 [4·11-10·84]; absolute excess 3·1% [2·4-3·8]); and of fatal intracerebral haemorrhage (91 [2·7%] of 3391 vs 13 [0·4%] of 3365; OR 7·14 [3·98-12·79]; absolute excess 2·3% [1·7-2·9]). 
However defined, the proportional increase in intracerebral haemorrhage was similar irrespective of treatment delay, age, or baseline stroke severity, but the absolute excess risk of intracerebral haemorrhage increased with increasing stroke severity: for SITS-MOST intracerebral haemorrhage the absolute excess risk ranged from 1·5% (0·8-2·6%) for strokes with NIHSS 0-4 to 3·7% (2·1-6·3%) for NIHSS 22 or more (p=0·0101). For patients treated within 4·5 h, the absolute increase in the proportion (6·8% [4·0% to 9·5%]) achieving a modified Rankin Scale of 0 or 1 (excellent outcome) exceeded the absolute increase in risk of fatal intracerebral haemorrhage (2·2% [1·5% to 3·0%]) and the increased risk of any death within 90 days (0·9% [-1·4% to 3·2%]). Among patients given alteplase, the net outcome is predicted both by time to treatment (with faster time increasing the proportion achieving an excellent outcome) and stroke severity (with a more severe stroke increasing the absolute risk of intracerebral haemorrhage). Although, within 4·5 h of stroke, the probability of achieving an excellent outcome with alteplase treatment exceeds the risk of death, early treatment is especially important for patients with severe stroke. UK Medical Research Council, British Heart Foundation, University of Glasgow, University of Edinburgh. Copyright © 2016 Elsevier Ltd. All rights reserved.
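    The headline effect sizes in this record can be recovered directly from the counts quoted in the abstract. A minimal sketch (the published odds ratio of 5·55 is stratified by trial, so the crude value computed here differs slightly):

```python
# Crude effect sizes for type 2 parenchymal haemorrhage, recomputed from the
# counts quoted in the abstract (231/3391 alteplase vs 44/3365 control).
events_t, n_t = 231, 3391   # alteplase arm
events_c, n_c = 44, 3365    # control arm

risk_t = events_t / n_t
risk_c = events_c / n_c

absolute_excess = risk_t - risk_c                              # risk difference
odds_ratio = (risk_t / (1 - risk_t)) / (risk_c / (1 - risk_c)) # unadjusted OR

print(f"absolute excess: {absolute_excess:.1%}")   # 5.5%, matching the abstract
print(f"crude odds ratio: {odds_ratio:.2f}")       # ~5.52 (trial-stratified published value: 5.55)
```

    The near-agreement between the crude and stratified odds ratios reflects the fact that most events come from a few large trials; the absolute excess matches the reported 5·5% exactly.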

  16. Scale covariant gravitation. V - Kinetic theory. VI - Stellar structure and evolution

    NASA Technical Reports Server (NTRS)

    Hsieh, S.-H.; Canuto, V. M.

    1981-01-01

    A scale covariant kinetic theory for particles and photons is developed. The mathematical framework of the theory is given by the tangent bundle of a Weyl manifold. The Liouville equation is derived, and solutions corresponding to equilibrium distributions are presented and shown to yield thermodynamic results identical to those obtained previously. The scale covariant theory is then used to derive results of interest for stellar structure and evolution. A radiative transfer equation is derived that can be used to study stellar evolution with a variable gravitational constant. In addition, it is shown that the sun's absolute luminosity scales as L ≈ GM/κ, where κ is the stellar opacity. Finally, a formula is derived for the age of globular clusters as a function of the gravitational constant, using a previously derived expression for the absolute luminosity.

  17. Laterality, spatial abilities, and accident proneness.

    PubMed

    Voyer, Susan D; Voyer, Daniel

    2015-01-01

    Although handedness as a measure of cerebral specialization has been linked to accident proneness, more direct measures of laterality are rarely considered. The present study aimed to fill that gap in the existing research. In addition, individual difference factors in accident proneness were further examined with the inclusion of mental rotation and navigation abilities measures. One hundred and forty participants were asked to complete the Mental Rotations Test, the Santa Barbara Sense of Direction scale, the Greyscales task, the Fused Dichotic Word Test, the Waterloo Handedness Questionnaire, and a grip strength task before answering questions related to number of accidents in five areas. Results indicated that handedness scores, absolute visual laterality score, absolute response time on the auditory laterality index, and navigation ability were significant predictors of the total number of accidents. Results are discussed with respect to cerebral hemispheric specialization and risk-taking attitudes and behavior.

  18. Absolute Spatially- and Temporally-Resolved Optical Emission Measurements of rf Glow Discharges in Argon

    PubMed Central

    Djurović, S.; Roberts, J. R.; Sobolewski, M. A.; Olthoff, J. K.

    1993-01-01

    Spatially- and temporally-resolved measurements of optical emission intensities are presented from rf discharges in argon over a wide range of pressures (6.7 to 133 Pa) and applied rf voltages (75 to 200 V). Results of measurements of emission intensities are presented for both an atomic transition (Ar I, 750.4 nm) and an ionic transition (Ar II, 434.8 nm). The absolute scale of these optical emissions was determined by comparison with the optical emission from a calibrated standard lamp. All measurements were made in a well-defined rf reactor. They provide a detailed characterization of local, time-resolved plasma conditions suitable for comparison with results from other experiments and with theoretical models. These measurements represent a new level of detail in diagnostic measurements of rf plasmas and provide insight into the electron transport properties of rf discharges. PMID:28053464

  19. Quantitative ptychographic reconstruction by applying a probe constraint

    NASA Astrophysics Data System (ADS)

    Reinhardt, J.; Schroer, C. G.

    2018-04-01

    The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology or materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity via the application of an additional constraint to the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results obtained with each type of constraint. Achieving quantitative absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g., during in-situ experiments, or in general when different data sets are compared.

  20. Magnetic resonance cinematography of the fingers: a 3.0 Tesla feasibility study with comparison of incremental and continuous dynamic protocols.

    PubMed

    Bayer, Thomas; Adler, Werner; Janka, Rolf; Uder, Michael; Roemer, Frank

    2017-12-01

    To study the feasibility of magnetic resonance cinematography of the fingers (MRCF) with comparison of image quality of different protocols for depicting the finger anatomy during motion. MRCF was performed during a full flexion and extension movement in 14 healthy volunteers using a finger-gating device. Three real-time sequences (frame rates 17-59 images/min) and one proton density (PD) sequence (3 images/min) were acquired during incremental and continuous motion. Analyses were performed independently by three readers. Qualitative image analysis included Likert-scale grading from 0 (useless) to 5 (excellent) and specific visual analog scale (VAS) grading from 0 (insufficient) to 100 (excellent). Signal-to-noise calculation was performed. Overall percentage agreement and mean absolute disagreement were calculated. Within the real-time sequences a high frame-rate true fast imaging with steady-state free precession (TRUFI) yielded the best image quality with Likert and overall VAS scores of 3.0 ± 0.2 and 60.4 ± 25.3, respectively. The best sequence regarding image quality was an incremental PD with mean values of 4.8 ± 0.2 and 91.2 ± 9.4, respectively. Overall percentage agreement and mean absolute disagreement were 47.9 and 0.7, respectively. No statistically significant SNR differences were found between continuous and incremental motion for the real-time protocols. MRCF is feasible with appropriate image quality during continuous motion using a finger-gating device. Almost perfect image quality is achievable with incremental PD imaging, which represents a compromise for MRCF with the drawback of prolonged scanning time.

  1. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  2. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
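    The time-spreading step described in these patent records can be illustrated with a toy simulation. This is a hedged sketch only: the geometric chain-multiplicity distribution, die-away constant, and initiation rate below are illustrative assumptions, not the analytically computed fission chain distributions of the patented method.

```python
import random
from collections import Counter

random.seed(0)

CHAIN_P = 0.4      # assumed geometric parameter for neutrons per chain
DIE_AWAY = 50e-6   # assumed die-away time constant (s) used to spread counts
RATE = 1000.0      # assumed chain-initiation rate (chains per second)

def chain_multiplicity(p=CHAIN_P):
    """Neutrons per fission chain, drawn from an assumed geometric distribution."""
    k = 1
    while random.random() > p:
        k += 1
    return k

def simulate_event_times(n_chains):
    """Sorted detection times: chain starts form a Poisson process, and each
    chain's neutron counts are spread exponentially in time around the start."""
    events, t = [], 0.0
    for _ in range(n_chains):
        t += random.expovariate(RATE)
        events.extend(t + random.expovariate(1.0 / DIE_AWAY)
                      for _ in range(chain_multiplicity()))
    return sorted(events)

def count_distribution(events, gate_width=1e-3):
    """Histogram of counts per fixed time gate -- the measured distribution
    an assay model would be compared against."""
    per_gate = Counter(int(e / gate_width) for e in events)
    n_gates = int(events[-1] / gate_width) + 1
    dist = Counter(per_gate.values())
    dist[0] = n_gates - len(per_gate)   # gates that saw no counts
    return dist

events = simulate_event_times(2000)
dist = count_distribution(events)
```

    Because chains arrive in correlated bursts, the resulting count distribution is wider than a Poisson distribution of the same mean; fitting a model to that excess is what allows inference about the source.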

  3. Reassessing the ratio of glyoxal to formaldehyde as an indicator of hydrocarbon precursor speciation

    NASA Astrophysics Data System (ADS)

    Kaiser, J.; Wolfe, G. M.; Min, K. E.; Brown, S. S.; Miller, C. C.; Jacob, D. J.; deGouw, J. A.; Graus, M.; Hanisco, T. F.; Holloway, J.; Peischl, J.; Pollack, I. B.; Ryerson, T. B.; Warneke, C.; Washenfelder, R. A.; Keutsch, F. N.

    2015-07-01

    The yield of formaldehyde (HCHO) and glyoxal (CHOCHO) from oxidation of volatile organic compounds (VOCs) depends on precursor VOC structure and the concentration of NOx (NOx = NO + NO2). Previous work has proposed that the ratio of CHOCHO to HCHO (RGF) can be used as an indicator of precursor VOC speciation, and absolute concentrations of CHOCHO and HCHO as indicators of NOx. Because this metric is measurable by satellite, it is potentially useful on a global scale; however, absolute values and trends in RGF have differed between satellite and ground-based observations. To investigate potential causes of previous discrepancies and the usefulness of this ratio, we present measurements of CHOCHO and HCHO over the southeastern United States (SE US) from the 2013 SENEX (Southeast Nexus) flight campaign, and compare these measurements with OMI (Ozone Monitoring Instrument) satellite retrievals. High time-resolution flight measurements show that high RGF is associated with monoterpene emissions, low RGF is associated with isoprene oxidation, and emissions associated with oil and gas production can lead to small-scale variation in regional RGF. During the summertime in the SE US, RGF is not a reliable diagnostic of anthropogenic VOC emissions, as HCHO and CHOCHO production are dominated by isoprene oxidation. Our results show that the new CHOCHO retrieval algorithm reduces the previous disagreement between satellite and in situ RGF observations. As the absolute values and trends in RGF observed during SENEX are largely reproduced by OMI observations, we conclude that satellite-based observations of RGF can be used alongside knowledge of land use as a global diagnostic of dominant hydrocarbon speciation.

  4. Radiometric calibration of the Landsat MSS sensor series

    USGS Publications Warehouse

    Helder, Dennis L.; Karki, Sadhana; Bhatt, Rajendra; Micijevik, Esad; Aaron, David; Jasinski, Benjamin

    2012-01-01

    Multispectral remote sensing of the Earth using Landsat sensors was ushered in on July 23, 1972, with the launch of Landsat-1. Following that success, four more Landsat satellites were launched, and each of these carried the Multispectral Scanner System (MSS). These five sensors provided the only consistent multispectral space-based imagery of the Earth's surface from 1972 to 1982. This work focuses on developing both a consistent and an absolute radiometric calibration of this sensor system. Cross-calibration of the MSS was performed through the use of pseudoinvariant calibration sites (PICSs). Since these sites have been shown to be stable for long periods of time, changes in MSS observations of these sites were attributed to changes in the sensors themselves. In addition, simultaneous data collections were available for some MSS sensor pairs, and these were also used for cross-calibration. Results indicated that substantial differences existed between instruments, up to 16%, and these were reduced to 5% or less across all MSS sensors and bands. Lastly, this paper takes the calibration through the final step and places the MSS sensors on an absolute radiometric scale. The methodology used to achieve this was based on simultaneous data collections by the Landsat-5 MSS and Thematic Mapper (TM) instruments. Through analysis of image data from a PICS location and by compensating for the spectral differences between the two instruments, the Landsat-5 MSS sensor was placed on an absolute radiometric scale based on the Landsat-5 TM sensor. Uncertainties associated with this calibration are considered to be less than 5%.
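    Cross-calibration of this kind ultimately compares sensors in top-of-atmosphere (TOA) reflectance rather than radiance. A minimal sketch of the standard radiance-to-reflectance conversion (the ESUN value and scene parameters below are illustrative placeholders, not taken from any real scene's metadata):

```python
import math

def toa_reflectance(radiance, esun, earth_sun_dist_au, sun_elev_deg):
    """Standard TOA reflectance from at-sensor spectral radiance
    (W m^-2 sr^-1 um^-1); esun is the band's mean exoatmospheric irradiance."""
    solar_zenith = math.radians(90.0 - sun_elev_deg)
    return (math.pi * radiance * earth_sun_dist_au ** 2
            / (esun * math.cos(solar_zenith)))

# Illustrative values only:
rho = toa_reflectance(radiance=100.0, esun=1970.0,
                      earth_sun_dist_au=1.0, sun_elev_deg=60.0)
print(f"{rho:.3f}")  # 0.184
```

    Working in reflectance removes the solar-geometry and Earth-Sun-distance dependence, which is what makes a common reflectance scale the natural target for cross-calibrating sensors flown decades apart.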

  5. Characterization of exchange rate regimes based on scaling and correlation properties of volatility for ASEAN-5 countries

    NASA Astrophysics Data System (ADS)

    Muniandy, Sithi V.; Uning, Rosemary

    2006-11-01

    Foreign currency exchange rate policies of ASEAN member countries have undergone tremendous changes following the 1997 Asian financial crisis. In this paper, we study the fractal and long-memory characteristics in the volatility of five ASEAN founding members’ exchange rates with respect to US dollar. The impact of exchange rate policies implemented by the ASEAN-5 countries on the currency fluctuations during pre-, mid- and post-crisis are briefly discussed. The time series considered are daily price returns, absolute returns and aggregated absolute returns, each partitioned into three segments based on the crisis regimes. These time series are then modeled using fractional Gaussian noise, fractionally integrated ARFIMA (0,d,0) and generalized Cauchy process. The first two stationary models provide the description of long-range dependence through Hurst and fractional differencing parameter, respectively. Meanwhile, the generalized Cauchy process offers independent estimation of fractal dimension and long memory exponent. In comparison, among the three models we found that the generalized Cauchy process showed greater sensitivity to transition of exchange rate regimes that were implemented by ASEAN-5 countries.
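    As a minimal illustration of the kind of long-memory diagnostic these models provide (not the authors' code; the aggregated-variance estimator sketched here is one simple alternative to fitting fGn, ARFIMA, or generalized Cauchy models):

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(2, 4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of the Hurst exponent H:
    for block means of size m, Var[means] ~ m^(2H - 2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(block_means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

# Sanity check on white noise: absolute returns of an i.i.d. series carry
# no long memory, so H should come out near 0.5.
rng = np.random.default_rng(0)
h = hurst_aggregated_variance(np.abs(rng.standard_normal(20000)))
```

    For a persistent series, such as absolute returns with genuine long-range dependence, the estimator returns H above 0.5, which is the signature the record's models quantify through the Hurst and fractional-differencing parameters.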

  6. Electron Emission from Amorphous Solid Water Induced by Passage of Energetic Protons and Fluorine Ions

    PubMed Central

    Toburen, L. H.; McLawhorn, S. L.; McLawhorn, R. A.; Carnes, K. D.; Dingfelder, M.; Shinpaugh, J. L.

    2013-01-01

    Absolute doubly differential electron emission yields were measured from thin films of amorphous solid water (ASW) after the transmission of 6 MeV protons and 19 MeV (1 MeV/nucleon) fluorine ions. The ASW films were frozen on thin (1-μm) copper foils cooled to approximately 50 K. Electrons emitted from the films were detected as a function of angle in both the forward and backward direction and as a function of the film thickness. Electron energies were determined by measuring the ejected electron time of flight, a technique that optimizes the accuracy of measuring low-energy electron yields, where the effects of molecular environment on electron transport are expected to be most evident. Relative electron emission yields were normalized to an absolute scale by comparison of the integrated total yields for proton-induced electron emission from the copper substrate to values published previously. The absolute doubly differential yields from ASW are presented along with integrated values, providing single differential and total electron emission yields. These data may provide benchmark tests of Monte Carlo track structure codes commonly used for assessing the effects of radiation quality on biological effectiveness. PMID:20681805

  7. Integrated poly(dimethysiloxane) with an intrinsic nonfouling property approaching "absolute" zero background in immunoassays.

    PubMed

    Ma, Hongwei; Wu, Yuanzi; Yang, Xiaoli; Liu, Xing; He, Jianan; Fu, Long; Wang, Jie; Xu, Hongke; Shi, Yi; Zhong, Renqian

    2010-08-01

    The key to achieving a highly sensitive and specific protein microarray assay is to reduce nonspecific protein adsorption to an "absolute" zero level, because any signal amplification method will simultaneously amplify signal and noise. Here, we develop a novel solid supporting material, namely, polymer-coated initiator-integrated poly(dimethysiloxane) (iPDMS), which was able to achieve such an "absolute" zero (i.e., below the detection limit of the instrument). The implementation of this iPDMS enables practical and high-quality multiplexed enzyme-linked immunosorbent assay (ELISA) of 11 tumor markers. This iPDMS does not need any blocking steps and requires only mild washing conditions. It also uses, on average, 8-fold fewer capture antibodies than the mainstream nitrocellulose (NC) film. Besides saving time and materials, iPDMS achieved a limit of detection (LOD) as low as 19 pg mL(-1), which is sufficiently low for most current clinical diagnostic applications. We expect an immediate impact of this iPDMS on the realization of the great potential of protein microarrays in research and practical uses such as large-scale and high-throughput screening, clinical diagnosis, inspection, and quarantine.

  8. Absolute brightness temperature measurements at 3.5-mm wavelength. [of sun, Venus, Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.

    1980-01-01

    Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 ± 192 K), Venus (357.5 ± 13.1 K), Jupiter (179.4 ± 4.7 K), and Saturn (153.4 ± 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.
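    Setting a thermal scale with hot and cold loads, as this record describes, follows the textbook Y-factor method. A minimal sketch (not the MWO-specific procedure; the load temperatures and power readings below are illustrative):

```python
def receiver_temperature(p_hot, p_cold, t_hot, t_cold):
    """Y-factor calibration: detected power is proportional to T_load + T_rx,
    so Y = P_hot / P_cold gives T_rx = (T_hot - Y * T_cold) / (Y - 1)."""
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

# Illustrative powers in arbitrary linear units, consistent with a 50 K
# receiver for ambient (300 K) and liquid-nitrogen (77 K) loads:
t_rx = receiver_temperature(p_hot=350.0, p_cold=127.0, t_hot=300.0, t_cold=77.0)
# t_rx recovers 50 K
```

    Once T_rx is known, the same two-point calibration fixes the gain that maps detected power onto the absolute brightness-temperature scale.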

  9. Microcounseling Skill Discrimination Scale: A Methodological Note

    ERIC Educational Resources Information Center

    Stokes, Joseph; Romer, Daniel

    1977-01-01

    Absolute ratings on the Microcounseling Skill Discrimination Scale (MSDS) confound the individual's use of the rating scale and actual ability to discriminate effective and ineffective counselor behaviors. This note suggests methods of scoring the MSDS that will eliminate variability attributable to response language and improve the validity of…

  10. Cultural Differences in Justificatory Reasoning

    ERIC Educational Resources Information Center

    Soong, Hannah; Lee, Richard; John, George

    2012-01-01

    Justificatory reasoning, the ability to justify one's beliefs and actions, is an important goal of education. We develop a scale to measure the three forms of justificatory reasoning--absolutism, relativism, and evaluativism--before validating the scale across two cultures and domains. The results show that the scale possessed validity and…

  11. Local and Catchment-Scale Water Storage Changes in Northern Benin Deduced from Gravity Monitoring at Various Time-Scales

    NASA Astrophysics Data System (ADS)

    Hinderer, J.; Hector, B.; Séguis, L.; Descloitres, M.; Cohard, J.; Boy, J.; Calvo, M.; Rosat, S.; Riccardi, U.; Galle, S.

    2013-12-01

    Water storage changes (WSC) are investigated by means of gravity monitoring in Djougou, northern Benin, within the framework of the GHYRAF (Gravity and Hydrology in Africa) project. In this area, WSC are 1) part of the control system for evapotranspiration (ET) processes, a key variable of the West African monsoon cycle, and 2) the state variable for resource management, a critical issue in storage-poor hard-rock basement contexts such as northern Benin. We show the advantages of gravity monitoring for analyzing the different processes in the water cycle involved at various time and space scales, using the main gravity sensors available today (FG5 absolute gravimeter, superconducting gravimeter (SG), and CG5 micro-gravimeter). The study area is also part of the long-term observing system AMMA-Catch and is thus under intense hydro-meteorological monitoring (rain, soil moisture, water table level, ET, ...). Gravity-derived WSC are compared at all frequencies to hydrological data and to hydrological models calibrated on these data. Discrepancies are analyzed to discuss the pros and cons of each approach. Fast gravity changes (a few hours) are significant when rain events occur and involve different contributions: the rainfall itself, runoff, fast subsurface water redistribution, and the screening effect of the gravimeter building and local topography. We investigate these effects and present statistical results for a set of rain events recorded with the SG installed in Djougou since July 2010. Gravity changes at intermediate time scales (a few days) are caused by ET and by both vertical and horizontal water redistribution. The integrative nature of gravity measurements does not allow these different contributions to be separated, and the screening by the shelter reduces our ability to retrieve ET values. Atmospheric corrections are also critical at such frequencies and deserve specific attention. However, a quick analysis of gravity changes following rain events shows that the values are consistent with expected ET values (up to about 5 mm/day). Seasonal WSC have been analyzed since 2008 using FG5 absolute gravity measurements four times a year, and since 2010 using the continuous SG time series. They can reach up to 12 μGal (≈270 mm) and show clear interannual variability, as can be expected from rainfall variability in the area. This data set allows estimates of an average specific yield for the local aquifer, together with a scaling factor for water content derived from Magnetic Resonance Soundings.
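    The gravity-to-water-storage conversion quoted in this record (12 μGal ≈ 270 mm) can be sanity-checked with a flat Bouguer plate approximation. This is an illustrative assumption: the authors' admittance is site-specific, so the plate value below only reproduces the order of magnitude.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # kg m^-3
MICROGAL = 1e-8      # 1 microGal in m s^-2

# Infinite-slab (Bouguer plate) gravity effect of a water layer:
# delta_g = 2 * pi * G * rho * h
ugal_per_m = 2.0 * math.pi * G * RHO_WATER / MICROGAL   # ~41.9 microGal per m

def water_mm_from_microgal(dg_ugal):
    """Water-layer thickness (mm) equivalent to a gravity change (microGal)."""
    return dg_ugal / ugal_per_m * 1000.0

print(round(water_mm_from_microgal(12.0)))  # 286 (mm; the record quotes ~270 mm)
```

    The plate approximation lands within about 6% of the quoted value, which is the expected level of agreement given topography and the finite footprint of the gravimeter.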

  12. Macro-scale Tectonics of the Eastern North American Shield: Insights from a new Absolute P-wave Tomographic Model for North America.

    NASA Astrophysics Data System (ADS)

    Boyce, A.; Bastow, I. D.; Golos, E. M.; Burdick, S.; van der Hilst, R. D.; Rondenay, S.

    2017-12-01

    The Grenville orogen is a 1-Ga-old, 4000-km-long tectonic collision zone that bounds the North American Shield to the east, often drawing comparisons to the modern-day Himalayas in collisional style and extent. Local studies of the Grenville province are legion; however, it remains enigmatic at the macro scale due to its large spatial footprint (from Labrador to Mexico), its interaction with Phanerozoic tectonics, and present-day sedimentary cover. Recently, the USArray Transportable Array seismic stations have gone some way toward addressing this issue, but station coverage in global absolute-wavespeed models remains sparse in the shield regions further north. However, the newly published method of Boyce et al. (2017) enables data from regional seismic networks to be incorporated into these global models. Here we use this method to add 13,000 new P-wave arrivals from stations in Canada to the continental portion of the global absolute-wavespeed tomographic model of Burdick et al. (2017). We are thus able to seismically illuminate, for the first time, mantle seismic structure across the entire footprint of the Grenville orogen. Recent work suggests that in SE Canada the edge of the Superior craton has undergone post-formation modification. Using these images, it will be possible to investigate whether craton-edge modification is ubiquitous along the entire Grenville front and whether oblique or direct "head-on" shortening was dominant during the collision of Laurentia and Amazonia at 1 Ga. Through further comparison with the GLimER 2D receiver function profiles (Rondenay et al., 2017), we aim to unify theories from local-scale studies for the evolution of the eastern portion of stable North America. Furthermore, we will be able to constrain the morphology of the North American keel and assess to what extent it may influence present-day asthenospheric flow fields, and the resulting implications for modification of the cratonic root.

  13. Lunch-time food choices in preschoolers: Relationships between absolute and relative intakes of different food categories, and appetitive characteristics and weight.

    PubMed

    Carnell, S; Pryor, K; Mais, L A; Warkentin, S; Benson, L; Cheng, R

    2016-08-01

    Children's appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as with behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4- to 5-year-olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals, with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from 0.78 to 0.91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child's choice to consume relatively more or relatively less of each food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs.
However, descriptive comparisons of lunch intakes with expected amounts based on metabolic needs suggested that overweight/obese boys were at particularly high risk of overeating. Parents' reports of children's appetitive characteristics on the CEBQ are associated with differential patterns of food choice as indexed by absolute and relative intake of various food categories assessed on multiple occasions in a naturalistic, school-based setting, without parents present. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Lunch-time food choices in preschoolers: relationships between absolute and relative intake of different food categories, and appetitive characteristics and weight

    PubMed Central

    Carnell, S; Pryor, K; Mais, LA; Warkentin, S; Benson, L; Cheng, R

    2016-01-01

    Children’s appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4- to 5-year-olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals, with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from 0.78 to 0.91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child’s choice to consume relatively more or relatively less of each different food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs.
However, descriptive comparisons of lunch intakes with expected amounts based on metabolic needs suggested that overweight/obese boys were at particularly high risk of overeating. Parents’ reports of children’s appetitive characteristics on the CEBQ are associated with differential patterns of food choice as indexed by absolute and relative intake of various food categories assessed on multiple occasions in a naturalistic, school-based setting, without parents present. PMID:27039281

  15. Correlations between commonly used clinical outcome scales and patient satisfaction after total knee arthroplasty.

    PubMed

    Kwon, Sae Kwang; Kang, Yeon Gwi; Kim, Sung Ju; Chang, Chong Bum; Seong, Sang Cheol; Kim, Tae Kyun

    2010-10-01

    Patient satisfaction is becoming increasingly important as a crucial outcome measure for total knee arthroplasty. We aimed to determine how well commonly used clinical outcome scales correlate with patient satisfaction after total knee arthroplasty. In particular, we sought to determine whether patient satisfaction correlates better with absolute postoperative scores or preoperative to 12-month postoperative changes. Patient satisfaction was evaluated using 4 grades (enthusiastic, satisfied, noncommittal, and disappointed) for 438 replaced knees that were followed for longer than 1 year. Outcome scales used were the American Knee Society scores, the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and the Short Form-36. Correlation analyses were performed to investigate the relation between patient satisfaction and the 2 different aspects of the outcome scales: postoperative scores evaluated at latest follow-up and preoperative to postoperative changes. The WOMAC function score was most strongly correlated with satisfaction (correlation coefficient = 0.45). Absolute postoperative scores were better correlated with satisfaction than the preoperative to postoperative changes for all scales. Level IV (retrospective case series). Copyright © 2010 Elsevier Inc. All rights reserved.

  16. Exploring precipitation pattern scaling methodologies and robustness among CMIP5 models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravitz, Ben; Lynch, Cary; Hartin, Corinne

    Pattern scaling is a well-established method for approximating modeled spatial distributions of changes in temperature by assuming a time-invariant pattern that scales with changes in global mean temperature. We compare two methods of pattern scaling for annual mean precipitation (regression and epoch difference) and evaluate which method is better in particular circumstances by quantifying their robustness to interpolation/extrapolation in time, inter-model variations, and inter-scenario variations. Both the regression and epoch-difference methods (the two most commonly used methods of pattern scaling) have good absolute performance in reconstructing the climate model output, measured as an area-weighted root mean square error. We decompose the precipitation response in the RCP8.5 scenario into a CO2 portion and a non-CO2 portion. Extrapolating RCP8.5 patterns to reconstruct precipitation change in the RCP2.6 scenario results in large errors due to violations of pattern scaling assumptions when this CO2-/non-CO2-forcing decomposition is applied. As a result, the methodologies discussed in this paper can help provide precipitation fields to be utilized in other models (including integrated assessment models or impacts assessment models) for a wide variety of scenarios of future climate change.

  17. Exploring precipitation pattern scaling methodologies and robustness among CMIP5 models

    DOE PAGES

    Kravitz, Ben; Lynch, Cary; Hartin, Corinne; ...

    2017-05-12

    Pattern scaling is a well-established method for approximating modeled spatial distributions of changes in temperature by assuming a time-invariant pattern that scales with changes in global mean temperature. We compare two methods of pattern scaling for annual mean precipitation (regression and epoch difference) and evaluate which method is better in particular circumstances by quantifying their robustness to interpolation/extrapolation in time, inter-model variations, and inter-scenario variations. Both the regression and epoch-difference methods (the two most commonly used methods of pattern scaling) have good absolute performance in reconstructing the climate model output, measured as an area-weighted root mean square error. We decompose the precipitation response in the RCP8.5 scenario into a CO2 portion and a non-CO2 portion. Extrapolating RCP8.5 patterns to reconstruct precipitation change in the RCP2.6 scenario results in large errors due to violations of pattern scaling assumptions when this CO2-/non-CO2-forcing decomposition is applied. As a result, the methodologies discussed in this paper can help provide precipitation fields to be utilized in other models (including integrated assessment models or impacts assessment models) for a wide variety of scenarios of future climate change.
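The two pattern-scaling methods compared in this record (per-cell regression against global mean temperature, and an epoch difference normalized by the temperature change) can be sketched in a few lines. The toy "model output" below, the grid size, the epoch windows, and the noise level are all invented for illustration; they are not the CMIP5 data the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model output": annual precipitation anomaly at 4 grid cells over
# 100 years, driven by global mean warming dT plus noise (all synthetic).
years = 100
dT = np.linspace(0.0, 4.0, years)                 # global mean warming (K)
true_pattern = np.array([0.5, -0.2, 1.0, 0.1])    # mm/day per K, per cell
precip = np.outer(dT, true_pattern) + 0.05 * rng.standard_normal((years, 4))

# Regression method: per-cell least-squares slope of local change vs. dT.
reg_pattern = np.array([np.polyfit(dT, precip[:, k], 1)[0] for k in range(4)])

# Epoch-difference method: (late-epoch mean - early-epoch mean) / dT change.
early, late = slice(0, 20), slice(80, 100)
epoch_pattern = (precip[late].mean(0) - precip[early].mean(0)) / (
    dT[late].mean() - dT[early].mean())

print(reg_pattern.round(2), epoch_pattern.round(2))  # both close to true_pattern
```

Either pattern, multiplied by a projected global mean temperature change, reconstructs a spatial precipitation field; the abstract's point is that this breaks down when the forcing mix (CO2 vs. non-CO2) differs between scenarios.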

  18. Absolute Position Encoders With Vertical Image Binning

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2005-01-01

    Improved optoelectronic pattern-recognition encoders that measure rotary and linear 1-dimensional positions at conversion rates (numbers of readings per unit time) exceeding 20 kHz have been invented. Heretofore, optoelectronic pattern-recognition absolute-position encoders have been limited to conversion rates <15 Hz -- too low for emerging industrial applications in which conversion rates ranging from 1 kHz to as much as 100 kHz are required. The high conversion rates of the improved encoders are made possible, in part, by use of vertically compressible or binnable (as described below) scale patterns in combination with modified readout sequences of the image sensors [charge-coupled devices (CCDs)] used to read the scale patterns. The modified readout sequences and the processing of the images thus read out are amenable to implementation by use of modern, high-speed, ultra-compact microprocessors and digital signal processors or field-programmable gate arrays. This combination of improvements makes it possible to greatly increase conversion rates through substantial reductions in all three components of conversion time: exposure time, image-readout time, and image-processing time.

  19. Investigation of Absolute and Relative Scaling Conceptions of Students in Introductory College Chemistry Courses

    ERIC Educational Resources Information Center

    Gerlach, Karrie; Trate, Jaclyn; Blecking, Anja; Geissinger, Peter; Murphy, Kristen

    2014-01-01

    Scale as a theme in science instruction is not a new idea. As early as the mid-1980s, scale was identified as an important component of a student's overall science literacy. However, the study of scale and the scale literacy of students in varying levels of education have received less attention than other science-literacy components. Foremost…

  20. A Bold Goal: More Healthy Days Through Improved Community Health.

    PubMed

    Cordier, Tristan; Song, Yongjia; Cambon, Jesse; Haugh, Gil S; Steffen, Mark; Hardy, Patty; Staehly, Marnie; Hagan, Angela; Gopal, Vipin; Tye, Pattie Dale; Renda, Andrew

    2017-11-10

    Humana, a large health care company, has set a goal of 20% improvement in health in the communities it serves by 2020. The metric chosen for the Bold Goal initiative was the HRQOL-4 version of the Centers for Disease Control and Prevention (CDC) Healthy Days survey. This paper presents the methods for measuring progress, reports results for the first year of tracking, and describes Humana's community-based interventions. Across 7 specially designated "Bold Goal" communities, mean unhealthy days declined from 10.98 in 2015 to 10.64 in 2016, which represented a 3.1% relative, or 0.34 absolute, decline. This compares with a 0.17 absolute unhealthy days decline in Humana's national population overall. The paper also describes how additional work identifying associations between social determinants of health (SDOH) and Healthy Days is influencing Humana's strategy. Lastly, a strategy of community engagement is illustrated through 2 case examples: San Antonio and Knoxville. In the San Antonio area, the community in which Humana has been involved the longest, unhealthy days dropped by 9.0% (-0.95 absolute) from a mean 10.52 to 9.57 unhealthy days. In Knoxville, one of the newer areas of engagement, mean unhealthy days declined by 4.8% (-0.61 absolute), representing declines in both physically and mentally unhealthy days. Overall, results are encouraging, and Humana expects declines to accelerate over time as initiatives are launched and scaled in Bold Goal communities.

  1. A Bold Goal: More Healthy Days Through Improved Community Health

    PubMed Central

    Cordier, Tristan; Song, Yongjia; Cambon, Jesse; Haugh, Gil S.; Steffen, Mark; Hardy, Patty; Staehly, Marnie; Hagan, Angela; Gopal, Vipin; Tye, Pattie Dale

    2018-01-01

    Humana, a large health care company, has set a goal of 20% improvement in health in the communities it serves by 2020. The metric chosen for the Bold Goal initiative was the HRQOL-4 version of the Centers for Disease Control and Prevention (CDC) Healthy Days survey. This paper presents the methods for measuring progress, reports results for the first year of tracking, and describes Humana's community-based interventions. Across 7 specially designated “Bold Goal” communities, mean unhealthy days declined from 10.98 in 2015 to 10.64 in 2016, which represented a 3.1% relative, or 0.34 absolute, decline. This compares with a 0.17 absolute unhealthy days decline in Humana's national population overall. The paper also describes how additional work identifying associations between social determinants of health (SDOH) and Healthy Days is influencing Humana's strategy. Lastly, a strategy of community engagement is illustrated through 2 case examples: San Antonio and Knoxville. In the San Antonio area, the community in which Humana has been involved the longest, unhealthy days dropped by 9.0% (−0.95 absolute) from a mean 10.52 to 9.57 unhealthy days. In Knoxville, one of the newer areas of engagement, mean unhealthy days declined by 4.8% (−0.61 absolute), representing declines in both physically and mentally unhealthy days. Overall, results are encouraging, and Humana expects declines to accelerate over time as initiatives are launched and scaled in Bold Goal communities. PMID:29125796
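The absolute and relative declines quoted in this record follow from simple arithmetic on the before/after means; a minimal check (the function name is mine):

```python
def decline(before, after):
    """Absolute decline and relative (%) decline between two mean rates."""
    absolute = before - after
    relative = 100.0 * absolute / before
    return absolute, relative

# Figures quoted in the abstract:
print(decline(10.98, 10.64))  # ~ (0.34 absolute, 3.1% relative), all communities
print(decline(10.52, 9.57))   # ~ (0.95 absolute, 9.0% relative), San Antonio
```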

  2. The trading time risks of stock investment in stock price drop

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Tang, Nian-Sheng; Mei, Dong-Cheng; Li, Yun-Xian; Zhang, Wan

    2016-11-01

    This article investigates the trading time risk (TTR) of stock investment in the case of stock price drops for Dow Jones Industrial Average (^DJI) and Hushen300 (CSI300) data, respectively. The escape time of the stock price from the maximum to the minimum within a data window length (DWL) is employed to measure the absolute TTR; the ratio of the escape time to the data window length is defined as the relative TTR. Empirical probability density functions (PDFs) of the absolute and relative TTRs for the ^DJI and CSI300 data evidence that (i) as the DWL increases, the absolute TTR increases while the relative TTR decreases; (ii) the stability of the absolute TTR is monotonic, whereas that of the relative TTR is not; (iii) the PDF of the ratio has a single peak for shorter trading days and two peaks for longer trading days; (iv) the trading days play an opposite role in the absolute (or relative) TTR and its stability between the ^DJI and CSI300 data.
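The escape-time measure described above can be sketched as follows. The windowing convention (non-overlapping windows, minimum taken after the window's maximum) is my reading of the abstract, not necessarily the paper's exact procedure, and the price series is invented:

```python
import numpy as np

def trading_time_risk(prices, dwl):
    """For each non-overlapping window of length `dwl`, measure the 'escape
    time' from the window's maximum price to the subsequent minimum (absolute
    TTR) and its ratio to the window length (relative TTR)."""
    abs_ttr, rel_ttr = [], []
    for start in range(0, len(prices) - dwl + 1, dwl):
        w = np.asarray(prices[start:start + dwl])
        i_max = int(np.argmax(w))
        # Minimum occurring at or after the maximum (one possible convention).
        i_min = i_max + int(np.argmin(w[i_max:]))
        abs_ttr.append(i_min - i_max)
        rel_ttr.append((i_min - i_max) / dwl)
    return abs_ttr, rel_ttr

prices = [3, 5, 9, 7, 6, 4, 2, 3, 4, 5]   # toy series: peak at 9, trough at 2
abs_t, rel_t = trading_time_risk(prices, 10)
print(abs_t, rel_t)  # [4] [0.4]
```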

  3. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts’ law

    PubMed Central

    Willett, Francis R.; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Pandarinath, Chethan; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Hochberg, Leigh R.; Kirsch, Robert F.; Ajiboye, A. Bolu

    2017-01-01

    Objective Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts’ law: MT = a + b log2(D/R) (where MT is movement time, D is target distance, R is target radius, and a, b are parameters). Fitts’ law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Approach Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts’ law. Main Results We found that movement times were better described by the equation MT = a + bD + cR^-2, which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user’s motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder.
Significance The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts’ law-like relationship to iBCI movements may require nonlinear decoding strategies. PMID:28177925

  4. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts’ law

    NASA Astrophysics Data System (ADS)

    Willett, Francis R.; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Pandarinath, Chethan; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Hochberg, Leigh R.; Kirsch, Robert F.; Bolu Ajiboye, A.

    2017-04-01

    Objective. Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts’ law: MT = a + b log2(D/R) (where MT is movement time, D is target distance, R is target radius, and a, b are parameters). Fitts’ law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Approach. Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts’ law. Main results. We found that movement times were better described by the equation MT = a + bD + cR^-2, which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user’s motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. Significance.
The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts’ law-like relationship to iBCI movements may require non-linear decoding strategies.
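The contrast between the two movement-time models in this record can be made concrete in a few lines. Both formulas are quoted from the abstract; the parameter values a, b, c below are invented purely to illustrate the scale-invariance difference:

```python
import numpy as np

def fitts_mt(D, R, a=0.2, b=0.3):
    """Classic Fitts' law: MT = a + b*log2(D/R)."""
    return a + b * np.log2(D / R)

def ibci_mt(D, R, a=0.2, b=0.01, c=0.05):
    """Model the paper reports for iBCI movements: MT = a + b*D + c*R^-2."""
    return a + b * D + c / R**2

# Fitts' law depends only on the ratio D/R, so rescaling the whole task
# (doubling both D and R) leaves the predicted movement time unchanged.
print(fitts_mt(10, 1), fitts_mt(20, 2))   # identical

# The iBCI model is scale-sensitive: shrinking the task at fixed D/R makes
# movements slower, because the R^-2 term penalizes small targets directly.
print(ibci_mt(5, 0.5), ibci_mt(10, 1))    # first (smaller-scale) is larger
```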

  5. Efficient parallel algorithms for string editing and related problems

    NASA Technical Reports Server (NTRS)

    Apostolico, Alberto; Atallah, Mikhail J.; Larmore, Lawrence; Mcfaddin, H. S.

    1988-01-01

    The string editing problem for input strings x and y consists of transforming x into y by performing a series of weighted edit operations on x of overall minimum cost. An edit operation on x can be the deletion of a symbol from x, the insertion of a symbol in x, or the substitution of a symbol of x with another symbol. This problem has a well-known O(|x||y|)-time sequential solution (25). Efficient PRAM (parallel random-access machine) algorithms for the string editing problem are given. If m = min(|x|, |y|) and n = max(|x|, |y|), then the CREW bound is O(log m log n) time with O(mn/log m) processors. In all algorithms, space is O(mn).
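The O(|x||y|) sequential solution the abstract refers to is the standard dynamic program over prefix pairs; a minimal sketch with uniform unit weights (the cost parameters are illustrative, not the paper's weighting scheme):

```python
def edit_distance(x, y, del_cost=1.0, ins_cost=1.0, sub_cost=1.0):
    """Weighted string editing by dynamic programming in O(|x|*|y|) time.
    dp[i][j] = minimum cost of transforming x[:i] into y[:j]."""
    m, n = len(x), len(y)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * del_cost          # delete all of x[:i]
    for j in range(1, n + 1):
        dp[0][j] = j * ins_cost          # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if x[i-1] == y[j-1] else sub_cost
            dp[i][j] = min(dp[i-1][j] + del_cost,    # delete x[i-1]
                           dp[i][j-1] + ins_cost,    # insert y[j-1]
                           dp[i-1][j-1] + sub)       # match or substitute
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3.0 with unit costs
```

The PRAM algorithms in the record parallelize this same grid of subproblems; the sequential version above is only the baseline they improve on.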

  6. A Meta-Analysis of Growth Trends from Vertically Scaled Assessments

    ERIC Educational Resources Information Center

    Dadey, Nathan; Briggs, Derek C.

    2012-01-01

    A vertical scale, in principle, provides a common metric across tests with differing difficulties (e.g., spanning multiple grades) so that statements of "absolute" growth can be made. This paper compares 16 states' 2007-2008 effect size growth trends on vertically scaled reading and math assessments across grades 3 to 8. Two patterns…

  7. Hyperresonance Unifying Theory and the resulting Law

    NASA Astrophysics Data System (ADS)

    Omerbashich, Mensur

    2012-07-01

    Hyperresonance Unifying Theory (HUT) is herein conceived based on theoretical and experimental geophysics, as that absolute extension of both Multiverse and String Theories, in which all universes (the Hyperverse) - of non-prescribed energies and scales - mutually orbit as well as oscillate in tune. The motivation for this is to explain oddities of "attraction at a distance" and physical unit(s) attached to the Newtonian gravitational constant G. In order to make sure HUT holds absolutely, we operate over non-temporal, unitless quantities and quantities with derived units only. A HUT's harmonic geophysical localization (here for the Earth-Moon system; the Georesonator) is indeed achieved for mechanist and quantum scales, in the form of the Moon's Equation of Levitation (of Anti-gravity). HUT holds true for our Solar system the same as its localized equation holds down to the precision of terrestrial G-experiments, regardless of the scale: to 10^-11 and 10^-39 for mechanist and quantum scales, respectively. Due to its absolute accuracy (within NIST experimental limits), the derived equation is regarded a law. HUT can indeed be demonstrated for our entire Solar system in various albeit empirical ways. In summary, HUT shows: (i) how classical gravity can be expressed in terms of scale and the speed of light; (ii) the tuning-forks principle is universal; (iii) the body's fundamental oscillation note is not a random number as previously believed; (iv) earthquakes of about M6 and stronger arise mainly due to Earth's alignments longer than three days to two celestial objects in our Solar system, whereas M7+ earthquakes occur mostly during two simultaneous such alignments; etc. HUT indicates: (v) quantum physics is objectocentric, i.e. trivial in absolute terms so it cannot be generalized beyond classical mass-bodies; (vi) geophysics is largely due to the magnification of mass resonance; etc.
HUT can be extended to multiverse (10^17) and string scales (10^-67) too, providing a constraint to String Theory. HUT is the unifying theory as it demotes classical forces to states of stringdom. The String Theory's paradigm on vibrational rather than particlegenic reality has thus been confirmed.

  8. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate the lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based, digital radiographs depicting periapical areas were selected. Each image was compressed at 2, 4, 8, 16, 32, 48, and 64 compression ratios. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic values of images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was significant difference between mean absolute error of uncompressed and compressed images (P <.05). After converting the five-point scores to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R (2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of the digital radiographs for detection of periapical lesions.

  9. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contains two time intervals during the day where the variance of increments can be fit by power law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with linear variable diffusion coefficient as a lowest order approximation to the real dynamics of financial markets, and to test the effects of time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial markets' dynamics. Our proposed model also provides new insight into the modeling of financial markets dynamics in microscopic time scales.
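The "linear variable diffusion coefficient" idea can be illustrated with a minimal Euler-Maruyama simulation. The functional form D(x) = 1 + gamma*|x| and every parameter value below are my own assumptions for a sketch, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_variable_diffusion(n_steps=500, n_paths=2000, dt=1e-3, gamma=1.0):
    """Euler-Maruyama for dx = sqrt(D(x)) dW, with a diffusion coefficient
    D(x) = 1 + gamma*|x| that grows linearly with distance from the origin."""
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        diff_coeff = 1.0 + gamma * np.abs(x)
        x = x + np.sqrt(diff_coeff * dt) * rng.standard_normal(n_paths)
    return x

x = simulate_variable_diffusion()
z = (x - x.mean()) / x.std()
# Excess kurtosis of the ensemble; typically positive here, reflecting the
# fatter-than-Gaussian tails such state-dependent diffusion can produce.
print("excess kurtosis:", (z**4).mean() - 3.0)
```

The abstract's distinction between ensemble and time averages matters for exactly this kind of process: the increments are nonstationary, so only the ensemble statistics recover the underlying diffusion law.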

  10. Measured and modelled absolute gravity in Greenland

    NASA Astrophysics Data System (ADS)

    Nielsen, E.; Forsberg, R.; Strykowski, G.

    2012-12-01

    Present day changes in the ice volume in glaciated areas like Greenland will change the load on the Earth, and the lithosphere will respond elastically to this change. The Earth also responds to changes in the ice volume over a millennial time scale. This response is due to the viscous properties of the mantle and is known as Glacial Isostatic Adjustment (GIA). Both signals are present in GPS and absolute gravity (AG) measurements, and they will give an uncertainty in mass balance estimates calculated from these data types. It is possible to separate the two signals if both gravity and Global Positioning System (GPS) time series are available. DTU Space acquired an A10 absolute gravimeter in 2008. One purpose of this instrument is to establish AG time series in Greenland, and the first measurements were conducted in 2009. Since then, 18 different Greenland GPS Network (GNET) stations have been visited, and six of these have been visited more than once. The gravity signal consists of three components: the elastic signal, the viscous signal, and the direct attraction from the ice masses. All of these signals can be modelled using various techniques. The viscous signal is modelled by solving the Sea Level Equation with an appropriate ice history and Earth model; the free code SELEN is used for this. The elastic signal is modelled as a convolution of the elastic Green's function for gravity and a model of present day ice mass changes. The direct attraction is the same as the Newtonian attraction and is calculated as such. Here we will present the preliminary results of the AG measurements in Greenland. We will also present modelled estimates of the direct attraction, the elastic and the viscous signals.

  11. Terrestrial gravity instrumentation in the 20th Century: A brief review

    NASA Technical Reports Server (NTRS)

    Valliant, H. D.

    1989-01-01

    At the turn of the century, only pendulum apparatuses and torsion balances were available for general exploration work. Both of these early techniques were cumbersome and time-consuming. It was no wonder that the development of the gravity meter was welcomed with a universal sigh of relief. By 1935 potential field measurements with gravity meters supplanted gradient measurements with torsion balances. Potential field measurements are generally characterized by three types: absolute - measurements are made in fundamental units, traceable to national standards of length and time at each observation site; relative with absolute scale - differences in gravity are measured in fundamental units traceable to national standards of length and time; and relative - differences in gravity are measured with arbitrary scale. Improvements in the design of gravity meters since their introduction have led to a significant reduction in size and greatly increased precision. As the precision increased, applications expanded to include the measurement of crustal motion, the search for non-Newtonian forces, archeology, and civil engineering. Apart from enhancements to the astatic gravity meter, few developments in hardware were achieved. One of these was the vibrating string gravity meter, which was developed in the 1950s and was employed briefly for marine and borehole applications. Another is the cryogenic gravity meter, which utilizes the stability of superconducting current to achieve a relative instrument with extremely low drift suitable for tidal and secular gravity measurements. An advance in performing measurements from a moving platform was achieved with the development of the straight-line gravity meter. The latter part of the century also saw the rebirth of gradient measurements, which offers advantages for observations from a moving platform. Definitive testing of the Bell gradiometer was recently reported.

  12. Overcoming Stagnation in the Levels and Distribution of Child Mortality: The Case of the Philippines.

    PubMed

    Bermejo, Raoul; Firth, Sonja; Hodge, Andrew; Jimenez-Soto, Eliana; Zeck, Willibald

    2015-01-01

    Health-related within-country inequalities continue to be a matter of great interest and concern to both policy makers and researchers. This study aims to assess the level and the distribution of child mortality outcomes in the Philippines across geographical and socioeconomic indicators. Data on 159,130 children ever born were analysed from five waves of the Philippine Demographic and Health Survey. Direct estimation was used to construct under-five and neonatal mortality rates for the period 1980-2013. Rate differences and ratios, and where possible, slope and relative indices of inequality were calculated to measure disparities on absolute and relative scales. Stratification was undertaken by levels of rural/urban location, island groups and household wealth. National under-five and neonatal mortality rates have shown considerable albeit differential reductions since 1980. Recently released data suggests that neonatal mortality has declined following a period of stagnation. Declines in under-five mortality have been accompanied by decreases in wealth and geography-related absolute inequalities. However, relative inequalities for the same markers have remained stable over time. For neonates, mixed evidence suggests that absolute and relative inequalities have remained stable or may have risen. In addition to continued reductions in under-five mortality, new data suggests that the Philippines have achieved success in addressing the commonly observed stagnated trend in neonatal mortality. This success has been driven by economic improvement since 2006 as well as efforts to implement a nationwide universal health care campaign. Yet, such patterns, nonetheless, accorded with persistent inequalities, particularly on a relative scale. A continued focus on addressing universal coverage, the influence of decentralisation and armed conflict, and issues along the continuum of care is advocated.

  13. Overcoming Stagnation in the Levels and Distribution of Child Mortality: The Case of the Philippines

    PubMed Central

    Bermejo, Raoul; Firth, Sonja; Hodge, Andrew; Jimenez-Soto, Eliana; Zeck, Willibald

    2015-01-01

    Background Health-related within-country inequalities continue to be a matter of great interest and concern to both policy makers and researchers. This study aims to assess the level and the distribution of child mortality outcomes in the Philippines across geographical and socioeconomic indicators. Methodology Data on 159,130 children ever born were analysed from five waves of the Philippine Demographic and Health Survey. Direct estimation was used to construct under-five and neonatal mortality rates for the period 1980–2013. Rate differences and ratios, and where possible, slope and relative indices of inequality were calculated to measure disparities on absolute and relative scales. Stratification was undertaken by levels of rural/urban location, island groups and household wealth. Findings National under-five and neonatal mortality rates have shown considerable albeit differential reductions since 1980. Recently released data suggest that neonatal mortality has declined following a period of stagnation. Declines in under-five mortality have been accompanied by decreases in wealth- and geography-related absolute inequalities. However, relative inequalities for the same markers have remained stable over time. For neonates, mixed evidence suggests that absolute and relative inequalities have remained stable or may have risen. Conclusion In addition to continued reductions in under-five mortality, new data suggest that the Philippines has achieved success in addressing the commonly observed stagnant trend in neonatal mortality. This success has been driven by economic improvement since 2006 as well as efforts to implement a nationwide universal health care campaign. Yet these gains have nonetheless been accompanied by persistent inequalities, particularly on a relative scale. A continued focus on addressing universal coverage, the influence of decentralisation and armed conflict, and issues along the continuum of care is advocated. PMID:26431409
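The inequality measures named in the abstract (rate differences, rate ratios, and the slope index of inequality) can be sketched in a few lines; the quintile mortality rates below are invented for illustration and are not survey estimates:

```python
# Illustrative under-five mortality rates per 1000 live births by wealth
# quintile (poorest to richest); the numbers are invented, not survey values.
quintiles = [60.0, 48.0, 40.0, 30.0, 22.0]
shares = [0.2] * 5                               # population share per quintile

rate_difference = quintiles[0] - quintiles[-1]   # absolute gap
rate_ratio = quintiles[0] / quintiles[-1]        # relative gap

# Slope index of inequality: regress rates on cumulative-rank midpoints;
# with poorest-first ranking, a negative slope means the poor fare worse.
mids, cum = [], 0.0
for s in shares:
    mids.append(cum + s / 2)
    cum += s
xbar = sum(mids) / len(mids)
ybar = sum(quintiles) / len(quintiles)
sii = sum((x - xbar) * (y - ybar) for x, y in zip(mids, quintiles)) / \
      sum((x - xbar) ** 2 for x in mids)
print(rate_difference, rate_ratio, sii)
```

With quintiles ranked poorest to richest, a negative slope index signals higher mortality among the poor, while the rate ratio captures the same gap on a relative scale.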

  14. Chaotic dynamics of Comet 1P/Halley: Lyapunov exponent and survival time expectancy

    NASA Astrophysics Data System (ADS)

    Muñoz-Gutiérrez, M. A.; Reyes-Ruiz, M.; Pichardo, B.

    2015-03-01

    The orbital elements of Comet Halley are known to a very high precision, suggesting that the calculation of its future dynamical evolution is straightforward. In this paper we seek to characterize the chaotic nature of the present-day orbit of Comet Halley and to quantify the time-scale over which its motion can be predicted confidently. In addition, we attempt to determine the time-scale over which its present-day orbit will remain stable. Numerical simulations of the dynamics of test particles in orbits similar to that of Comet Halley are carried out with the MERCURY 6.2 code. On the basis of these we construct survival time maps to assess the absolute stability of Halley's orbit, frequency analysis maps to study the variability of the orbit, and we calculate the Lyapunov exponent for the orbit for variations in initial conditions at the level of the present-day uncertainties in our knowledge of its orbital parameters. On the basis of our calculations of the Lyapunov exponent for Comet Halley, the chaotic nature of its motion is demonstrated. The e-folding time-scale for the divergence of initially very similar orbits is approximately 70 yr. The sensitivity of the dynamics to initial conditions is also evident in the self-similar character of the survival time and frequency analysis maps in the vicinity of Halley's orbit, which indicates that, on average, it is unstable on a time-scale of hundreds of thousands of years. The chaotic nature of Halley's present-day orbit implies that its motion cannot be predicted confidently, even at the level of present-day observational uncertainty, beyond a time-scale of approximately 100 yr. Furthermore, we also find that the ejection of Halley from the Solar system or its collision with another body could occur on a time-scale as short as 10 000 yr.
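The e-folding time the authors report comes from fitting the exponential divergence of initially nearby orbits, d(t) ≈ d₀ exp(λt), so that the e-folding time is 1/λ. A minimal sketch of that fit, using a chaotic logistic map as a stand-in for the N-body integration (the MERCURY runs themselves are far beyond a few lines):

```python
import math

def fit_slope(ts, logs):
    # Least-squares slope of log(separation) vs time: the Lyapunov
    # exponent, whose inverse is the e-folding time of divergence.
    n = len(ts)
    tbar = sum(ts) / n
    lbar = sum(logs) / n
    return sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs)) / \
           sum((t - tbar) ** 2 for t in ts)

# Toy stand-in for the orbital integration: two logistic-map orbits
# started 1e-12 apart (this map is chaotic, with exponent ln 2).
r, x, y = 4.0, 0.3, 0.3 + 1e-12
ts, logs = [], []
for t in range(25):          # stop well before the separation saturates
    x, y = r * x * (1 - x), r * y * (1 - y)
    ts.append(t + 1)
    logs.append(math.log(abs(x - y)))

lam = fit_slope(ts, logs)
print(lam, 1.0 / lam)        # exponent and e-folding time, in iterations
```

The same log-separation fit, applied to pairs of integrated cometary orbits with time in years, yields the ~70 yr e-folding time quoted above.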

  15. Absolute shielding scales for Al, Ga, and In and revised nuclear magnetic dipole moments of ²⁷Al, ⁶⁹Ga, ⁷¹Ga, ¹¹³In, and ¹¹⁵In nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antušek, A., E-mail: andrej.antusek@stuba.sk; Holka, F., E-mail: filip.holka@stuba.sk

    2015-08-21

    We present coupled cluster calculations of NMR shielding constants of aluminum, gallium, and indium in water-ion clusters. In addition, relativistic and dynamical corrections and the influence of the second solvation shell are evaluated. The final NMR shielding constants define new absolute shielding scales, 600.0 ± 4.1 ppm, 2044.4 ± 31.4 ppm, and 4507.7 ± 63.7 ppm for aluminum, gallium, and indium, respectively. The nuclear magnetic dipole moments for ²⁷Al, ⁶⁹Ga, ⁷¹Ga, ¹¹³In, and ¹¹⁵In isotopes are corrected by combining the computed shielding constants with experimental NMR frequencies. The absolute magnitude of the correction increases along the series and for indium isotopes it reaches approximately −8.0 × 10⁻³ of the nuclear magneton.

  16. Separation of components from a scale mixture of Gaussian white noises

    NASA Astrophysics Data System (ADS)

    Vamoş, Călin; Crăciun, Maria

    2010-05-01

    The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events and the estimated white noise becomes almost Gaussian only as result of the uncorrelation condition.
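The separation idea can be illustrated with a toy series: modulate Gaussian white noise by a slowly varying volatility, estimate the volatility with a moving average of |Xₜ| (an ad hoc estimator for illustration, not the authors' uncorrelation-based method), and check that the de-volatilized series has nearly uncorrelated absolute values:

```python
import math, random, statistics

random.seed(1)

# Toy series: Gaussian white noise Z_t modulated by a slowly varying volatility V_t
n = 4000
vol = [1.0 + 0.5 * math.sin(2 * math.pi * t / 500) for t in range(n)]
x = [vol[t] * random.gauss(0.0, 1.0) for t in range(n)]
abs_x = [abs(v) for v in x]

def lag1_autocorr(series):
    m = statistics.fmean(series)
    num = sum((series[t] - m) * (series[t + 1] - m) for t in range(len(series) - 1))
    den = sum((s - m) ** 2 for s in series)
    return num / den

# Estimate the volatility as a centred moving average of |X_t| (window ad hoc),
# then divide it out and compare lag-1 autocorrelations of the absolute values.
half = 50
vol_hat = [statistics.fmean(abs_x[max(0, t - half):t + half + 1]) for t in range(n)]
abs_z = [abs_x[t] / vol_hat[t] for t in range(n)]

print(lag1_autocorr(abs_x), lag1_autocorr(abs_z))
```

The raw series shows clearly correlated absolute values (the volatility signature), while the estimated white noise does not, which is the condition the paper imposes.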

  17. Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections

    NASA Astrophysics Data System (ADS)

    Steigenberger, P.; Schmid, R.; Rothacher, M.

    2004-12-01

    The current IGS processing strategy disregards satellite antenna phase center variations (pcvs) depending on the nadir angle and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna pcvs. Moreover, studies of several groups have shown that the offsets are not homogeneous within a satellite block. Manufacturer specifications seem to confirm this assumption. In order to get the best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific pcvs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets demonstrates amplitudes on the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with regard to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the sun. The time series of the pcvs mainly point at the correlation with the global terrestrial scale. Solutions with relative and absolute phase center corrections, and with block- and satellite-specific satellite antenna corrections, demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the geocenter position or the tropospheric delays. Thus, deeper insight into the so-called `Bermuda triangle' of several highly correlated parameters is given.

  18. Ocean Basin Evolution and Global-Scale Plate Reorganization Events Since Pangea Breakup

    NASA Astrophysics Data System (ADS)

    Müller, R. Dietmar; Seton, Maria; Zahirovic, Sabin; Williams, Simon E.; Matthews, Kara J.; Wright, Nicky M.; Shephard, Grace E.; Maloney, Kayla T.; Barnett-Moore, Nicholas; Hosseinpour, Maral; Bower, Dan J.; Cannon, John

    2016-06-01

    We present a revised global plate motion model with continuously closing plate boundaries ranging from the Triassic at 230 Ma to the present day, assess differences among alternative absolute plate motion models, and review global tectonic events. Relatively high mean absolute plate motion rates of approximately 9-10 cm yr-1 between 140 and 120 Ma may be related to transient plate motion accelerations driven by the successive emplacement of a sequence of large igneous provinces during that time. An event at ˜100 Ma is most clearly expressed in the Indian Ocean and may reflect the initiation of Andean-style subduction along southern continental Eurasia, whereas an acceleration at ˜80 Ma of mean rates from 6 to 8 cm yr-1 reflects the initial northward acceleration of India and simultaneous speedups of plates in the Pacific. An event at ˜50 Ma expressed in relative, and some absolute, plate motion changes around the globe and in a reduction of global mean plate speeds from about 6 to 4-5 cm yr-1 indicates that an increase in collisional forces (such as the India-Eurasia collision) and ridge subduction events in the Pacific (such as the Izanagi-Pacific Ridge) play a significant role in modulating plate velocities.

  19. Gender differences in adult foot shape: implications for shoe design.

    PubMed

    Wunderlich, R E; Cavanagh, P R

    2001-04-01

    To analyze gender differences in foot shape in a large sample of young individuals. Univariate t-tests and multivariate discriminant analyses were used to assess 1) significant differences between men and women for each foot and leg dimension, standardized to foot length, 2) the reliability of classification into gender classes using the absolute and standardized variable sets, and 3) the relative importance of each variable to the discrimination between men and women. Men have longer and broader feet than women for a given stature. After normalization of the measurements by foot length, men and women were found to differ significantly in two calf, five ankle, and four foot shape variables. Classification by gender using absolute values was correct at least 93% of the time. Using the variables standardized to foot length, gender was correctly classified 85% of the time. This study demonstrates that female feet and legs are not simply scaled-down versions of male feet but rather differ in a number of shape characteristics, particularly at the arch, the lateral side of the foot, the first toe, and the ball of the foot. These differences should be taken into account in the design and manufacture of women's sport shoes.

  20. Dynamics of an optically confined nanoparticle diffusing normal to a surface.

    PubMed

    Schein, Perry; O'Dell, Dakota; Erickson, David

    2016-06-01

    Here we measure the hindered diffusion of an optically confined nanoparticle in the direction normal to a surface, and we use this to determine the particle-surface interaction profile in terms of the absolute height. These studies are performed using the evanescent field of an optically excited single-mode silicon nitride waveguide, where the particle is confined in a height-dependent potential energy well generated from the balance of optical gradient and surface forces. Using a high-speed CMOS camera, we demonstrate the ability to capture the short time-scale diffusion-dominated motion for 800-nm-diam polystyrene particles, with measurement times of only a few seconds per particle. Using established theory, we show how this information can be used to estimate the equilibrium separation of the particle from the surface. As this measurement can be made simultaneously with equilibrium statistical mechanical measurements of the particle-surface interaction energy landscape, we demonstrate the ability to determine these in terms of the absolute rather than relative separation height. This enables the comparison of potential energy landscapes of particle-surface interactions measured under different experimental conditions, enhancing the utility of this technique.
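Recovering an interaction energy landscape from equilibrium position statistics is typically done by Boltzmann inversion, U(h) = −kT ln p(h). A sketch with synthetic heights drawn from a hypothetical harmonic well (not data from the waveguide setup described above):

```python
import math, random

random.seed(3)
kT = 1.0          # work in units of kT

# Hypothetical equilibrium heights drawn from a harmonic particle-surface
# well centred at h = 0.5 (arbitrary units); purely synthetic data.
samples = [random.gauss(0.5, 0.1) for _ in range(200000)]

# Histogram the heights to estimate p(h)
nbins, lo, hi = 40, 0.0, 1.0
counts = [0] * nbins
for h in samples:
    if lo <= h < hi:
        counts[int((h - lo) / (hi - lo) * nbins)] += 1

# Boltzmann inversion: U(h) = -kT ln p(h), referenced to the well minimum
cmax = max(counts)
energy = [(-kT * math.log(c / cmax) if c else None) for c in counts]
imin = counts.index(cmax)
print(imin, energy[24])   # bin of the minimum; energy ~1 sigma above it
```

For a Gaussian height distribution the recovered landscape is parabolic, rising by about 0.5 kT one standard deviation from the minimum, which is the sanity check used here.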

  1. Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.

    PubMed

    Mitchell, Ross N; Kilian, Taylor M; Evans, David A D

    2012-02-08

    Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' model). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities.
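The quoted 88° and 87° arc distances are great-circle separations between supercontinent centres on a sphere. A sketch of that computation via the spherical law of cosines (the coordinates passed in are placeholders, not the paper's reconstructed centres):

```python
import math

def arc_deg(lat1, lon1, lat2, lon2):
    # Great-circle arc distance in degrees via the spherical law of cosines
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2)
         + math.cos(p1) * math.cos(p2) * math.cos(dl))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Orthoversion predicts successive supercontinent centres ~90 degrees apart;
# the coordinates below are placeholders for illustration.
print(arc_deg(0.0, 0.0, 0.0, 88.0))
```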

  2. Improved absolute calibration of LOPES measurements and its impact on the comparison with REAS 3.11 and CoREAS simulations

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Nehls, S.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.

    2016-02-01

    LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of 2.6 ± 0.2, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published by Apel et al. (2013) : With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data.

  3. Reliability of Lactation Assessment Tools Applied to Overweight and Obese Women.

    PubMed

    Chapman, Donna J; Doughty, Katherine; Mullin, Elizabeth M; Pérez-Escamilla, Rafael

    2016-05-01

    The interrater reliability of lactation assessment tools has not been evaluated in overweight/obese women. This study aimed to compare the interrater reliability of 4 lactation assessment tools in this population. A convenience sample of 45 women (body mass index > 27.0) was videotaped while breastfeeding (twice daily on days 2, 4, and 7 postpartum). Three International Board Certified Lactation Consultants independently rated each videotaped session using 4 tools (Infant Breastfeeding Assessment Tool [IBFAT], modified LATCH [mLATCH], modified Via Christi [mVC], and Riordan's Tool [RT]). For each day and tool, we evaluated interrater reliability with 1-way repeated-measures analyses of variance, intraclass correlation coefficients (ICCs), and percentage absolute agreement between raters. Analyses of variance showed significant differences between raters' scores on day 2 (all scales) and day 7 (RT). Intraclass correlation coefficient values reflected good (mLATCH) to excellent reliability (IBFAT, mVC, and RT) on days 2 and 7. All day 4 ICCs reflected good reliability. The ICC for mLATCH was significantly lower than all others on day 2 and was significantly lower than IBFAT (day 7). Percentage absolute interrater agreement for scale components ranged from 31% (day 2: observable swallowing, RT) to 92% (day 7: IBFAT, fixing; and mVC, latch time). Swallowing scores on all scales had the lowest levels of interrater agreement (31%-64%). We demonstrated differences in the interrater reliability of 4 lactation assessment tools when applied to overweight/obese women, with the lowest values observed on day 4. Swallowing assessment was particularly unreliable. Researchers and clinicians using these scales should be aware of the differences in their psychometric behavior. © The Author(s) 2015.
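The intraclass correlation used in studies like this can be sketched as a two-way ICC(2,1) in the Shrout-Fleiss sense; the 5 × 3 rating matrix below is invented, not study data, and the study's own ANOVA model may differ in detail:

```python
# Two-way random-effects single-rater ICC, ICC(2,1) (Shrout & Fleiss form);
# the rating matrix is invented for illustration (5 sessions x 3 raters).
ratings = [
    [8, 8, 9],
    [5, 6, 5],
    [3, 3, 4],
    [7, 7, 8],
    [4, 5, 4],
]
n, k = len(ratings), len(ratings[0])
grand = sum(sum(row) for row in ratings) / (n * k)
row_means = [sum(row) / k for row in ratings]
col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

ss_total = sum((ratings[i][j] - grand) ** 2 for i in range(n) for j in range(k))
ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
ss_err = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)
msc = ss_cols / (k - 1)
mse = ss_err / ((n - 1) * (k - 1))
icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(round(icc21, 3))
```

Values near 1 indicate raters agree far more than chance given between-subject spread; values around 0.75-0.9 are conventionally read as "good", above 0.9 as "excellent".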

  4. The occultation of 28 Sgr by Saturn - Saturn pole position and astrometry

    NASA Technical Reports Server (NTRS)

    Hubbard, W. B.; Porco, C. C.; Hunten, D. M.; Rieke, G. H.; Rieke, M. J.; Mccarthy, D. W.; Haemmerle, V.; Clark, R.; Turtle, E. P.; Haller, J.

    1993-01-01

    Saturn's ring plane-defined pole position is presently derived from the geometry of Saturn's July 3, 1989 occultation of 28 Sgr, as indicated by the timings of 12 circular edges in the Saturn C-ring as well as the edges of the Encke gap and the outer edge of the Keeler gap. The edge timings are used to solve for the position angle and opening angle of the apparent ring ellipses; the internal consistency of the data set and the redundancy of stations indicates an absolute error of the order of 5 km. The pole position thus obtained is consistent with the pole and ring radius scale derived from Voyager occultation observations.

  5. Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.

  6. Validation of the 17-item Hamilton Depression Rating Scale definition of response for adults with major depressive disorder using equipercentile linking to Clinical Global Impression scale ratings: analysis of Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study (PGRN-AMPS) data.

    PubMed

    Bobo, William V; Angleró, Gabriela C; Jenkins, Gregory; Hall-Flavin, Daniel K; Weinshilboum, Richard; Biernacka, Joanna M

    2016-05-01

    The study aimed to define thresholds of clinically significant change in 17-item Hamilton Depression Rating Scale (HDRS-17) scores using the Clinical Global Impression-Improvement (CGI-I) Scale as a gold standard. We conducted a secondary analysis of individual patient data from the Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study, an 8-week, single-arm clinical trial of citalopram or escitalopram treatment of adults with major depression. We used equipercentile linking to identify levels of absolute and percent change in HDRS-17 scores that equated with scores on the CGI-I at 4 and 8 weeks. Additional analyses equated changes in the HDRS-7 and Bech-6 scale scores with CGI-I scores. A CGI-I score of 2 (much improved) corresponded to an absolute decrease (improvement) in HDRS-17 total score of 11 points and a percent decrease of 50-57%, from baseline values. Similar results were observed for percent change in HDRS-7 and Bech-6 scores. Larger absolute (but not percent) decreases in HDRS-17 scores equated with CGI-I scores of 2 in persons with higher baseline depression severity. Our results support the consensus definition of response based on HDRS-17 scores (>50% decrease from baseline). A similar definition of response may apply to the HDRS-7 and Bech-6. Copyright © 2016 John Wiley & Sons, Ltd.
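Equipercentile linking matches scores across instruments by percentile rank. A deliberately crude discrete sketch with invented cohorts (real linking, as in this study, works on smoothed score distributions; `equipercentile_link` and both data sets here are hypothetical):

```python
def equipercentile_link(x_scores, y_scores, x_value):
    # Map x_value to the y score at the same percentile rank (discrete version).
    xs, ys = sorted(x_scores), sorted(y_scores)
    rank = sum(1 for v in xs if v <= x_value) / len(xs)   # percentile rank
    idx = min(len(ys) - 1, max(0, round(rank * len(ys)) - 1))
    return ys[idx]

# Toy cohorts: percent HDRS-17 decrease and a hypothetical 0-10 improvement
# rating with the same orientation (higher = more improved); invented data.
hdrs_pct = list(range(1, 101))                 # 1..100 percent decrease
improve = [v / 10.0 for v in range(1, 101)]    # 0.1..10.0 rating
print(equipercentile_link(hdrs_pct, improve, 50))
```

A patient at the median of one distribution is mapped to the median of the other, which is how a ~50% HDRS-17 decrease can be equated to a fixed CGI-I anchor.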

  7. Toward global mapping of river discharge using satellite images and at-many-stations hydraulic geometry

    PubMed Central

    Gleason, Colin J.; Smith, Laurence C.

    2014-01-01

    Rivers provide critical water supply for many human societies and ecosystems, yet global knowledge of their flow rates is poor. We show that useful estimates of absolute river discharge (in cubic meters per second) may be derived solely from satellite images, with no ground-based or a priori information whatsoever. The approach works owing to discovery of a characteristic scaling law uniquely fundamental to natural rivers, here termed a river’s at-many-stations hydraulic geometry. A first demonstration using Landsat Thematic Mapper images over three rivers in the United States, Canada, and China yields absolute discharges agreeing to within 20–30% of traditional in situ gauging station measurements and good tracking of flow changes over time. Within such accuracies, the door appears open for quantifying river resources globally with repeat imaging, both retroactively and henceforth into the future, with strong implications for water resource management, food security, ecosystem studies, flood forecasting, and geopolitics. PMID:24639551

  8. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  9. Mathematics of quantitative kinetic PCR and the application of standard curves.

    PubMed

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of +/-6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
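The threshold method rests on a linear standard curve, Ct = slope · log10(Q) + intercept, from which amplification efficiency and unknown starting quantities follow. A sketch with an invented dilution series (the Ct values are illustrative, not the study's measurements):

```python
import math

# Hypothetical dilution series: known starting quantities (copies) and
# measured threshold cycles (Ct); values are illustrative only.
standards = [(1e7, 14.8), (1e6, 18.1), (1e5, 21.5), (1e4, 24.9), (1e3, 28.2)]

xs = [math.log10(q) for q, _ in standards]
ys = [ct for _, ct in standards]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Amplification efficiency: E = 10^(-1/slope) - 1 (1.0 means perfect doubling,
# corresponding to a slope of about -3.32 cycles per decade)
efficiency = 10 ** (-1.0 / slope) - 1.0

def quantify(ct):
    # Invert the standard curve: Ct = slope * log10(Q) + intercept
    return 10 ** ((ct - intercept) / slope)

print(round(efficiency, 3), quantify(20.0))
```

An unknown sample's Ct is simply read back through the fitted line to give its absolute starting quantity, which is the "standard curve" quantification the abstract refers to.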

  10. Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.

    PubMed

    Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra

    2014-04-01

    To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
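The two accuracy measures reported, mean absolute percentage error (MAPE) and mean absolute scaled error (MASE), are straightforward to compute; the visit counts below are invented, not hospital data:

```python
def mape(actual, forecast):
    # Mean absolute percentage error, in percent
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mase(actual, forecast, train):
    # Mean absolute scaled error: out-of-sample MAE scaled by the in-sample
    # MAE of the naive one-step forecast (values < 1 beat the naive method)
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive = sum(abs(train[t] - train[t - 1]) for t in range(1, len(train))) / (len(train) - 1)
    return mae / naive

# Invented monthly ED visit counts: 12 training months, 3 forecast months
train = [3100, 3050, 3200, 3150, 3300, 3250, 3400, 3350, 3300, 3250, 3200, 3150]
actual = [3180, 3220, 3160]
forecast = [3100, 3300, 3250]
print(mape(actual, forecast), mase(actual, forecast, train))
```

MASE below 1, as in the study's 0.53-0.68 range, means the model's errors are smaller on average than those of the naive "last value" forecast.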

  11. Toward global mapping of river discharge using satellite images and at-many-stations hydraulic geometry.

    PubMed

    Gleason, Colin J; Smith, Laurence C

    2014-04-01

    Rivers provide critical water supply for many human societies and ecosystems, yet global knowledge of their flow rates is poor. We show that useful estimates of absolute river discharge (in cubic meters per second) may be derived solely from satellite images, with no ground-based or a priori information whatsoever. The approach works owing to discovery of a characteristic scaling law uniquely fundamental to natural rivers, here termed a river's at-many-stations hydraulic geometry. A first demonstration using Landsat Thematic Mapper images over three rivers in the United States, Canada, and China yields absolute discharges agreeing to within 20-30% of traditional in situ gauging station measurements and good tracking of flow changes over time. Within such accuracies, the door appears open for quantifying river resources globally with repeat imaging, both retroactively and henceforth into the future, with strong implications for water resource management, food security, ecosystem studies, flood forecasting, and geopolitics.

  12. An absolute interval scale of order for point patterns

    PubMed Central

    Protonotarios, Emmanouil D.; Baum, Buzz; Johnston, Alan; Hunter, Ginger L.; Griffin, Lewis D.

    2014-01-01

    Human observers readily make judgements about the degree of order in planar arrangements of points (point patterns). Here, based on pairwise ranking of 20 point patterns by degree of order, we have been able to show that judgements of order are highly consistent across individuals and the dimension of order has an interval scale structure spanning roughly 10 just-noticeable differences (jnds) between disorder and order. We describe a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the size and shape of spaces between points. The algorithm is 70% more accurate than the best available measures. By anchoring the output of the algorithm so that Poisson point processes score on average 0, perfect lattices score 10 and unit steps correspond closely to jnds, we construct an absolute interval scale of order. We demonstrate its utility in biology by using this scale to quantify order during the development of the pattern of bristles on the dorsal thorax of the fruit fly. PMID:25079866
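The anchoring step (Poisson patterns score 0, perfect lattices score 10) can be illustrated with a crude order measure, the coefficient of variation of nearest-neighbour distances, standing in for the paper's more sophisticated geometric algorithm:

```python
import math, random

random.seed(7)

def nn_dist_cv(points):
    # Coefficient of variation of nearest-neighbour distances: a crude
    # stand-in for the paper's geometric order measure (illustration only).
    ds = []
    for i, p in enumerate(points):
        ds.append(min(math.dist(p, q) for j, q in enumerate(points) if j != i))
    m = sum(ds) / len(ds)
    var = sum((d - m) ** 2 for d in ds) / len(ds)
    return math.sqrt(var) / m

lattice = [(float(i), float(j)) for i in range(10) for j in range(10)]
poisson = [(10 * random.random(), 10 * random.random()) for _ in range(100)]

# Anchor the scale: this Poisson sample scores 0, the perfect lattice scores 10
raw_lat, raw_poi = nn_dist_cv(lattice), nn_dist_cv(poisson)

def order_score(points):
    return 10.0 * (nn_dist_cv(points) - raw_poi) / (raw_lat - raw_poi)

print(order_score(poisson), order_score(lattice))
```

Any pattern then lands on an interval scale between the two anchors; the paper additionally calibrates the unit step against human jnds, which this sketch does not attempt.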

  13. Nearshore Satellite Data as Relative Indicators of Intertidal Organism Physiological Stress

    NASA Astrophysics Data System (ADS)

    Matzelle, A.; Helmuth, B.; Lakshmi, V.

    2011-12-01

    The physiological performance of intertidal and shallow subtidal invertebrates and algae is significantly affected by water temperature, and so the ability to measure and model onshore water temperatures is critical for ecological and biogeographic studies. Because of the localized influences of processes such as upwelling, mixing, and surface heating from solar radiation, nearshore water temperatures can differ from those measured directly offshore by buoys and satellites. It remains an open question how large these differences are, and whether "large pixel" measurements can serve as an effective proxy for onshore processes, particularly when extrapolating from laboratory physiological studies to field conditions. We compared 9 years of nearshore (~10 km) MODIS (Terra and Aqua overpasses) SST data against in situ measurements of water temperature conducted at two intertidal sites in central Oregon: Boiler Bay and Strawberry Hill. We collapsed data into increasingly longer temporal averages to address the correlation and absolute differences between onshore and nearshore temperatures over daily, weekly and monthly timescales. Results indicate that nearshore SST is a reasonable proxy for onshore water temperature, and that the strength of the correlation increases with decreasing temporal resolution. Correlations between differences in maxima are highest, followed by averages and minima, and were lower at a site with regular upwelling. While average differences ranged from ~0.199-1.353°C, absolute differences across time scales were ~0.446-6.906°C, and were highest for cold temperatures. The results suggest that, at least at these two sites, SST can be used as a relative proxy for general trends only, especially over longer time scales.

  14. Computational Modeling of Semiconductor Dynamics at Femtosecond Time Scales

    NASA Technical Reports Server (NTRS)

    Agrawal, Govind P.; Goorjian, Peter M.

    1998-01-01

    The main objective of the Joint-Research Interchange NCC2-5149 was to develop computer codes for accurate simulation of femtosecond pulse propagation in semiconductor lasers and semiconductor amplifiers [1]. The code should take into account all relevant processes such as the interband and intraband carrier relaxation mechanisms and the many-body effects arising from the Coulomb interaction among charge carriers [2]. This objective was fully accomplished. We made use of an algorithm previously developed at NASA Ames [3]-[5]. The algorithm was tested on several problems of practical importance. One such problem was related to the amplification of femtosecond optical pulses in semiconductors. These results were presented at several international conferences over a period of three years. With the help of a postdoctoral fellow, we also investigated the origin of instabilities that can lead to the formation of femtosecond pulses in different kinds of lasers. We analyzed the occurrence of absolute instabilities in lasers that contain a dispersive host material with third-order nonlinearities. Starting from the Maxwell-Bloch equations, we derived general multimode equations to distinguish between convective and absolute instabilities. We found that both self-phase modulation and intensity-dependent absorption can dramatically affect the absolute stability of such lasers. In particular, the self-pulsing threshold (the so-called second laser threshold) can occur at a few times the first laser threshold even in good-cavity lasers for which no self-pulsing occurs in the absence of intensity-dependent absorption. These results were presented at an international conference and published in the form of two papers.

  15. PyForecastTools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models, the package provides a generic skill score and a percent-better measure. Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
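
    As a sketch of the binary-classification metrics named above (an independent illustration, not the PyForecastTools class interface), the 2x2 contingency-table quantities follow directly from the four cell counts; the counts used here are the classic Finley tornado-forecast example:

```python
# 2x2 contingency table: hits a, false alarms b, misses c, correct negatives d.
def contingency_metrics(a, b, c, d):
    n = a + b + c + d
    a_ref = (a + b) * (a + c) / n          # hits expected by chance
    return {
        "POD": a / (a + c),                # probability of detection
        "POFD": b / (b + d),               # probability of false detection
        "FAR": b / (a + b),                # false alarm ratio
        "TS": a / (a + b + c),             # threat score
        "ETS": (a - a_ref) / (a + b + c - a_ref),  # equitable threat score
        "BIAS": (a + b) / (a + c),         # frequency bias
    }

# Finley (1884) tornado forecast counts.
m = contingency_metrics(a=28, b=72, c=23, d=2680)
print({k: round(v, 3) for k, v in m.items()})
```

    The equitable threat score subtracts the hits expected from random forecasting, which is why it is much lower here than the raw threat score might suggest.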

  16. Neutrino footprint in large scale structure

    NASA Astrophysics Data System (ADS)

    Garay, Carlos Peña; Verde, Licia; Jimenez, Raul

    2017-03-01

    Recent constraints on the sum of neutrino masses inferred by analyzing cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. However, a detected lack of small-scale power in cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always leave a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or by modifications of the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass eigenstate mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.

  17. Simulations of VLBI observations of a geodetic satellite providing co-location in space

    NASA Astrophysics Data System (ADS)

    Anderson, James M.; Beyerle, Georg; Glaser, Susanne; Liu, Li; Männel, Benjamin; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald

    2018-02-01

    We performed Monte Carlo simulations of very-long-baseline interferometry (VLBI) observations of Earth-orbiting satellites incorporating co-located space-geodetic instruments in order to study how well the VLBI frame and the spacecraft frame can be tied using such measurements. We simulated spacecraft observations by VLBI, by time-of-flight (TOF) measurements using a time-encoded signal in the spacecraft transmission (similar in concept to precise point positioning), and by differential VLBI (D-VLBI) using angularly nearby quasar calibrators, in order to compare their relative performance. We used the proposed European Geodetic Reference Antenna in Space (E-GRASP) mission as an initial test case for our software. We found that the standard VLBI technique is limited, in part, by the present lack of knowledge of the absolute offset of VLBI time from Coordinated Universal Time at the level of microseconds. TOF measurements are better able to overcome this problem and provide frame ties with uncertainties in translation and scale nearly a factor of three smaller than those yielded by VLBI measurements. If the absolute time offset issue can be resolved by external means, the VLBI results can be significantly improved and can come close to providing 1 mm accuracy in the frame tie parameters. D-VLBI observations with optimum performance assumptions provide roughly a factor of two higher uncertainties for the E-GRASP orbit. We additionally simulated how station and spacecraft position offsets affect the frame tie performance.

  18. Forecast models for suicide: Time-series analysis with data from Italy.

    PubMed

    Preti, Antonio; Lentini, Gianluca

    2016-01-01

    The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study set out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101,499 male suicides and 39,681 female suicides that occurred in Italy from 1969 to 2003 were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of forecasting models on the monthly number of suicides. Four measures of accuracy were used: mean absolute error, root mean squared error, mean absolute percentage error, and mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards to a maximum around 1990 and a decrease thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecasting future trends of suicide with a margin of error around 10%. The finding is clearer in the male than in the female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem able to derive information on deviations from the mean when they occur as a zenith, but fail to reproduce them when they occur as a nadir. Preventative efforts should concentrate on the factors that influence the occurrence of increases above the main trend in both seasonal and cyclic patterns of suicides.
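
    The four accuracy measures named above can be sketched on a synthetic monthly series with the same 336-month train / 84-month test split. The data and the seasonal-naive benchmark below are illustrative assumptions, not the study's models:

```python
import numpy as np

# Synthetic monthly counts with an annual cycle (35 years).
rng = np.random.default_rng(1)
months = np.arange(420)
series = 100 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 4, 420)
train, test = series[:336], series[336:]

# Seasonal-naive forecast: repeat the last observed year across the test set.
forecast = np.tile(train[-12:], len(test) // 12)

err = test - forecast
mae = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
mape = 100 * np.mean(np.abs(err / test))
scale = np.mean(np.abs(train[12:] - train[:-12]))  # in-sample lag-12 naive MAE
mase = mae / scale
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  MAPE={mape:.1f}%  MASE={mase:.2f}")
```

    A MASE near 1 means the forecast is about as accurate as the in-sample seasonal-naive benchmark; values well below 1 indicate a forecast that beats that benchmark.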

  19. Sub-nanosecond resolution electric field measurements during ns pulse breakdown in ambient air

    NASA Astrophysics Data System (ADS)

    Simeni Simeni, Marien; Goldberg, Ben; Gulko, Ilya; Frederickson, Kraig; Adamovich, Igor V.

    2018-01-01

    Electric field during ns pulse discharge breakdown in ambient air has been measured by ps four-wave mixing, with temporal resolution of 0.2 ns. The measurements have been performed in a diffuse plasma generated in a dielectric barrier discharge, in plane-to-plane geometry. Absolute calibration of the electric field in the plasma is provided by the Laplacian field measured before breakdown. Sub-nanosecond time resolution is obtained by using a 150 ps duration laser pulse, as well as by monitoring the timing of individual laser shots relative to the voltage pulse, and post-processing four-wave mixing signal waveforms saved for each laser shot, placing them in the appropriate ‘time bins’. The experimental data are compared with the analytic solution for time-resolved electric field in the plasma during pulse breakdown, showing good agreement on ns time scale. Qualitative interpretation of the data illustrates the effects of charge separation, charge accumulation/neutralization on the dielectric surfaces, electron attachment, and secondary breakdown. Comparison of the present data with more advanced kinetic modeling is expected to provide additional quantitative insight into air plasma kinetics on ~ 0.1-100 ns scales.

  20. Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1999-01-01

    An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.

  1. Living beyond the edge: Higgs inflation and vacuum metastability

    DOE PAGES

    Bezrukov, Fedor; Rubio, Javier; Shaposhnikov, Mikhail

    2015-10-13

    The measurements of the Higgs mass and top Yukawa coupling indicate that we live in a very special universe, at the edge of the absolute stability of the electroweak vacuum. If fully stable, the Standard Model (SM) can be extended all the way up to the inflationary scale, and the Higgs field, nonminimally coupled to gravity with strength ξ, can be responsible for inflation. We show that the successful Higgs inflation scenario can also take place if the SM vacuum is not absolutely stable. This conclusion is based on two effects that were overlooked previously. The first one is associated with the effective renormalization of the SM couplings at the energy scale M P/ξ, where M P is the Planck scale. The second one is a symmetry restoration after inflation due to high-temperature effects that leads to the (temporary) disappearance of the vacuum at Planck values of the Higgs field.

  2. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    NASA Astrophysics Data System (ADS)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single-row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.
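
    The coarse/fine principle behind such a sensor can be illustrated generically: the coarse channel only needs to be accurate to well within half a fine period to select the correct period number. This is a sketch of the general technique, not the authors' signal processing:

```python
# The fine channel repeats every `pitch` and is ambiguous on its own; the
# coarse channel is unambiguous over the full range but less accurate.
def absolute_position(coarse, fine, pitch):
    """Use the coarse reading to pick the fine channel's period number."""
    k = round((coarse - fine) / pitch)     # integer period index
    return k * pitch + fine

# Example (hypothetical numbers): 1 mm pitch, true position 123.4567 mm,
# coarse reading off by ~0.29 mm, fine reading accurate within its period.
pos = absolute_position(coarse=123.75, fine=0.4567, pitch=1.0)
assert abs(pos - 123.4567) < 1e-9
print(pos)
```

    The combined result inherits the fine channel's resolution while the coarse channel supplies the absolute period count, which is why the two components can be designed to very different accuracy budgets.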

  3. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate that it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  4. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE PAGES

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-29

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate that it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  5. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    NASA Astrophysics Data System (ADS)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-01

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  6. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.

  7. Patients' preferences shed light on the murky world of guideline-based medicine.

    PubMed

    Penston, James

    2007-02-01

    Concordance (that is, shared decision-making between doctors and patients) is nowadays accepted as an integral part of good clinical practice. It is of particular importance in the case of treatments with only marginal benefits, such as those recommended in guidelines for the management of common, chronic diseases. However, the implementation of guideline-based medicine conflicts with that of concordance. Studies indicate that patients are not adequately informed about their treatment. Clinical guidelines for conditions such as cardiovascular disease are based on large-scale randomized trials, and the complex nature of the data limits effective communication, especially in an environment characterized by time constraints. But other factors may be more relevant, notably pressures to comply with guidelines and financial rewards for meeting targets: it is simply not in the interests of doctors to disclose accurate information. Studies show that patients are far from impressed by the small benefits derived from large-scale trials. Indeed, faced with absolute risk reductions, patients decline treatment promoted by guidelines. To participate in clinical decisions, patients require unbiased information concerning outcomes with and without treatment, and the absolute risk reduction; they should be told that most patients receiving long-term medication obtain no benefit despite being exposed to adverse drug reactions; furthermore, they should be made aware of the questionable validity of large-scale trials and that these studies may be influenced by those with a vested interest. Genuine concordance will inevitably lead to many patients rejecting the recommendations of guidelines and will encourage a more critical approach to clinical research and guideline-based medicine.

  8. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE PAGES

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe; ...

    2017-04-07

    In this study, the increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.

  9. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe

    In this study, the increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.

  10. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Investigating the Luminous Environment of SDSS Data Release 4 Mg II Absorption Line Systems

    NASA Astrophysics Data System (ADS)

    Caler, Michelle A.; Ravi, Sheth K.

    2018-01-01

    We investigate the luminous environment within a few hundred kiloparsecs of 3760 Mg II absorption line systems. These systems lie along 3760 lines of sight to Sloan Digital Sky Survey (SDSS) Data Release 4 QSOs, have redshifts in the range 0.37 ≤ z ≤ 0.82, and have rest equivalent widths greater than 0.18 Å. We use the SDSS Catalog Archive Server to identify galaxies projected within 3 arcminutes of the QSO’s position, and a background subtraction technique to estimate the absolute magnitude distribution and luminosity function of galaxies physically associated with these Mg II absorption line systems. The Mg II absorption system sample is split into two parts, with the split occurring at a rest equivalent width of 0.8 Å, and the resulting absolute magnitude distributions and luminosity functions are compared on scales ranging from 50 h-1 kpc to 880 h-1 kpc. We find that, on scales of 100 h-1 kpc and smaller, the two distributions differ: the absolute magnitude distribution of galaxies associated with systems of rest frame equivalent width ≥ 0.8 Å (2750 lines of sight) seems to be approximated by that of elliptical-Sa type galaxies, whereas the absolute magnitude distribution of galaxies associated with systems of rest frame equivalent width < 0.8 Å (1010 lines of sight) seems to be approximated by that of Sa-Sbc type galaxies. However, on scales greater than 200 h-1 kpc, both distributions are broadly consistent with that of elliptical-Sa type galaxies. We note that, in a broader context, these results represent an estimate of the bright end of the galaxy luminosity function at a median redshift of z ˜ 0.65.

  12. Si-Traceable Scale for Measurements of Radiocarbon Concentration

    NASA Astrophysics Data System (ADS)

    Hodges, Joseph T.; Fleisher, Adam J.; Liu, Qingnan; Long, David A.

    2017-06-01

    Radiocarbon (^{14}C) dating of organic materials is based on measuring the ^{14}C/^{12}C atomic fraction relative to the nascent value that existed when the material was formed by photosynthetic conversion of carbon dioxide present in the atmosphere. This field of measurement has numerous applications including source apportionment of anthropogenic and biogenic fuels and combustion emissions, carbon cycle dynamics, archaeology, and forensics. Accelerator mass spectrometry (AMS) is the most widely used method for radiocarbon detection because it can measure extremely small amounts of radiocarbon (background of nominally 1.2 parts-per-trillion) with high relative precision (0.4 %). AMS measurements of radiocarbon are typically calibrated by reference to standard oxalic-acid (C_2H_2O_4) samples of known radioactivity that are derived from plant matter. Specifically, the internationally accepted absolute dating reference for so-called "modern-equivalent" radiocarbon is 95 % of the specific radioactivity in AD 1950 of the National Bureau of Standards (NBS) oxalic acid standard reference material, normalized to δ^{13}C_{VPDB} = -19 per mil. With this definition, a "modern-equivalent" corresponds to 1.176(70) parts-per-trillion of ^{14}C relative to total carbon content. As an alternative radiocarbon scale, we propose an SI-traceable method to determine ^{14}C absolute concentration which is based on linear Beer-Lambert-law absorption measurements of selected ^{14}C^{16}O_2 ν_3-band line areas. This approach is attractive because line intensities of chosen radiocarbon dioxide transitions can be determined by ab initio calculations with relative uncertainties below 0.5 %. This assumption is justified by the excellent agreement between theoretical and measured line intensities for stable isotopologues of CO_2. 
In the case of cavity ring-down spectroscopy (CRDS) measurements of ^{14}C^{16}O_2 peak areas, we show that absolute, SI-traceable concentrations of radiocarbon can be determined through measurements of time, frequency, pressure and temperature. Notably, this approach will not require knowledge of the radiocarbon half-life and is expected to provide a stable scale that does not require an artifact standard. References: M. Stuiver and H. A. Polach, Radiocarbon 19, 355 (1977); O. L. Polyansky et al., Phys. Rev. Lett. 114, 243001 (2015).
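    The retrieval described above can be sketched numerically: an integrated Beer-Lambert line area gives the absorber number density, and dividing by the total gas number density from the ideal gas law yields a concentration traceable only to time/frequency, pressure, and temperature. This is a hedged illustration, not the authors' analysis pipeline; the line intensity, path length, pressure, and temperature below are placeholder values, and a real CRDS analysis involves line-shape fitting and further corrections.

```python
# Sketch of a linear Beer-Lambert concentration retrieval.
# All numerical inputs here are illustrative placeholders.

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the SI)

def number_density(line_area, line_intensity, path_length_cm):
    """Absorber number density n [molec/cm^3] from an integrated
    absorbance line area A [cm^-2] via A = S * n * L,
    with line intensity S in cm/molec and path length L in cm."""
    return line_area / (line_intensity * path_length_cm)

def fraction_of_total(n_cm3, pressure_pa, temperature_k):
    """Ratio of absorber to total gas number density (ideal gas law),
    i.e. the concentration: only P, T and spectroscopy enter."""
    n_total_cm3 = pressure_pa / (K_B * temperature_k) * 1e-6  # m^-3 -> cm^-3
    return n_cm3 / n_total_cm3
```

    With these placeholder inputs, no radiocarbon half-life or artifact standard appears anywhere in the computation, which is the point the abstract makes.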

  13. United time-frequency spectroscopy for dynamics and global structure.

    PubMed

    Marian, Adela; Stowe, Matthew C; Lawall, John R; Felinto, Daniel; Ye, Jun

    2004-12-17

    Ultrashort laser pulses have thus far been used in two distinct modes. In the time domain, the pulses have allowed probing and manipulation of dynamics on a subpicosecond time scale. More recently, phase stabilization has produced optical frequency combs with absolute frequency reference across a broad bandwidth. Here we combine these two applications in a spectroscopic study of rubidium atoms. A wide-bandwidth, phase-stabilized femtosecond laser is used to monitor the real-time dynamic evolution of population transfer. Coherent pulse accumulation and quantum interference effects are observed and well modeled by theory. At the same time, the narrow linewidth of individual comb lines permits a precise and efficient determination of the global energy-level structure, providing a direct connection among the optical, terahertz, and radio-frequency domains. The mechanical action of the optical frequency comb on the atomic sample is explored and controlled, leading to precision spectroscopy with an appreciable reduction in systematic errors.

  14. Accuracy evaluation of an ASTER-Derived Global Digital Elevation Model (GDEM) Version 1 and Version 2 for two sites in western Africa

    USGS Publications Warehouse

    Chirico, Peter G.; Malpeli, Katherine C.; Trimble, Sarah M.

    2012-01-01

    This study compares the ASTER Global DEM version 1 (GDEMv1) and version 2 (GDEMv2) for two study sites with distinct terrain and land cover characteristics in western Africa. The effects of land cover, slope, relief, and stack number are evaluated through both absolute and relative DEM statistical comparisons. While GDEMv2 at times performed better than GDEMv1, this improvement was not consistent, revealing the complex nature and interaction of terrain and land cover characteristics, which influences the accuracy of GDEM tiles on local and regional scales.

  15. Measurement of the Am 242 m neutron-induced reaction cross sections

    DOE PAGES

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; ...

    2017-02-17

    The neutron-induced reaction cross sections of 242mAm were measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. A new neutron-capture cross section was determined, and the absolute scale was set according to a concurrent measurement of the well-known 242mAm(n,f) cross section. The (n,γ) cross section was measured from thermal energy to an incident energy of 1 eV, at which point the data quality was limited by the reaction yield in the laboratory. Our new 242mAm fission cross section was normalized to ENDF/B-VII.1 to set the absolute scale, and it agreed well with the (n,f) cross section from thermal energy to 1 keV. Lastly, the average absolute capture-to-fission ratio was determined from thermal energy to E_n = 0.1 eV, and it was found to be 26(4)% as opposed to the ratio of 19% from the ENDF/B-VII.1 evaluation.

  16. Mapping the microvascular and the associated absolute values of oxy-hemoglobin concentration through turbid media via local off-set diffuse optical imaging

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Klämpfl, Florian; Stelzle, Florian; Schmidt, Michael

    2014-11-01

    Micron-scale imaging resolution had not previously been achieved with diffuse optical imaging (DOI) while also eliminating the superficial response. In this work, we report a new DOI approach with a local off-set alignment that subverts the common boundary conditions of the modified Beer-Lambert law (MBLL). It can resolve a superficial target at the micron scale beneath a turbid medium. To validate both breakthroughs, the system was used to recover a subsurface microvascular-mimicking structure beneath a skin-equivalent phantom. The structure was filled with oxy-hemoglobin solution at varying concentrations to distinguish the absolute values of CtRHb and CtHbO2. Experimental results confirmed the feasibility of recovering a target vessel 50 µm in diameter and of grading the oxy-hemoglobin concentrations from 10 g/L to 50 g/L in absolute terms. Ultimately, this approach could evolve into a non-invasive imaging system for mapping microvascular patterns and the associated oximetry beneath human skin in vivo.
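    At its core, recovering two chromophore concentrations (oxy- and deoxy-hemoglobin) from attenuation changes at two wavelengths is a 2x2 linear inversion of the Beer-Lambert form. The sketch below shows only that generic inversion, not the paper's modified boundary conditions or off-set geometry; the extinction coefficients in the usage example are made-up numbers, not tabulated hemoglobin spectra.

```python
def solve_two_chromophores(delta_od, extinction, pathlength):
    """Invert dOD_i = L * sum_j extinction[i][j] * c_j for two
    chromophore concentrations (c_1, c_2) measured at two wavelengths.
    extinction is a 2x2 matrix of (illustrative) coefficients."""
    (a, b), (c, d) = extinction
    det = (a * d - b * c) * pathlength  # assumes the 2x2 system is non-singular
    y1, y2 = delta_od
    return ((d * y1 - b * y2) / det,   # c_1, e.g. oxy-hemoglobin
            (a * y2 - c * y1) / det)   # c_2, e.g. deoxy-hemoglobin
```

    For example, with the made-up matrix ((2, 1), (1, 3)) and unit path length, attenuations (1.2, 1.1) invert back to the concentrations (0.5, 0.2) that generated them.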

  17. Multiscale Auroral Emission Statistics as Evidence of Turbulent Reconnection in Earth's Midtail Plasma Sheet

    NASA Technical Reports Server (NTRS)

    Klimas, Alex; Uritsky, Vadim; Donovan, Eric

    2010-01-01

    We provide indirect evidence for turbulent reconnection in Earth's midtail plasma sheet by reexamining the statistical properties of bright, nightside auroral emission events as observed by the UVI experiment on the Polar spacecraft and discussed previously by Uritsky et al. The events are divided into two groups: (1) those that map to |X_GSM| < 12 R_E in the magnetotail and do not show scale-free statistics and (2) those that map to |X_GSM| > 12 R_E and do show scale-free statistics. The |X_GSM| dependence is shown to most effectively organize the events into these two groups. Power law exponents obtained for group 2 are shown to validate the conclusions of Uritsky et al. concerning the existence of critical dynamics in the auroral emissions. It is suggested that the auroral dynamics is a reflection of a critical state in the magnetotail that is based on the dynamics of turbulent reconnection in the midtail plasma sheet.

  18. A simple microstructure return model explaining microstructure noise and Epps effects

    NASA Astrophysics Data System (ADS)

    Saichev, A.; Sornette, D.

    2014-01-01

    We present a novel simple microstructure model of financial returns that combines (i) the well-known ARFIMA process applied to tick-by-tick returns, (ii) the bid-ask bounce effect, (iii) the fat tail structure of the distribution of returns and (iv) the non-Poissonian statistics of inter-trade intervals. This model allows us to explain both qualitatively and quantitatively important stylized facts observed in the statistics of both microstructure and macrostructure returns, including the short-ranged correlation of returns, the long-ranged correlations of absolute returns, the microstructure noise and Epps effects. According to the microstructure noise effect, volatility is a decreasing function of the time-scale used to estimate it. The Epps effect states that cross correlations between asset returns are increasing functions of the time-scale at which the returns are estimated. The microstructure noise is explained as the result of the negative return correlations inherent in the definition of the bid-ask bounce component (ii). In the presence of a genuine correlation between the returns of two assets, the Epps effect is due to an average statistical overlap of the momentum of the returns of the two assets defined over a finite time-scale in the presence of the long memory process (i).
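    The microstructure-noise effect described above (estimated volatility decreasing with the sampling time-scale) can be reproduced with a minimal simulation of component (ii) alone, the bid-ask bounce. This is a hedged toy model, not the paper's ARFIMA-based framework: the efficient price is a plain Gaussian random walk, and the volatility, spread, and tick count are arbitrary.

```python
import random

def simulate_midprice_with_bounce(n_ticks, sigma=0.01, half_spread=0.05, seed=7):
    """Efficient price follows a random walk; the observed price bounces
    randomly between bid and ask (efficient price -/+ half_spread)."""
    rng = random.Random(seed)
    p = 100.0
    observed = []
    for _ in range(n_ticks):
        p += rng.gauss(0.0, sigma)
        side = 1 if rng.random() < 0.5 else -1  # bid-ask bounce
        observed.append(p + side * half_spread)
    return observed

def per_tick_variance(prices, k):
    """Realized variance of non-overlapping k-tick returns, scaled back
    to one tick; with bounce noise this decreases as k grows."""
    rets = [prices[i + k] - prices[i] for i in range(0, len(prices) - k, k)]
    return sum(r * r for r in rets) / (len(rets) * k)
```

    Sampling tick-by-tick, the bounce adds a fixed variance to every return, inflating the per-tick estimate; sampling every 50 ticks dilutes that fixed contribution by a factor of 50, so the coarse-scale estimate is much closer to the true sigma squared.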

  19. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
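    As a loose illustration of the two spike-timing metrics named above (not the paper's exact estimators, which are tailored to that dataset), jitter can be operationalized as the trial-to-trial standard deviation of a spike time, and reliability as the fraction of trials producing at least one spike in a response window:

```python
from statistics import pstdev

def jitter(spike_times_per_trial):
    """Trial-to-trial standard deviation of a spike time (e.g. the first
    spike after stimulus onset); lower values mean more precise timing."""
    return pstdev(spike_times_per_trial)

def reliability(trials, window):
    """Fraction of trials with at least one spike inside [t0, t1]."""
    t0, t1 = window
    hits = sum(any(t0 <= t <= t1 for t in trial) for trial in trials)
    return hits / len(trials)
```

    Under these simple definitions, the abstract's findings read as: jitter tracks envelope shape (B-spline cutoff), while reliability tracks modulation frequency.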

  20. Time: the enigma of space

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.

    2017-08-01

    In this article we draw on the laws of physics to illustrate the enigma of time as the creator of our physical space (i.e., the universe). We show that without time there would be no physical substances, no space and no life. In reference to Einstein's energy equation, we see that energy and mass can be traded, and every mass can be treated as an energy reservoir. We further show that physical space cannot be embedded in absolute empty space and cannot contain any absolutely empty subspace. Since all physical substances exist with time, our cosmos is created by time, and every substance, including our universe, coexists with time. Although time initiates the creation, it is the physical substances that reveal to us the existence of time. We are not alone, with almost absolute certainty. Someday we may find the right planet that, once upon a time, harbored a civilization for a short period.

  1. Convective blueshifts in the solar atmosphere. I. Absolute measurements with LARS of the spectral lines at 6302 Å

    NASA Astrophysics Data System (ADS)

    Löhner-Böttcher, J.; Schmidt, W.; Stief, F.; Steinmetz, T.; Holzwarth, R.

    2018-03-01

    Context: The solar convection manifests as granulation and intergranulation at the solar surface. In the photosphere, convective motions induce differential Doppler shifts to spectral lines. The observed convective blueshift varies across the solar disk. Aims: We focus on the impact of solar convection on the atmosphere and aim to resolve its velocity stratification in the photosphere. Methods: We performed high-resolution spectroscopic observations of the solar spectrum in the 6302 Å range with the Laser Absolute Reference Spectrograph at the Vacuum Tower Telescope. A laser frequency comb enabled the calibration of the spectra to an absolute wavelength scale with an accuracy of 1 m s-1. We systematically scanned the quiet Sun from the disk center to the limb at ten selected heliocentric positions. The analysis included 99 time sequences of up to 20 min in length. By means of ephemeris and reference corrections, we translated wavelength shifts into absolute line-of-sight velocities. A bisector analysis of the line profiles yielded the shapes and convective shifts of seven photospheric lines. Results: At the disk center, the bisector profiles of the iron lines feature a pronounced C-shape with maximum convective blueshifts of up to -450 m s-1 in the spectral line wings. Toward the solar limb, the bisectors change into a "\"-shape with a saturation in the line core at a redshift of +100 m s-1. The center-to-limb variation of the line core velocities shows a slight increase in blueshift when departing the disk center for larger heliocentric angles. This increase in blueshift is more pronounced for the magnetically less active meridian than for the equator. Toward the solar limb, the blueshift decreases and can turn into a redshift. In general, weaker lines exhibit stronger blueshifts. Conclusions: These high-precision spectroscopic measurements enabled the accurate determination of absolute convective shifts in the solar photosphere.
We convolved the results to lower spectral resolution to permit a comparison with observations from other instruments.
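    A bisector analysis like the one described can be sketched generically: for each intensity level between the line core and the continuum, find the two wavelengths at which the profile crosses that level on the two wings and take their midpoint. The code below is an illustration of that idea only, not the LARS pipeline, and assumes a single-minimum absorption profile sampled on a wavelength grid.

```python
def line_bisector(wavelengths, fluxes, levels):
    """Midpoint of the two wing crossings of an absorption line profile,
    for each requested intensity level (linear interpolation between
    grid points). Returns one bisector wavelength per resolvable level."""
    i_min = min(range(len(fluxes)), key=lambda i: fluxes[i])
    out = []
    for lev in levels:
        left = right = None
        for i in range(i_min, 0, -1):          # blue wing, walking outward
            if fluxes[i - 1] >= lev >= fluxes[i]:
                t = (lev - fluxes[i]) / (fluxes[i - 1] - fluxes[i])
                left = wavelengths[i] + t * (wavelengths[i - 1] - wavelengths[i])
                break
        for i in range(i_min, len(fluxes) - 1):  # red wing, walking outward
            if fluxes[i] <= lev <= fluxes[i + 1]:
                t = (lev - fluxes[i]) / (fluxes[i + 1] - fluxes[i])
                right = wavelengths[i] + t * (wavelengths[i + 1] - wavelengths[i])
                break
        if left is not None and right is not None:
            out.append(0.5 * (left + right))
    return out
```

    For a perfectly symmetric line the bisector is a vertical line at the line center; the C-shapes and "\"-shapes reported above are deviations of this midpoint curve from vertical, which translate into the quoted velocity shifts.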

  2. The Perceived Efficacy and Goal Setting System (PEGS), part II: evaluation of test-retest reliability and differences between child and parental reports in the Swedish version.

    PubMed

    Vroland-Nordstrand, Kristina; Krumlinde-Sundholm, Lena

    2012-11-01

    To evaluate the test-retest reliability of children's perceptions of their own competence in performing daily tasks and of their choice of goals for intervention, using the Swedish version of the Perceived Efficacy and Goal Setting System (PEGS). A second aim was to evaluate agreement between children's and parents' perceptions of the child's competence and choices of intervention goals. Forty-four children with disabilities and their parents completed the Swedish version of the PEGS. Thirty-six of the children completed a retest session and were allocated to one of two groups: (A) for evaluation of perceived competence and (B) for evaluation of choice of goals. Cohen's kappa, weighted kappa and absolute agreement were calculated. Test-retest reliability for children's perceived competence showed good agreement for the dichotomized scale of competent/non-competent performance; however, using the four-point scale the agreement varied. The children's own goals were relatively stable over time; 78% had an absolute agreement ranging from 50% to 100%. There was poor agreement between the children's and their parents' ratings. Goals identified by the children differed from those identified by their parents, with 48% of the children having no goals identical to those chosen by their parents. These results indicate that the Swedish version of the PEGS produces reliable outcomes comparable to the original version.
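    Cohen's kappa, one of the statistics used above, corrects raw percent (absolute) agreement for the agreement expected by chance from the two raters' marginal distributions. A minimal unweighted version, shown only to make the statistic concrete:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two paired lists of categorical
    ratings. Assumes chance agreement p_e < 1 (non-degenerate data)."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n  # observed
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n)
              for l in labels)                                   # chance
    return (p_o - p_e) / (1 - p_e)
```

    For instance, two raters agreeing on 3 of 4 dichotomous items can still earn a kappa of only 0.5 once chance agreement is removed, which is why the paper reports kappa alongside absolute agreement.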

  3. Time-resolved measurements of the hot-electron population in ignition-scale experiments on the National Ignition Facility (invited)

    NASA Astrophysics Data System (ADS)

    Hohenberger, M.; Albert, F.; Palmer, N. E.; Lee, J. J.; Döppner, T.; Divol, L.; Dewald, E. L.; Bachmann, B.; MacPhee, A. G.; LaCaille, G.; Bradley, D. K.; Stoeckl, C.

    2014-11-01

    In laser-driven inertial confinement fusion, hot electrons can preheat the fuel and prevent fusion-pellet compression to ignition conditions. Measuring the hot-electron population is key to designing an optimized ignition platform. The hot electrons in these high-intensity, laser-driven experiments, created via laser-plasma interactions, can be inferred from the bremsstrahlung generated by hot electrons interacting with the target. At the National Ignition Facility (NIF) [G. H. Miller, E. I. Moses, and C. R. Wuest, Opt. Eng. 43, 2841 (2004)], the filter-fluorescer x-ray (FFLEX) diagnostic, a multichannel, hard x-ray spectrometer operating in the 20-500 keV range, has been upgraded to provide fully time-resolved, absolute measurements of the bremsstrahlung spectrum with ˜300 ps resolution. Initial time-resolved data exhibited significant background and a low signal-to-noise ratio, leading to a redesign of the FFLEX housing and enhanced shielding around the detector. The FFLEX x-ray sensitivity was characterized with an absolutely calibrated, energy-dispersive high-purity germanium detector using the high-energy x-ray source at NSTec Livermore Operations over a range of K-shell fluorescence energies up to 111 keV (U Kβ). The detector's impulse response function was measured in situ on NIF short-pulse (˜90 ps) experiments and in off-line tests.

  4. Temporal Dynamics of Microbial Rhodopsin Fluorescence Reports Absolute Membrane Voltage

    PubMed Central

    Hou, Jennifer H.; Venkatachalam, Veena; Cohen, Adam E.

    2014-01-01

    Plasma membrane voltage is a fundamentally important property of a living cell; its value is tightly coupled to membrane transport, the dynamics of transmembrane proteins, and to intercellular communication. Accurate measurement of the membrane voltage could elucidate subtle changes in cellular physiology, but existing genetically encoded fluorescent voltage reporters are better at reporting relative changes than absolute numbers. We developed an Archaerhodopsin-based fluorescent voltage sensor whose time-domain response to a stepwise change in illumination encodes the absolute membrane voltage. We validated this sensor in human embryonic kidney cells. Measurements were robust to variation in imaging parameters and in gene expression levels, and reported voltage with an absolute accuracy of 10 mV. With further improvements in membrane trafficking and signal amplitude, time-domain encoding of absolute voltage could be applied to investigate many important and previously intractable bioelectric phenomena. PMID:24507604

  5. Dissemination of optical-comb-based ultra-broadband frequency reference through a fiber network.

    PubMed

    Nagano, Shigeo; Kumagai, Motohiro; Li, Ying; Ido, Tetsuya; Ishii, Shoken; Mizutani, Kohei; Aoki, Makoto; Otsuka, Ryohei; Hanado, Yuko

    2016-08-22

    We disseminated an ultra-broadband optical frequency reference based on a femtosecond (fs)-laser optical comb through a kilometer-scale fiber link. Its spectrum ranged from 1160 nm to 2180 nm without additional fs-laser combs at the end of the link. By employing a fiber-induced phase noise cancellation technique, the linewidth and fractional frequency instability attained for all disseminated comb modes were of order 1 Hz and 10^-18 at a 5000 s averaging time. The ultra-broadband optical frequency reference, for which the absolute frequency is traceable to Japan Standard Time, was applied in the frequency stabilization of an injection-seeded Q-switched 2051 nm pulse laser for a coherent light detection and ranging (LIDAR) system.
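    Fractional frequency instability at a given averaging time, as quoted above, is conventionally estimated with the Allan deviation. The sketch below is a generic non-overlapping estimator demonstrated on synthetic white frequency noise (for which the instability averages down roughly as one over the square root of the averaging time); the noise level is arbitrary and unrelated to the link in the paper.

```python
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    at an averaging time of m sample intervals: average y in blocks of m,
    then take RMS of consecutive block differences over sqrt(2)."""
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(blocks[k + 1] - blocks[k]) ** 2 for k in range(len(blocks) - 1)]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5
```

    Quoting an instability "at a 5000 s averaging time" corresponds to evaluating this statistic with m chosen so that m sample intervals span 5000 s.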

  6. Absolute pitch among American and Chinese conservatory students: prevalence differences, and evidence for a speech-related critical period.

    PubMed

    Deutsch, Diana; Henthorn, Trevor; Marvin, Elizabeth; Xu, HongShuai

    2006-02-01

    Absolute pitch is extremely rare in the U.S. and Europe; this rarity has so far been unexplained. This paper reports a substantial difference in the prevalence of absolute pitch in two normal populations, in a large-scale study employing an on-site test, without self-selection from within the target populations. Music conservatory students in the U.S. and China were tested. The Chinese subjects spoke the tone language Mandarin, in which pitch is involved in conveying the meaning of words. The American subjects were nontone language speakers. The earlier the age of onset of musical training, the greater the prevalence of absolute pitch; however, its prevalence was far greater among the Chinese than the U.S. students for each level of age of onset of musical training. The findings suggest that the potential for acquiring absolute pitch may be universal, and may be realized by enabling infants to associate pitches with verbal labels during the critical period for acquisition of features of their native language.

  7. Characterizing absolute piezoelectric microelectromechanical system displacement using an atomic force microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, J., E-mail: radiant@ferrodevices.com; Chapman, S., E-mail: radiant@ferrodevices.com

    Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude of the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided.

  8. Absolute and relative educational inequalities in depression in Europe.

    PubMed

    Dudal, Pieter; Bracke, Piet

    2016-09-01

    To investigate (1) the size of absolute and relative educational inequalities in depression, (2) their variation between European countries, and (3) their relationship with underlying prevalence rates. Analyses are based on the European Social Survey, rounds three and six (N = 57,419). Depression is measured using the shortened Center for Epidemiologic Studies Depression Scale. Education is coded using the International Standard Classification of Education. Country-specific logistic regressions are applied. Results point to an elevated risk of depressive symptoms among the lower educated. The cross-national patterns differ between absolute and relative measurements. For men, large relative inequalities are found for countries including Denmark and Sweden, but are accompanied by small absolute inequalities. For women, large relative and absolute inequalities are found in Belgium, Bulgaria, and Hungary. Results point to an empirical association between inequalities and the underlying prevalence rates. However, the strength of the association is only moderate. This research stresses the importance of including both measurements in comparative research and suggests including the level of population health in research into health inequalities.

  9. [Prognostic value of absolute monocyte count in chronic lymphocytic leukaemia].

    PubMed

    Szerafin, László; Jakó, János; Riskó, Ferenc

    2015-04-01

    Low peripheral absolute lymphocyte counts and high monocyte counts have been reported to correlate with poor clinical outcome in various lymphomas and other cancers. However, few data are known about the prognostic value of the absolute monocyte count in chronic lymphocytic leukaemia. The aim of the authors was to investigate the impact of the absolute monocyte count, measured at the time of diagnosis in patients with chronic lymphocytic leukaemia, on the time to treatment and overall survival. Between January 1, 2005 and December 31, 2012, 223 patients with newly-diagnosed chronic lymphocytic leukaemia were included. The rate of patients needing treatment, time to treatment, overall survival and causes of mortality based on Rai stages, CD38, ZAP-70 positivity and absolute monocyte count were analyzed. Therapy was necessary in 21.1%, 57.4%, 88.9%, 88.9% and 100% of patients in Rai stage 0, I, II, III and IV, respectively; in 61.9% and 60.8% of patients exhibiting CD38 and ZAP-70 positivity, respectively; and in 76.9%, 21.2% and 66.2% of patients if the absolute monocyte count was <0.25 G/l, between 0.25-0.75 G/l and >0.75 G/l, respectively. The median time to treatment and the median overall survival were 19.5, 65, and 35.5 months, and 41.5, 65, and 49.5 months, respectively, according to the three groups of monocyte counts. The relative risk of beginning therapy was 1.62 (p<0.01) in patients with an absolute monocyte count <0.25 G/l or >0.75 G/l, as compared to those with 0.25-0.75 G/l, and the relative risk of death was 2.41 (p<0.01) in patients with an absolute monocyte count lower than 0.25 G/l as compared to those with counts higher than 0.25 G/l. The relative risks remained significant in Rai stage 0 patients, too. The leading causes of mortality were infections (41.7%) and the chronic lymphocytic leukaemia itself (58.3%) in patients with low monocyte counts, while tumours (25.9-35.3%) and other events (48.1% and 11.8%) occurred in patients with medium or high monocyte counts.
Patients with low and high monocyte counts had a shorter time to treatment compared to patients who belonged to the intermediate monocyte count group. The low absolute monocyte count was associated with increased mortality caused by infectious complications and chronic lymphocytic leukaemia. The absolute monocyte count may give additional prognostic information in Rai stage 0, too.

  10. Limits on the Time Evolution of Space Dimensions from Newton's Constant

    NASA Astrophysics Data System (ADS)

    Nasseri, Forough

    Limits are imposed upon the possible rate of change of extra spatial dimensions in a decrumpling model Universe with time-variable spatial dimensions (TVSD) by considering the time variation of the (1+3)-dimensional Newton's constant. Previous studies on the time variation of the (1+3)-dimensional Newton's constant in TVSD theory had not included the effects of the volume of the extra dimensions and of the surface area of the unit sphere in D space dimensions. Our main result is that the absolute value of the present rate of change of spatial dimensions is less than about 10^-14 yr^-1. Our results would appear to provide a prima facie case for ruling the TVSD model out. We show that, based on observational bounds on the present variation of Newton's constant, one would have to conclude that the spatial dimension of the Universe when the Universe was "at the Planck scale" was less than or equal to 3.09. If the dimension of space when the Universe was "at the Planck scale" is constrained to be fractional and very close to 3, then the whole edifice of the TVSD model loses credibility.

  11. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, which makes reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  12. The Austrian absolute gravity base net: 27 years of spatial and temporal acquisition of gravity data

    NASA Astrophysics Data System (ADS)

    Ullrich, Christian; Ruess, Diethard

    2014-05-01

    Since 1987 the BEV (Federal Office of Metrology and Surveying) has been operating the absolute gravimeters JILAg-6 and FG5, which are used for basic measurements to determine or review fundamental gravity stations in Austria and abroad. Overall, more than 70 absolute gravity stations were installed in Austria and neighbouring countries, and some of them have been regularly monitored. A few stations are part of international projects like ECGN (European Combined Geodetic Network) and UNIGRACE (Unification of Gravity Systems in Central and Eastern Europe). As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. Thus the BEV maintains the national standard for gravimetry in Austria, which is validated and confirmed by international comparisons. Since 1989 the Austrian absolute gravimeters have participated seven times in the ICAGs (International Comparisons of Absolute Gravimeters) at the BIPM in Paris and in Luxembourg, and three times in the ECAG (European Comparison of Absolute Gravimeters) in Luxembourg. The results of these ICAGs, and especially the performance of the Austrian absolute gravimeters, are reported in this presentation. We also present some examples and interpretations from long-term monitoring stations of absolute gravity at several Austrian locations. Some stations are located in large cities like Vienna and Graz, and others are situated in mountainous regions. Mountain stations include the Conrad Observatory, where a superconducting gravimeter (SG) provides permanent monitoring, and Obergurgl (Tyrol), at an elevation of approx. 2000 m, which is strongly influenced by glacier retreat.

  13. An Integrated Model of Choices and Response Times in Absolute Identification

    ERIC Educational Resources Information Center

    Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew

    2008-01-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…

  14. A review of the different techniques for solid surface acid-base characterization.

    PubMed

    Sun, Chenhang; Berg, John C

    2003-09-18

    In this work, various techniques for solid surface acid-base (AB) characterization are reviewed. Different techniques employ different scales to rank acid-base properties. Based on results from the literature and the authors' own investigations of mineral oxides, these scales are compared. The comparison shows that the isoelectric point (IEP), the most commonly used AB scale, is not a description of the absolute basicity or acidity of a surface, but a description of their relative strength. That is, a high-IEP surface shows more basic functionality compared with its acidic functionality, whereas a low-IEP surface shows less basic functionality compared with its acidic functionality. The choice of technique and scale for AB characterization depends on the specific application. For cases in which the overall AB property is of interest, IEP (by electrokinetic titration) and H(0,max) (by indicator dye adsorption) are appropriate. For cases in which the absolute AB property is of interest, such as in the study of adhesion, it is more pertinent to use the chemical shift (by XPS) and the heat of adsorption of probe gases (by calorimetry or IGC).

  15. Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies

    PubMed Central

    Papanikolaou, Panagiotis N.; Christidi, Georgia D.; Ioannidis, John P.A.

    2006-01-01

    Background Information on major harms of medical interventions comes primarily from epidemiologic studies performed after licensing and marketing. Comparison with data from large-scale randomized trials is occasionally feasible. We compared evidence from randomized trials with that from epidemiologic studies to determine whether they give different estimates of risk for important harms of medical interventions. Methods We targeted well-defined, specific harms of various medical interventions for which data were already available from large-scale randomized trials (> 4000 subjects). Nonrandomized studies involving at least 4000 subjects addressing these same harms were retrieved through a search of MEDLINE. We compared the relative risks and absolute risk differences for specific harms in the randomized and nonrandomized studies. Results Eligible nonrandomized studies were found for 15 harms for which data were available from randomized trials addressing the same harms. Comparisons of relative risks between the study types were feasible for 13 of the 15 topics, and of absolute risk differences for 8 topics. The estimated increase in relative risk differed more than 2-fold between the randomized and nonrandomized studies for 7 (54%) of the 13 topics; the estimated increase in absolute risk differed more than 2-fold for 5 (62%) of the 8 topics. There was no clear predilection for randomized or nonrandomized studies to estimate greater relative risks, but usually (75% [6/8]) the randomized trials estimated larger absolute excess risks of harm than the nonrandomized studies did. Interpretation Nonrandomized studies are often conservative in estimating absolute risks of harms. It would be useful to compare and scrutinize the evidence on harms obtained from both randomized and nonrandomized studies. PMID:16505459

  16. Prediction of relative and absolute permeabilities for gas and water from soil water retention curves using a pore-scale network model

    NASA Astrophysics Data System (ADS)

    Fischer, Ulrich; Celia, Michael A.

    1999-04-01

    Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in prediction of absolute permeabilities, and in incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides for greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of main imbibition and drainage branches. Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water phase relative permeabilities equally well, and to predict gas phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates for intrinsic permeability and thus for absolute permeabilities. Both the network model and the analytical model lost accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.
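    The analytical "bundle-of-tubes" benchmark this record compares against is commonly the van Genuchten-Mualem model (the record itself cites the van Genuchten exponent n). A minimal sketch with illustrative parameter values, not the paper's soils:

    ```python
    # Illustrative sketch (not the paper's network model): the van Genuchten-Mualem
    # analytical form often used as the bundle-of-tubes benchmark mentioned above.
    def effective_saturation(h, alpha, n):
        """Effective water saturation Se from capillary pressure head h (> 0)."""
        m = 1.0 - 1.0 / n
        return (1.0 + (alpha * h) ** n) ** (-m)

    def water_rel_perm(se, n):
        """Mualem relative permeability of the wetting (water) phase."""
        m = 1.0 - 1.0 / n
        return se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

    se = effective_saturation(h=1.0, alpha=0.5, n=2.0)   # example parameters only
    kr = water_rel_perm(se, n=2.0)
    assert 0.0 < se < 1.0 and 0.0 < kr < 1.0             # both quantities are bounded
    ```

    The record's observation that accuracy degrades for n ≲ 3 refers to the exponent n in exactly this parameterization.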

  17. Heat Transfer Measurements of the First Experimental Layer of the Fire II Reentry Vehicle in Expansion Tubes

    NASA Astrophysics Data System (ADS)

    Capra, B. R.; Morgan, R. G.; Leyland, P.

    2005-02-01

    The present study focused on simulating a trajectory point towards the end of the first experimental heatshield of the FIRE II vehicle, at a total flight time of 1639.53 s. Scale replicas were sized according to binary scaling and instrumented with thermocouples for testing in the X1 expansion tube, located at The University of Queensland. Correlation of flight to experimental data was achieved through the separation and independent treatment of the heat transfer modes. Preliminary investigation indicates that the absolute value of radiant surface flux is conserved between two binary scaled models, whereas convective heat transfer increases with the length scale. This difference in scaling behavior results in the overall contribution of radiative heat transfer diminishing to less than 1% in expansion tubes from a flight value of approximately 9-17%. From empirical correlations it has been shown that the St√Re number decreases, under special circumstances, in expansion tubes by the percentage of radiation present on the flight vehicle. Results obtained in this study give a strong indication that the relative radiative heat transfer contribution in the expansion tube tests is less than that in flight, supporting the analysis that the absolute value remains constant with binary scaling. Key words: Heat Transfer, Fire II Flight Vehicle, Expansion Tubes, Binary Scaling.
    NOMENCLATURE:
    dA: elemental surface area, m²
    H0: stagnation enthalpy, MJ/kg
    L: arbitrary length, m
    ls: scale factor, equal to Lf/Le
    M: Mach number
    ṁ: mass flow rate, kg/s
    p: pressure, kPa
    q̇: heat transfer rate, W/m²
    q̄: averaged heat transfer rate, W/m²
    RN: nose radius, m
    Re: Reynolds number, equal to ρURN/μ
    s/RD: radial distance from symmetry axis over radius of forebody (D/2)
    St: Stanton number, equal to q̇/(ρUH0)
    St√Re: equal to q̇RN^(1/2)/((ρU)^(1/2)μ^(1/2)H0)
    T: temperature, K
    U: velocity, m/s
    Ue: equivalent velocity, m/s, equal to √(2H0)
    U1: primary shock speed, m/s
    U2: secondary shock speed, m/s
    ρ: density, kg/m³
    ρL: binary scaling parameter, kg/m²
    Subscripts: c convective; exp experiment; f flight; r radiative; s post shock; T total; ∞ freestream
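    The definitions in the nomenclature can be exercised numerically. A minimal sketch with made-up flow numbers (not values from the paper), showing the St√Re parameter and the ρL constraint that binary scaling conserves:

    ```python
    import math

    # Sketch only: hypothetical numbers chosen to exercise the definitions
    # in the nomenclature above, not conditions from the paper.
    def st_sqrt_re(q_dot, rho, U, H0, mu, R_N):
        """St*sqrt(Re) = q_dot * R_N**0.5 / ((rho*U)**0.5 * mu**0.5 * H0)."""
        St = q_dot / (rho * U * H0)          # Stanton number
        Re = rho * U * R_N / mu              # Reynolds number on nose radius
        return St * math.sqrt(Re)

    # Binary scaling conserves rho*L: a model 10x smaller than flight is
    # tested at 10x the flight density.
    rho_f, R_f = 1e-4, 0.75                  # hypothetical flight values
    rho_e, R_e = rho_f * 10.0, R_f / 10.0    # experiment values
    assert abs(rho_f * R_f - rho_e * R_e) < 1e-15

    value = st_sqrt_re(q_dot=1e6, rho=1e-3, U=7000.0, H0=3e7, mu=1e-5, R_N=0.05)
    assert value > 0.0
    ```

    Here H0 is taken in SI units (J/kg) rather than the MJ/kg of the nomenclature, so the result is dimensionless.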

  18. Age-corrected reference values for the Heidelberg multi-color anomaloscope.

    PubMed

    Rüfer, Florian; Sauter, Benno; Klettner, Alexa; Göbel, Katja; Flammer, Josef; Erb, Carl

    2012-09-01

    To determine reference values for the HMC anomaloscope (Heidelberg multi-color anomaloscope) of healthy subjects. One hundred and thirteen healthy subjects were divided into four age groups: <20 years of age (ten female, five male), 20-39 years of age (23 female, 15 male), 40-59 years of age (23 female, ten male) and >60 years of age (nine female, 18 male). Match midpoint, matching range (MR) and anomaly quotient (AQ), according to the Moreland equation [blue (436 nm) + blue-green (490 nm) = cyan (480 nm) + yellow (589 nm)] and according to the Rayleigh equation [green (546 nm) + red (671 nm) = yellow (589 nm)] were determined. Neutral adaptation was performed by showing white light every 5 seconds in absolute mode and every 15 seconds in relative mode. The mean match midpoint according to the Rayleigh equation was 43.9 ± 2.6 scale units in absolute mode. It was highest between 20-39 years (45.2 ± 2.2) and lowest in subjects >60 years of age (42.2 ± 2.2). The mean MR in absolute mode was 3.1 ± 3.5 scale units with a maximum >60 years (4.4 ± 4.4). The MR in relative mode was between 1.6 ± 1.9 (20-39 years) and 4.4 ± 3.8 (>60 years). The resulting mean AQ was 1.01 ± 0.15 in both modes. The mean match midpoint of the Moreland equation was 51.0 ± 5.2 scale units in absolute mode. It was highest between 20-39 years (52.5 ± 5.7), and lowest in subjects >60 years of age (48.7 ± 3.6). The mean MR according to the Moreland equation was lower in absolute mode (13.4 ± 15.6) than in relative mode (16.2 ± 15.2). The mean resulting AQ was 1.02 ± 0.21 in both modes. The values of this study can be used as references for the diagnosis of red-green and blue perception impairment with the HMC anomaloscope.

  19. Minimum clinically important differences in chronic pain vary considerably by baseline pain and methodological factors: systematic review of empirical studies.

    PubMed

    Frahm Olsen, Mette; Bjerre, Eik; Hansen, Maria Damkjær; Tendal, Britta; Hilden, Jørgen; Hróbjartsson, Asbjørn

    2018-05-21

    The minimum clinically important difference (MCID) is used to interpret the relevance of treatment effects, e.g., when developing clinical guidelines, evaluating trial results or planning sample sizes. There is currently no agreement on an appropriate MCID in chronic pain and little is known about which contextual factors cause variation. This is a systematic review. We searched PubMed, EMBASE, and Cochrane Library. Eligible studies determined MCID for chronic pain based on a one-dimensional pain scale, a patient-reported transition scale of perceived improvement, and either a mean change analysis (mean difference in pain among minimally improved patients) or a threshold analysis (pain reduction associated with best sensitivity and specificity for identifying minimally improved patients). Main results were descriptively summarized due to considerable heterogeneity, which was quantified using meta-analyses and explored using subgroup analyses and metaregression. We included 66 studies (31,254 patients). Median absolute MCID was 23 mm on a 0-100 mm scale (interquartile range [IQR] 12-39) and median relative MCID was 34% (IQR 22-45) among studies using the mean change approach. In both cases, heterogeneity was very high: absolute MCID I² = 99% and relative MCID I² = 96%. High variation was also seen among studies using the threshold approach: median absolute MCID was 20 mm (IQR 15-30) and relative MCID was 32% (IQR 15-41). Absolute MCID was strongly associated with baseline pain, explaining approximately two-thirds of the variation, and to a lesser degree with the operational definition of minimum pain relief and clinical condition. A total of 15 clinical and methodological factors were assessed as possible causes for variation in MCID. MCIDs for chronic pain relief vary considerably. Baseline pain is strongly associated with absolute, but not relative, measures.
To a much lesser degree, MCID is also influenced by the operational definition of relevant pain relief and possibly by clinical condition. Explicit and conscientious reflections on the choice of an MCID are required when classifying effect sizes as clinically important or trivial. Copyright © 2018 Elsevier Inc. All rights reserved.
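    The distinction between the absolute and relative MCID measures compared in this review can be made concrete with a toy calculation (the patient numbers are illustrative, matched only loosely to the reported medians):

    ```python
    # Toy numbers only: illustrates the absolute (mm) versus relative (%)
    # MCID measures compared in the review above.
    def absolute_change(baseline_mm, followup_mm):
        return baseline_mm - followup_mm

    def relative_change(baseline_mm, followup_mm):
        return 100.0 * (baseline_mm - followup_mm) / baseline_mm

    # A patient improving from 60 mm to 37 mm on a 0-100 mm scale clears the
    # review's median mean-change thresholds (23 mm absolute, 34% relative).
    assert absolute_change(60, 37) == 23
    assert relative_change(60, 37) > 34.0
    ```

    This also shows why baseline pain drives the absolute but not the relative measure: the same 23 mm drop from a 30 mm baseline would be a 77% relative change.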

  20. Spectral irradiance calibration in the infrared. I - Ground-based and IRAS broadband calibrations

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell G.; Barlow, Michael J.; Deacon, John R.

    1992-01-01

    Absolutely calibrated versions of realistic model atmosphere calculations for Sirius and Vega by Kurucz (1991) are presented and used as a basis to offer a new absolute calibration of infrared broad and narrow filters. In-band fluxes for Vega are obtained and defined to be zero magnitude at all wavelengths shortward of 20 microns. Existing infrared photometry is used differentially to establish an absolute scale of the new Sirius model, yielding an angular diameter within 1 sigma of the mean determined interferometrically by Hanbury Brown et al. (1974). The use of Sirius as a primary infrared stellar standard beyond the 20 micron region is suggested. Isophotal wavelengths and monochromatic flux densities for both Vega and Sirius are tabulated.
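    The "Vega = zero magnitude" convention described in this record amounts to measuring in-band fluxes relative to Vega. A minimal sketch, with a placeholder Vega flux rather than the paper's calibrated values:

    ```python
    import math

    # Sketch of the zero-magnitude convention above: a source's magnitude in a
    # band follows from its in-band flux relative to Vega's. `flux_vega` here
    # is a placeholder zero point, not a value from the paper.
    def magnitude(flux, flux_vega):
        return -2.5 * math.log10(flux / flux_vega)

    assert magnitude(1.0, 1.0) == 0.0                 # Vega itself: zero magnitude
    assert abs(magnitude(0.01, 1.0) - 5.0) < 1e-12    # 100x fainter = +5 mag
    ```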

  1. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
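    The fault propagation digraph idea above can be sketched in a few lines: failure modes as nodes, propagation as edges weighted with time intervals, and a backward search from an observed alarm to candidate source components. All node names here are hypothetical, and the full method also exploits the time weights, which this sketch only stores:

    ```python
    # Minimal sketch of a fault propagation digraph: edges carry (t_min, t_max)
    # propagation-time intervals in seconds. Names are illustrative only.
    graph = {
        "pump_fail":    [("low_pressure", (1, 5))],
        "valve_stuck":  [("low_pressure", (2, 8))],
        "low_pressure": [("alarm", (0, 2))],
    }

    def candidate_sources(observed, graph):
        """All failure modes from which `observed` is reachable."""
        reverse = {}
        for src, edges in graph.items():
            for dst, _interval in edges:
                reverse.setdefault(dst, []).append(src)
        stack, seen = [observed], set()
        while stack:
            node = stack.pop()
            for parent in reverse.get(node, []):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    assert candidate_sources("alarm", graph) == {"pump_fail", "valve_stuck", "low_pressure"}
    ```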

  2. The Lifetimes of Phases in High-mass Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Battersby, Cara; Bally, John; Svoboda, Brian

    2017-02-01

    High-mass stars form within star clusters from dense, molecular regions (DMRs), but is the process of cluster formation slow and hydrostatic or quick and dynamic? We link the physical properties of high-mass star-forming regions with their evolutionary stage in a systematic way, using Herschel and Spitzer data. In order to produce a robust estimate of the relative lifetimes of these regions, we compare the fraction of DMRs above a column density associated with high-mass star formation, N(H2) > 0.4-2.5 × 10²² cm⁻², in the “starless” (no signature of stars ≳10 M⊙ forming) and star-forming phases in a 2° × 2° region of the Galactic Plane centered at ℓ = 30°. Of regions capable of forming high-mass stars on ~1 pc scales, the starless (or embedded beyond detection) phase occupies about 60%-70% of the DMR lifetime, and the star-forming phase occupies about 30%-40%. These relative lifetimes are robust over a wide range of thresholds. We outline a method by which relative lifetimes can be anchored to absolute lifetimes from large-scale surveys of methanol masers and UCHII regions. A simplistic application of this method estimates the absolute lifetime of the starless phase to be 0.2-1.7 Myr (about 0.6-4.1 fiducial cloud free-fall times) and the star-forming phase to be 0.1-0.7 Myr (about 0.4-2.4 free-fall times), but these are highly uncertain. This work uniquely investigates the star-forming nature of high column density gas pixel by pixel, and our results demonstrate that the majority of high column density gas is in a starless or embedded phase.
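    The anchoring argument in this record is simple arithmetic: if the relative occupation fractions of two phases are known, an absolute lifetime for one phase (e.g. from maser/UCHII surveys) fixes the other. A back-of-envelope sketch using fractions consistent with the record:

    ```python
    # Back-of-envelope version of the lifetime anchoring described above:
    # phases' lifetimes are proportional to their occupation fractions.
    def starless_lifetime(f_starless, f_starforming, t_starforming_myr):
        return t_starforming_myr * f_starless / f_starforming

    # 65% starless vs 35% star-forming, anchored to a 0.4 Myr star-forming phase
    t = starless_lifetime(0.65, 0.35, 0.4)
    assert 0.2 < t < 1.7   # lands inside the record's quoted 0.2-1.7 Myr range
    ```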

  3. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  4. Transversal Fluctuations of the ASEP, Stochastic Six Vertex Model, and Hall-Littlewood Gibbsian Line Ensembles

    NASA Astrophysics Data System (ADS)

    Corwin, Ivan; Dimitrov, Evgeni

    2018-05-01

    We consider the ASEP and the stochastic six vertex model started with step initial data. After a long time, T, it is known that the one-point height function fluctuations for these systems are of order T^(1/3). We prove the KPZ prediction of T^(2/3) scaling in space. Namely, we prove tightness (and Brownian absolute continuity of all subsequential limits) as T goes to infinity of the height function with spatial coordinate scaled by T^(2/3) and fluctuations scaled by T^(1/3). The starting point for proving these results is a connection discovered recently by Borodin-Bufetov-Wheeler between the stochastic six vertex height function and the Hall-Littlewood process (a certain measure on plane partitions). Interpreting this process as a line ensemble with a Gibbsian resampling invariance, we show that the one-point tightness of the top curve can be propagated to the tightness of the entire curve.

  5. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  6. Regions of absolute ultimate boundedness for discrete-time systems.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Weissenberger, S.

    1972-01-01

    This paper considers discrete-time systems of the Lur'e-Postnikov class where the linear part is not asymptotically stable and the nonlinear characteristic satisfies only partially the usual sector condition. Estimates of the resulting finite regions of absolute ultimate boundedness are calculated by means of a quadratic Liapunov function.

  7. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignored...
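    The calibration curve this record refers to is a regression of threshold cycle (Ct) on the log of the standard's copy number. A sketch of the ordinary least-squares baseline that a Bayesian method would generalize, with made-up dilution-series data:

    ```python
    import math

    # Sketch of an ordinary least-squares qPCR standard curve: Ct versus
    # log10(standard copies). The dilution-series Ct values are made up.
    copies = [1e2, 1e3, 1e4, 1e5, 1e6]
    ct =     [30.1, 26.8, 23.5, 20.2, 16.9]

    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(ct) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx

    # Perfect doubling each cycle gives a slope near -3.32; these toy data
    # were constructed to give exactly -3.3.
    assert abs(slope + 3.3) < 1e-9

    # Estimate an unknown's concentration from its measured Ct:
    unknown_ct = 22.0
    log10_copies = (unknown_ct - intercept) / slope
    assert 4.4 < log10_copies < 4.5
    ```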

  8. The Use of the Time Average Visibility for Analyzing HERA-19 Commissioning Data

    NASA Astrophysics Data System (ADS)

    Gallardo, Samavarti; Benefo, Roshan; La Plante, Paul; Aguirre, James; HERA Collaboration

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) is a radio telescope that will be observing large-scale structure throughout the cosmic reionization epoch. This will allow us to characterize the evolution of the 21 cm power spectrum to constrain the timing and morphology of reionization, the properties of the first galaxies, the evolution of large-scale structure, and the early sources of heating. We develop a simple and robust observable for the HERA-19 commissioning data, the Time Average Visibility (TAV). We compare both redundantly and absolutely calibrated visibilities to detailed instrument simulations and to analytical expectations, and explore the signal present in the TAV. The TAV has already been demonstrated as a method to reject poorly performing antennas, and may be improved with this work to allow a simple cross-check of the calibration solutions without imaging.
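    A plausible reading of the Time Average Visibility is the amplitude of each baseline's complex visibility averaged over time; the exact definition used by the HERA collaboration may differ, so treat this as an assumed sketch. It does illustrate why such a statistic flags bad antennas: coherent signal survives the average while randomly phased noise cancels.

    ```python
    # Assumed sketch of a Time Average Visibility (TAV): average one baseline's
    # complex visibility samples over time and take the amplitude.
    def time_average_visibility(vis):
        """vis: list of complex visibility samples for one baseline."""
        mean = sum(vis) / len(vis)
        return abs(mean)

    # A coherent (slowly varying) signal survives the average...
    coherent = [1 + 0j, 1 + 0.1j, 1 - 0.1j, 1 + 0j]
    # ...while samples with random phase largely cancel.
    noisy = [1 + 0j, -1 + 0j, 0 + 1j, 0 - 1j]
    assert time_average_visibility(coherent) > 0.9
    assert time_average_visibility(noisy) < 0.1
    ```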

  9. New experimental methodology, setup and LabView program for accurate absolute thermoelectric power and electrical resistivity measurements between 25 and 1600 K: application to pure copper, platinum, tungsten, and nickel at very high temperatures.

    PubMed

    Abadlia, L; Gasser, F; Khalouk, K; Mayoufi, M; Gasser, J G

    2014-09-01

    In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce with a good accuracy the thermal conductivity using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new very accurate results for pure copper, platinum, and nickel especially at very high temperatures. But resistivity and absolute thermopower measurement can be more than an objective in itself. Resistivity characterizes the bulk of a material while absolute thermoelectric power characterizes the material at the point where the electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that the measurement of resistivity and absolute thermoelectric power characterizes advantageously the (change of) phase, probably as well as DSC (if not better), since the change of phases can be easily followed during several hours/days at constant temperature.
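    The classical Wiedemann-Franz relation that this record's authors go beyond links the two measured quantities to thermal conductivity through the Lorenz number. A minimal sketch (the copper resistivity is a handbook-style value, not one of the paper's measurements):

    ```python
    # The classical Wiedemann-Franz relation: thermal conductivity estimated
    # from electrical resistivity via the Sommerfeld Lorenz number.
    L0 = 2.44e-8   # Lorenz number, W*Ohm/K^2

    def wf_thermal_conductivity(resistivity_ohm_m, T_kelvin):
        return L0 * T_kelvin / resistivity_ohm_m

    # Copper near room temperature (rho ~ 1.7e-8 Ohm*m) comes out near the
    # handbook value of roughly 400 W/(m*K).
    kappa = wf_thermal_conductivity(1.7e-8, 300.0)
    assert 350 < kappa < 480
    ```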

  10. New experimental methodology, setup and LabView program for accurate absolute thermoelectric power and electrical resistivity measurements between 25 and 1600 K: Application to pure copper, platinum, tungsten, and nickel at very high temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abadlia, L.; Mayoufi, M.; Gasser, F.

    2014-09-15

    In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce with a good accuracy the thermal conductivity using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new very accurate results for pure copper, platinum, and nickel especially at very high temperatures. But resistivity and absolute thermopower measurement can be more than an objective in itself. Resistivity characterizes the bulk of a material while absolute thermoelectric power characterizes the material at the point where the electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that the measurement of resistivity and absolute thermoelectric power characterizes advantageously the (change of) phase, probably as well as DSC (if not better), since the change of phases can be easily followed during several hours/days at constant temperature.

  11. Electron line shape and transmission function of the KATRIN monitor spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slezák, M.

    Knowledge of the neutrino mass is of particular interest in modern neutrino physics. Besides neutrinoless double beta decay and cosmological observations, information about the neutrino mass is obtained from single beta decay by observing the shape of the electron spectrum near the endpoint. The KATRIN β decay experiment aims to push the limit on the effective electron antineutrino mass down to 0.2 eV/c². To reach this sensitivity, several systematic effects have to be kept under control. One of them is fluctuation of the absolute energy scale, which therefore has to be continuously monitored at very high precision. This paper briefly describes KATRIN, the technique for continuous monitoring of the absolute energy scale, and recent improvements in the analysis of the monitoring data.

  12. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  13. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
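    The record's two ways of quoting the lag (a fraction of a period versus microseconds) are related by the pulsar's rotation period. A quick consistency check, assuming the Crab's period of roughly 33.6 ms during the observation era (the value is assumed here, not taken from the record):

    ```python
    # Consistency check of the numbers above: a phase lead of 0.01025 period
    # converts to time via the Crab pulsar's rotation period (~33.6 ms
    # during the RXTE era; assumed here).
    period_s = 0.0336
    lead_phase = 0.01025
    lead_us = lead_phase * period_s * 1e6   # lead in microseconds
    assert abs(lead_us - 344) < 10          # matches the quoted 344 +/- 40 us
    ```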

  14. Effects of repetitive or intensified instructions in telephone assisted, bystander cardiopulmonary resuscitation: an investigator-blinded, 4-armed, randomized, factorial simulation trial.

    PubMed

    van Tulder, R; Roth, D; Krammel, M; Laggner, R; Heidinger, B; Kienbacher, C; Novosad, H; Chwojka, C; Havel, C; Sterz, F; Schreiber, W; Herkner, H

    2014-01-01

    Compression depth is frequently suboptimal in cardiopulmonary resuscitation (CPR). We investigated the effects of intensified wording and/or repetitive target-depth instructions on compression depth in telephone-assisted, protocol-driven, bystander CPR on a simulation manikin. Thirty-two volunteers performed 10 min of compression-only CPR in a prospective, investigator-blinded, 4-armed, factorial setting. Participants were randomized either to standard wording ("push down firmly 5 cm"), intensified wording ("it is very important to push down 5 cm every time") or standard or intensified wording repeated every 20s. Three dispatchers were randomized to give these instructions. Primary outcome was relative compression depth (absolute compression depth minus leaning depth). Secondary outcomes were absolute distance, hands-off times as well as BORG-scale and nine-hole peg test (NHPT), pulse rate and blood pressure to reflect physical exertion. We applied a random effects linear regression model. Relative compression depth was 35 ± 10 mm (standard) versus 31 ± 11 mm (intensified wording) versus 25 ± 8 mm (repeated standard) and 31 ± 14 mm (repeated intensified wording). Adjusted for design, body mass index and female sex, intensified wording and repetition led to decreased compression depth of 13 (95%CI -25 to -1) mm (p=0.04) and 9 (95%CI -21 to 3) mm (p=0.13), respectively. Secondary outcomes regarding intensified wording showed significant differences for absolute distance (43 ± 2 versus 20 (95%CI 3-37) mm; p=0.01) and hands-off times (60 ± 40 versus 157 (95%CI 63-251) s; p=0.04). In protocol-driven, telephone-assisted, bystander CPR, intensified wording and/or repetitive target-depth instruction did not improve compression depth compared to the standard instruction. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with a high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately achieving the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variable and correcting the joint variable, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
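    The differential correction loop described in this record (measure the end-tool pose, map the error through the Jacobian, correct the joint variables) can be sketched on a toy planar two-link arm rather than the paper's six-degree-of-freedom industrial robot; link lengths and targets below are illustrative only:

    ```python
    import math

    # Toy sketch of Jacobian-based pose correction on a planar two-link arm
    # (not the paper's robot). Link lengths are illustrative.
    L1, L2 = 0.5, 0.4   # link lengths, m

    def fk(q1, q2):
        """Forward kinematics: joint angles -> end-effector position."""
        x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
        y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
        return x, y

    def jacobian(q1, q2):
        s1, c1 = math.sin(q1), math.cos(q1)
        s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
        return [[-L1 * s1 - L2 * s12, -L2 * s12],
                [ L1 * c1 + L2 * c12,  L2 * c12]]

    def correct(q1, q2, target, iters=10):
        """Iteratively map the measured pose error through J^-1 and correct q."""
        for _ in range(iters):
            x, y = fk(q1, q2)
            ex, ey = target[0] - x, target[1] - y        # measured pose error
            (a, b), (c, d) = jacobian(q1, q2)
            det = a * d - b * c                          # nonsingular away from q2 = 0, pi
            dq1 = ( d * ex - b * ey) / det               # dq = J^-1 * error
            dq2 = (-c * ex + a * ey) / det
            q1, q2 = q1 + dq1, q2 + dq2
        return q1, q2

    q1, q2 = correct(0.3, 0.8, target=(0.6, 0.4))
    x, y = fk(q1, q2)
    assert abs(x - 0.6) < 1e-6 and abs(y - 0.4) < 1e-6   # pose error corrected
    ```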

  16. Linking Comparisons of Absolute Gravimeters: A Proof of Concept for a new Global Absolute Gravity Reference System.

    NASA Astrophysics Data System (ADS)

    Wziontek, H.; Palinkas, V.; Falk, R.; Vaľko, M.

    2016-12-01

    For decades, absolute gravimeters have been compared on a regular basis at the international level, starting at the International Bureau for Weights and Measures (BIPM) in 1981. Usually, these comparisons are based on constant reference values deduced from all accepted measurements acquired during the comparison period; temporal changes between comparison epochs are usually not considered. Resolution No. 2, adopted by the IAG during the IUGG General Assembly in Prague in 2015, initiates the establishment of a Global Absolute Gravity Reference System based on key comparisons of absolute gravimeters (AG) under the International Committee for Weights and Measures (CIPM), in order to establish a common level in the microGal range. A stable and unique reference frame can only be achieved if different AGs take part in different kinds of comparisons. Systematic deviations between the respective comparison reference values can be detected if the AGs can be considered stable over time. The continuous operation of superconducting gravimeters (SG) at selected stations further supports the temporal link of comparison reference values by establishing a reference function over time. Through a homogeneous reprocessing of different comparison epochs, including AG and SG time series at selected stations, links between several comparisons will be established and temporal comparison reference functions will be derived. In this way, comparisons at the regional level can be traced back to the level of key comparisons, providing a reference for other absolute gravimeters. We discuss and demonstrate how such a concept can be used to support the future absolute gravity reference system.

  17. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    NASA Astrophysics Data System (ADS)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  18. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    PubMed Central

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    Purpose The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. 
Conclusions Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT. PMID:24586971

  19. Using multivariate regression model with least absolute shrinkage and selection operator (LASSO) to predict the incidence of Xerostomia after intensity-modulated radiotherapy for head and neck cancer.

    PubMed

    Lee, Tsair-Fwu; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3(+) xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R(2), chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R(2) was satisfactory and corresponded well with the expected values. 
Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT.
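    As a generic, numpy-only illustration of LASSO variable selection (not the authors' code, data, or model), coordinate descent with soft-thresholding drives the coefficients of irrelevant predictors exactly to zero, which is the mechanism by which prognostic factors are screened; the synthetic data below are invented for demonstration:

    ```python
    import numpy as np

    def lasso_cd(X, y, lam, n_iter=200):
        """LASSO linear regression via coordinate descent (illustrative, numpy-only).

        Minimizes 0.5/n * ||y - Xw||^2 + lam * ||w||_1.
        """
        n, p = X.shape
        w = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0) / n
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ w + X[:, j] * w[j]   # partial residual excluding feature j
                rho = X[:, j] @ r / n
                # Soft-thresholding: shrinks small coefficients exactly to zero.
                w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
        return w

    rng = np.random.default_rng(0)
    n, p = 200, 8
    X = rng.normal(size=(n, p))
    true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0, 0.0, 0.0])
    y = X @ true_w + 0.1 * rng.normal(size=n)

    w = lasso_cd(X, y, lam=0.1)
    selected = np.flatnonzero(np.abs(w) > 1e-6)
    print(selected)  # indices of the predictors LASSO retains
    ```

    The paper uses the logistic (classification) form of the same penalty; the selection behavior, nonzero coefficients marking retained prognostic factors, is identical in spirit.
    
    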

  20. Scaling digital radiographs for templating in total hip arthroplasty using conventional acetate templates independent of calibration markers.

    PubMed

    Brew, Christopher J; Simpson, Philip M; Whitehouse, Sarah L; Donnelly, William; Crawford, Ross W; Hubble, Matthew J W

    2012-04-01

    We describe a scaling method for templating digital radiographs using conventional acetate templates independent of template magnification without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%; range, 117%-123.4%). This fixed magnification factor was used to scale the radiographs by the method described. Thirty-two femoral heads on postoperative total hip arthroplasty radiographs were then measured and compared with the actual size. The mean absolute accuracy was within 0.5% of actual head size (range, 0%-3%) with a mean absolute difference of 0.16 mm (range, 0-1 mm; SD, 0.26 mm). Intraclass correlation coefficient showed excellent reliability for both interobserver and intraobserver measurements with intraclass correlation coefficient scores of 0.993 (95% CI, 0.988-0.996) for interobserver measurements and intraobserver measurements ranging between 0.990 and 0.993 (95% CI, 0.980-0.997). Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
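    The scaling arithmetic behind the method is a single division by the fixed magnification factor; a minimal sketch follows (the 119.8% mean magnification is from the study, while the measured value is hypothetical):

    ```python
    # Scale a measurement taken on a magnified digital radiograph back to true size
    # using a fixed department-wide magnification factor (119.8% per the study).
    MAG = 1.198

    def true_size(measured_mm, mag=MAG):
        """Convert a distance measured on the radiograph to anatomical size."""
        return measured_mm / mag

    # e.g. a femoral head measuring 57.5 mm on the image
    d = true_size(57.5)
    print(round(d, 1))  # ~48.0 mm
    ```
    
    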

  1. Determination of quality factors by microdosimetry

    NASA Astrophysics Data System (ADS)

    Al-Affan, I. A. M.; Watt, D. E.

    1987-03-01

    The application of microdose parameters for the specification of a revised scale of quality factors which would be applicable at low doses and dose rates is examined in terms of an original proposal by Rossi. Two important modifications are suggested to enable an absolute scale of quality factors to be constructed. Allowance should be made for the dependence of the saturation threshold of lineal energy on the type of heavy charged particle. Also, an artificial saturation threshold should be introduced for electron tracks as a means of adapting the measurements made in the microdosimeter to the more realistic site sizes of nanometer dimensions. The proposed absolute scale of quality factors nicely encompasses the high RBEs of around 3 observed at low doses for tritium β rays and is consistent with the recent recommendation of the ICRP that the quality factor for fast neutrons be increased by a factor of two, assuming that there is no biological repair for the reference radiation.

  2. A comparison of phone-based and onsite-based fidelity for Assertive Community Treatment in Indiana

    PubMed Central

    McGrew, John H.; Stull, Laura G.; Rollins, Angela L.; Salyers, Michelle P.; Hicks, Lia J.

    2014-01-01

    Objective This study investigated the reliability, validity, and role of rater expertise in a phone-administered fidelity assessment instrument based on the Dartmouth Assertive Community Treatment Scale (DACTS). Methods An experienced rater paired with either a research assistant without fidelity assessment experience or a consultant familiar with the treatment site conducted phone-based assessments of 23 teams providing assertive community treatment in Indiana. Using the DACTS, consultants conducted on-site evaluations of the programs. Results The pairs of phone raters showed high levels of consistency [intraclass correlation coefficient (ICC)=.92] and consensus (mean absolute difference of .07). Phone and on-site assessments showed strong agreement (ICC=.87) and consensus (mean absolute difference of .07) and agreed within .1 scale point, or 2% of the scoring range, for 83% of sites and within .15 scale point for 91% of sites. Results were unaffected by the expertise level of the rater. Conclusions Phone-based assessment could help agencies monitor faithful implementation of evidence-based practices. PMID:21632738
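    For illustration, an intraclass correlation coefficient and a mean absolute difference of the kind reported above can be computed from a targets-by-raters score matrix. This numpy-only sketch uses the one-way random-effects ICC(1,1) form with hypothetical DACTS-style fidelity scores; the study may well have used a different ICC variant:

    ```python
    import numpy as np

    def icc_oneway(scores):
        """One-way random-effects ICC(1,1) for a (targets x raters) score matrix."""
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)
        # Between-target and within-target mean squares from one-way ANOVA.
        msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
        msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    # Hypothetical fidelity scores (1-5 scale) from phone vs. on-site assessment.
    phone  = [4.2, 3.8, 4.6, 3.1, 4.9, 2.8, 4.0, 3.5]
    onsite = [4.3, 3.7, 4.5, 3.2, 4.8, 2.9, 4.1, 3.6]
    m = np.column_stack([phone, onsite])

    print(round(icc_oneway(m), 3))                           # agreement between methods
    mad = np.abs(np.array(phone) - np.array(onsite)).mean()
    print(round(mad, 2))                                     # mean absolute difference
    ```
    
    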

  3. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)].

    PubMed

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-07

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  4. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  5. Efficiency at Sorting Cards in Compressed Air

    PubMed Central

    Poulton, E. C.; Catton, M. J.; Carpenter, A.

    1964-01-01

    At a site where compressed air was being used in the construction of a tunnel, 34 men sorted cards twice, once at normal atmospheric pressure and once at 3½, 2½, or 2 atmospheres absolute pressure. An additional six men sorted cards twice at normal atmospheric pressure. When the task was carried out for the first time, all the groups of men performing at raised pressure were found to yield a reliably greater proportion of very slow responses than the group of men performing at normal pressure. There was reliably more variability in timing at 3½ and 2½ atmospheres absolute than at normal pressure. At 3½ atmospheres absolute the average performance was also reliably slower. When the task was carried out for the second time, exposure to 3½ atmospheres absolute pressure had no reliable effect. Thus compressed air affected performance only while the task was being learnt; it had little effect after practice. No reliable differences were found related to age, to length of experience in compressed air, or to the duration of the exposure to compressed air, which was never less than 10 minutes at 3½ atmospheres absolute pressure. PMID:14180485

  6. Improvement of absolute positioning of precision stage based on cooperation the zero position pulse signal and incremental displacement signal

    NASA Astrophysics Data System (ADS)

    Wang, H. H.; Shi, Y. P.; Li, X. H.; Ni, K.; Zhou, Q.; Wang, X. H.

    2018-03-01

    In this paper, a scheme to measure the position of precision stages with high precision is presented. The encoder is composed of a scale grating and a compact two-probe reading head that reads a zero-position pulse signal and a continuous incremental displacement signal. The scale grating carries two kinds of codes: multiple reference codes with different spacings superimposed onto incremental grooves with an equal-spacing structure. The code on the reference mask in the reading head is the same as the reference codes on the scale grating, and it generates a pulse signal that coarsely locates the reference position as the reading head moves along the scale grating. After the pulse signal locates the reference position within a section, the reference position can be located precisely using the amplitude of the incremental displacement signal. A set of reference codes and a scale grating were designed, and experimental results show that the design achieves a primary precision of 1 μm. The period of the incremental signal is 1 μm, and a precision of 1000/N nm can be achieved by subdividing the incremental signal N times.
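    The N-fold subdivision of the incremental signal can be sketched as phase interpolation of a sine/cosine quadrature pair: the arctangent of the pair gives the fractional position within one 1 μm period, which is added to the count of whole periods. This is a generic illustration of the principle, not the paper's implementation:

    ```python
    import numpy as np

    PERIOD_NM = 1000.0  # 1 um incremental signal period, as in the record above

    def position_nm(whole_periods, sin_v, cos_v):
        """Position within a section: counted whole periods plus interpolated phase.

        The atan2 of the quadrature pair resolves the phase within one period;
        electronic subdivision by N corresponds to quantizing this phase into
        N steps of PERIOD_NM / N each.
        """
        phase = np.arctan2(sin_v, cos_v) % (2 * np.pi)  # fraction of one period
        return whole_periods * PERIOD_NM + PERIOD_NM * phase / (2 * np.pi)

    # A displacement of 3.25 periods yields quadrature values at phase pi/2:
    print(position_nm(3, np.sin(np.pi / 2), np.cos(np.pi / 2)))  # 3250.0 nm
    ```
    
    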

  7. Duration of the common cold and similar continuous outcomes should be analyzed on the relative scale: a case study of two zinc lozenge trials.

    PubMed

    Hemilä, Harri

    2017-05-12

    The relative scale has been used for decades in analysing binary data in epidemiology. In contrast, there has been a long tradition of carrying out meta-analyses of continuous outcomes on the absolute, original measurement, scale. The biological rationale for using the relative scale in the analysis of binary outcomes is that it adjusts for baseline variations; however, similar baseline variations can occur in continuous outcomes, and the relative effect scale may therefore often be useful for continuous outcomes as well. The aim of this study was to determine whether the relative scale is more consistent with empirical data on treating the common cold than the absolute scale. Individual patient data were available for two randomized trials on zinc lozenges for the treatment of the common cold. Mossad (Ann Intern Med 125:81-8, 1996) found reductions of 4.0 days and 43%, and Petrus (Curr Ther Res 59:595-607, 1998) found reductions of 1.77 days and 25%, in the duration of colds. In both trials, variance in the placebo group was significantly greater than in the zinc lozenge group. The effect estimates were applied to the common cold distributions of the placebo groups, and the resulting distributions were compared with the actual zinc lozenge group distributions. When the absolute effect estimates, 4.0 and 1.77 days, were applied to the placebo group common cold distributions, negative and zero (i.e., impossible) cold durations were predicted, and the high variance remained. In contrast, when the relative effect estimates, 43 and 25%, were applied, impossible common cold durations were not predicted in the placebo groups, and the cold distributions became similar to those of the zinc lozenge groups. For some continuous outcomes, such as the duration of illness and the duration of hospital stay, the relative scale leads to a more informative statistical analysis and more effective communication of the study findings. 
The transformation of continuous data to the relative scale is simple with a spreadsheet program, after which the relative scale data can be analysed using standard meta-analysis software. The option for the analysis of relative effects of continuous outcomes directly from the original data should be implemented in standard meta-analysis programs.
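    The core argument (subtracting an absolute effect from a placebo-group distribution can predict impossible negative durations, while a proportional reduction cannot) is easy to illustrate with hypothetical cold durations; only the 4.0-day and 43% estimates below come from the abstract:

    ```python
    import numpy as np

    # Hypothetical placebo-group cold durations in days (not the trial data).
    placebo = np.array([2.0, 3.0, 5.0, 7.0, 9.0, 14.0])

    absolute_effect_days = 4.0   # the 4.0-day estimate cited above
    relative_effect = 0.43       # the 43% reduction cited above

    pred_absolute = placebo - absolute_effect_days   # can go negative: impossible
    pred_relative = placebo * (1 - relative_effect)  # always stays positive

    print(pred_absolute.min())  # -2.0 -> an impossible negative cold duration
    print(pred_relative.min())  # 1.14 -> plausible
    ```

    The relative transformation also shrinks the spread in proportion to the baseline, which is why it reproduces the smaller variance seen in the zinc groups.
    
    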

  8. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20% of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.

  9. How efficient is sliding-scale insulin therapy? Problems with a 'cookbook' approach in hospitalized patients.

    PubMed

    Katz, C M

    1991-04-01

    Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.

  10. Absolute and relative height-pixel accuracy of SRTM-GL1 over the South American Andean Plateau

    NASA Astrophysics Data System (ADS)

    Satge, Frédéric; Denezine, Matheus; Pillco, Ramiro; Timouk, Franck; Pinel, Sébastien; Molina, Jorge; Garnier, Jérémie; Seyler, Frédérique; Bonnet, Marie-Paule

    2016-11-01

    Previously available only over the Continental United States (CONUS), the 1 arc-second mesh size (spatial resolution) SRTM-GL1 (Shuttle Radar Topographic Mission - Global 1) product has been freely available worldwide since November 2014. With a relatively small mesh size, this digital elevation model (DEM) provides valuable topographic information over remote regions. SRTM-GL1 is assessed for the first time over the South American Andean Plateau in terms of both the absolute and relative vertical point-to-point accuracies at the regional scale and for different slope classes. For comparison, SRTM-v4 and the Global DEM version 2 (GDEM-v2) generated by ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are also considered. A total of approximately 160,000 ICESat/GLAS (Ice, Cloud and Land Elevation Satellite/Geoscience Laser Altimeter System) data are used as ground reference measurements. Relative error is often neglected in DEM assessments due to the lack of reference data. A new methodology is proposed to assess the relative accuracies of SRTM-GL1, SRTM-v4 and GDEM-v2 based on a comparison with ICESat/GLAS measurements. Slope values derived from DEMs and ICESat/GLAS measurements from approximately 265,000 ICESat/GLAS point pairs are compared using quantitative and categorical statistical analysis introducing a new index: the False Slope Ratio (FSR). Additionally, a reference hydrological network is derived from Google Earth and compared with river networks derived from the DEMs to assess each DEM's potential for hydrological applications over the region. In terms of the absolute vertical accuracy on a global scale, GDEM-v2 is the most accurate DEM, while SRTM-GL1 is more accurate than SRTM-v4. However, a simple bias correction makes SRTM-GL1 the most accurate DEM over the region in terms of vertical accuracy. The relative accuracy results generally did not corroborate the absolute vertical accuracy. 
GDEM-v2 presents the lowest statistical results based on the relative accuracy, while SRTM-GL1 is the most accurate. Vertical accuracy and relative accuracy are two independent components that must be jointly considered when assessing a DEM's potential. DEM accuracies increased with slope. In terms of hydrological potential, SRTM products are more accurate than GDEM-v2. However, the DEMs exhibit river extraction limitations over the region due to the low regional slope gradient.

  11. Is the Universe a white-hole?

    NASA Astrophysics Data System (ADS)

    Berman, Marcelo Samuel

    2007-10-01

    Pathria (1972) has shown, for a pressureless closed Universe, that it is inside a black (or white) hole. We show now, that the Universe with a cosmic pressure obeying Einstein’s field equations, can be inside a white-hole. In the closed case, a positive cosmological constant does the job; for the flat and open cases, the condition we find is not verified for the very early Universe, but with the growth of the scale-factor, the condition will be certainly fulfilled for a positive cosmological constant, after some time. We associate the absolute temperature of the Universe, with the temperature of the corresponding white-hole.

  12. Effect of task load and task load increment on performance and workload

    NASA Technical Reports Server (NTRS)

    Hancock, P. A.; Williams, G.

    1993-01-01

    The goal of adaptive automated task allocation is the 'seamless' transfer of work demand between human and machine. Clearly, at the present time, we are far from this objective. One of the barriers to achieving effortless human-machine symbiosis is an inadequate understanding of the way in which operators themselves seek to reallocate demand among their own personal 'resources.' The paper addresses this through an examination of workload response, which scales an individual's reaction to common levels of experienced external demand. The results indicate the primary driver of performance is the absolute level of task demand over the increment in that demand.

  13. Biological characteristics of crucian by quantitative inspection method

    NASA Astrophysics Data System (ADS)

    Chu, Mengqi

    2015-04-01

    Through a quantitative inspection method, the biological characteristics of the crucian carp were preliminarily studied. The crucian carp (Carassius auratus, family Cyprinidae, order Cypriniformes) is a mainly plant-eating omnivorous fish with gregarious, selective, and ranking habits. Crucian carp are widely distributed, and perennial waters all over the country support production. Indicators measured in the experiment were used to characterize the growth and reproduction of crucian carp in this area. Using the measured data (such as scale length, scale size, and annulus diameter) and related functions, the growth of crucian carp in any given year was calculated. Egg shape, color, and weight were used to determine maturity, and the mean egg diameter per 20 eggs and the number of eggs per 0.5 g were used to calculate the relative and absolute fecundity of the fish. The measured crucian carp were females at puberty. Based on the relation between scale diameter and body length, a linear relationship was obtained: y=1.530+3.0649. From the data, fecundity is closely related to age: the older the fish, the more mature the gonad development and the greater the number of eggs; in addition, absolute fecundity increases with the pituitary gland. Quantitative examination of the food items ingested reveals the main foods, secondary foods, and incidental foods of crucian carp, and the degree to which crucian carp prefer the various bait organisms. Fish fecundity increases with weight gain; it is characteristic of the species and population and is at the same time influenced by individual age, body length, body weight, environmental conditions (especially nutritional conditions), breeding habits, spawning frequency, and egg size. 
These studies of the biological characteristics of the crucian carp provide an ecological basis for specific local plans for crucian carp feeding, breeding, propagation, fishing, and resource protection and management.

  14. Characteristic Time Scales of Characteristic Magmatic Processes and Systems

    NASA Astrophysics Data System (ADS)

    Marsh, B. D.

    2004-05-01

    Every specific magmatic process, regardless of spatial scale, has an associated characteristic time scale. Time scales associated with crystals alone are rates of growth, dissolution, settling, aggregation, annealing, and nucleation, among others. At the other extreme are the time scales associated with the dynamics of the entire magmatic system. These can be separated into two groups: those associated with system genetics (e.g., the production and transport of magma, establishment of the magmatic system) and those due to physical characteristics of the established system (e.g., wall rock failure, solidification front propagation and instability, porous flow). The detailed geometry of a specific magmatic system is particularly important to appreciate; although generic systems are useful, care must be taken to make model systems as absolutely realistic as possible. Fuzzy models produce fuzzy science. Knowledge of specific time scales is not necessarily useful or meaningful unless the hierarchical context of the time scales for a realistic magmatic system is appreciated. The age of a specific phenocryst or ensemble of phenocrysts, as determined from isotopic or CSD studies, is not meaningful unless something can be ascertained of the provenance of the crystals. For example, crystal size multiplied by growth rate gives a meaningful crystal age only if it is from a part of the system that has experienced semi-monotonic cooling prior to chilling; crystals entrained from a long-standing cumulate bed that were mechanically sorted in ascending magma may not reveal this history. Ragged old crystals rolling about in the system for untold numbers of flushing times record specious process times, telling more about the noise in the system than the life of typical, first generation crystallization processes. The most helpful process-related time scales are those that are known well and that bound or define the temporal style of the system. 
Perhaps the most valuable of these times comes from the observed durations and rates of volcanism. There can be little doubt that the temporal styles of volcanism are the same as those of magmatism in general. Volcano repose times, periodicity, eruptive fluxes, acoustic emission structures, lava volumes, longevity, etc. must also be characteristic of pluton-dominated systems. We must therefore give up some classical concepts (e.g., instantaneous injection of crystal-free magma as an initial condition) for any plutonic/chambered system and move towards an integrated concept of magmatism. Among the host of process-related time scales, probably the three most fundamental of any magmatic system are (1) the time scale associated with crystal nucleation (J) and growth (G) (t_x = C_1 (G^3 J)^(-1/4); Zieg & Marsh, J. Pet. '02) along with the associated scales for mean crystal size (L) and population (N), (2) the time scale associated with conductive cooling controlled by a local length scale (d) (t_c = C_2 d^2/K; K is thermal diffusivity), and (3) the time scale associated with intra-crystal diffusion (t_d = C_3 L^2/D; D is chemical diffusivity). It is the subtle, clever, and insightful application of time scales, dovetailed with realistic system geometry and attention paid to the analogous time scales of volcanism, that promises to reveal the true dynamic integration of magmatic systems.
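The three characteristic time scales above can be compared numerically. In this sketch the prefactors C1-C3 and every parameter value are assumed, order-of-magnitude placeholders, not values from the source:

```python
# Illustrative comparison of the abstract's three magmatic time scales.
# All constants and parameter values are assumed placeholders.

def crystallization_time(G, J, C1=1.0):
    """t_x = C1 * (G**3 * J)**(-1/4): nucleation rate J, growth rate G."""
    return C1 * (G**3 * J) ** (-0.25)

def conductive_cooling_time(d, kappa, C2=1.0):
    """t_c = C2 * d**2 / kappa: conduction over a local length scale d."""
    return C2 * d**2 / kappa

def intracrystal_diffusion_time(L, D, C3=1.0):
    """t_d = C3 * L**2 / D: chemical diffusion across a crystal of size L."""
    return C3 * L**2 / D

# Assumed illustrative magnitudes: G in m/s, J in nuclei/m^3/s,
# d in m, kappa in m^2/s, L in m, D in m^2/s.
t_x = crystallization_time(G=1e-10, J=1.0)
t_c = conductive_cooling_time(d=100.0, kappa=1e-6)
t_d = intracrystal_diffusion_time(L=1e-3, D=1e-17)
for name, t in [("t_x", t_x), ("t_c", t_c), ("t_d", t_d)]:
    print(f"{name} ~ {t:.2e} s")
```

Comparing such magnitudes is one way to see which process bounds the temporal style of a given system, in the hierarchical sense the abstract stresses.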

  15. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters even more if decision makers are to adopt the right one. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method resulted in a percentage of 9.77%, and it was decided that the least-squares method works well for time series and trend data.
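The two error measures, applied to a least-squares trend forecast as in the abstract, can be sketched as follows (the sample data are made up for illustration):

```python
# Fit a least-squares trend line to a series, then score it with
# MAD (Mean Absolute Deviation) and MAPE (Mean Absolute Percentage Error).

def least_squares_line(y):
    """Fit y = a + b*t (t = 0..n-1) by ordinary least squares."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))
    a = ybar - b * tbar
    return [a + b * ti for ti in t]

def mad(actual, forecast):
    """Mean Absolute Deviation: average absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

sales = [112, 118, 132, 129, 141]          # fabricated example series
fit = least_squares_line(sales)
print(f"MAD = {mad(sales, fit):.2f}, MAPE = {mape(sales, fit):.2f}%")
```

A lower MAPE indicates a smaller typical percentage miss; the 9.77% reported in the abstract was computed on the study's own data, not on this toy series.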

  16. Astronomical variation experiments with a Mars general circulation model

    NASA Technical Reports Server (NTRS)

    Pollack, J. B.; Haberle, R. M.; Murphy, J. R.; Schaeffer, J.; Lee, H.

    1992-01-01

    On time scales of a hundred thousand to a million years, the eccentricity of Mars' orbit varies in a quasi-periodic manner between extremes as large as 0.14 and as small as 0, and the tilt of its axis of rotation with respect to the orbit normal also varies quasi-periodically between extremes as large as 35 deg and as small as 15 deg. In addition, the orientation of the axis precesses on comparable time scales. These astronomical variations are much more extreme than those experienced by the Earth. These variations are thought to have strongly modulated the seasonal cycles of dust, carbon dioxide, and water. One manifestation of the induced quasi-periodic climate changes may be the layered terrain of the polar regions, with individual layers perhaps recording variations in the absolute and/or relative deposition rates of dust and water in the polar regions, most likely in association with the wintertime deposition of carbon dioxide ice. In an attempt to understand the manner in which atmospheric temperatures and winds respond to the astronomical forcings, we have initiated a series of numerical experiments with the NASA/Ames general circulation model of the Martian Atmosphere.

  17. Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data

    NASA Astrophysics Data System (ADS)

    Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.

    2017-07-01

    A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high- and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.
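The length-scale dependence described above is commonly summarized by a power-law exponent fitted in log-log space; the sketch below uses synthetic deformation values with an assumed exponent, not RGPS or model output:

```python
# Estimate the power-law scaling exponent beta in <eps(L)> ~ L^(-beta),
# where <eps(L)> is mean deformation rate at averaging scale L,
# via an ordinary least-squares fit of log<eps> against log L.
import math

def scaling_exponent(scales_km, mean_deform):
    """Fit log<eps> = a - beta * log L and return beta."""
    xs = [math.log(s) for s in scales_km]
    ys = [math.log(d) for d in mean_deform]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope

# Synthetic data constructed with beta = 0.2 (assumed, for illustration).
scales = [10, 20, 40, 80, 160]                  # km
deform = [0.1 * s ** -0.2 for s in scales]      # 1/day
beta = scaling_exponent(scales, deform)
print(f"beta ~ {beta:.3f}")
```

Comparing such exponents between model solutions and satellite observations is one concrete form of the "power-law scaling metrics" the abstract refers to.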

  18. Acellular pertussis vaccines effectiveness over time: A systematic review, meta-analysis and modeling study

    PubMed Central

    Chit, Ayman; Zivaripiran, Hossein; Shin, Thomas; Lee, Jason K. H.; Tomovici, Antigona; Macina, Denis; Johnson, David R.; Decker, Michael D.; Wu, Jianhong

    2018-01-01

    Background Acellular pertussis vaccine studies postulate that waning protection, particularly after the adolescent booster, is a major contributor to the increasing US pertussis incidence. However, these studies reported relative (ie, vs a population given prior doses of pertussis vaccine), not absolute (ie, vs a pertussis vaccine naïve population) efficacy following the adolescent booster. We aim to estimate the absolute protection offered by acellular pertussis vaccines. Methods We conducted a systematic review of acellular pertussis vaccine effectiveness (VE) publications. Studies had to comply with the US schedule, evaluate clinical outcomes, and report VE over discrete time points. VE after the 5-dose childhood series and after the adolescent sixth-dose booster were extracted separately and pooled. All relative VE estimates were transformed to absolute estimates. VE waning was estimated using meta-regression modeling. Findings Three studies reported VE after the childhood series and four after the adolescent booster. All booster studies reported relative VE (vs acellular pertussis vaccine-primed population). We estimate initial childhood series absolute VE is 91% (95% CI: 87% to 95%) and declines at 9.6% annually. Initial relative VE after adolescent boosting is 70% (95% CI: 54% to 86%) and declines at 45.3% annually. Initial absolute VE after adolescent boosting is 85% (95% CI: 84% to 86%) and declines at 11.7% (95% CI: 11.1% to 12.3%) annually. Interpretation Acellular pertussis vaccine efficacy is initially high and wanes over time. Observational VE studies of boosting failed to recognize that they were measuring relative, not absolute, VE and the absolute VE in the boosted population is better than appreciated. PMID:29912887
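The relative-to-absolute transformation the abstract describes can be sketched with standard risk-ratio algebra; the relation and the example numbers below are illustrative assumptions, not the paper's exact meta-regression model:

```python
# If the comparison group retains residual protection ve_primed from
# earlier doses, a relative VE measured against it understates absolute VE.
# Assumed standard relation: 1 - absVE = (1 - relVE) * (1 - ve_primed).

def absolute_ve(rel_ve, ve_primed):
    """Convert relative VE (vs a vaccine-primed population) to absolute VE
    (vs a vaccine-naive population), given the primed group's residual VE."""
    return 1 - (1 - rel_ve) * (1 - ve_primed)

# Example: a relative VE of 70% measured against a primed group that is
# assumed to retain 50% residual protection.
print(f"absolute VE ~ {absolute_ve(0.70, 0.50):.0%}")
```

Under these assumed inputs the absolute estimate exceeds the relative one, which is the qualitative point of the abstract: relative booster VE understates absolute protection.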

  19. Stability of alexithymia and its relationships with the 'big five' factors, temperament, character, and attachment style.

    PubMed

    Picardi, Angelo; Toni, Alessandro; Caroppo, Emanuele

    2005-01-01

    Controversy still exists concerning the stability of the alexithymia construct. Also, although alexithymia has been found to be related in a theoretically meaningful way to other personality constructs such as the 'Big Five' factors, few studies have investigated its relationship with influential constructs such as temperament and character, and attachment security. Two hundred twenty-one undergraduate and graduate students were administered the Toronto Alexithymia Scale (TAS-20), the State-Trait Anxiety Inventory (STAI), the Zung Depression Scale (ZDS), the Temperament and Character Inventory (TCI-125), the Big Five Questionnaire (BFQ), and the Experiences in Close Relationships (ECR) questionnaire. After 1 month, 115 participants again completed the TAS-20, STAI, and ZDS. Alexithymia was only moderately correlated with depression and anxiety. Both the absolute and relative stability of TAS-20 total and subscale scores were high, and a negligible portion of their change over time was accounted for by changes in depression or anxiety. In separate multiple regression models also including gender, age, depression, and anxiety, TAS-20 total and subscale scores were correlated with low energy/extraversion, low emotional stability, openness, low friendliness/agreeableness; harm avoidance, low self-directedness, low cooperativeness, low reward dependence; attachment-related avoidance and anxiety. Our findings lend support to both absolute and relative stability of alexithymia, corroborate an association between alexithymia and insecure attachment, and contribute to a coherent placing of alexithymia in the broader theoretical network of personality constructs. Copyright (c) 2005 S. Karger AG, Basel.

  20. Deriving a geocentric reference frame for satellite positioning and navigation

    NASA Technical Reports Server (NTRS)

    Malla, R. P.; Wu, S.-C.

    1988-01-01

    With the advent of Earth-orbiting geodetic satellites, nongeocentric datums or reference frames have become things of the past. Accurate geocentric three-dimensional positioning is now possible and is of great importance for various geodetic and oceanographic applications. While relative positioning accuracy of a few centimeters has become a reality using very long baseline interferometry (VLBI), the uncertainty in the offset of the adopted coordinate system origin from the geocenter is still believed to be on the order of 1 meter. Satellite laser ranging (SLR), however, is capable of determining this offset to better than 10 cm, but this is possible only after years of measurements. Global Positioning System (GPS) measurements provide a powerful tool for an accurate determination of this origin offset. Two strategies are discussed. The first strategy utilizes the precise relative positions that were predetermined by VLBI to fix the frame orientation and the absolute scaling, while the offset from the geocenter is determined from GPS measurements. Three different cases are presented under this strategy. The reference frame thus adopted will be consistent with the VLBI coordinate system. The second strategy establishes a reference frame by holding only the longitude of one of the tracking sites fixed. The absolute scaling is determined by the adopted gravitational constant (GM) of the Earth; and the latitude is inferred from the time signature of the Earth rotation in the GPS measurements. The coordinate system thus defined will be a geocentric Earth-fixed coordinate system.

  1. The Impact of Strategy Instruction and Timing of Estimates on Low and High Working-Memory Capacity Readers' Absolute Monitoring Accuracy

    ERIC Educational Resources Information Center

    Linderholm, Tracy; Zhao, Qin

    2008-01-01

    Working-memory capacity, strategy instruction, and timing of estimates were investigated for their effects on absolute monitoring accuracy, which is the difference between estimated and actual reading comprehension test performance. Participants read two expository texts under one of two randomly assigned reading strategy instruction conditions…

  2. Particle visualization in high-power impulse magnetron sputtering. II. Absolute density dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britun, Nikolay, E-mail: nikolay.britun@umons.ac.be; Palmucci, Maria; Konstantinidis, Stephanos

    2015-04-28

    Time-resolved characterization of an Ar-Ti high-power impulse magnetron sputtering discharge has been performed. The present, second, paper of the study is related to the discharge characterization in terms of the absolute density of species using resonant absorption spectroscopy. The results on the time-resolved density evolution of the neutral and singly-ionized Ti ground state atoms as well as the metastable Ti and Ar atoms during the discharge on- and off-time are presented. Among others, the questions related to the inversion of population of the Ti energy sublevels, as well as to the re-normalization of the two-dimensional density maps in terms of the absolute density of species, are stressed.

  3. ACCESS: Design and Sub-System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary Elizabeth; Morris, Matthew J.; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Pelton, Russell; Mott, D. Brent; Wen, Hiting; Foltz, Roger; et al.

    2012-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35-1.7 micrometer bandpass.

  4. Human amygdala activation during rapid eye movements of rapid eye movement sleep: an intracranial study.

    PubMed

    Corsi-Cabrera, María; Velasco, Francisco; Del Río-Portilla, Yolanda; Armony, Jorge L; Trejo-Martínez, David; Guevara, Miguel A; Velasco, Ana L

    2016-10-01

    The amygdaloid complex plays a crucial role in processing emotional signals and in the formation of emotional memories. Neuroimaging studies have shown human amygdala activation during rapid eye movement sleep (REM). Stereotactically implanted electrodes for presurgical evaluation in epileptic patients provide a unique opportunity to directly record amygdala activity. The present study analysed amygdala activity associated with REM sleep eye movements on the millisecond scale. We propose that phasic activation associated with rapid eye movements may provide the amygdala with endogenous excitation during REM sleep. Standard polysomnography and stereo-electroencephalograph (SEEG) were recorded simultaneously during spontaneous sleep in the left amygdala of four patients. Time-frequency analysis and absolute power of gamma activity were obtained for 250 ms time windows preceding and following eye movement onset in REM sleep, and in spontaneous waking eye movements in the dark. Absolute power of the 44-48 Hz band increased significantly during the 250 ms time window after REM sleep rapid eye movements onset, but not during waking eye movements. Transient activation of the amygdala provides physiological support for the proposed participation of the amygdala in emotional expression, in the emotional content of dreams and for the reactivation and consolidation of emotional memories during REM sleep, as well as for next-day emotional regulation, and its possible role in the bidirectional interaction between REM sleep and such sleep disorders as nightmares, anxiety and post-traumatic sleep disorder. These results provide unique, direct evidence of increased activation of the human amygdala time-locked to REM sleep rapid eye movements. © 2016 European Sleep Research Society.

  5. High-resolution absolute position detection using a multiple grating

    NASA Astrophysics Data System (ADS)

    Schilling, Ulrich; Drabarek, Pawel; Kuehnle, Goetz; Tiziani, Hans J.

    1996-08-01

    To control electro-mechanical engines, high-resolution linear and rotary encoders are needed. Interferometric methods (grating interferometers) promise a resolution of a few nanometers, but have an ambiguity range of some microns. Incremental encoders increase the absolute measurement range by counting the signal periods starting from a defined initial point. In many applications, however, it is not possible to move to this initial point, so that absolute encoders have to be used. Absolute encoders generally have a scale with two or more tracks placed next to each other. Therefore, they use a two-dimensional grating structure to measure a one-dimensional position. We present a new method, which uses a one-dimensional structure to determine the position in one dimension. It is based on a grating with a large grating period up to some millimeters, having the same diffraction efficiency in several predefined diffraction orders (multiple grating). By combining the phase signals of the different diffraction orders, it is possible to establish the position in an absolute range of the grating period with a resolution like incremental grating interferometers. The principal functionality was demonstrated by applying the multiple grating in a heterodyne grating interferometer. The heterodyne frequency was generated by a frequency modulated laser in an unbalanced interferometer. In experimental measurements an absolute range of 8 mm was obtained while achieving a resolution of 10 nm.
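The combination of phase signals from different diffraction orders can be sketched with a synthetic-wavelength (beat) calculation: two periodic readings with slightly different effective periods determine an absolute position over their much longer beat period. The periods and simulated position below are assumed values, not the instrument's parameters:

```python
# Absolute position recovery from two wrapped phase readings with
# different effective periods p1 and p2 (synthetic-wavelength method).

def wrap(x, p):
    """Phase reading: position x observed modulo period p."""
    return x % p

def absolute_position(phi1, phi2, p1, p2):
    """Recover x in [0, beat) from x mod p1 and x mod p2.
    A coarse estimate comes from the fractional-phase difference on the
    beat period; it is then refined with the fine reading phi1."""
    beat = p1 * p2 / abs(p2 - p1)
    coarse = ((phi1 / p1 - phi2 / p2) % 1.0) * beat
    n = round((coarse - phi1) / p1)   # integer number of fine periods
    return n * p1 + phi1

x_true = 3.217                 # mm, simulated stage position (assumed)
p1, p2 = 0.8, 1.0              # mm, effective periods of two orders (assumed)
x = absolute_position(wrap(x_true, p1), wrap(x_true, p2), p1, p2)
print(f"recovered x = {x:.6f} mm over a {p1 * p2 / abs(p2 - p1):.1f} mm range")
```

The beat period (here 4 mm) plays the role of the millimeter-scale absolute range in the abstract, while the fine phase reading preserves interferometric resolution.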

  6. The Rational Zero Point on Incentive-Object Preference Scales: A Developmental Study

    ERIC Educational Resources Information Center

    Haaf, Robert A.

    1971-01-01

    Preference judgments made by 20 males and 20 females (grades K-4) about the incentive value of 10 objects (i.e. bubble gum, Chiclet, candy corn, dried lima bean) helped determine relative and absolute scales for use of these objects as rewards. The assumption that the same object is equally rewarding at different age levels may be unwarranted.…

  7. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  8. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  9. A general derivation and quantification of the third law of thermodynamics.

    PubMed

    Masanes, Lluís; Oppenheim, Jonathan

    2017-03-14

    The most accepted version of the third law of thermodynamics, the unattainability principle, states that no process can reach absolute zero temperature in a finite number of steps and within a finite time. Here, we provide a derivation of the principle that applies to arbitrary cooling processes, even those exploiting the laws of quantum mechanics or involving an infinite-dimensional reservoir. We quantify the resources needed to cool a system to any temperature, and translate these resources into the minimal time or number of steps, by considering the notion of a thermal machine that obeys similar restrictions to universal computers. We generally find that the obtainable temperature can scale as an inverse power of the cooling time. Our results also clarify the connection between two versions of the third law (the unattainability principle and the heat theorem), and place ultimate bounds on the speed at which information can be erased.

  10. A general derivation and quantification of the third law of thermodynamics

    PubMed Central

    Masanes, Lluís; Oppenheim, Jonathan

    2017-01-01

    The most accepted version of the third law of thermodynamics, the unattainability principle, states that no process can reach absolute zero temperature in a finite number of steps and within a finite time. Here, we provide a derivation of the principle that applies to arbitrary cooling processes, even those exploiting the laws of quantum mechanics or involving an infinite-dimensional reservoir. We quantify the resources needed to cool a system to any temperature, and translate these resources into the minimal time or number of steps, by considering the notion of a thermal machine that obeys similar restrictions to universal computers. We generally find that the obtainable temperature can scale as an inverse power of the cooling time. Our results also clarify the connection between two versions of the third law (the unattainability principle and the heat theorem), and place ultimate bounds on the speed at which information can be erased. PMID:28290452

  11. Impact of Aortic Valve Calcification, as Measured by MDCT, on Survival in Patients With Aortic Stenosis

    PubMed Central

    Clavel, Marie-Annick; Pibarot, Philippe; Messika-Zeitoun, David; Capoulade, Romain; Malouf, Joseph; Aggarval, Shivani; Araoz, Phillip A.; Michelena, Hector I.; Cueff, Caroline; Larose, Eric; Miller, Jordan D.; Vahanian, Alec; Enriquez-Sarano, Maurice

    2014-01-01

    BACKGROUND Aortic valve calcification (AVC) load measures lesion severity in aortic stenosis (AS) and is useful for diagnostic purposes. Whether AVC predicts survival after diagnosis, independent of clinical and Doppler echocardiographic AS characteristics, has not been studied. OBJECTIVES This study evaluated the impact of AVC load, absolute and relative to aortic annulus size (AVCdensity), on overall mortality in patients with AS under conservative treatment and without regard to treatment. METHODS In 3 academic centers, we enrolled 794 patients (mean age, 73 ± 12 years; 274 women) diagnosed with AS by Doppler echocardiography who underwent multidetector computed tomography (MDCT) within the same episode of care. Absolute AVC load and AVCdensity (ratio of absolute AVC to cross-sectional area of aortic annulus) were measured, and severe AVC was separately defined in men and women. RESULTS During follow-up, there were 440 aortic valve implantations (AVIs) and 194 deaths (115 under medical treatment). Univariate analysis showed strong association of absolute AVC and AVCdensity with survival (both, p < 0.0001) with a spline curve analysis pattern of threshold and plateau of risk. After adjustment for age, sex, coronary artery disease, diabetes, symptoms, AS severity on hemodynamic assessment, and LV ejection fraction, severe absolute AVC (adjusted hazard ratio [HR]: 1.75; 95% confidence interval [CI]: 1.04 to 2.92; p = 0.03) or severe AVCdensity (adjusted HR: 2.44; 95% CI: 1.37 to 4.37; p = 0.002) independently predicted mortality under medical treatment, with additive model predictive value (all, p ≤ 0.04) and a net reclassification index of 12.5% (p = 0.04). Severe absolute AVC (adjusted HR: 1.71; 95% CI: 1.12 to 2.62; p = 0.01) and severe AVCdensity (adjusted HR: 2.22; 95% CI: 1.40 to 3.52; p = 0.001) also independently predicted overall mortality, even with adjustment for time-dependent AVI. 
CONCLUSIONS This large-scale, multicenter outcomes study of quantitative Doppler echocardiographic and MDCT assessment of AS shows that measuring AVC load provides incremental prognostic value for survival beyond clinical and Doppler echocardiographic assessment. Severe AVC independently predicts excess mortality after AS diagnosis, which is greatly alleviated by AVI. Thus, measurement of AVC by MDCT should be considered for not only diagnostic but also risk-stratification purposes in patients with AS. PMID:25236511

  12. Cubesat-Based Dtv Receiver Constellation for Ionospheric Tomography

    NASA Astrophysics Data System (ADS)

    Bahcivan, H.; Leveque, K.; Doe, R. A.

    2013-12-01

    The Radio Aurora Explorer mission, funded by NSF's Space Weather and Atmospheric Research program, has demonstrated the utility of CubeSat-based radio receiver payloads for ionospheric research. RAX has primarily been an investigation of microphysics of meter-scale ionospheric structures; however, the data products are also suitable for research on ionospheric effects on radio propagation. To date, the spacecraft has acquired (1) ground-based UHF radar signals that are backscattered from meter-scale ionospheric irregularities, which have been used to measure the dispersion properties of meter-scale plasma waves and (2) ground-based signals, directly on the transmitter-spacecraft path, which have been used to measure radio propagation disturbances (scintillations). Herein we describe the application of a CubeSat constellation of UHF receivers to expand the latter research topic for global-scale ionospheric tomography. The enabling factor for this expansion is the worldwide availability of ground-based digital television (DTV) broadcast signals, whose characteristics are optimal for scintillation analysis. A significant part of the populated world has transitioned, or will soon transition, to DTV. The DTV signal has a standard format that contains a highly phase-stable pilot carrier that can be readily adapted for propagation diagnostics. A multi-frequency software-defined radar receiver, similar to the RAX payload, can measure these signals at a large number of pilot carrier frequencies to make radio ray and diffraction tomographic measurements of the ionosphere and the irregularities contained in it. A constellation of CubeSats, launched simultaneously or in sequence over years, similar to DMSPs, can listen to the DTV stations, providing a vast and dense probing of the ionosphere. Each spacecraft can establish links to a preprogrammed list of DTV stations and cycle through them using a time-division frequency multiplexing (TDFM) method.
An on-board program can sort the frequencies and de-trend the phase variations due to spacecraft motion. For a single channel and a spacecraft-DTV transmitter path scan, TEC can be determined from the incremental phase variations for each channel. Determination of the absolute TEC requires knowledge of the absolute phase, i.e., including the number of 2π cycles. The absolute TEC can be determined in the case of multi-channel transmissions from a single tower (most towers house multiple television stations). A CubeSat constellation using DTV transmissions as signals of opportunity is a composite instrument for frontier ionospheric research. It is a novel application of CubeSats to understand the ionospheric response to solar, magnetospheric, and upper atmospheric forcing. Combined tomographic measurements of ionospheric density can be used to study the global-scale ionospheric circulation and small-scale ionospheric structures that cause scintillation of trans-ionospheric signals. The data can support a wide range of studies, including Sub-auroral Polarization Streams (SAPS), low-latitude plasma instabilities and the generation of equatorial spread-F bubbles, and the role of atmospheric waves and layers and sudden stratospheric warming (SSW) events in traveling ionospheric disturbances (TIDs).
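The incremental-phase-to-TEC step can be sketched as follows. The 40.308 constant is the standard first-order ionospheric refraction term, while the 600 MHz pilot frequency is an assumed illustrative value, not a mission parameter:

```python
# Relative TEC from de-trended carrier-phase increments on one DTV
# pilot-carrier channel. Excess phase (cycles) = K * TEC / (C * f),
# so a phase increment inverts to a TEC increment.

K = 40.308           # m^3 s^-2, first-order ionospheric constant
C = 299_792_458.0    # m/s, speed of light

def delta_tec(delta_phase_cycles, f_hz):
    """Relative TEC change (electrons/m^2) implied by a de-trended
    carrier-phase increment at carrier frequency f_hz."""
    return delta_phase_cycles * C * f_hz / K

# The 2*pi ambiguity the abstract mentions: one full cycle at an assumed
# 600 MHz pilot corresponds to this many TEC units (1 TECU = 1e16 el/m^2).
ambiguity_tecu = delta_tec(1.0, 600e6) / 1e16
print(f"one-cycle ambiguity ~ {ambiguity_tecu:.3f} TECU")
```

This makes the abstract's point concrete: incremental phase tracks relative TEC at sub-TECU resolution, while the unknown integer number of cycles must be resolved separately (e.g., from multi-channel towers) to obtain absolute TEC.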

  13. Estimating a just-noticeable difference for ocular comfort in contact lens wearers.

    PubMed

    Papas, Eric B; Keay, Lisa; Golebiowski, Blanka

    2011-06-21

    To estimate the just-noticeable difference (JND) in ocular comfort rating by human, contact lens-wearing subjects using 1 to 100 numerical scales. Ostensibly identical, new contact lenses were worn simultaneously in both eyes by 40 subjects who made individual comfort ratings for each eye using a 100-point numerical ratings scale (NRS). Concurrently, interocular preference was indicated on a five-point Likert scale (1 to 5: strongly prefer right, slightly prefer right, no preference, slightly prefer left, strongly prefer left, respectively). Differences in NRS comfort score (ΔC) between the right and left eyes were determined for each Likert scale preference criterion. The distribution of group ΔC scores was examined relative to alternative definitions of JND as a means of estimating its value. For Likert scores indicating the presence of a slight interocular preference, absolute ΔC ranged from 1 to 30 units with a mean of 7.4 ± 1.3 (95% confidence interval) across all lenses and trials. When there was no Likert scale preference expressed between the eyes, absolute ΔC did not exceed 5 units. For ratings of comfort using a 100-point numerical rating scale, the interocular JND is unlikely to be less than 5 units. The estimate for the average value in the population was approximately 7 to 8 units. These numbers indicate the lowest level at which changes in comfort measured with such scales are likely to be clinically significant.
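As a toy re-creation of the analysis (all ratings below are fabricated, not the study's data): pair each interocular comfort difference |dC| with its Likert preference, then summarize |dC| where a slight preference was expressed:

```python
# Sketch of the JND logic: compare |dC| under slight-preference trials
# (Likert 2 or 4) against trials with no preference (Likert 3).
from statistics import mean, stdev
from math import sqrt

# (right-eye rating, left-eye rating, Likert 1-5; 3 = no preference)
trials = [(80, 74, 2), (65, 65, 3), (90, 82, 2), (70, 78, 4),
          (55, 54, 3), (88, 60, 1), (72, 77, 4), (81, 80, 3)]

slight = [abs(r - l) for r, l, p in trials if p in (2, 4)]
none_pref = [abs(r - l) for r, l, p in trials if p == 3]

m = mean(slight)
ci95 = 1.96 * stdev(slight) / sqrt(len(slight))  # large-sample approximation
print(f"slight-preference |dC|: {m:.1f} +/- {ci95:.1f}; "
      f"max no-preference |dC|: {max(none_pref)}")
```

In the study itself the slight-preference mean (7.4 units) anchored the JND estimate, while the no-preference maximum (5 units) set its lower bound.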

  14. The interactive role of income (material position) and income rank (psychosocial position) in psychological distress: a 9-year longitudinal study of 30,000 UK parents.

    PubMed

    Garratt, Elisabeth A; Chandola, Tarani; Purdam, Kingsley; Wood, Alex M

    2016-10-01

    Parents face an increased risk of psychological distress compared with adults without children, and families with children also have lower average household incomes. Past research suggests that absolute income (material position) and income status (psychosocial position) influence psychological distress, but their combined effects on changes in psychological distress have not been examined. Whether absolute income interacts with income status to influence psychological distress is also a key question. We used fixed-effects panel models to examine longitudinal associations between psychological distress (measured on the Kessler scale) and absolute income, distance from the regional mean income, and regional income rank (a proxy for status) using data from 29,107 parents included in the UK Millennium Cohort Study (2003-2012). Psychological distress was determined by an interaction between absolute income and income rank: higher absolute income was associated with lower psychological distress across the income spectrum, while the benefits of higher income rank were evident only in the highest income parents. Parents' psychological distress was, therefore, determined by a combination of income-related material and psychosocial factors. Both material and psychosocial factors contribute to well-being. Higher absolute incomes were associated with lower psychological distress across the income spectrum, demonstrating the importance of material factors. Conversely, income status was associated with psychological distress only at higher absolute incomes, suggesting that psychosocial factors are more relevant to distress in more advantaged, higher income parents. Clinical interventions could, therefore, consider both the material and psychosocial impacts of income on psychological distress.

  15. Measurement of absolute frequency of continuous-wave terahertz radiation in real time using a free-running, dual-wavelength mode-locked, erbium-doped fibre laser

    PubMed Central

    Hu, Guoqing; Mizuguchi, Tatsuya; Zhao, Xin; Minamikawa, Takeo; Mizuno, Takahiko; Yang, Yuli; Li, Cui; Bai, Ming; Zheng, Zheng; Yasui, Takeshi

    2017-01-01

    A single, free-running, dual-wavelength mode-locked, erbium-doped fibre laser was exploited to measure the absolute frequency of continuous-wave terahertz (CW-THz) radiation in real time using dual THz combs of photo-carriers (dual PC-THz combs). Two independent mode-locked laser beams with different wavelengths and different repetition frequencies were generated from this laser and were used to generate dual PC-THz combs having different frequency spacings in photoconductive antennae. Based on the dual PC-THz combs, the absolute frequency of CW-THz radiation was determined with a relative precision of 1.2 × 10−9 and a relative accuracy of 1.4 × 10−9 at a sampling rate of 100 Hz. Real-time determination of the absolute frequency of CW-THz radiation varying over a few tens of GHz was also demonstrated. Use of a single dual-wavelength mode-locked fibre laser, in place of dual mode-locked lasers, greatly reduced the size, complexity, and cost of the measurement system while maintaining the real-time capability and high measurement precision. PMID:28186148

  16. No Absolutism Here: Harm Predicts Moral Judgment 30× Better Than Disgust-Commentary on Scott, Inbar, & Rozin (2016).

    PubMed

    Gray, Kurt; Schein, Chelsea

    2016-05-01

    Moral absolutism is the idea that people's moral judgments are insensitive to considerations of harm. Scott, Inbar, and Rozin (2016, this issue) claim that most moral opponents to genetically modified organisms are absolutely opposed-motivated by disgust and not harm. Yet there is no evidence for moral absolutism in their data. Perceived risk/harm is the most significant predictor of moral judgments for "absolutists," accounting for 30 times more variance than disgust. Reanalyses suggest that disgust is not even a significant predictor of the moral judgments of absolutists once accounting for perceived harm and anger. Instead of revealing actual moral absolutism, Scott et al. find only empty absolutism: hypothetical, forecasted, self-reported moral absolutism. Strikingly, the moral judgments of so-called absolutists are somewhat more sensitive to consequentialist concerns than those of nonabsolutists. Mediation reanalyses reveal that moral judgments are most proximally predicted by harm and not disgust, consistent with dyadic morality. © The Author(s) 2016.

  17. Optoelectronic device for the measurement of the absolute linear position in the micrometric displacement range

    NASA Astrophysics Data System (ADS)

    Morlanes, Tomas; de la Pena, Jose L.; Sanchez-Brea, Luis M.; Alonso, Jose; Crespo, Daniel; Saez-Landete, Jose B.; Bernabeu, Eusebio

    2005-07-01

    In this work, an optoelectronic device is presented that provides the absolute position of a measurement element with respect to a pattern scale upon switch-on. This means there is no need to perform any kind of transversal displacement after startup of the system. The optoelectronic device is based on the propagation of light through a slit. A light source of definite size guarantees the relation of distances between the different elements that constitute the system and yields a particular optical intensity profile that can be measured by an electronic post-processing device, providing the absolute location of the system with a resolution of 1 micron. The accuracy of this measuring device is subject to the same limitations as any incremental optical position encoder.

  18. Accurate quantification of local changes for carotid arteries in 3D ultrasound images using convex optimization-based deformable registration

    NASA Astrophysics Data System (ADS)

    Cheng, Jieyu; Qiu, Wu; Yuan, Jing; Fenster, Aaron; Chiu, Bernard

    2016-03-01

    Registration of longitudinally acquired 3D ultrasound (US) images plays an important role in monitoring and quantifying progression/regression of carotid atherosclerosis. We introduce an image-based non-rigid registration algorithm to align the baseline 3D carotid US with longitudinal images acquired over several follow-up time points. This algorithm minimizes the sum of absolute intensity differences (SAD) under a variational optical-flow perspective within a multi-scale optimization framework to capture local and global deformations. Outer wall and lumen were segmented manually on each image, and the performance of the registration algorithm was quantified by Dice similarity coefficient (DSC) and mean absolute distance (MAD) of the outer wall and lumen surfaces after registration. In this study, images for 5 subjects were registered initially by rigid registration, followed by the proposed algorithm. Mean DSC generated by the proposed algorithm was 79.3 ± 3.8% for lumen and 85.9 ± 4.0% for outer wall, compared to 73.9 ± 3.4% and 84.7 ± 3.2% generated by rigid registration. Mean MAD of 0.46 ± 0.08 mm and 0.52 ± 0.13 mm were generated for lumen and outer wall, respectively, by the proposed algorithm, compared to 0.55 ± 0.08 mm and 0.54 ± 0.11 mm generated by rigid registration. The mean registration time of our method per image pair was 143 ± 23 s.
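    The two evaluation metrics named in the abstract can be sketched as follows. The mask and surface inputs are toy data, and the brute-force nearest-neighbour surface distance is an illustrative simplification of how MAD is typically computed:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_absolute_distance(surf_a: np.ndarray, surf_b: np.ndarray) -> float:
    """Symmetric mean absolute distance between two point-sampled surfaces
    (N x 3 arrays); brute-force nearest-neighbour sketch."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: two overlapping 6x6 square masks shifted by (1, 1).
a = np.zeros((10, 10)); a[2:8, 2:8] = 1
b = np.zeros((10, 10)); b[3:9, 3:9] = 1
overlap = dice(a, b)  # intersection 25, union of sums 72 -> 50/72
```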

  19. Absolute colorimetric characterization of a DSLR camera

    NASA Astrophysics Data System (ADS)

    Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo

    2014-03-01

    A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range, requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, respectively devoted to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m^2. The user is only required to vary the f-number of the camera lens or the exposure time t, to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.
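    The two-module structure can be sketched as a linear pipeline. Everything below is an assumption for illustration: the 3x3 matrix values, the k scale factor, and the t/N^2 exposure normalization are hypothetical stand-ins for the quantities the characterization procedure would actually estimate:

```python
import numpy as np

# Hypothetical 3x3 colorimetric characterization matrix (device-specific;
# in practice estimated by regression against measured XYZ of a target).
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])

def camera_to_xyz(rgb_linear, exposure_time, f_number, k=1.0):
    """Map linear camera RGB to absolute XYZ (Y in cd/m^2).

    Scaling by f_number^2 / exposure_time removes the exposure dependence,
    so t or the f-number can be varied to exploit the sensor's dynamic
    range; k is an absolute-luminance factor found during characterization.
    """
    rgb = np.asarray(rgb_linear, dtype=float)
    return k * (f_number ** 2 / exposure_time) * (M @ rgb)

xyz = camera_to_xyz([0.2, 0.5, 0.1], exposure_time=1 / 60, f_number=4.0)
```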

  20. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. ^18F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. 
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog ^18F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
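    In its simplest one-tissue form, a two-compartment tracer kinetic model of the kind described reduces to dCt/dt = K1*Ca(t) - k2*Ct(t), i.e. the tissue curve is the arterial input convolved with an exponential. A minimal numerical sketch, with a hypothetical input function and parameter values:

```python
import numpy as np

def tissue_curve(t, ca, K1, k2):
    """One-tissue (two-compartment) model: dCt/dt = K1*Ca(t) - k2*Ct(t),
    equivalently Ct = K1 * (Ca convolved with exp(-k2*t)). Forward-Euler sketch."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * ca[i - 1] - k2 * ct[i - 1])
    return ct

# Hypothetical arterial input: a decaying exponential bolus (arbitrary units).
t = np.linspace(0.0, 10.0, 1001)          # minutes
ca = np.exp(-0.5 * t)
ct = tissue_curve(t, ca, K1=0.8, k2=0.4)  # K1 ~ flow*extraction; k2 = K1/partition
```

Fitting K1 and k2 pixel-by-pixel to dynamic PCT data is what produces the functional images the abstract refers to.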

  1. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve deterministic high-resolution forecasts of rainfall caused by severe storms by merging an extrapolation radar-based scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model named the Advanced Regional Prediction System (ARPS) for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected ones. Moreover, optimal merging with the hyperbolic tangent weight scheme further improved forecast accuracy and stability.
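    A hyperbolic-tangent blending weight of the kind described gives the radar extrapolation full weight at short lead times and hands over smoothly to the NWP forecast at long ones. The midpoint and steepness values below are assumptions for illustration, not the study's tuned parameters:

```python
import numpy as np

def blend_weight(lead_min, t_mid=30.0, steepness=10.0):
    """Hyperbolic-tangent weight for the radar-based extrapolation:
    near 1 at short lead times, near 0 at long ones.
    t_mid and steepness are hypothetical tuning parameters."""
    return 0.5 * (1.0 - np.tanh((lead_min - t_mid) / steepness))

def merged_forecast(extrap, nwp, lead_min):
    """Convex combination of the two rainfall fields at a given lead time."""
    w = blend_weight(lead_min)
    return w * extrap + (1.0 - w) * nwp

# At the midpoint lead time the two sources contribute equally.
print(blend_weight(30.0))  # -> 0.5
```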

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  3. Entrenched geographical and socioeconomic disparities in child mortality: trends in absolute and relative inequalities in Cambodia.

    PubMed

    Jimenez-Soto, Eliana; Durham, Jo; Hodge, Andrew

    2014-01-01

    Cambodia has made considerable improvements in mortality rates for children under the age of five and for neonates. These improvements may, however, mask considerable disparities between subnational populations. In this paper, we examine the extent of the country's child mortality inequalities. Mortality rates for children under five and for neonates were directly estimated using the 2000, 2005 and 2010 waves of the Cambodian Demographic Health Survey. Disparities were measured on both absolute and relative scales using rate differences and ratios, and where applicable, slope and relative indices of inequality by levels of rural/urban location, regions and household wealth. Since 2000, considerable reductions in under-five and, to a lesser extent, neonatal mortality rates have been observed. This mortality decline has, however, been accompanied by an increase in relative inequality in both rates of child mortality for geography-related stratifying markers. For absolute inequality amongst regions, most trends are increasing, particularly for neonatal mortality, but are not statistically significant. The only exception to this general pattern is the statistically significant positive trend in absolute inequality for under-five mortality in the Coastal region. For wealth, some evidence for increases in both relative and absolute inequality for neonates is observed. Despite considerable gains in reducing under-five and neonatal mortality at a national level, entrenched and increased geographical and wealth-based inequality in mortality, at least on a relative scale, remains. As expected, national progress seems to be associated with the period of political and macroeconomic stability that started in the early 2000s. However, issues of quality of care and potential non-inclusive economic growth might explain remaining disparities, particularly across wealth and geography markers. 
A focus on further addressing key supply and demand side barriers to accessing maternal and child health care and on the social determinants of health will be essential in narrowing inequalities.
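    The absolute and relative inequality measures used above (rate differences and rate ratios) can diverge, which is why the abstract distinguishes the two scales. A toy computation with hypothetical mortality rates shows how both rates can fall while relative inequality grows:

```python
def absolute_and_relative_gap(rate_disadvantaged, rate_advantaged):
    """Rate difference (absolute scale) and rate ratio (relative scale)
    between the least- and most-advantaged groups."""
    return (rate_disadvantaged - rate_advantaged,
            rate_disadvantaged / rate_advantaged)

# Hypothetical under-five mortality per 1000 live births:
# both rates fall, the absolute gap narrows, yet the relative gap widens.
diff_2000, ratio_2000 = absolute_and_relative_gap(120.0, 60.0)  # gap 60, ratio 2.0
diff_2010, ratio_2010 = absolute_and_relative_gap(60.0, 20.0)   # gap 40, ratio 3.0
```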

  4. Origins and Scaling of Hot-Electron Preheat in Ignition-Scale Direct-Drive Inertial Confinement Fusion Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.

    Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (~500 to 700 μm), electron temperature (~3 to 5 keV), and laser intensity (6 to 16 × 10^14 W/cm^2) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ~0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ~4 × 10^14 to ~6 × 10^14 W/cm^2. These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.

  5. Origins and Scaling of Hot-Electron Preheat in Ignition-Scale Direct-Drive Inertial Confinement Fusion Experiments

    DOE PAGES

    Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.; ...

    2018-01-29

    Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (~500 to 700 μm), electron temperature (~3 to 5 keV), and laser intensity (6 to 16 × 10^14 W/cm^2) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ~0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ~4 × 10^14 to ~6 × 10^14 W/cm^2. These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.

  6. Origins and Scaling of Hot-Electron Preheat in Ignition-Scale Direct-Drive Inertial Confinement Fusion Experiments

    NASA Astrophysics Data System (ADS)

    Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.; Seka, W.; Michel, P.; Hohenberger, M.; Short, R. W.; Epstein, R.; Regan, S. P.; Campbell, E. M.; Chapman, T.; Goyon, C.; Ralph, J. E.; Barrios, M. A.; Moody, J. D.; Bates, J. W.

    2018-01-01

    Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (~500 to 700 μm), electron temperature (~3 to 5 keV), and laser intensity (6 to 16 × 10^14 W/cm^2) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ~0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ~4 × 10^14 to ~6 × 10^14 W/cm^2. These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.

  7. Communication: The absolute shielding scales of oxygen and sulfur revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komorovsky, Stanislav; Repisky, Michal; Malkin, Elena

    2015-03-07

    We present an updated semi-experimental absolute shielding scale for the ^17O and ^33S nuclei. These new shielding scales are based on accurate rotational microwave data for the spin–rotation constants of H2^17O [Puzzarini et al., J. Chem. Phys. 131, 234304 (2009)], C^17O [Cazzoli et al., Phys. Chem. Chem. Phys. 4, 3575 (2002)], and H2^33S [Helgaker et al., J. Chem. Phys. 139, 244308 (2013)], corrected both for vibrational and temperature effects estimated at the CCSD(T) level of theory and for the relativistic corrections to the relation between the spin–rotation constant and the absolute shielding constant. Our best estimate for the oxygen shielding constant of H2^17O is 328.4(3) ppm and for C^17O −59.05(59) ppm. The relativistic correction for the sulfur shielding of H2^33S amounts to 3.3%, and the new sulfur shielding constant for this molecule is 742.9(4.6) ppm.

  8. Absolute fragmentation cross sections in atom-molecule collisions: Scaling laws for non-statistical fragmentation of polycyclic aromatic hydrocarbon molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.; Gatchell, M.; Stockett, M. H.

    2014-06-14

    We present scaling laws for absolute cross sections for non-statistical fragmentation in collisions between Polycyclic Aromatic Hydrocarbons (PAH/PAH^+) and hydrogen or helium atoms with kinetic energies ranging from 50 eV to 10 keV. Further, we calculate the total fragmentation cross sections (including statistical fragmentation) for 110 eV PAH/PAH^+ + He collisions, and show that they compare well with experimental results. We demonstrate that non-statistical fragmentation becomes dominant for large PAHs and that it yields highly reactive fragments forming strong covalent bonds with atoms (H and N) and molecules (C6H5). Thus, non-statistical fragmentation may be an effective initial step in the formation of, e.g., Polycyclic Aromatic Nitrogen Heterocycles (PANHs). This relates to recent discussions on the evolution of PANHs in space and the reactivities of defect graphene structures.

  9. Factor analysis of responses to the Irrational Beliefs Scale in a sample of Iraqi university students.

    PubMed

    Hassan, Namir; Ismail, Hairul Nizam

    2004-06-01

    In a study of irrational beliefs within a university population, 282 male and 238 female students responded to the 33-item Students' Irrational Beliefs Scale, and their responses were factor analyzed. Analysis suggested six dimensions could explain 39.5% of the variance. These dimensions were Perfectionism, Negativism, Blame Proneness, Escapism, Anxious Over Concern, and Absolute Demands.

  10. The performance of different propensity score methods for estimating absolute effects of treatments on survival outcomes: A simulation study.

    PubMed

    Austin, Peter C; Schuster, Tibor

    2016-10-01

    Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.
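    A minimal sketch of the inverse probability of treatment weighting (IPTW) approach discussed above, assuming propensity scores have already been estimated by some model. The weighted Kaplan-Meier estimator here is a simplified illustration that processes events one at a time and ignores ties:

```python
import numpy as np

def iptw_weights(treated: np.ndarray, propensity: np.ndarray) -> np.ndarray:
    """Inverse probability of treatment weights: 1/e(x) for treated
    subjects, 1/(1 - e(x)) for controls."""
    return np.where(treated == 1, 1.0 / propensity, 1.0 / (1.0 - propensity))

def weighted_km(time, event, weights):
    """Weighted Kaplan-Meier estimate of a marginal survival function.
    Returns a list of (time, survival) pairs; ties are not handled."""
    order = np.argsort(time)
    time, event, weights = time[order], event[order], weights[order]
    surv, s = [], 1.0
    at_risk = weights.sum()
    for t, d, w in zip(time, event, weights):
        if d:                       # event (not censored): step the curve down
            s *= 1.0 - w / at_risk
        at_risk -= w                # subject leaves the risk set
        surv.append((t, s))
    return surv

# Hypothetical usage: marginal survival in the weighted pseudo-population.
w = iptw_weights(np.array([1, 0, 1, 0]), np.array([0.8, 0.2, 0.5, 0.5]))
curve = weighted_km(np.array([5.0, 3.0, 8.0, 2.0]), np.array([1, 0, 1, 1]), w)
```

Absolute effects (e.g. the difference in survival probability at a fixed horizon) would then be read off two such curves, one per treatment arm.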

  11. The absolute threshold of cone vision

    PubMed Central

    Koeing, Darran; Hofer, Heidi

    2013-01-01

    We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115

  12. Surgical decompression for space-occupying cerebral infarction: outcomes at 3 years in the randomized HAMLET trial.

    PubMed

    Geurts, Marjolein; van der Worp, H Bart; Kappelle, L Jaap; Amelink, G Johan; Algra, Ale; Hofmeijer, Jeannette

    2013-09-01

    We assessed whether the effects of surgical decompression for space-occupying hemispheric infarction, observed at 1 year, are sustained at 3 years. Patients with space-occupying hemispheric infarction, who were enrolled in the Hemicraniectomy After Middle cerebral artery infarction with Life-threatening Edema Trial within 4 days after stroke onset, were followed up at 3 years. Outcome measures included functional outcome (modified Rankin Scale), death, quality of life, and place of residence. Poor functional outcome was defined as modified Rankin Scale >3. Of 64 included patients, 32 were randomized to decompressive surgery and 32 to best medical treatment. Just as at 1 year, surgery had no effect on the risk of poor functional outcome at 3 years (absolute risk reduction, 1%; 95% confidence interval, -21 to 22), but it reduced case fatality (absolute risk reduction, 37%; 95% confidence interval, 14-60). Sixteen surgically treated patients and 8 controls lived at home (absolute risk reduction, 27%; 95% confidence interval, 4-50). Quality of life improved between 1 and 3 years in patients treated with surgery. In patients with space-occupying hemispheric infarction, the effects of decompressive surgery on case fatality and functional outcome observed at 1 year are sustained at 3 years. http://www.controlled-trials.com. Unique identifier: ISRCTN94237756.

  13. Application of interleaving models for the description of intrusive layering at the fronts of deep polar water in the Eurasian Basin (Arctic)

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Zhurbas, N. V.; Emelianov, M. V.; Pyzhevich, M. L.

    2014-09-01

    Interleaving models of pure thermohaline and baroclinic frontal zones are applied to describe intrusions at the fronts found in the upper part of the Deep Polar Water (DPW) when the stratification was absolutely stable. It is assumed that differential mixing is the main mechanism of the intrusion formation. Important parameters of the interleaving, such as the growth rate, vertical scale, and slope of the most unstable modes relative to the horizontal plane, are calculated. It was found that the interleaving model for a pure thermohaline front satisfactorily describes the important intrusion parameters observed at the frontal zone. In the case of a baroclinic front, satisfactory agreement for all the interleaving parameters is observed between the model calculations and observations provided that the vertical momentum diffusivity significantly exceeds the corresponding coefficient of mass diffusivity. Under specific (reasonable) constraints on the vertical momentum diffusivity, the most unstable mode has a vertical scale approximately two to three times smaller than the vertical scale of the observed intrusions. A thorough discussion of the results is presented.

  14. Characterization of long-scale-length plasmas produced from plastic foam targets for laser plasma instability (LPI) research

    NASA Astrophysics Data System (ADS)

    Oh, Jaechul; Weaver, J. L.; Serlin, V.; Obenschain, S. P.

    2017-10-01

    We report on an experimental effort to produce plasmas with long scale lengths for the study of parametric instabilities, such as two plasmon decay (TPD) and stimulated Raman scattering (SRS), under conditions relevant to fusion plasma. In the current experiment, plasmas are formed from low density (10-100 mg/cc) CH foam targets irradiated by Nike krypton fluoride laser pulses (λ = 248 nm, 1 nsec FWHM) with energies up to 1 kJ. This experiment is conducted with two primary diagnostics: the grid image refractometer (Nike-GIR) to measure electron density and temperature profiles of the coronas, and time-resolved spectrometers with absolute intensity calibration to examine scattered light features of TPD or SRS. Nike-GIR was recently upgraded with a 5th harmonic probe laser (λ = 213 nm) to access plasma regions near the quarter critical density of 248 nm light (4.5 × 10^21 cm^-3). The results will be discussed with data obtained from 120 μm scale-length plasmas created on solid CH targets in previous LPI experiments at Nike. Work supported by DoE/NNSA.

  15. On the transport of emulsions in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortis, Andrea; Ghezzehei, Teamrat A.

    2007-06-27

    Emulsions appear in many subsurface applications including bioremediation, surfactant-enhanced remediation, and enhanced oil recovery. Modeling emulsion transport in porous media is particularly challenging because the rheological and physical properties of emulsions are different from averages of the components. Current modeling approaches are based on filtration theories, which are not suited to adequately address the pore-scale permeability fluctuations and reduction of absolute permeability that are often encountered during emulsion transport. In this communication, we introduce a continuous time random walk based alternative approach that captures these unique features of emulsion transport. Calculations based on the proposed approach resulted in excellent match with experimental observations of emulsion breakthrough from the literature. Specifically, the new approach explains the slow late-time tailing behavior that could not be fitted using the standard approach. The theory presented in this paper also provides an important stepping stone toward a generalized self-consistent modeling of multiphase flow.

  16. Diurnal Differences in OLR Climatologies and Anomaly Time Series

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Lee, Jae N.; Iredell, Lena; Loeb, Norm

    2015-01-01

    AIRS (Atmospheric Infrared Sounder) Version-6 OLR (Outgoing Long-Wave Radiation) matches CERES (Clouds and the Earth's Radiant Energy System) Edition-2.8 OLR very closely on a 1x1 latitude x longitude scale, both with regard to absolute values and with regard to anomalies of OLR. There is a bias of 3.5 watts per meter squared, which is nearly constant in both time and space. Contiguous areas containing large positive or negative OLR differences between AIRS and CERES are those where the day-night difference of OLR is large. For AIRS, the larger the diurnal cycle, the more likely that sampling twice a day is inadequate. Lower values of OLRclr (Clear Sky OLR) and LWCRF (Longwave Cloud Radiative Forcing) in AIRS compared to CERES are at least in part a result of AIRS sampling over cold and cloudy cases.
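    Why twice-daily sampling can misrepresent a strong diurnal cycle can be sketched with a toy harmonic model (the overpass times echo sun-synchronous ~1:30/13:30 crossings, but the cycle itself is invented): two samples 12 h apart cancel the 24 h harmonic exactly, yet alias any 12 h harmonic straight into the estimated mean.

    ```python
    import math

    def twice_daily_bias(a_diurnal, a_semidiurnal, sample_hours=(1.5, 13.5)):
        """Toy diurnal cycle built from a 24 h and a 12 h harmonic, both with
        zero true daily mean. Two overpasses 12 h apart cancel the 24 h
        harmonic exactly but alias the 12 h harmonic, so the two-sample mean
        is biased by an amount proportional to the semidiurnal amplitude."""
        def cycle(h):
            return (a_diurnal * math.cos(2.0 * math.pi * (h - 14.0) / 24.0)
                    + a_semidiurnal * math.cos(4.0 * math.pi * (h - 3.0) / 24.0))
        return sum(cycle(h) for h in sample_hours) / len(sample_hours)
    ```

    The bias grows linearly with the aliased amplitude, which mirrors the abstract's point: the stronger the diurnal variability, the less adequate twice-daily sampling becomes.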

  17. Efficient Merge and Insert Operations for Binary Heaps and Trees

    NASA Technical Reports Server (NTRS)

    Kuszmaul, Christopher Lee; Woo, Alex C. (Technical Monitor)

    2000-01-01

    Binary heaps and binary search trees merge efficiently. We introduce a new amortized analysis that allows us to prove that the cost of merging either binary heaps or balanced binary trees is O(1), in the amortized sense. The standard set of other operations (create, insert, delete, extract minimum in the case of binary heaps and balanced binary trees, as well as a search operation for balanced binary trees) remains with a cost of O(log n). For binary heaps implemented as arrays, we show a new merge algorithm that has a single-operation cost for merging two heaps, a and b, of O(|a| + min(log|b| log log|b|, log|a| log|b|)). This is an improvement over O(|a| + log|a| log|b|). The cost of the new merge is so low that it can be used in a new structure, which we call shadow heaps, to implement the insert operation to a tunable efficiency. Shadow heaps support the insert operation for simple priority queues in an amortized time of O(f(n)) and other operations in time O((log n log log n)/f(n)), where 1 ≤ f(n) ≤ log log n. More generally, the results here show that any data structure with operations that change its size by at most one, with the exception of a merge (aka meld) operation, can efficiently amortize the cost of the merge under conditions that are true for most implementations of binary heaps and search trees.
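    For contrast with the bounds above, a baseline array-heap merge (not the paper's shadow-heap algorithm) simply pushes the smaller heap's elements into the larger one:

    ```python
    import heapq

    def merge_heaps(a, b):
        """Baseline merge of two array-based binary min-heaps: push every
        element of the smaller heap into the larger one, costing
        O(min(|a|, |b|) * log(|a| + |b|)) -- the kind of bound the
        shadow-heap merge improves upon."""
        big, small = (a, b) if len(a) >= len(b) else (b, a)
        merged = list(big)          # a copy of the larger heap is still a heap
        for x in small:
            heapq.heappush(merged, x)
        return merged

    merged = merge_heaps([1, 3, 5, 7], [2, 4])
    ```

    Draining the merged array with heappop yields the elements in sorted order, confirming the heap invariant survives the merge.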

  18. Income-related inequalities in inadequate dentition over time in Australia, Brazil and USA adults.

    PubMed

    Peres, Marco A; Luzzi, Liana; Peres, Karen G; Sabbah, Wael; Antunes, Jose L; Do, Loc G

    2015-06-01

    To assess changes over time of the absolute and relative household income-related inequalities in inadequate dentition (ID) among Australian, Brazilian and USA adults. This study used nationwide oral health survey data from Australia (n = 1200 in 1999; n = 2729 in 2005), Brazil (n = 13 431 in 2003; n = 9779 in 2010) and USA (n = 2542 in 1999; n = 1596 in 2005). Absolute income inequalities were calculated using the Absolute Concentration Index (ACI) and the Slope Index of Inequality (SII), while relative inequalities were calculated using the Relative Concentration Index (RCI) and the Relative Index of Inequality (RII). Prevalence of ID in the studied period dropped from 8.7% to 3.1% in Australia and from 42.1% to 22.4% in Brazil, and remained stable in the USA at nearly 8.0%. Absolute income inequalities were highest in Brazil, followed by the USA and Australia; relative inequalities were lower in Brazil than in Australia and the USA. ID was higher among Brazilian females (2010) and for the poorest group in all countries and periods. A remarkable reduction in absolute inequalities was found in Australia (SII and ACI 60%) and in Brazil (SII 25%; ACI 33%), while relative inequalities increased both in Australia (RCI and RII 40%) and in Brazil (RCI 24%; RII 38%). No changes in absolute and relative income inequalities were found in the USA. There were still persistent absolute and relative income inequalities in ID in all examined countries. There has been a reduction in absolute income inequalities in ID but an increase in relative income inequalities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
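    As a sketch of how the absolute index works (with invented quintile data, not the survey figures), the SII can be computed as the population-weighted regression slope of prevalence on cumulative income rank, interpretable as the prevalence gap between the extremes of the income distribution:

    ```python
    def slope_index_of_inequality(groups):
        """Minimal SII sketch: groups are (population_share, prevalence) pairs
        ordered from poorest to richest. Each group is placed at the midpoint
        of its cumulative population rank (0..1); the SII is the slope of the
        weighted regression of prevalence on rank. The relative index (RII)
        can then be taken as SII divided by the mean prevalence."""
        cum = 0.0
        pts = []
        for share, prev in groups:
            pts.append((cum + share / 2.0, prev, share))
            cum += share
        mean_x = sum(x * w for x, _, w in pts)
        mean_y = sum(y * w for _, y, w in pts)
        num = sum(w * (x - mean_x) * (y - mean_y) for x, y, w in pts)
        den = sum(w * (x - mean_x) ** 2 for x, _, w in pts)
        return num / den

    # five equal quintiles, prevalence falling linearly from poorest to richest
    sii = slope_index_of_inequality([(0.2, 0.30), (0.2, 0.25), (0.2, 0.20),
                                     (0.2, 0.15), (0.2, 0.10)])
    ```

    Here the SII is -0.25: a 25 percentage-point prevalence gap disadvantaging the poorest end of the distribution.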

  19. Radiometric age determinations on Pliocene/Pleistocene formations in the lower Omo basin, Ethiopia

    USGS Publications Warehouse

    Brown, F.H.; Lajoie, K.R.

    1971-01-01

    THE potassium-argon ages presented here were obtained during 1966 to 1969 in order to provide an absolute time scale for the stratigraphic work by the international Omo Research Expedition in the Pliocene/Pleistocene formations (unpublished work of F. H. B., J. de Heinzelin and F. C. Howell) in south-west Ethiopia. Although some of these dates are not new [1-3], most of the analytical procedures and data have not been presented. We also present a list of fossil localities recorded by the University of Chicago contingent of the expedition within the Shungura Formation. Preliminary descriptions of the Hominidae have been published already [3,4]. © 1971 Nature Publishing Group.

  20. Evidence for criticality in financial data

    NASA Astrophysics Data System (ADS)

    Ruiz, G.; de Marcos, A. F.

    2018-01-01

    We provide evidence that cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization uncover a critical behavior for different time scales Δt. Such cumulative distributions, in accordance with a variety of complex (and financial) systems, can be modeled by the cumulative distribution functions of q-Gaussians, the distribution function that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely (q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.
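    A minimal sketch of the q-Gaussian family used in such fits, written with the standard Tsallis q-exponential (unnormalized; the normalization constant is omitted for brevity):

    ```python
    import math

    def q_exponential(x, q):
        """Tsallis q-exponential: [1 + (1-q)x]_+ ** (1/(1-q)),
        which reduces to exp(x) as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return math.exp(x)
        base = 1.0 + (1.0 - q) * x
        return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

    def q_gaussian(x, q, beta):
        """Unnormalized q-Gaussian exp_q(-beta * x**2): an ordinary Gaussian
        for q = 1, power-law tails for q > 1, as fitted to return
        distributions in nonextensive statistical mechanics."""
        return q_exponential(-beta * x * x, q)
    ```

    For q > 1 the tails decay as a power law rather than exponentially, which is what lets a single (q, β) pair track the fat-tailed return distributions across time scales.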

  1. A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Rajput, Asif; Funk, Eugen; Börner, Anko; Hellwich, Olaf

    2018-07-01

    Modern computational resources combined with low-cost depth sensing systems have enabled mobile robots to reconstruct 3D models of surrounding environments in real-time. Unfortunately, low-cost depth sensors are prone to produce undesirable estimation noise in depth measurements, which either results in depth outliers or introduces surface deformations in the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects; therefore, additional constraints such as steady sensor movement and high frame rates are required for high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.

  2. Anchoring the Gas-Phase Acidity Scale from Hydrogen Sulfide to Pyrrole. Experimental Bond Dissociation Energies of Nitromethane, Ethanethiol, and Cyclopentadiene.

    PubMed

    Ervin, Kent M; Nickel, Alex A; Lanorio, Jerry G; Ghale, Surja B

    2015-07-16

    A meta-analysis of experimental information from a variety of sources is combined with statistical thermodynamics calculations to refine the gas-phase acidity scale from hydrogen sulfide to pyrrole. The absolute acidities of hydrogen sulfide, methanethiol, and pyrrole are evaluated from literature R-H bond energies and radical electron affinities to anchor the scale. Relative acidities from proton-transfer equilibrium experiments are used in a local thermochemical network optimized by least-squares analysis to obtain absolute acidities of 14 additional acids in the region. Thermal enthalpy and entropy corrections are applied using molecular parameters from density functional theory, with explicit calculation of hindered rotor energy levels for torsional modes. The analysis reduces the uncertainties of the absolute acidities of the 14 acids to within ±1.2 to ±3.3 kJ/mol, expressed as estimates of the 95% confidence level. The experimental gas-phase acidities are compared with calculations, with generally good agreement. For nitromethane, ethanethiol, and cyclopentadiene, the refined acidities can be combined with electron affinities of the corresponding radicals from photoelectron spectroscopy to obtain improved values of the C-H or S-H bond dissociation energies, yielding D298(H-CH2NO2) = 423.5 ± 2.2 kJ mol(-1), D298(C2H5S-H) = 364.7 ± 2.2 kJ mol(-1), and D298(C5H5-H) = 347.4 ± 2.2 kJ mol(-1). These values represent the best-available experimental bond dissociation energies for these species.
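    The least-squares network idea can be sketched on a toy three-species example (all numbers invented, and numpy's generic `lstsq` stands in for the authors' optimization): one anchor fixes the absolute scale, and redundant relative measurements are reconciled by least squares.

    ```python
    import numpy as np

    # Toy local thermochemical network: species 0 is an anchor with a known
    # absolute acidity; the acidities of species 1 and 2 are recovered from
    # redundant, slightly inconsistent relative measurements.
    anchor_value = 1500.0                                  # kJ/mol, assumed known
    relative = [(0, 1, 12.1), (1, 2, 8.0), (0, 2, 19.9)]   # (i, j, x_j - x_i)

    rows = [[1.0, 0.0, 0.0]]       # anchor equation: x_0 = anchor_value
    rhs = [anchor_value]
    for i, j, d in relative:       # each measurement contributes x_j - x_i = d
        r = [0.0, 0.0, 0.0]
        r[i], r[j] = -1.0, 1.0
        rows.append(r)
        rhs.append(d)

    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    ```

    The inconsistency (12.1 + 8.0 ≠ 19.9) is averaged out by the fit, mimicking how a redundant network tightens the absolute values beyond any single measurement chain.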

  3. Quantum Bath Refrigeration towards Absolute Zero: Challenging the Unattainability Principle

    NASA Astrophysics Data System (ADS)

    Kolář, M.; Gelbwaser-Klimovsky, D.; Alicki, R.; Kurizki, G.

    2012-08-01

    A minimal model of a quantum refrigerator, i.e., a periodically phase-flipped two-level system permanently coupled to a finite-capacity bath (cold bath) and an infinite heat dump (hot bath), is introduced and used to investigate the cooling of the cold bath towards absolute zero (T=0). Remarkably, the temperature scaling of the cold-bath cooling rate reveals that it does not vanish as T→0 for certain realistic quantized baths, e.g., phonons in strongly disordered media (fractons) or quantized spin waves in ferromagnets (magnons). This result challenges Nernst’s third-law formulation known as the unattainability principle.

  4. Characterization of the Medley setup for measurements of neutron-induced fission cross sections at the GANIL-NFS facility

    NASA Astrophysics Data System (ADS)

    Tarrío, Diego; Prokofiev, Alexander V.; Gustavsson, Cecilia; Jansson, Kaj; Andersson-Sundén, Erik; Al-Adili, Ali; Pomp, Stephan

    2017-09-01

    Neutron-induced fission cross sections of 235U and 238U are widely used as standards for monitoring of neutron beams and fields. A measurement of these cross sections on an absolute scale, i.e., versus the H(n,p) scattering cross section, is planned with the white neutron beam under construction at the Neutrons For Science (NFS) facility in GANIL. The experimental setup, based on PPACs and ΔE-ΔE-E telescopes containing silicon and CsI(Tl) detectors, is described. The expected uncertainties are discussed.

  5. Parametric scaling from species relative abundances to absolute abundances in the computation of biological diversity: a first proposal using Shannon's entropy.

    PubMed

    Ricotta, Carlo

    2003-01-01

    Traditional diversity measures such as the Shannon entropy are generally computed from the species' relative abundance vector of a given community to the exclusion of species' absolute abundances. In this paper, I first mention some examples where the total information content associated with a given community may be more adequate than Shannon's average information content for a better understanding of ecosystem functioning. Next, I propose a parametric measure of statistical information that contains both Shannon's entropy and total information content as special cases of this more general function.
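    A minimal sketch of the two quantities being contrasted, assuming the common definition of total information content as N × H (average information per individual scaled by community size; this specific form is an assumption, not quoted from the paper):

    ```python
    import math

    def shannon_entropy(abundances):
        """Shannon entropy H = -sum(p_i * ln p_i), computed from absolute
        abundances via the relative-abundance vector p_i = n_i / N."""
        total = sum(abundances)
        return -sum((n / total) * math.log(n / total)
                    for n in abundances if n > 0)

    def total_information(abundances):
        """Total information content taken here as N * H, so that two
        communities with identical relative abundances but different sizes
        are no longer equivalent -- the distinction the paper emphasizes."""
        return sum(abundances) * shannon_entropy(abundances)
    ```

    Two communities with the same species proportions have identical H, but the larger community carries proportionally more total information, illustrating why relative abundances alone can mask ecosystem-level differences.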

  6. Quantum bath refrigeration towards absolute zero: challenging the unattainability principle.

    PubMed

    Kolář, M; Gelbwaser-Klimovsky, D; Alicki, R; Kurizki, G

    2012-08-31

    A minimal model of a quantum refrigerator, i.e., a periodically phase-flipped two-level system permanently coupled to a finite-capacity bath (cold bath) and an infinite heat dump (hot bath), is introduced and used to investigate the cooling of the cold bath towards absolute zero (T=0). Remarkably, the temperature scaling of the cold-bath cooling rate reveals that it does not vanish as T→0 for certain realistic quantized baths, e.g., phonons in strongly disordered media (fractons) or quantized spin waves in ferromagnets (magnons). This result challenges Nernst's third-law formulation known as the unattainability principle.

  7. Nanoseismic sources made in the laboratory: source kinematics and time history

    NASA Astrophysics Data System (ADS)

    McLaskey, G.; Glaser, S. D.

    2009-12-01

    When studying seismic signals in the field, the analysis of source mechanisms is always obscured by propagation effects such as scattering and reflections due to the inhomogeneous nature of the earth. To get around this complication, we measure seismic waves (wavelengths from 2 mm to 300 mm) in laboratory-sized specimens of extremely homogeneous isotropic materials. We are able to study the focal mechanism and time history of nanoseismic sources produced by fracture, impact, and sliding friction, roughly six orders of magnitude smaller and more rapid than typical earthquakes. Using very sensitive broadband conical piezoelectric sensors, we are able to measure surface normal displacements down to a few pm (10^-12 m) in amplitude. Thick plate specimens of homogeneous materials such as glass, steel, gypsum, and polymethylmethacrylate (PMMA) are used as propagation media in the experiments. Recorded signals are in excellent agreement with theoretically determined Green’s functions obtained from a generalized ray theory code for an infinite plate geometry. Extremely precise estimates of the source time history are made via full waveform inversion from the displacement time histories recorded by an array of at least ten sensors. Each channel is sampled at a rate of 5 MHz. The system is absolutely calibrated using the normal impact of a tiny (~1 mm) ball on the surface of the specimen. The ball impact induces a force pulse into the specimen a few ms in duration. The amplitude, duration, and shape of the force pulse were found to be well approximated by Hertzian-derived impact theory, while the total change in momentum of the ball is independently measured from its incoming and rebound velocities. 
Another calibration source, the sudden fracture of a thin-walled glass capillary tube laid on its side and loaded against the surface of the specimen, produces a similar point force, this time with a source function that is very nearly a step in time, with a rise time of less than 500 ns. The force at which the capillary breaks is recorded using a force sensor and is used for absolute calibration. A third set of nanoseismic sources was generated from frictional sliding. In this case, the location and spatial extent of the source along the cm-scale fault are not precisely known and must be determined. These sources are much more representative of earthquakes, and the determination of their focal mechanisms is the subject of ongoing research. Sources of this type have been observed on a great range of time scales with rise times ranging from 500 ns to hundreds of ms. This study tests the generality of the seismic source representation theory. The unconventional scale, geometry, and experimental arrangement facilitate the discussion of issues such as the point source approximation, the origin of uncertainty in moment tensor inversions, the applicability of magnitude calculations for non-double-couple sources, and the relationship between momentum and seismic moment.
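    The momentum-based calibration step can be sketched directly (the ball mass and velocities below are hypothetical): for a normal bounce the velocity reverses direction, so the time integral of the force pulse must equal m(v_in + v_rebound).

    ```python
    def impact_impulse(mass_kg, v_incoming, v_rebound):
        """Impulse delivered to the specimen by a normal ball bounce: the
        incoming and rebound velocities point in opposite directions, so the
        momentum change is m * (v_in + v_rebound). This must equal the time
        integral of the calibration force pulse."""
        return mass_kg * (v_incoming + v_rebound)

    # hypothetical ~1 mm steel ball (about 4.1 mg) impacting at 0.5 m/s and
    # rebounding at 0.3 m/s
    impulse = impact_impulse(4.1e-6, 0.5, 0.3)
    ```

    Measuring the incoming and rebound velocities thus pins down the pulse integral independently of the sensors, which is what makes the ball drop an absolute calibration.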

  8. Evaluation of a simple method for crop evapotranspiration partitioning and comparison of different water use efficiency approaches

    NASA Astrophysics Data System (ADS)

    Tallec, T.; Rivalland, V.; Jarosz, N.; Boulet, G.; Gentine, P.; Ceschia, E.

    2012-04-01

    In the current context of climate change, intra- and inter-annual variability of precipitation can lead to major modifications of water budgets and water use efficiencies (WUE). Obtaining greater insight into how climatic variability and agricultural practices affect water budgets and their components in croplands is, thus, important for adapting crop management and limiting water losses. The principal aims of this study were 1) to assess the contribution of different components to the agro-ecosystem water budget and 2) to analyze and compare the WUE calculated from ecophysiological (WUEplt), environmental (WUEeco) and agronomical (WUEagro) points of view for various crops during the growing season and for the annual time scale. Eddy covariance (EC) measurements of CO2 and water flux were performed on winter wheat, maize and sunflower crops at two sites in southwest France: Auradé and Lamasquère. To infer WUEplt, an estimation of plant transpiration (TR) is needed. We then tested a new method for partitioning evapotranspiration (ETR), measured by means of the EC method, into soil evaporation (E) and plant transpiration (TR) based on marginal distribution sampling (MDS). We compared these estimations with calibrated simulations of the ICARE-SVAT double source mechanistic model. The two partitioning methods showed good agreement, demonstrating that MDS is a convenient, simple and robust tool for estimating E with reasonable associated uncertainties. During the growing season, the proportion of E in ETR was approximately one-third and varied mainly with crop leaf area. When calculated on an annual time scale, the proportion of E in ETR reached more than 50%, depending on crop leaf area and the duration and distribution of bare soil within the year. WUEplt values ranged between -4.1 and -5.6 g C kg-1 H2O for maize and winter wheat, respectively, and were strongly dependent on meteorological conditions at the half-hourly, daily and seasonal time scales. 
When normalized by the vapor pressure deficit to reduce the effect of seasonal climatic variability on WUEplt, maize had the highest efficiency. Absolute WUEeco values on the ecosystem level, including water loss through evaporation and carbon release through ecosystem respiration, were consequently lower than on the stand level. This observation was even more pronounced on an annual time scale than on the growing-season time scale because of bare soil periods. Winter wheat showed the highest absolute values of WUEeco, and sunflower showed the lowest. To account for carbon input into WUE through organic fertilization and output through biomass exportation during harvest, net biome production (NBP) was considered in the calculation of an ecosystem-level WUE (WUENBP). Considering WUENBP instead of WUEeco markedly decreased the efficiency of the ecosystem, especially for crops with large carbon exports, as observed for maize harvested for silage, and highlighted the benefit of organic C input. From an agronomic perspective, maize showed the best WUE, with exported (marketable) carbon per unit of water used exceeding that of other crops. Thus, the environmental and agronomical WUE approaches should be considered together in the context of global climate change and sustainable development.

  9. Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker

    PubMed Central

    Iasonos, Alexia; Chapman, Paul B.; Satagopan, Jaya M.

    2016-01-01

    There is increasing interest in finding predictive biomarkers that can guide treatment options for both mutation carriers and non-carriers. The statistical assessment of variation in treatment benefit (TB) according to biomarker carrier status plays an important role in evaluating predictive biomarkers. For time-to-event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a proportional hazards regression model is commonly used as a measure of variation in treatment benefit. While this can be easily obtained using available statistical software packages, the interpretation of the HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying differences in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus non-carriers. We illustrate the use and interpretation of the proposed measures using data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to the HR. PMID:27141007
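    A minimal sketch of why the survival-probability scale is more interpretable than the HR (assuming proportional hazards, under which S_treated(t) = S_control(t)^HR; the baseline survival values below are invented):

    ```python
    def excess_absolute_risk_reduction(baseline_survival, hazard_ratio):
        """Survival-scale summary of treatment benefit: under proportional
        hazards S_treated(t) = S_control(t) ** hazard_ratio, so the absolute
        reduction in event probability at time t is simply
        S_treated(t) - S_control(t)."""
        return baseline_survival ** hazard_ratio - baseline_survival

    # the same HR = 0.7 implies very different absolute benefit depending on
    # the control group's baseline risk
    low_risk = excess_absolute_risk_reduction(0.95, 0.7)
    high_risk = excess_absolute_risk_reduction(0.60, 0.7)
    ```

    A single HR of 0.7 translates to an absolute benefit of roughly 1.5 percentage points in a low-risk population but about 10 points in a high-risk one, which is the kind of difference the HR alone hides.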

  10. Tropical Gravity Wave Momentum Fluxes and Latent Heating Distributions

    NASA Technical Reports Server (NTRS)

    Geller, Marvin A.; Zhou, Tiehan; Love, Peter T.

    2015-01-01

    Recent satellite determinations of global distributions of absolute gravity wave (GW) momentum fluxes in the lower stratosphere show maxima over the summer subtropical continents and little evidence of GW momentum fluxes associated with the intertropical convergence zone (ITCZ). This seems to be at odds with parameterizations for GW momentum fluxes, where the source is a function of latent heating rates, which are largest in the region of the ITCZ in terms of monthly averages. The authors have examined global distributions of atmospheric latent heating, cloud-top-pressure altitudes, and lower-stratosphere absolute GW momentum fluxes and have found that monthly averages of the lower-stratosphere GW momentum fluxes more closely resemble the monthly mean cloud-top altitudes rather than the monthly mean rates of latent heating. These regions of highest cloud-top altitudes occur when rates of latent heating are largest on the time scale of cloud growth. This, plus previously published studies, suggests that convective sources for stratospheric GW momentum fluxes, being a function of the rate of latent heating, will require either a climate model to correctly model this rate of latent heating or some ad hoc adjustments to account for shortcomings in a climate model's land-sea differences in convective latent heating.

  11. Velocity space resolved absolute measurement of fast ion losses induced by a tearing mode in the ASDEX Upgrade tokamak

    NASA Astrophysics Data System (ADS)

    Galdon-Quiroga, J.; Garcia-Munoz, M.; Sanchis-Sanchez, L.; Mantsinen, M.; Fietz, S.; Igochine, V.; Maraschek, M.; Rodriguez-Ramos, M.; Sieglin, B.; Snicker, A.; Tardini, G.; Vezinet, D.; Weiland, M.; Eriksson, L. G.; The ASDEX Upgrade Team; The EUROfusion MST1 Team

    2018-03-01

    The absolute flux of fast ion losses induced by tearing modes has been measured by means of fast ion loss detectors (FILD) for the first time in RF-heated plasmas in the ASDEX Upgrade tokamak. Up to 30 MW m-2 of fast ion losses are measured by FILD at 5 cm from the separatrix, consistent with infrared camera measurements, with energies in the range of 250-500 keV and pitch angles corresponding to large trapped orbits. A resonant interaction between the fast ions in the high-energy tail of the ICRF distribution and an m/n  =  5/4 tearing mode leads to enhanced fast ion losses. Around 9.3 ± 0.7% of the fast ion losses are found to be coherent with the mode and to scale linearly with its amplitude, indicating the convective nature of the transport mechanism. Simulations have been carried out to estimate the contribution of the prompt losses. A good agreement is found between the simulated and the measured velocity space of the losses. The velocity space resonances that may be responsible for the enhanced fast ion losses are identified.

  12. Absolute frequency measurement of the ? optical clock transition in ? with an uncertainty of ? using a frequency link to international atomic time

    NASA Astrophysics Data System (ADS)

    Baynham, Charles F. A.; Godun, Rachel M.; Jones, Jonathan M.; King, Steven A.; Nisbet-Jones, Peter B. R.; Baynes, Fred; Rolland, Antoine; Baird, Patrick E. G.; Bongs, Kai; Gill, Patrick; Margolis, Helen S.

    2018-03-01

    The highly forbidden ? electric octupole transition in ? is a potential candidate for a redefinition of the SI second. We present a measurement of the absolute frequency of this optical transition, performed using a frequency link to International Atomic Time to provide traceability to the SI second. The ? optical frequency standard was operated for 76% of a 25-day period, with the absolute frequency measured to be 642 121 496 772 645.14(26) Hz. The fractional uncertainty of ? is comparable to that of the best previously reported measurement, which was made by a direct comparison to local caesium primary frequency standards.

  13. Genetic parameters for different growth scales in GIFT strain of Nile tilapia (Oreochromis niloticus).

    PubMed

    He, J; Gao, H; Xu, P; Yang, R

    2015-12-01

    Body weight, length, width and depth at two growth stages were observed for a total of 5015 individuals of the GIFT strain, and a pedigree including 5588 individuals from 104 sires and 162 dams was collected. Multivariate animal models and a random regression model were used to genetically analyse absolute and relative growth scales of these growth traits. On the absolute growth scale, the observed growth traits had moderate heritabilities ranging from 0.321 to 0.576, while pairwise ratios between body length, width and depth were lowly inherited and the maximum heritability was only 0.146 for length/depth. All genetic correlations were above 0.5 between pairwise growth traits, and the genetic correlation between length/width and length/depth varied between the two growth stages. Based on those estimates, a selection index of multiple traits of interest can be formulated in future breeding programs to genetically improve body weight and morphology of the GIFT strain. On the relative growth scale, heritabilities of the relative growths of body length, width and depth to body weight were 0.257, 0.412 and 0.066, respectively, while genetic correlations among these allometry scalings were above 0.8. Genetic analysis of joint allometries of body weight to body length, width and depth will contribute to genetically regulating the growth rate between body shape and body weight. © 2015 Blackwell Verlag GmbH.

  14. A comparison of the cosmic-ray energy scales of Tunka-133 and KASCADE-Grande via their radio extensions Tunka-Rex and LOPES

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bezyazeekov, P. A.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Budnev, N. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fedorov, O.; Fuchs, B.; Gemmeke, H.; Gress, O. A.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kazarina, Y.; Kleifges, M.; Korosteleva, E. E.; Kostunin, D.; Krömer, O.; Kuijpers, J.; Kuzmichev, L. A.; Link, K.; Lubsandorzhiev, N.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Mirgazov, R. R.; Monkhoev, R.; Morello, C.; Oehlschläger, J.; Osipova, E. A.; Pakhorukov, A.; Palmieri, N.; Pankov, L.; Pierog, T.; Prosin, V. V.; Rautenberg, J.; Rebel, H.; Roth, M.; Rubtsov, G. I.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wischnewski, R.; Wochele, J.; Zabierowski, J.; Zagorodnikov, A.; Zensus, J. A.; Tunka-Rex; Lopes Collaborations

    2016-12-01

    The radio technique is a promising method for detection of cosmic-ray air showers of energies around 100 PeV and higher with an array of radio antennas. Since the amplitude of the radio signal can be measured absolutely and increases with the shower energy, radio measurements can be used to determine the air-shower energy on an absolute scale. We show that calibrated measurements of radio detectors operated in coincidence with host experiments measuring air showers based on other techniques can be used for comparing the energy scales of these host experiments. Using two approaches, first via direct amplitude measurements, and second via comparison of measurements with air shower simulations, we compare the energy scales of the air-shower experiments Tunka-133 and KASCADE-Grande, using their radio extensions, Tunka-Rex and LOPES, respectively. Due to the consistent amplitude calibration for Tunka-Rex and LOPES achieved by using the same reference source, this comparison reaches an accuracy of approximately 10%, limited by some shortcomings of LOPES, which was a prototype experiment for the digital radio technique for air showers. In particular we show that the energy scales of cosmic-ray measurements by the independently calibrated experiments KASCADE-Grande and Tunka-133 are consistent with each other on this level.

  15. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    NASA Astrophysics Data System (ADS)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distributions. Intensive recharge rates are located mainly in the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale used in the PCR-GLOBWB model, leading to smoothing of the recharge estimates. 
Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops, and the shortest travel times in the vadose zone are in the south central and east parts of the outcrops. Our results suggest that, for the LPC at least, global scale models might be useful for highlighting locations with higher recharge potential and nitrate contamination risk. Global modelling simulations appear ideal as a primary step in recognizing locations which require investigations at the plot, field and local scales.

  16. Strongly nonlinear theory of rapid solidification near absolute stability

    NASA Astrophysics Data System (ADS)

    Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.

    2017-10-01

    We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. 
The oscillations are time-periodic only for small-enough initial amplitudes, and their frequency depends on a single combination of physical parameters, including the morphological number, as well as on the amplitude. The critical amplitude at which solutions lose periodicity depends on a single combination of parameters, independent of the morphological number, indicating that non-periodic growth is most commonly present for moderate disequilibrium parameters. The spatial distribution of the interface develops deepening roots at late times. Similar spatial distributions are also seen in the limit in which both the cellular and oscillatory modes are close to absolute stability, and the roots deepen with larger departures from the two absolute stability boundaries.
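The uniform-oscillation equation quoted above, f'' + (βf')² + f = 0, can be explored numerically. This is a minimal sketch, assuming an arbitrary disequilibrium parameter β and a small initial amplitude; neither value is from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta):
    """State y = (f, f'); implements f'' = -(beta*f')**2 - f."""
    f, fp = y
    return [fp, -(beta * fp) ** 2 - f]

beta = 0.5                                  # disequilibrium parameter (assumed)
sol = solve_ivp(rhs, (0.0, 40.0), [0.1, 0.0], args=(beta,),
                dense_output=True, rtol=1e-9, atol=1e-12)

f = sol.sol(np.linspace(0.0, 40.0, 2000))[0]
# The quadratic damping term -(beta*f')**2 has a fixed sign, so the
# oscillation is asymmetric about f = 0, consistent with the asymmetries
# the abstract describes.
print(float(f.max()), float(f.min()))
```

For this small initial amplitude the solution remains an essentially periodic, slightly asymmetric oscillation over the integration window.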

  17. SU-F-T-330: Characterization of the Clinically Released ScandiDos Discover Diode Array for In-Vivo Dose Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Gutierrez, A

    Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array’s ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams with the Discover, respectively. Attenuation depended slightly on the field size but changed by only 0.1% across 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of the intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked changes in absolute dose as well as introduced leaf position deviations.

  18. Effect of quartz overgrowth precipitation on the multiscale porosity of sandstone: A (U)SANS and imaging analysis

    DOE PAGES

    Anovitz, Lawrence M.; Cole, David R.; Jackson, Andrew J.; ...

    2015-06-01

    We have performed a series of experiments to understand the effects of quartz overgrowths on nanometer- to centimeter-scale pore structures of sandstones. Blocks from two samples of St. Peter Sandstone with different initial porosities (5.8 and 18.3%) were reacted from 3 days to 7.5 months at 100 and 200 °C in aqueous solutions supersaturated with respect to quartz by reaction with amorphous silica. Porosity in the resultant samples was analyzed using small and ultrasmall angle neutron scattering and scanning electron microscope/backscattered electron (SEM/BSE)-based image processing techniques. Significant changes were observed in the multiscale pore structures. By three days much of the overgrowth in the low-porosity sample had dissolved away. The reason for this is uncertain, but the overgrowths can be clearly distinguished from the original core grains in the BSE images. At longer times the larger pores are observed to fill with plate-like precipitates. As with the unreacted sandstones, porosity is a step function of size. Grain boundaries are typically fractal, but no evidence of mass fractal or fuzzy interface behavior was observed, suggesting a structural difference between chemical and clastic sediments. After the initial loss of the overgrowths, image-scale porosity (>~1 cm) decreases with time. Submicron porosity (typically ~25% of the total) is relatively constant or slightly decreasing in absolute terms, but the percent change is significant. Fractal dimensions decrease at larger scales, and increase at smaller scales, with increased precipitation.

  19. On the Photometric Calibration of FORS2 and the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Bramich, D.; Moehler, S.; Coccato, L.; Freudling, W.; Garcia-Dabó, C. E.; Müller, P.; Saviane, I.

    2012-09-01

    An accurate absolute calibration of photometric data to place them on a standard magnitude scale is very important for many science goals. Absolute calibration requires the observation of photometric standard stars and analysis of the observations with an appropriate photometric model including all relevant effects. In the FORS Absolute Photometry (FAP) project, we have developed a standard star observing strategy and modelling procedure that enables calibration of science target photometry to better than 3% accuracy on photometrically stable nights, given sufficient signal-to-noise. In applying this photometric modelling to large photometric databases, we have investigated the Sloan Digital Sky Survey (SDSS) and found systematic trends in the published photometric data. The amplitudes of these trends are similar to the reported typical precision (~1% and ~2%) of the SDSS photometry in the griz- and u-bands, respectively.

  20. A definitive calibration record for the Landsat-5 thematic mapper anchored to the Landsat-7 radiometric scale

    USGS Publications Warehouse

    Teillet, P.M.; Helder, D.L.; Ruggles, T.A.; Landry, R.; Ahern, F.J.; Higgs, N.J.; Barsi, J.; Chander, G.; Markham, B.L.; Barker, J.L.; Thome, K.J.; Schott, J.R.; Palluconi, Frank Don

    2004-01-01

    A coordinated effort on the part of several agencies has led to the specification of a definitive radiometric calibration record for the Landsat-5 thematic mapper (TM) for its lifetime since launch in 1984. The time-dependent calibration record for Landsat-5 TM has been placed on the same radiometric scale as the Landsat-7 enhanced thematic mapper plus (ETM+). It has been implemented in the National Landsat Archive Production Systems (NLAPS) in use in North America. This paper documents the results of this collaborative effort and the specifications for the related calibration processing algorithms. The specifications include (i) anchoring of the Landsat-5 TM calibration record to the Landsat-7 ETM+ absolute radiometric calibration, (ii) new time-dependent calibration processing equations and procedures applicable to raw Landsat-5 TM data, and (iii) algorithms for recalibration computations applicable to some of the existing processed datasets in the North American context. The cross-calibration between Landsat-5 TM and Landsat-7 ETM+ was achieved using image pairs from the tandem-orbit configuration period that was programmed early in the Landsat-7 mission. The time-dependent calibration for Landsat-5 TM is based on a detailed trend analysis of data from the on-board internal calibrator. The new lifetime radiometric calibration record for Landsat-5 will overcome problems with earlier product generation owing to inadequate maintenance and documentation of the calibration over time and will facilitate the quantitative examination of a continuous, near-global dataset at 30-m scale that spans almost two decades.

  1. Preliminary OARE absolute acceleration measurements on STS-50

    NASA Technical Reports Server (NTRS)

    Blanchard, Robert C.; Nicholson, John Y.; Ritter, James

    1993-01-01

    On-orbit Orbital Acceleration Research Experiment (OARE) data on STS-50 was examined in detail during a 2-day time period. Absolute acceleration levels were derived at the OARE location, the orbiter center-of-gravity, and at the STS-50 spacelab Crystal Growth Facility. The tri-axial OARE raw acceleration measurements (i.e., telemetered data) during the interval were filtered using a sliding trimmed mean filter in order to remove large acceleration spikes (e.g., thrusters) and reduce the noise. Twelve OARE measured biases in each acceleration channel during the 2-day interval were analyzed and applied to the filtered data. Similarly, the in situ measured x-axis scale factors in the sensor's most sensitive range were also analyzed and applied to the data. Due to equipment problem(s) on this flight, both y- and z- axis sensitive range scale factors were determined in a separate process (using the OARE maneuver data) and subsequently applied to the data. All known significant low-frequency corrections at the OARE location (i.e., both vertical and horizontal gravity-gradient, and rotational effects) were removed from the filtered data in order to produce the acceleration components at the orbiter's center-of-gravity, which are the aerodynamic signals along each body axis. Results indicate that there is a force of unknown origin being applied to the Orbiter in addition to the aerodynamic forces. The OARE instrument and all known gravitational and electromagnetic forces were reexamined, but none produce the observed effect. Thus, it is tentatively concluded that the Orbiter is creating the environment observed.

  2. A note by any other name: Intonation context rapidly changes absolute note judgments.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Uddin, Sophia; Nusbaum, Howard C

    2018-04-30

    Absolute pitch (AP) judgments, by definition, do not require a reference note, and thus might be viewed as context independent. Here, we specifically test whether short-term exposure to particular intonation contexts influences AP categorization on a rapid time scale and whether such context effects can change from moment to moment. In Experiment 1, participants heard duets in which a "lead" instrument always began before a "secondary" instrument. Both instruments independently varied on intonation (flat, in-tune, or sharp). Despite participants being instructed to judge only the intonation of the secondary instrument, we found that participants treated the lead instrument's intonation as "in-tune" and intonation judgments of the secondary instrument were relativized against this standard. In Experiment 2, participants heard a short antecedent context melody (flat, in-tune, or sharp) followed by an isolated target note (flat, in-tune, or sharp). Target note intonation judgments were once again relativized against the context melody's intonation, though only for notes that were experienced in the context or implied by the context key signature. Moreover, maximally contrastive intonation combinations of context and target engendered systematic note misclassifications. For example, a flat melody resulted in a greater likelihood of misclassifying a "sharp F-sharp" as a "G." These results highlight that both intonation and note category judgments among AP possessors are rapidly modified by the listening environment on the order of seconds, arguing against an invariant mental representation of the absolute pitches of notes. Implications for general auditory theories of perception are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  4. Observational characteristics of the tropopause inversion layer derived from CHAMP/GRACE radio occultations and MOZAIC aircraft data

    NASA Astrophysics Data System (ADS)

    Schmidt, Torsten; Cammas, Jean-Pierre; Heise, Stefan; Wickert, Jens; Haser, Antonia

    2010-05-01

    In this study we discuss characteristics of the tropopause inversion layer (TIL) based on two datasets. Temperature measurements from GPS radio occultation (RO) data (CHAMP and GRACE) for the time interval 2001-2009 are used to exhibit seasonal properties of the TIL on a global scale. In agreement with previous studies the vertical structure of the TIL is investigated using the square of the buoyancy frequency N. For the extratropics of both hemispheres N2 has a universal distribution independent of season: a local minimum about 2 km below the lapse rate tropopause height (LRTH), an absolute maximum about 1 km above the LRTH, and a local minimum about 4 km above the LRTH. In the tropics (15°N-15°S) the N2 maximum above the tropopause is 200-300 m higher than in the extratropics and the local minimum of N2 below the tropopause appears about 4 km below the LRTH. Trace gas measurements aboard commercial aircraft from 2001-2007 are used as a complementary dataset (MOZAIC program). We demonstrate that the mixing ratio gradients of ozone, carbon monoxide and water vapor are suitable parameters for characterizing the TIL, reproducing most of the vertical structure of N2. We also show that the LRTH is strongly correlated with the absolute maxima of the ozone and carbon monoxide mixing ratio gradients. Mean deviations of the heights of the absolute maxima of the O3 and CO mixing ratio gradients from the LRTH are (-0.02±1.51) km and (-0.35±1.28) km, respectively.
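The TIL diagnostic used above, the squared buoyancy frequency N² = (g/θ) dθ/dz with potential temperature θ = T (p₀/p)^0.286, can be sketched on a synthetic profile. The lapse rates, inversion strength and the isothermal scale-height pressure approximation below are invented for illustration, not CHAMP/GRACE retrievals:

```python
import numpy as np

g = 9.81                                          # m s^-2
z = np.arange(8000.0, 16001.0, 250.0)             # altitude grid (m), synthetic
T = np.where(z < 12000.0,
             288.0 - 0.0065 * z,                  # tropospheric lapse rate
             288.0 - 0.0065 * 12000.0
             + 0.002 * (z - 12000.0))             # weak inversion above the LRT
p = 1013.25 * np.exp(-z / 7000.0)                 # hPa, scale-height approximation
theta = T * (1000.0 / p) ** 0.286                 # potential temperature (K)
N2 = (g / theta) * np.gradient(theta, z)          # buoyancy frequency squared (s^-2)
print(float(N2[z < 12000.0].mean()), float(N2[z > 12000.0].mean()))
```

Even this crude profile reproduces the qualitative signature exploited in the study: N² jumps to markedly larger values just above the lapse rate tropopause.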

  5. Data mining on long-term barometric data within the ARISE2 project

    NASA Astrophysics Data System (ADS)

    Hupe, Patrick; Ceranna, Lars; Pilger, Christoph

    2016-04-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT) led to the implementation of an international infrasound array network. The International Monitoring System (IMS) network includes 48 certified stations, each providing data for up to 15 years. As part of work package 3 of the ARISE2 project (Atmospheric dynamics Research InfraStructure in Europe, phase 2) the data sets will be statistically evaluated with regard to atmospheric dynamics. The current study focuses on fluctuations of absolute air pressure. Time series have been analysed for 17 monitoring stations, located all over the world from Greenland to Antarctica so as to represent different climate zones and characteristic atmospheric conditions, which enables quantitative comparisons between those regions. Analyses include wavelet power spectra, multi-annual time series of average variances with regard to long-wave scales, and spectral densities used to derive characteristics and special events. The evaluations reveal periodicities in average variances on 2- to 20-day scales, with a maximum in the winter months and a minimum in summer of the respective hemisphere. This applies mainly to time series of IMS stations beyond the tropics, where the dominance of cyclones and anticyclones changes with the seasons. Furthermore, spectral density analyses show striking signals for several dynamic activities within one day, e.g., the semidiurnal tide.

  6. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  7. Absolute configurations of zingiberenols isolated from ginger (Zingiber officinale) rhizomes

    USDA-ARS?s Scientific Manuscript database

    The sesquiterpene alcohol zingiberenol, or 1,10-bisaboladien-3-ol, was isolated some time ago from ginger, Zingiber officinale, rhizomes, but its absolute configuration had not been determined. With three chiral centers present in the molecule, zingiberenol can exist in eight stereoisomeric forms. ...

  8. Bio-Inspired Stretchable Absolute Pressure Sensor Network

    PubMed Central

    Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X.

    2016-01-01

    A bio-inspired absolute pressure sensor network has been developed. Absolute pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4-inch wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 times larger than its original size. Due to micro-electro-mechanical systems (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a time-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. Therefore, this absolute pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles. PMID:26729134

  9. Physical fitness and performance. Cardiorespiratory fitness in girls-change from middle to high school.

    PubMed

    Pfeiffer, Karin A; Dowda, Marsha; Dishman, Rod K; Sirard, John R; Pate, Russell R

    2007-12-01

    To determine how factors are related to change in cardiorespiratory fitness (CRF) across time in middle school girls followed through high school. Adolescent girls (N = 274, 59% African American, baseline age = 13.6 +/- 0.6 yr) performed a submaximal fitness test (PWC170) in 8th, 9th, and 12th grades. Height, weight, sports participation, and physical activity were also measured. Moderate-to-vigorous physical activity (MVPA) and vigorous physical activity (VPA) were determined by the number of blocks reported on the 3-Day Physical Activity Recall (3DPAR). Individual differences and developmental change in CRF were assessed simultaneously by calculating individual growth curves for each participant, using growth curve modeling. Both weight-relative and absolute CRF increased from 8th to 9th grade and decreased from 9th to 12th grade. On average, girls lost 0.16 kg·m·min⁻¹·kg⁻¹·yr⁻¹ in weight-relative PWC170 scores (P < 0.01) and gained 10.3 kg·m·min⁻¹·yr⁻¹ in absolute PWC170 scores. Girls reporting two or more blocks of MVPA or one or more blocks of VPA at baseline showed an average increase in PWC170 scores of 0.40-0.52 kg·m·min⁻¹·kg⁻¹·yr⁻¹ (weight-relative) and 22-28 kg·m·min⁻¹·yr⁻¹ (absolute). In weight-relative models, girls with higher BMI showed lower CRF (approximately 0.37 kg·m·min⁻¹·kg⁻¹·yr⁻¹), but this was not shown in absolute models. In absolute models, white girls (approximately 40 kg·m·min⁻¹·yr⁻¹) and sport participants (approximately 28 kg·m·min⁻¹·yr⁻¹) showed an increase in CRF over time. Although there were fluctuations in PWC170 scores across time, average scores decreased over the 4 yr. Physical activity was related to change in CRF over time; BMI, race, and sport participation were also important factors related to change in CRF over time (depending on the expression of CRF: weight-relative vs. absolute). Subsequent research should focus on explaining the complex longitudinal interactions between CRF, physical activity, race, BMI, and sports participation.

  10. A distance-independent calibration of the luminosity of type Ia supernovae and the Hubble constant

    NASA Technical Reports Server (NTRS)

    Leibundgut, Bruno; Pinto, Philip A.

    1992-01-01

    The absolute magnitude of SNe Ia at maximum is calibrated here using radioactive decay models for the light curve and a minimum of assumptions. The absolute magnitude parameter space is studied using explosion models and a range of rise times, and absolute B magnitudes at maximum are used to derive a range for H0 and for the distance to the Virgo Cluster from SNe Ia. Rigorous limits for H0 of 45 and 105 km/s/Mpc are derived.
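The step from a calibrated absolute magnitude to H0 follows from the distance modulus. In this back-of-envelope sketch the peak absolute magnitude, apparent magnitude and Virgo recession velocity are illustrative assumptions, not values from the paper:

```python
# Distance modulus: mu = m - M, d = 10**((mu + 5) / 5) parsecs; then H0 = v / d.
M_B = -19.3        # assumed SN Ia peak absolute B magnitude
m_B = 11.7         # apparent peak B magnitude of a hypothetical Virgo SN Ia
v_virgo = 1200.0   # assumed Virgo recession velocity (km/s)

mu = m_B - M_B                                 # distance modulus
d_mpc = 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6     # distance in Mpc
H0 = v_virgo / d_mpc                           # Hubble constant (km/s/Mpc)
print(round(d_mpc, 1), round(H0, 1))
```

With these assumed inputs the derived H0 falls inside the 45-105 km/s/Mpc limits quoted in the abstract, which illustrates the sensitivity of H0 to the adopted absolute magnitude.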

  11. A New Sclerosing Agent in the Treatment of Venous Malformations

    PubMed Central

    Sannier, K.; Dompmartin, A.; Théron, J.; Labbé, D.; Barrellier, M.T.; Leroyer, R.; Touré, P.; Leroy, D.

    2004-01-01

    Summary Absolute ethanol is the most effective agent in the treatment of venous malformations (VM), although it is quite risky to use because of the danger of diffusion beyond the target. To reduce this risk, we have developed an alcoholic sclerosing solution that is less diffusible. The viscosity of absolute ethanol was enhanced with monographic ethyl-cellulose at a concentration of 5.88%, i.e. 0.75 g in 15 ml of absolute ethanol 95%. 23 patients with VM located on the buttock (1), hand (2), leg (1) and face (19) were treated. A mean volume of 1.99 ml of the solution was injected directly into the VM. Each patient had an average of 2.8 procedures. Sixteen patients were treated under general anaesthesia and seven under local anaesthesia. Evaluation was performed by the patient, by the dermatologist of the treating multidisciplinary team and by a dermatological group not involved in the treatment of the patients. Patients were evaluated after a mean delay of 24.52 months. The cosmetic result was evaluated with a five-point scale and the global result with a three-point scale. VM pain was evaluated by the patients with a Visual Analogue Scale. The aesthetic results were graded as satisfactory (> 3) by the patient and the dermatologist of the multidisciplinary team; however, the results were not as good in the independent dermatological group's evaluation. Pain was significantly reduced after the treatment (p << 0.001). Among the 23 patients, the local adverse events were nine cases of necrosis, with or without ethylcellulose fistula, only two of which required surgical procedures. There were no systemic adverse events. Sclerotherapy of VM is usually performed with absolute ethanol or Ethibloc. The main advantage of our sclerosing mixture is that it expands like a balloon when injected slowly into an aqueous medium. Because of the substantial increase in viscosity, the volume of injected solution is much lower than with ethanol alone and the risk of systemic reactions is lower. 
Contrary to Ethibloc, post-sclerosing surgery is not necessary because the sub-cutaneous ethylcellulose disappears secondarily. PMID:20587223

  12. Dating Tectonic Activity on Mercury’s Large-Scale Lobate-Scarp Thrust Faults

    NASA Astrophysics Data System (ADS)

    Barlow, Nadine G.; E Banks, Maria

    2017-10-01

    Mercury’s widespread large-scale lobate-scarp thrust faults reveal that the planet’s tectonic history has been dominated by global contraction, primarily due to cooling of its interior. Constraining the timing and duration of this contraction provides key insight into Mercury’s thermal and geologic evolution. We combine two techniques to enhance the statistical validity of size-frequency distribution crater analyses and constrain timing of the 1) earliest and 2) most recent detectable activity on several of Mercury’s largest lobate-scarp thrust faults. We use the sizes of craters directly transected by or superposed on the edge of the scarp face to define a count area around the scarp, a method we call the Modified Buffered Crater Counting Technique (MBCCT). We developed the MBCCT to avoid the issue of a near-zero scarp width since feature widths are included in area calculations of the commonly used Buffered Crater Counting Technique (BCCT). Since only craters directly intersecting the scarp face edge conclusively show evidence of crosscutting relations, we increase the number of craters in our analysis (and reduce uncertainties) by using the morphologic degradation state (i.e. relative age) of these intersecting craters to classify other similarly degraded craters within the count area (i.e., those with the same relative age) as superposing or transected. The resulting crater counts are divided into two categories: transected craters constrain the earliest possible activity and superposed craters constrain the most recent detectable activity. Absolute ages are computed for each population using the Marchi et al. [2009] model production function. A test of the Blossom lobate scarp indicates the MBCCT gives statistically equivalent results to the BCCT. We find that all scarps in this study crosscut surfaces Tolstojan or older in age (>~3.7 Ga). 
The most recent detectable activity along lobate-scarp thrust faults ranges from Calorian to Kuiperian (~3.7 Ga to present). Our results complement previous relative-age studies with absolute ages and indicate global contraction continued over the last ~3-4 Gyr. At least some thrust fault activity occurred on Mercury in relatively recent times (<280 Ma).

  13. Impact of aortic valve calcification, as measured by MDCT, on survival in patients with aortic stenosis: results of an international registry study.

    PubMed

    Clavel, Marie-Annick; Pibarot, Philippe; Messika-Zeitoun, David; Capoulade, Romain; Malouf, Joseph; Aggarval, Shivani; Araoz, Phillip A; Michelena, Hector I; Cueff, Caroline; Larose, Eric; Miller, Jordan D; Vahanian, Alec; Enriquez-Sarano, Maurice

    2014-09-23

    Aortic valve calcification (AVC) load measures lesion severity in aortic stenosis (AS) and is useful for diagnostic purposes. Whether AVC predicts survival after diagnosis, independent of clinical and Doppler echocardiographic AS characteristics, has not been studied. This study evaluated the impact of AVC load, absolute and relative to aortic annulus size (AVCdensity), on overall mortality in patients with AS under conservative treatment and without regard to treatment. In 3 academic centers, we enrolled 794 patients (mean age, 73 ± 12 years; 274 women) diagnosed with AS by Doppler echocardiography who underwent multidetector computed tomography (MDCT) within the same episode of care. Absolute AVC load and AVCdensity (ratio of absolute AVC to cross-sectional area of aortic annulus) were measured, and severe AVC was separately defined in men and women. During follow-up, there were 440 aortic valve implantations (AVIs) and 194 deaths (115 under medical treatment). Univariate analysis showed strong association of absolute AVC and AVCdensity with survival (both, p < 0.0001) with a spline curve analysis pattern of threshold and plateau of risk. After adjustment for age, sex, coronary artery disease, diabetes, symptoms, AS severity on hemodynamic assessment, and LV ejection fraction, severe absolute AVC (adjusted hazard ratio [HR]: 1.75; 95% confidence interval [CI]: 1.04 to 2.92; p = 0.03) or severe AVCdensity (adjusted HR: 2.44; 95% CI: 1.37 to 4.37; p = 0.002) independently predicted mortality under medical treatment, with additive model predictive value (all, p ≤ 0.04) and a net reclassification index of 12.5% (p = 0.04). Severe absolute AVC (adjusted HR: 1.71; 95% CI: 1.12 to 2.62; p = 0.01) and severe AVCdensity (adjusted HR: 2.22; 95% CI: 1.40 to 3.52; p = 0.001) also independently predicted overall mortality, even with adjustment for time-dependent AVI. 
This large-scale, multicenter outcomes study of quantitative Doppler echocardiographic and MDCT assessment of AS shows that measuring AVC load provides incremental prognostic value for survival beyond clinical and Doppler echocardiographic assessment. Severe AVC independently predicts excess mortality after AS diagnosis, which is greatly alleviated by AVI. Thus, measurement of AVC by MDCT should be considered for not only diagnostic but also risk-stratification purposes in patients with AS. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  14. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from tipping-bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors are dramatically reduced. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 mm rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
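The cubic-spline estimation step described above (fit a spline to cumulative rainfall at tip times, then differentiate to obtain rain rates) can be sketched as follows; the tip times and the 0.254 mm-per-tip resolution are illustrative assumptions, not the TRMM 2A-56 production configuration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

TIP_MM = 0.254                               # rainfall per bucket tip (assumed)
tip_times_min = np.array([0.0, 2.0, 3.1, 3.9, 4.6, 5.5, 7.0, 9.5])
cumulative_mm = TIP_MM * np.arange(1, tip_times_min.size + 1)

# Spline through the cumulative-rainfall curve; its derivative is the rate.
spline = CubicSpline(tip_times_min, cumulative_mm)
rate_mm_per_min = spline.derivative()

# One-minute rain rates (mm/h) on a regular grid inside the event.
minutes = np.arange(0.0, 9.0, 1.0)
rates_mm_per_h = 60.0 * rate_mm_per_min(minutes)
print(np.round(rates_mm_per_h, 2))
```

This also makes the sampling problem visible: with few tips at low rain rates the spline has little information between knots, which is why the abstract reports the largest 1-minute errors there.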

  15. First Impressions of CARTOSAT-1

    NASA Technical Reports Server (NTRS)

    Lutes, James

    2007-01-01

    CARTOSAT-1 RPCs need special handling. The absolute accuracy of uncontrolled scenes is poor (biases > 300 m), and there is a noticeable cross-track scale error (+/- 3-4 m across a stereo pair). Most errors are either biases or linear in line/sample, which makes them easier to correct with ground control.

  16. Volcanic synchronisation of the EPICA-DC and TALDICE ice cores for the last 42 kyr BP

    NASA Astrophysics Data System (ADS)

    Severi, M.; Udisti, R.; Becagli, S.; Stenni, B.; Traversi, R.

    2012-04-01

    An age scale synchronisation between the Talos Dome and the EPICA Dome C ice cores was carried out through the identification of several common volcanic signatures over the last 42 kyr. Using this tight stratigraphic link, we transferred the EDC age scale to the Talos Dome ice core, producing a new age scale for the last 12 kyr. We estimated the discrepancies between the modeled TALDICE-1 age scale and the new one during the studied period by evaluating the ratio R of the apparent durations of the temporal intervals between pairs of isochrones. Except for a very few cases, R ranges between 0.8 and 1.2, corresponding to an uncertainty of up to 20% in the estimated durations in at least one of the two ice cores. At this stage our approach does not allow us to determine unequivocally which of the models is affected by errors, but, taking into account only the historically known volcanic events, we found that discrepancies of up to 200 years appear in the last two millennia in the TALDICE-1 model, while our new age scale shows a much better agreement with the absolute volcanic horizons. We therefore propose for the Talos Dome ice core a new age scale (covering the whole Holocene) obtained by direct transfer, via our stratigraphic link, from the EDC modelled age scale of Lemieux-Dudon et al. (2010).

  17. Is Marathon Training Harder than the Ironman Training? An ECO-method Comparison.

    PubMed

    Esteve-Lanao, Jonathan; Moreno-Pérez, Diego; Cardona, Claudia A; Larumbe-Zabala, Eneko; Muñoz, Iker; Sellés, Sergio; Cejuela, Roberto

    2017-01-01

    Purpose: To compare the absolute and relative training loads of Marathon (42k) and Ironman (IM) training in recreationally trained athletes. Methods: Fifteen marathoners and fifteen triathletes participated in the study. Their performance level was the same relative to the sex's absolute winner at the race. No differences were found in age, body weight, height, BMI, running VO2max, or endurance training experience (p > 0.05). All trained systematically for their respective event (IM or 42k). Daily training load was recorded in a training log, and the last 16 weeks were compared. Beforehand, gas exchange and lactate metabolic tests were conducted in order to set individual training zones. The Objective Load Scale (ECOs) training load quantification method was applied. Differences between the IM and 42k athletes' outcomes were assessed using Student's t-test, with the significance level set at p < 0.05. Results: As expected, competition time differed significantly (IM 11 h 45 min ± 1 h 54 min vs. 42k 3 h 6 min ± 28 min, p < 0.001). Similarly, average weekly training time (IM 12.9 ± 2.6 h vs. 42k 5.2 ± 0.9 h) and average weekly ECOs (IM 834 ± 171 vs. 42k 526 ± 118) were significantly higher for IM (p < 0.001). However, the ratio between training load and training time was higher for the 42k runners when comparing ECOs (IM 65.8 ± 11.8 vs. 42k 99.3 ± 6.8) (p < 0.001). Finally, all ratios of training time or training load to competition time were higher for the 42k (p < 0.001) (training time/race time: IM 1.1 ± 0.3 vs. 42k 1.7 ± 0.5; ECOs training load/race time: IM 1.2 ± 0.3 vs. 42k 2.9 ± 1.0). Conclusions: In spite of the IM athletes' greater training time and total and weekly training load, when comparing the ratios of training load to training time and of training time or training load to competition time, the preparation for a 42k proved to be harder.

  18. Is Marathon Training Harder than the Ironman Training? An ECO-method Comparison

    PubMed Central

    Esteve-Lanao, Jonathan; Moreno-Pérez, Diego; Cardona, Claudia A.; Larumbe-Zabala, Eneko; Muñoz, Iker; Sellés, Sergio; Cejuela, Roberto

    2017-01-01

    Purpose: To compare the absolute and relative training loads of Marathon (42k) and Ironman (IM) training in recreationally trained athletes. Methods: Fifteen marathoners and fifteen triathletes participated in the study. Their performance level was the same relative to the sex's absolute winner at the race. No differences were found in age, body weight, height, BMI, running VO2max, or endurance training experience (p > 0.05). All trained systematically for their respective event (IM or 42k). Daily training load was recorded in a training log, and the last 16 weeks were compared. Beforehand, gas exchange and lactate metabolic tests were conducted in order to set individual training zones. The Objective Load Scale (ECOs) training load quantification method was applied. Differences between the IM and 42k athletes' outcomes were assessed using Student's t-test, with the significance level set at p < 0.05. Results: As expected, competition time differed significantly (IM 11 h 45 min ± 1 h 54 min vs. 42k 3 h 6 min ± 28 min, p < 0.001). Similarly, average weekly training time (IM 12.9 ± 2.6 h vs. 42k 5.2 ± 0.9 h) and average weekly ECOs (IM 834 ± 171 vs. 42k 526 ± 118) were significantly higher for IM (p < 0.001). However, the ratio between training load and training time was higher for the 42k runners when comparing ECOs (IM 65.8 ± 11.8 vs. 42k 99.3 ± 6.8) (p < 0.001). Finally, all ratios of training time or training load to competition time were higher for the 42k (p < 0.001) (training time/race time: IM 1.1 ± 0.3 vs. 42k 1.7 ± 0.5; ECOs training load/race time: IM 1.2 ± 0.3 vs. 42k 2.9 ± 1.0). Conclusions: In spite of the IM athletes' greater training time and total and weekly training load, when comparing the ratios of training load to training time and of training time or training load to competition time, the preparation for a 42k proved to be harder. PMID:28611674
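
    The reported training-load-density difference can be sanity-checked from the summary statistics alone. The sketch below recomputes a t statistic (Welch's form, chosen for convenience; the abstract only states that Student's test was used) from the quoted means, SDs, and n = 15 per group:

```python
import math

# Sketch: t statistic recomputed from the summary statistics in the abstract
# (ECOs per training hour: IM 65.8 +/- 11.8 vs. 42k 99.3 +/- 6.8, n = 15 each).
def t_stat(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from group means, SDs, and sizes."""
    return (m2 - m1) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

t = t_stat(65.8, 11.8, 15, 99.3, 6.8, 15)  # IM vs. 42k training-load density
```

The resulting |t| of roughly 9.5 is far beyond the ~2.05 critical value for these group sizes, consistent with the reported p < 0.001.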

  19. The Resource Consumption Principle: Attention and Memory in Volumes of Neural Tissue

    NASA Astrophysics Data System (ADS)

    Montague, P. Read

    1996-04-01

    In the cerebral cortex, the small volume of the extracellular space in relation to the volume enclosed by synapses suggests an important functional role for this relationship. It is well known that there are atoms and molecules in the extracellular space that are absolutely necessary for synapses to function (e.g., calcium). I propose here the hypothesis that the rapid shift of these atoms and molecules from extracellular to intrasynaptic compartments represents the consumption of a shared, limited resource available to local volumes of neural tissue. Such consumption results in a dramatic competition among synapses for resources necessary for their function. In this paper, I explore a theory in which this resource consumption plays a critical role in the way local volumes of neural tissue operate. On short time scales, this principle of resource consumption permits a tissue volume to choose those synapses that function in a particular context and thereby helps to integrate the many neural signals that impinge on a tissue volume at any given moment. On longer time scales, the same principle aids in the stable storage and recall of information. The theory provides one framework for understanding how cerebral cortical tissue volumes integrate, attend to, store, and recall information. In this account, the capacity of neural tissue to attend to stimuli is intimately tied to the way tissue volumes are organized at fine spatial scales.

  20. Wavelet regression model in forecasting crude oil price

    NASA Astrophysics Data System (ADS)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil price forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) with the multiple linear regression (MLR) model. The original time series was decomposed into sub-series at different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with that of regular multiple linear regression (MLR), the autoregressive integrated moving average (ARIMA) model, and generalized autoregressive conditional heteroscedasticity (GARCH), using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, the WMLR model performs better than the other forecasting techniques tested in this study.
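
    The WMLR idea, wavelet decomposition followed by linear regression on the sub-series, can be sketched as follows. This toy uses a one-level Haar DWT on a synthetic series; the paper's actual wavelet choice, decomposition depth, and correlation-based input selection are not reproduced:

```python
import numpy as np

# Sketch of the WMLR pipeline: decompose a price series into approximation and
# detail parts with a one-level Haar DWT, then fit ordinary least squares on
# lagged sub-series. Synthetic data; a stand-in, not the paper's exact setup.
def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 256)) + 50.0   # synthetic "crude oil" series
approx, detail = haar_dwt(price)

# Regress the next-step approximation on lagged approximation and detail inputs.
X = np.column_stack([np.ones(len(approx) - 1), approx[:-1], detail[:-1]])
y = approx[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
rmse = np.sqrt(np.mean((y - pred) ** 2))   # the paper's two error metrics
mae = np.mean(np.abs(y - pred))
```

RMSE and MAE computed this way are exactly the comparison metrics named in the abstract; MAE is always bounded above by RMSE.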

  1. Multiscaling and clustering of volatility

    NASA Astrophysics Data System (ADS)

    Pasquini, Michele; Serva, Maurizio

    1999-07-01

    The dynamics of prices in stock markets has been studied intensively both experimentally (data analysis) and theoretically (models). Nevertheless, while the distribution of returns of the most important indices is known to be a truncated Lévy, the behaviour of volatility correlations is still poorly understood. What is well known is that absolute returns have memory over a long time range; in the financial literature this phenomenon is known as clustering of volatility. In this paper we show that volatility correlations are power laws with a non-unique scaling exponent. This kind of multiscale phenomenology is known to be relevant in fully developed turbulence and in disordered systems, and it is pointed out here for the first time for a financial series. In our study we consider the New York Stock Exchange (NYSE) daily index from January 1966 to June 1998, for a total of 8180 working days.
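
    A non-unique scaling exponent means the exponent zeta(q) governing the q-th moment of aggregated absolute returns, E|r(tau)|^q ~ tau^zeta(q), depends nonlinearly on q. The sketch below estimates zeta(q) on a synthetic Gaussian walk, where zeta(q) = q/2 exactly (the unique-scaling null case that, per the paper, NYSE volatility deviates from); it is an illustration of the probe, not the paper's analysis of the NYSE series:

```python
import numpy as np

# Sketch: fit the scaling exponent zeta(q) of the q-th moment of
# tau-aggregated returns via a log-log least-squares slope.
def zeta(returns, q, taus):
    moments = []
    for tau in taus:
        n = len(returns) // tau * tau
        agg = returns[:n].reshape(-1, tau).sum(axis=1)  # tau-step returns
        moments.append(np.mean(np.abs(agg) ** q))
    slope, _ = np.polyfit(np.log(taus), np.log(moments), 1)
    return slope

rng = np.random.default_rng(1)
r = rng.normal(0, 1, 100_000)            # iid Gaussian returns (null case)
taus = [1, 2, 4, 8, 16, 32]
z1, z2 = zeta(r, 1, taus), zeta(r, 2, taus)  # expect ~0.5 and ~1.0
```

For a multiscaling series, z2 would differ from 2 * z1; equality (as here) signals a single scaling exponent.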

  2. Asymmetric sea-floor spreading caused by ridge-plume interactions

    NASA Astrophysics Data System (ADS)

    Müller, R. Dietmar; Roest, Walter R.; Royer, Jean-Yves

    1998-12-01

    Crustal accretion at mid-ocean ridges is generally modelled as a symmetric process. Regional analyses, however, often show either small-scale asymmetries, which vary rapidly between individual spreading corridors, or large-scale asymmetries represented by consistent excess accretion on one of the two separating plates over geological time spans. In neither case is the origin of the asymmetry well understood. Here we present a comprehensive analysis of the asymmetry of crustal accretion over the past 83 Myr based on a set of self-consistent digital isochrons and models of absolute plate motion. We find that deficits in crustal accretion occur mainly on ridge flanks overlying one or several hotspots. We therefore propose that asymmetric accretion is caused by ridge propagation towards mantle plumes or by minor ridge jumps sustained by asthenospheric flow between ridges and plumes. Quantifying the asymmetry of crustal accretion provides a complementary approach to those based on geochemical and other geophysical data in helping to unravel how mantle plumes and mid-ocean ridges are linked through mantle convection processes.

  3. Validation of absolute axial neutron flux distribution calculations with MCNP with 197Au(n,γ)198Au reaction rate distribution measurements at the JSI TRIGA Mark II reactor.

    PubMed

    Radulović, Vladimir; Štancar, Žiga; Snoj, Luka; Trkov, Andrej

    2014-02-01

    The calculation of axial neutron flux distributions with the MCNP code at the JSI TRIGA Mark II reactor has been validated with experimental measurements of the (197)Au(n,γ)(198)Au reaction rate. The calculated absolute reaction rate values, scaled according to the reactor power and corrected for the flux redistribution effect, are in good agreement with the experimental results. The effect of different cross-section libraries on the calculations has been investigated and shown to be minor. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Relaxation processes in a low-order three-dimensional magnetohydrodynamics model

    NASA Technical Reports Server (NTRS)

    Stribling, Troy; Matthaeus, William H.

    1991-01-01

    The time-asymptotic behavior of a Galerkin model of 3D magnetohydrodynamics (MHD) has been interpreted using the selective decay and dynamic alignment relaxation theories. A large number of simulations have been performed that scan a parameter space defined by the rugged ideal invariants: energy, cross helicity, and magnetic helicity. It is concluded that the time-asymptotic state can be interpreted as a relaxation to minimum energy. A simple decay model, based on absolute equilibrium theory, is found to predict a mapping of initial onto time-asymptotic states and to accurately describe the long-time behavior of the runs when magnetic helicity is present. Attention is also given to two processes, operating on time scales shorter than selective decay and dynamic alignment, in which the ratio of kinetic to magnetic energy relaxes to values of O(1). The faster of the two processes takes states initially dominated by magnetic energy to a state of near-equipartition between kinetic and magnetic energy through power-law growth of kinetic energy. The other process takes states initially dominated by kinetic energy to the near-equipartitioned state through exponential growth of magnetic energy.

  5. Accuracy of free energies of hydration using CM1 and CM3 atomic charges.

    PubMed

    Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L

    2004-08-01

    Absolute free energies of hydration (DeltaGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute DeltaGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.
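
    The paper's "optimal charge scaling factor" is the multiplier that minimizes the average unsigned error between computed and experimental hydration free energies. The toy grid search below illustrates that selection step only: the per-molecule linear response DG(s) = a + b*s and all numbers are hypothetical stand-ins for rerunning the free-energy simulations at each scale factor, chosen so the optimum lands near the paper's reported ~1.14:

```python
# Toy sketch: choose the charge scaling factor s minimizing the mean unsigned
# error (MUE) of predicted hydration free energies against experiment.
# (a, b) pairs and experimental values below are hypothetical.
calc = [(-4.0, -2.0), (-1.0, -3.5), (2.0, -3.0)]   # hypothetical DG(s) = a + b*s
expt = [-6.28, -5.0, -1.45]                        # hypothetical DeltaG_hyd (kcal/mol)

def mue(s):
    """Mean unsigned error of the scaled predictions against experiment."""
    return sum(abs(a + b * s - e) for (a, b), e in zip(calc, expt)) / len(expt)

scales = [round(0.90 + 0.01 * i, 2) for i in range(31)]  # scan 0.90 .. 1.20
best = min(scales, key=mue)                              # optimal factor
```

In the paper the predictions come from Monte Carlo free energy perturbation at each charge set; only the argmin-over-scale-factor logic is represented here.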

  6. Absolute measurement of the 242Pu neutron-capture cross section

    DOE PAGES

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; ...

    2016-04-21

    Here, the absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. The first direct measurement of the 242Pu(n,γ) cross section was made over the incident neutron energy range from thermal to ≈6 keV, and the absolute scale of the (n,γ) cross section was set according to the known 239Pu(n,f) resonance at En,R = 7.83 eV. This was accomplished by adding a small quantity of 239Pu to the 242Pu sample. The relative scale of the cross section, spanning four orders of magnitude, was determined for incident neutron energies from thermal to ≈40 keV. Our data, in general, are in agreement with previous measurements and those reported in ENDF/B-VII.1; the 242Pu(n,γ) cross section at the En,R = 2.68 eV resonance is within 2.4% of the evaluated value. However, discrepancies exist at higher energies; our data are ≈30% lower than the evaluated data at En ≈ 1 keV and approximately 2σ away from the previous measurement at En ≈ 20 keV.

  7. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, due to scaling error, was reduced to 0.77 mm, while the correlation of the errors with distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it produced a scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
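
    The scaling-error compensation described above amounts to fitting a single global scale factor between the mocap and surveyed coordinates and dividing it out of the measurements. A sketch with synthetic markers (all numbers illustrative, not the study's data):

```python
import numpy as np

# Sketch: absolute-accuracy RMSE between mocap and surveyed marker coordinates,
# and a least-squares global scale compensation about the origin.
def rmse(a, b):
    """RMS of per-marker 3D distances (same units as the inputs)."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

rng = np.random.default_rng(2)
ref = rng.uniform(-3000, 3000, (50, 3))               # surveyed coordinates (mm)
meas = 1.0005 * ref + rng.normal(0, 0.5, ref.shape)   # mocap with a scale error

err0 = rmse(meas, ref)                         # before compensation
s = np.sum(meas * ref) / np.sum(ref * ref)     # least-squares scale factor
err1 = rmse(meas / s, ref)                     # after dividing out the scale
```

Dividing out the fitted scale removes the component of error that grows with distance from the origin, mirroring the drop in error-distance correlation reported in the abstract.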

  8. [Continuity and discontinuity of the geomerida: the bionomic and biotic aspects].

    PubMed

    Kafanov, A I

    2005-01-01

    The view of the spatial structure of the geomerida (Earth's life cover) as a continuum, which prevails in modern phytocoenology, is mostly determined by a physiognomic (landscape-bionomic) discrimination of vegetation components. In this connection, the geography of life forms appears as the subject of landscape-bionomic biogeography. In zoocoenology there is a tendency towards a synthesis of the alternative concepts, based on the assumption that there is neither absolute continuum nor absolute discontinuum in organic nature. The problem of the continuity or discontinuity of the living cover is a problem of scale, arising from the fractal structure of the geomerida. The continuum mainly belongs to regularities of topological order; at the regional and subregional scales a continuum of biochores is rather rare. Objective evidence of the relative discontinuity of the living cover is provided by significant changes in species diversity at the regional, subregional and even topological scales. In contrast to the units conventionally discriminated within physiognomically continuous vegetation, the same biotic complexes, represented as operational units of biogeographical and biocoenological zoning, are distinguished repeatedly and independently by different researchers. An area occupied by a certain flora (fauna, biota) can be considered as an elementary unit of biotic diversity (an elementary biotic complex).

  9. Using Blur to Affect Perceived Distance and Size

    PubMed Central

    HELD, ROBERT T.; COOPER, EMILY A.; O’BRIEN, JAMES F.; BANKS, MARTIN S.

    2011-01-01

    We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image’s contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene’s contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model’s predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently. PMID:21552429

  10. Direct Mass Measurements in the Light Neutron-Rich Region Using a Combined Energy and Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Pillai, C.; Swenson, L. W.; Vieira, D. J.; Butler, G. W.; Wouters, J. M.; Rokni, S. H.; Vaziri, K.; Remsberg, L. P.

    This experiment has demonstrated that direct mass measurements can be performed (albeit with low precision in this first attempt) using the M ∝ ET² method. This technique has the advantage that many particle-bound nuclei produced in fragmentation reactions can be measured simultaneously, independent of their N or Z. The main disadvantage of the approach is that both energy and time-of-flight must be measured precisely on an absolute scale. Although some mass walk with N and Z was observed in this experiment, these uncertainties were largely removed by extrapolating the smooth dependence observed for known nuclei lying closer to the valley of β-stability. Mass measurements for several neutron-rich light nuclei ranging from C-17 to Ne-26 have been performed. In all cases these measurements agree with the latest mass compilation of Wapstra and Audi. The masses of N-20 and F-24 have been determined for the first time.

  11. Single-drop impingement onto a wavy liquid film and description of the asymmetrical cavity dynamics

    NASA Astrophysics Data System (ADS)

    van Hinsberg, Nils Paul; Charbonneau-Grandmaison, Marie

    2015-07-01

    The present paper is devoted to an experimental investigation of the cavity formed upon single-drop impingement onto a traveling solitary surface wave on a deep pool of the same liquid. The dynamics of the cavity throughout its complete expansion and receding phases are analyzed using high-speed shadowgraphy and compared to the outcomes of drop impingements onto steady liquid surface films of equal thickness. The effects of the surface wave velocity, amplitude, and phase, the drop impingement velocity, and the liquid viscosity on the evolution of the cavity's diameter and depth are accurately characterized at various time instants. The wave velocity induces a distinct inclination of the cavity in the wave propagation direction that increases in time. For strong waves in particular, an asymmetrical distribution of the radial expansion and retraction velocity along the cavity's circumference is observed. A linear dependency between the absolute Weber number and the typical length and time scales associated with the cavity's maximum depth and maximum diameter is reported.

  12. On the social rate of discount: the case for macroenvironmental policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doeleman, J.A.

    Concern for the rapidly growing scale and intensity of the human exploitation of the environment, in particular the alienation of natural ecosystems, but also resource exhaustion, pollution, and congestion, leads one to wonder about the short time horizons allowed for in decision making. Time preference is dictated by the rate of interest, allowing in practice a horizon often not exceeding several decades. I argue that this is unsatisfactory. Some minimal social rate of discount should not be enforced. Instead, it is more feasible to set absolute environmental standards, thereby introducing quantity constraints on our decision making, within which time preference can be permitted to find its own level. This acknowledges that the myopia of human vision may not be a flaw but rather a biological design which has served us well in evolution. It may, therefore, be better to change the rules by introducing self-imposed collective constraints than to try to change the shortsightedness of people in their day-to-day grass-roots decision making.

  13. [Comparison of GIMMS and MODIS normalized vegetation index composite data for Qing-Hai-Tibet Plateau].

    PubMed

    Du, Jia-Qiang; Shu, Jian-Min; Wang, Yue-Hui; Li, Ying-Chang; Zhang, Lin-Bo; Guo, Yang

    2014-02-01

    Consistent NDVI time series are a basic prerequisite for long-term monitoring of land surface properties. Advanced very high resolution radiometer (AVHRR) measurements provide the longest record of continuous global satellite measurements sensitive to live green vegetation, and the moderate resolution imaging spectroradiometer (MODIS) is a more recent sensor with high spatial and temporal resolution. Understanding the relationship between AVHRR-derived NDVI and MODIS NDVI is critical to continued long-term monitoring of ecological resources. NDVI time series acquired by the global inventory modeling and mapping studies (GIMMS) project and by Terra MODIS were compared over the same time period, 2000 to 2006, at four scales of the Qinghai-Tibet Plateau (whole region, sub-region, biome, and pixel) to assess the level of agreement in terms of absolute values and dynamic change, by independently assessing the performance of GIMMS and MODIS NDVI and using 495 Landsat samples of 20 km × 20 km covering the major land cover types. High correlations existed between the two datasets at all four scales, indicating their mostly equal capability of capturing seasonal and monthly phenological variations (mostly at the 0.001 significance level). The similarity of the two datasets differed significantly among vegetation types. Relatively low correlation coefficients and large differences in NDVI values between the two datasets were found for dense vegetation types, including broadleaf forest and needleleaf forest, whereas the correlations were strong and the deviations small for more homogeneous vegetation types such as meadow, steppe, and crop. 82% of the study area was characterized by strong consistency between GIMMS and MODIS NDVI at the pixel scale. In the comparison of absolute values of Landsat NDVI vs. GIMMS and MODIS NDVI, MODIS NDVI performed slightly better than GIMMS NDVI, whereas in the comparison of temporal change values the GIMMS dataset performed best. As with the comparison of GIMMS and MODIS NDVI, the consistency across the three datasets clearly differed among vegetation types. In dynamic changes, the differences between Landsat and MODIS NDVI were smaller than those between Landsat and GIMMS NDVI for forest, but Landsat and GIMMS NDVI agreed better for grass and crop. The results suggest that the spatial patterns and dynamic trends of GIMMS NDVI are in overall acceptable agreement with those of MODIS NDVI. It might therefore be feasible to integrate historical GIMMS and more recent MODIS NDVI to provide continuity of NDVI products. The accuracy of merging historical AVHRR data with more recent MODIS NDVI data depends strongly on vegetation type, season and phenological period, and spatial scale. The integration of the two datasets for needleleaf forest, broadleaf forest, and for all vegetation types in the phenological transition periods of spring and autumn should be treated with caution.

  14. Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).

    PubMed

    Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando

    2014-07-02

    Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. 
The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
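
    The inter-rater agreement statistic reported above, an intraclass correlation coefficient with measures of absolute agreement, can be computed from a two-way ANOVA decomposition of a subjects-by-raters score table. A sketch of ICC(2,1) in the McGraw and Wong convention on toy ratings (the study's own data and analysis software are not reproduced here):

```python
import numpy as np

# Sketch: two-way random-effects ICC for absolute agreement, ICC(2,1),
# computed from an (n subjects x k raters) score array. Toy data only.
def icc_2_1(scores):
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects (rows)
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # raters (columns)
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = np.array([[8, 8], [5, 5], [7, 7], [3, 3], [9, 9]])  # perfect agreement
icc_perfect = icc_2_1(ratings)
icc_offset = icc_2_1(ratings + np.array([0, 1]))  # rater 2 consistently +1
```

Because ICC(2,1) measures absolute agreement, a constant between-rater offset lowers the coefficient below 1 even though the rank ordering of subjects is unchanged.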

  15. GTSnext: towards a next generation of the geological time scale over the last 100 million years. (Invited)

    NASA Astrophysics Data System (ADS)

    Kuiper, K.; Condon, D.; Hilgen, F.; Laskar, J.; Mezger, K.; Pälike, H.; Quidelleur, X.; Schaltegger, U.; Sprovieri, M.; Storey, M.; Wijbrans, J. R.

    2009-12-01

    The principal scientific objective of the Marie Curie Initial Training Network GTSnext is to establish the next-generation standard Geological Time Scale with unprecedented accuracy, precision and resolution through integration and intercalibration of state-of-the-art numerical dating techniques. Such time scales underlie all fields in the Earth Sciences, and their application will significantly contribute to a much enhanced understanding of Earth System evolution. During the last decade, deep marine successions were successfully employed to establish an astronomical tuning for the entire Neogene, as incorporated in the standard Geological Time Scale (ATNTS2004). In GTSnext we aim to fine-tune this Neogene time scale before it can reliably be used to accurately determine phase relations between astronomical forcing and climate response in the Neogene and possibly also the Oligocene. Radio-isotopic dating of late Neogene ash layers offers excellent opportunities for gaining insight into isotope systematics via their independent dating by astronomical tuning. An example of this synergy is the development of astronomically calibrated standards for 40Ar/39Ar geochronology. The cross-calibration between the different methods might also yield information on the fundamental problem of potential residence times in U/Pb dating. Extension of the astronomical time scale into the Paleogene seems limited to ~40 Ma due to the accuracy of the current astronomical solution. However, the 405 kyr eccentricity component is very stable, permitting its use in time scale calibrations back to 250 Ma using only this frequency. This cycle is strong and well developed in Oligocene and even Eocene records. Phase relations between cyclic paleo-climate records and the 405 kyr eccentricity cycle are typically straightforward and unambiguous. Therefore, a first-order tuning to ~405 kyr eccentricity can only be revised by shifting the tuning by (multiples of) ~405 kyr. 
Isotopic age constraints from both U/Pb and 40Ar/39Ar dating will be used to anchor floating astronomical tunings, but absolute uncertainties in isotopic ages should be less than ± 200 kyr. The Cretaceous is famous for its remarkable cyclic successions of marine pelagic sediments, which bear the unmistakable imprint of astronomical climate forcing. As a consequence, floating astrochronologies based on the number of cycles have been developed for significant portions of the Cretaceous, covering a number of geological stages. Unfortunately, such floating time scales provide us only with the duration of stages, not with their age. However, due to significant improvements in numerical astronomical solutions for the Solar System and in the accuracy of radio-isotopic dating, we will try to establish a tuned time scale for the Late Cretaceous. Classical cyclic sections in Europe (e.g. Sopelana, Spain) will be used for the tuning, but these lack ash beds. Therefore, the radio-isotopic age constraints necessary for the tuning will come from ash beds in the Western Interior Basin in North America. Here we will present the first results of the GTSnext project.
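Since a first-order 405-kyr tuning can only move in whole-cycle steps, an isotopic anchor with an uncertainty below half a cycle (±200 kyr, as stated above) selects the correct cycle uniquely. A minimal sketch of that selection logic; the function and all ages are illustrative, not GTSnext data:

```python
# Sketch: selecting the 405-kyr tuning option consistent with a radio-isotopic anchor.
# All numbers are illustrative, not project results.
ECC_CYCLE = 405.0  # kyr, long eccentricity period

def anchor_floating_tuning(floating_age, isotopic_age, isotopic_unc):
    """Shift a floating astronomical age by whole 405-kyr cycles until it matches
    an isotopic age within its uncertainty; return the shifted age, or None if
    the match is missing or ambiguous."""
    candidates = [floating_age + k * ECC_CYCLE for k in range(-10, 11)]
    matches = [a for a in candidates if abs(a - isotopic_age) <= isotopic_unc]
    return matches[0] if len(matches) == 1 else None

# Candidates are spaced 405 kyr apart, so a +/-200 kyr window (width 400 kyr)
# can contain at most one of them -- the choice is unique:
print(anchor_floating_tuning(66000.0, 66420.0, 200.0))  # 66405.0
```

With a looser anchor (say ±500 kyr) several shifted ages would fit and the tuning could not be pinned down, which is why the abstract insists on sub-±200 kyr isotopic uncertainties.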

  16. A carbon-supported copper complex of 3,5-diamino-1,2,4-triazole as a cathode catalyst for alkaline fuel cell applications.

    PubMed

    Brushett, Fikile R; Thorum, Matthew S; Lioutas, Nicholas S; Naughton, Matthew S; Tornow, Claire; Jhong, Huei-Ru Molly; Gewirth, Andrew A; Kenis, Paul J A

    2010-09-08

    The performance of a novel carbon-supported copper complex of 3,5-diamino-1,2,4-triazole (Cu-tri/C) is investigated as a cathode material using an alkaline microfluidic H(2)/O(2) fuel cell. The absolute Cu-tri/C cathode performance is comparable to that of a Pt/C cathode. Furthermore, at a commercially relevant potential, the measured mass activity of an unoptimized Cu-tri/C-based cathode was significantly greater than that of similar Pt/C- and Ag/C-based cathodes. Accelerated cathode durability studies suggested multiple degradation regimes at various time scales. Further enhancements in performance and durability may be realized by optimizing catalyst and electrode preparation procedures.

  17. Martian Chronology: Goals for Investigations from a Recent Multidisciplinary Workshop

    NASA Technical Reports Server (NTRS)

    Nyquist, L.; Doran, P. T.; Cerling, T. E.; Clifford, S. M.; Forman, S. L.; Papanastassiou, D. A.; Stewart, B. W.; Sturchio, N. C.; Swindle, T. D.

    2000-01-01

    The absolute chronology of Martian rocks and events is based mainly on crater statistics and remains highly uncertain. Martian chronology will be critical to building a time scale comparable to Earth's to address questions about the early evolution of the planets and their ecosystems. In order to address issues and strategies specific to Martian chronology, a workshop was held, 4-7 June 2000, with invited participants from the planetary, geochronology, geochemistry, and astrobiology communities. The workshop focused on identifying: a) key scientific questions of Martian chronology; b) chronological techniques applicable to Mars; c) unique processes on Mars that could be exploited to obtain rates, fluxes, ages; and d) sampling issues for these techniques. This is an overview of the workshop findings and recommendations.

  18. N2 Vibrational Temperatures and OH Number Density Measurements in NS Pulse Discharge Hydrogen-Air Plasmas

    NASA Astrophysics Data System (ADS)

    Hung, Yichen; Winters, Caroline; Jans, Elijah R.; Frederickson, Kraig; Adamovich, Igor V.

    2017-06-01

    This work presents time-resolved measurements of nitrogen vibrational temperature, translational-rotational temperature, and absolute OH number density in lean hydrogen-air mixtures excited in a diffuse filament nanosecond pulse discharge, at a pressure of 100 Torr and high specific energy loading. The main objective of these measurements is to study a possible effect of nitrogen vibrational excitation on low-temperature kinetics of HO2 and OH radicals. N2 vibrational temperature and gas temperature in the discharge and the afterglow are measured by ns broadband Coherent Anti-Stokes Raman Scattering (CARS). Hydroxyl radical number density is measured by Laser Induced Fluorescence (LIF) calibrated by Rayleigh scattering. The results show that the discharge generates strong vibrational nonequilibrium in air and H2-air mixtures for delay times after the discharge pulse of up to 1 ms, with peak vibrational temperature of Tv ≈ 2000 K at T ≈ 500 K. Nitrogen vibrational temperature peaks ≈ 200 μs after the discharge pulse, before decreasing due to vibrational-translational relaxation by O atoms (on the time scale of a few hundred μs) and diffusion (on the ms time scale). OH number density increases gradually after the discharge pulse, peaking at t ≈ 100-300 μs and decaying on a longer time scale, until t ≈ 1 ms. Both OH rise time and decay time decrease as the H2 fraction in the mixture is increased from 1% to 5%. OH number density in a 1% H2-air mixture peaks at approximately the same time as vibrational temperature in air, suggesting that OH kinetics may be affected by N2 vibrational excitation. However, preliminary kinetic modeling calculations demonstrate that the OH number density overshoot is controlled by known reactions of H and O radicals generated in the plasma, rather than by dissociation of HO2 radicals in collisions with vibrationally excited N2 molecules, as has been suggested earlier.
Additional measurements at higher specific energy loadings and kinetic modeling calculations are underway.

  19. Communicating cardiovascular disease risk: an interview study of General Practitioners' use of absolute risk within tailored communication strategies.

    PubMed

    Bonner, Carissa; Jansen, Jesse; McKinn, Shannon; Irwig, Les; Doust, Jenny; Glasziou, Paul; McCaffery, Kirsten

    2014-05-29

    Cardiovascular disease (CVD) prevention guidelines encourage assessment of absolute CVD risk - the probability of a CVD event within a fixed time period, based on the most predictive risk factors. However, few General Practitioners (GPs) use absolute CVD risk consistently, and communication difficulties have been identified as a barrier to changing practice. This study aimed to explore GPs' descriptions of their CVD risk communication strategies, including the role of absolute risk. Semi-structured interviews were conducted with a purposive sample of 25 GPs in New South Wales, Australia. Transcribed audio-recordings were thematically coded, using the Framework Analysis method to ensure rigour. GPs used absolute CVD risk within three different communication strategies: 'positive', 'scare tactic', and 'indirect'. A 'positive' strategy, which aimed to reassure and motivate, was used for patients with low risk, determination to change lifestyle, and some concern about CVD risk. Absolute risk was used to show how they could reduce risk. A 'scare tactic' strategy was used for patients with high risk, lack of motivation, and a dismissive attitude. Absolute risk was used to 'scare' them into taking action. An 'indirect' strategy, where CVD risk was not the main focus, was used for patients with low risk but some lifestyle risk factors, high anxiety, high resistance to change, or difficulty understanding probabilities. Non-quantitative absolute risk formats were found to be helpful in these situations. This study demonstrated how GPs use three different communication strategies to address the issue of CVD risk, depending on their perception of patient risk, motivation and anxiety. Absolute risk played a different role within each strategy. Providing GPs with alternative ways of explaining absolute risk, in order to achieve different communication aims, may improve their use of absolute CVD risk assessment in practice.

  20. High-accuracy absolute rotation rate measurements with a large ring laser gyro: establishing the scale factor.

    PubMed

    Hurst, Robert B; Mayerbacher, Marinus; Gebauer, Andre; Schreiber, K Ulrich; Wells, Jon-Paul R

    2017-02-01

    Large ring lasers have exceeded the performance of navigational gyroscopes by several orders of magnitude and have become useful tools for geodesy. In order to apply them to tests in fundamental physics, remaining systematic errors have to be significantly reduced. We derive a modified expression for the Sagnac frequency of a square ring laser gyro under Earth rotation. The modifications include corrections for dispersion (of both the gain medium and the mirrors), for the Goos-Hänchen effect in the mirrors, and for the refractive index of the gas filling the cavity. The corrections were measured and calculated for the 16 m² Grossring laser located at the Geodetic Observatory Wettzell. The optical frequency and the free spectral range of this laser were measured, allowing unique determination of the longitudinal mode number and measurement of the dispersion. Ultimately, we find that the absolute scale factor of the gyroscope can be estimated to an accuracy of approximately 1 part in 108.
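For orientation, the corrections described above modify the ideal planar-ring Sagnac relation δf = 4A·Ω·sin(φ)/(λP). A sketch evaluating the uncorrected expression with assumed parameters (HeNe wavelength, a 4 m × 4 m square geometry for the 16 m² ring, approximate Wettzell latitude):

```python
import math

# Ideal (uncorrected) Sagnac frequency of a planar ring laser sensing Earth rotation:
#   delta_f = (4A / (lambda * P)) * Omega_E * sin(latitude)
# The values below are assumptions for illustration (HeNe line, approximate
# Wettzell latitude, square geometry), not the paper's calibrated numbers.
OMEGA_E = 7.292115e-5        # rad/s, Earth rotation rate
LAMBDA = 632.8e-9            # m, HeNe laser wavelength
LAT = math.radians(49.14)    # approximate latitude of Wettzell

A = 16.0                     # m^2, enclosed area (4 m x 4 m square)
P = 16.0                     # m, perimeter of the square ring

scale_factor = 4 * A / (LAMBDA * P)            # geometric "gyroscope gain", ~6e6
delta_f = scale_factor * OMEGA_E * math.sin(LAT)
print(round(delta_f, 1))                       # a few hundred Hz for this geometry
```

The paper's point is that dispersion, Goos-Hänchen, and refractive-index corrections perturb `scale_factor` at the 1e-8 level, which must be controlled before δf can serve fundamental-physics tests.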

  1. Vertical land motion controls regional sea level rise patterns on the United States east coast since 1900

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Hay, C.; Mitrovica, J. X.; Little, C. M.; Ponte, R. M.; Tingley, M.

    2017-12-01

    Understanding observed spatial variations in centennial relative sea level trends on the United States east coast has important scientific and societal applications. Past studies based on models and proxies variously suggest roles for crustal displacement, ocean dynamics, and melting of the Greenland ice sheet. Here we perform joint Bayesian inference on regional relative sea level, vertical land motion, and absolute sea level fields based on tide gauge records and GPS data. Posterior solutions show that regional vertical land motion explains most (80% median estimate) of the spatial variance in the large-scale relative sea level trend field on the east coast over 1900-2016. The posterior estimate for coastal absolute sea level rise is remarkably spatially uniform compared to previous studies, with a spatial average of 1.4-2.3 mm/yr (95% credible interval). Results corroborate glacial isostatic adjustment models and reveal that meaningful long-period, large-scale vertical velocity signals can be extracted from short GPS records.

  2. Methodological Challenges in Research on Sexual Risk Behavior: I. Item Content, Scaling, and Data Analytical Options

    PubMed Central

    Schroder, Kerstin E. E.; Carey, Michael P.; Vanable, Peter A.

    2008-01-01

    Investigation of sexual behavior involves many challenges, including how to assess sexual behavior and how to analyze the resulting data. Sexual behavior can be assessed using absolute frequency measures (also known as “counts”) or with relative frequency measures (e.g., rating scales ranging from “never” to “always”). We discuss these two assessment approaches in the context of research on HIV risk behavior. We conclude that these two approaches yield non-redundant information and, more importantly, that only data yielding information about the absolute frequency of risk behavior have the potential to serve as valid indicators of HIV contraction risk. However, analyses of count data may be challenging due to non-normal distributions with many outliers. Therefore, we identify new and powerful data analytical solutions that have been developed recently to analyze count data, and discuss limitations of a commonly applied method (viz., ANCOVA using baseline scores as covariates). PMID:14534027
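The non-redundancy of the two assessment approaches can be seen in a toy example in which the relative and absolute measures rank the same respondents in opposite order (hypothetical data):

```python
# Toy illustration (hypothetical data): a relative-frequency measure ("proportion
# of acts unprotected") and an absolute-frequency measure ("count of unprotected
# acts") can order the same respondents oppositely, so neither replaces the other.
respondents = {
    # name: (total acts, unprotected acts)
    "A": (2, 2),     # 100% unprotected, but only 2 risk exposures
    "B": (100, 10),  # 10% unprotected, but 10 risk exposures
}

relative = {k: unprot / total for k, (total, unprot) in respondents.items()}
absolute = {k: unprot for k, (total, unprot) in respondents.items()}

print(max(relative, key=relative.get))  # highest on the rating-scale measure
print(max(absolute, key=absolute.get))  # highest on the count measure
```

Only the count measure tracks the number of potential HIV exposures, which is the abstract's argument for absolute frequencies as validity indicators of contraction risk.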

  3. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
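A full-scale error normalizes the absolute measurement error by the instrument's full-scale reading rather than by the reading itself. A minimal sketch, assuming the 60 m³/d top of the tested range as the full-scale value (an assumption, not stated explicitly in the abstract):

```python
# Full-scale error: absolute error expressed as a percentage of the instrument's
# full-scale reading (assumed here to be 60 m^3/d, the top of the tested
# 5-60 m^3/d range), not of the reading itself -- so small flowrates are not
# penalized with enormous relative errors.
FULL_SCALE = 60.0  # m^3/d (assumption for illustration)

def full_scale_error_pct(measured, reference):
    return 100.0 * abs(measured - reference) / FULL_SCALE

# e.g. a reading of 12.0 m^3/d against a 10.0 m^3/d reference:
print(full_scale_error_pct(12.0, 10.0))  # 2 m^3/d out of 60 m^3/d full scale
```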

  4. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5–60 m3/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2–60 m3/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  5. Meniscal Extrusion Progresses Shortly after the Medial Meniscus Posterior Root Tear.

    PubMed

    Furumatsu, Takayuki; Kodama, Yuya; Kamatsuki, Yusuke; Hino, Tomohito; Okazaki, Yoshiki; Ozaki, Toshifumi

    2017-12-01

    Medial meniscus posterior root tears (MMPRT) induce medial meniscus extrusion (MME). However, the time-dependent extent of MME in patients suffering from an MMPRT remains unclear. This study evaluated the extent of MME after painful popping events that occurred at the onset of the MMPRT. Thirty-five patients who had an episode of posteromedial painful popping were investigated. All the patients were diagnosed as having an MMPRT by magnetic resonance imaging (MRI) within 12 months after painful popping. Medial meniscus body width (MMBW), absolute MME, and relative MME (100×absolute MME/MMBW) were assessed among three groups divided according to the time after painful popping events: early period (<1 month), subacute period (1-3 months), and chronic period (4-12 months). In the early period, absolute and relative MMEs were 3.0 mm and 32.7%, respectively. Absolute MME increased up to 4.2 mm and 5.8 mm during the subacute and chronic periods, respectively. Relative MME also progressed to 49.2% and 60.3% in the subacute and chronic periods, respectively. This study demonstrated that absolute and relative MMEs increased progressively within the short period after the onset of symptomatic MMPRT. Our results suggest that early diagnosis of an MMPRT may be important to prevent progression of MME following the MMPRT.
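The relative extrusion defined above (100 × absolute MME / MMBW) can be sketched directly; the body width in the example is a hypothetical value chosen to be consistent with the reported early-period figures:

```python
def relative_mme(absolute_mme_mm, mmbw_mm):
    """Relative medial meniscus extrusion (%) = 100 * absolute MME / meniscus body width,
    as defined in the study."""
    return 100.0 * absolute_mme_mm / mmbw_mm

# Early-period example: the reported absolute MME of 3.0 mm and relative MME of
# 32.7% imply a body width of roughly 9.2 mm (hypothetical back-calculation):
print(round(relative_mme(3.0, 9.17), 1))
```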

  6. Origins and Scaling of Hot-Electron Preheat in Ignition-Scale Direct-Drive Inertial Confinement Fusion Experiments.

    PubMed

    Rosenberg, M J; Solodov, A A; Myatt, J F; Seka, W; Michel, P; Hohenberger, M; Short, R W; Epstein, R; Regan, S P; Campbell, E M; Chapman, T; Goyon, C; Ralph, J E; Barrios, M A; Moody, J D; Bates, J W

    2018-02-02

    Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (∼500 to 700 μm), electron temperature (∼3 to 5 keV), and laser intensity (6 to 16×10¹⁴ W/cm²) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ∼0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ∼4×10¹⁴ to ∼6×10¹⁴ W/cm². These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.

  7. Gravity data from the San Pedro River Basin, Cochise County, Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Winester, Daniel

    2011-01-01

    The U.S. Geological Survey, Arizona Water Science Center in cooperation with the National Oceanic and Atmospheric Administration, National Geodetic Survey has collected relative and absolute gravity data at 321 stations in the San Pedro River Basin of southeastern Arizona since 2000. Data are of three types: observed gravity values and associated free-air, simple Bouguer, and complete Bouguer anomaly values, useful for subsurface-density modeling; high-precision relative-gravity surveys repeated over time, useful for aquifer-storage-change monitoring; and absolute-gravity values, useful as base stations for relative-gravity surveys and for monitoring gravity change over time. The data are compiled, without interpretation, in three spreadsheet files. Gravity values, GPS locations, and driving directions for absolute-gravity base stations are presented as National Geodetic Survey site descriptions.

  8. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    PubMed

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
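The two-reference conversion from peak volume to absolute concentration described above is a two-point linear calibration; a sketch with made-up volumes and concentrations (the reference concentrations below are illustrative, not the paper's values):

```python
# Two-point calibration sketch: map HSQC(0) peak volumes to absolute
# concentrations using two internal references of known concentration
# (low end: DSS, high end: MES). All numbers are made up for illustration.
def two_point_calibration(v_low, c_low, v_high, c_high):
    """Return a function converting peak volume to concentration via the
    straight line through the two reference points."""
    slope = (c_high - c_low) / (v_high - v_low)
    return lambda v: c_low + slope * (v - v_low)

vol_to_conc = two_point_calibration(v_low=1.0, c_low=0.5,     # DSS: 0.5 mM
                                    v_high=20.0, c_high=10.0)  # MES: 10 mM
print(vol_to_conc(8.6))  # a metabolite peak volume -> concentration in mM
```

Anchoring the line at both ends of the concentration range is exactly why two references beat one: a single reference forces the line through the origin and lets any volume offset bias every derived concentration.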

  9. Estimating aquifer properties using time-lapse, high precision gravity surveys and groundwater modeling

    NASA Astrophysics Data System (ADS)

    Keating, E.; Cogbill, A. H.; Ferguson, J. F.

    2003-12-01

    In the past, gravity methods have had limited application for monitoring aquifers, primarily due to the poor drift characteristics of relative gravimeters, which made long-term gravity studies of aquifers prohibitively expensive. Recent developments in portable, very accurate, absolute gravity instruments having essentially zero long-term drift have reawakened interest in using gravity methods for hydrologic monitoring. Such instruments have accuracies of 7 microGals or better and can acquire measurements at the rate of better than one station per hour. Theoretically, temporal changes in gravity can be used to infer storage characteristics and fluxes into and out of the aquifer. The sensitivity of the method to scaling effects, temporal lags between recharge/discharge and changes in storage, and to uncertainties in aquifer structure are poorly understood. In preparation for interpreting a basin-scale, time-lapse gravity data set, we have established a network of gravity stations within the Espanola Basin in northern New Mexico, a semi-arid region which is experiencing rapid population growth and groundwater resource use. We are using an existing basin-scale groundwater flow model to predict changes in mass, given our current level of understanding of inflows, outflows, and aquifer properties. Preliminary model results will be used to examine scaling issues related to the spatial density of the gravity station network and depths to the regional water table. By modeling the gravitational response to water movement in the aquifer, we study the sensitivity of gravity measurements to aquifer storage properties, given other known uncertainties in basin-scale fluxes. Results will be used to evaluate the adequacy of the existing network and to modify its design, if necessary.

  10. Volcanic synchronisation of the EPICA-DC and TALDICE ice cores for the last 42 kyr BP

    NASA Astrophysics Data System (ADS)

    Severi, M.; Udisti, R.; Becagli, S.; Stenni, B.; Traversi, R.

    2012-03-01

    The age scale synchronisation between the Talos Dome and the EPICA Dome C ice cores was carried out through the identification of several common volcanic signatures. This paper describes the rigorous method, based on the signature of volcanic sulphate, which was employed for the last 42 kyr of the record. Using this tight stratigraphic link, we transferred the EDC age scale to the Talos Dome ice core, producing a new age scale for the last 12 kyr. We estimated the discrepancies between the modelled TALDICE-1 age scale and the new scale during the studied period by evaluating the ratio R of the apparent durations of temporal intervals between pairs of isochrones. Except for a very few cases, R ranges between 0.8 and 1.2, corresponding to an uncertainty of up to 20% in the estimate of the time duration in at least one of the two ice cores. At this stage our approach does not allow us to unequivocally identify which of the models is affected by errors, but, taking into account only the historically known volcanic events, we found that discrepancies up to 200 yr appear in the last two millennia in the TALDICE-1 model, while our new age scale shows a much better agreement with the volcanic absolute horizons. Thus, we propose for the Talos Dome ice core a new age scale (covering the whole Holocene) obtained by a direct transfer, via our stratigraphic link, from the EDC modelled age scale by Lemieux-Dudon et al. (2010).
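The consistency check above compares apparent durations of the same isochrone interval on the two age scales; a sketch with hypothetical isochrone ages:

```python
# Ratio R of apparent durations between a pair of common volcanic isochrones on
# two age scales; R outside 1 +/- 0.2 would flag a >20% duration mismatch in at
# least one of the scales. Isochrone ages below are hypothetical.
def duration_ratio(scale_a, scale_b, i, j):
    """R for the interval between isochrones i and j (ages in years BP)."""
    return (scale_a[j] - scale_a[i]) / (scale_b[j] - scale_b[i])

taldice = {"event1": 10000, "event2": 11200}  # apparent interval: 1200 yr
edc     = {"event1": 10050, "event2": 11050}  # apparent interval: 1000 yr
R = duration_ratio(taldice, edc, "event1", "event2")
print(round(R, 2))  # 1200/1000 -> 1.2, a 20% apparent-duration discrepancy
```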

  11. Measuring Socioeconomic Inequalities With Predicted Absolute Incomes Rather Than Wealth Quintiles: A Comparative Assessment Using Child Stunting Data From National Surveys.

    PubMed

    Fink, Günther; Victora, Cesar G; Harttgen, Kenneth; Vollmer, Sebastian; Vidaletti, Luís Paulo; Barros, Aluisio J D

    2017-04-01

    To compare the predictive power of synthetic absolute income measures with that of asset-based wealth quintiles in low- and middle-income countries (LMICs) using child stunting as an outcome. We pooled data from 239 nationally representative household surveys from LMICs and computed absolute incomes in US dollars based on households' asset rank as well as data on national consumption and inequality levels. We used multivariable regression models to compare the predictive power of the created income measure with the predictive power of existing asset indicator measures. In cross-country analysis, log absolute income predicted 54.5% of stunting variation observed, compared with 20% of variation explained by wealth quintiles. For within-survey analysis, we also found absolute income gaps to be predictive of the gaps between stunting in the wealthiest and poorest households (P < .001). Our results suggest that absolute income levels can greatly improve the prediction of stunting levels across and within countries over time, compared with models that rely solely on relative wealth quintiles.
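One common parametric route to such synthetic incomes (an assumption here, not necessarily the authors' exact procedure) is to read each household's income off a lognormal distribution, fitted to national mean income and the Gini coefficient, at the household's asset-rank percentile:

```python
from statistics import NormalDist
import math

# Parametric sketch (an assumed method, for illustration only): predict a
# household's absolute income from its asset-rank percentile, assuming national
# income is lognormal with a given mean and Gini coefficient.
_PHI = NormalDist()  # standard normal

def income_at_percentile(p, mean_income, gini):
    """Income at percentile p of a lognormal fitted to mean income and Gini."""
    # For a lognormal, Gini = 2*Phi(sigma/sqrt(2)) - 1, hence:
    sigma = math.sqrt(2) * _PHI.inv_cdf((gini + 1) / 2)
    mu = math.log(mean_income) - sigma ** 2 / 2  # so that E[income] = mean_income
    return math.exp(mu + sigma * _PHI.inv_cdf(p))

# Median household in a country with mean income $3000/yr and Gini 0.40
# (both numbers hypothetical):
print(round(income_at_percentile(0.5, 3000.0, 0.40)))
```

Because the lognormal median lies below the mean, the predicted median income here falls well under $3000, which is the kind of within-country gradient the absolute-income measure captures and wealth quintiles do not.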

  12. Wavefunctions, quantum diffusion, and scaling exponents in golden-mean quasiperiodic tilings.

    PubMed

    Thiem, Stefanie; Schreiber, Michael

    2013-02-20

    We study the properties of wavefunctions and the wavepacket dynamics in quasiperiodic tight-binding models in one, two, and three dimensions. The atoms in the one-dimensional quasiperiodic chains are coupled by weak and strong bonds aligned according to the Fibonacci sequence. The associated d-dimensional quasiperiodic tilings are constructed from the direct product of d such chains, which yields either the hypercubic tiling or the labyrinth tiling. This approach allows us to consider fairly large systems numerically. We show that the wavefunctions of the system are multifractal and that their properties can be related to the structure of the system in the regime of strong quasiperiodic modulation by a renormalization group (RG) approach. We also study the dynamics of wavepackets to obtain information about the electronic transport properties. In particular, we investigate the scaling behaviour of the return probability of the wavepacket with time. Applying again the RG approach, we show that in the regime of strong quasiperiodic modulation the return probability is governed by the underlying quasiperiodic structure. Further, we also discuss lower bounds for the scaling exponent of the width of the wavepacket and propose a modified lower bound for the absolutely continuous regime.

  13. Estimates of solar variability using the solar backscatter ultraviolet (SBUV) 2 Mg II index from the NOAA 9 satellite

    NASA Technical Reports Server (NTRS)

    Cebula, Richard P.; Deland, Matthew T.; Schlesinger, Barry M.

    1992-01-01

    The Mg II core to wing index was first developed for the Nimbus 7 solar backscatter ultraviolet (SBUV) instrument as an indicator of solar variability on both solar 27-day rotational and solar cycle time scales. This work extends the Mg II index to the NOAA 9 SBUV 2 instrument and shows that the variations in absolute value between Mg II index data sets caused by interinstrument differences do not affect the ability to track temporal variations. The NOAA 9 Mg II index accurately represents solar rotational modulation but contains more day-to-day noise than the Nimbus 7 Mg II index. Solar variability at other UV wavelengths is estimated by deriving scale factors between the Mg II index rotational variations and at those selected wavelengths. Based on the 27-day average of the NOAA 9 Mg II index and the NOAA 9 scale factors, the solar irradiance change from solar minimum in September 1986 to the beginning of the maximum of solar cycle 22 in 1989 is estimated to be 8.6 percent at 205 nm, 3.5 percent at 250 nm, and less than 1 percent beyond 300 nm.

  14. Coupled Long-Term Simulation of Reach-Scale Water and Heat Fluxes Across the River-Groundwater Interface for Retrieving Hyporheic Residence Times and Temperature Dynamics

    NASA Astrophysics Data System (ADS)

    Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian

    2017-11-01

    Flow patterns in conjunction with seasonal and diurnal temperature variations control ecological and biogeochemical conditions in hyporheic sediments. In particular, hyporheic temperatures have a great impact on many temperature-sensitive microbial processes. In this study, we used 3-D coupled water flow and heat transport simulations applying the HydroGeoSphere code in combination with high-resolution observations of hydraulic heads and temperatures to quantify reach-scale water and heat flux across the river-groundwater interface and hyporheic temperature dynamics of a lowland gravel bed river. The model was calibrated in order to constrain estimates of the most sensitive model parameters. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7°C and an average Nash-Sutcliffe efficiency of 0.87. Our results indicate that nonsubmerged streambed structures such as gravel bars cause substantial thermal heterogeneity within the saturated sediment at the reach scale. Individual hyporheic flow path temperatures strongly depend on the flow path residence time, flow path depth, river temperature, and groundwater temperature. Variations in individual hyporheic flow path temperatures were up to 7.9°C, significantly higher than the daily average (2.8°C), but still lower than the average seasonal hyporheic temperature difference (19.2°C). The relationship between flow path temperatures and residence times follows a power law with an exponent of about 0.37. Based on this empirical relation, we further estimated the influence of hyporheic flow path residence time and temperature on oxygen consumption, which the simulations found to increase locally by up to 29%.
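An exponent like the reported 0.37 can be recovered from paired residence-time/temperature data by ordinary least squares in log-log space; a sketch with synthetic data generated at exactly that exponent (times and prefactor are illustrative):

```python
import math

# Log-log least-squares sketch: recover the exponent b of a power law
# T = c * t**b relating hyporheic flow path temperature to residence time.
# Synthetic data are generated with b = 0.37, the exponent reported for the reach.
def fit_power_law_exponent(t, T):
    """Slope of log T vs log t via ordinary least squares."""
    xs = [math.log(v) for v in t]
    ys = [math.log(v) for v in T]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

t = [1.0, 5.0, 24.0, 120.0, 500.0]   # residence times (h, illustrative)
T = [2.0 * v ** 0.37 for v in t]     # exact power law with prefactor c = 2.0
print(round(fit_power_law_exponent(t, T), 2))  # recovers 0.37
```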

  15. Assessing Multi-scale Reptile and Amphibian Biodiversity: Mojave Ecoregion Case Study

    EPA Science Inventory

    The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions to maintain the sustainable nature of our environment now and into the future. Because of the variability among living orga...

  16. 40 CFR 63.705 - Performance test methods and procedures to determine initial compliance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... per gram-mole. Pi = Barometric pressure at the time of sample analysis, millimeters mercury absolute. 760 = Reference or standard pressure, millimeters mercury absolute. 293 = Reference or standard...: ER15DE94.005 (i) The value of RSi is zero unless the owner or operator submits the following information to...

  1. Developmental Trends in Distractibility: Is Absolute or Proportional Decrement the Appropriate Measure of Interference?

    ERIC Educational Resources Information Center

    Well, Arnold D.; And Others

    1980-01-01

    Robust interference effects were found which declined with age. Manipulating discriminability of the relevant stimulus dimension resulted in large changes in sorting time, but interference effects did not vary with baseline difficulty. These results were interpreted as strongly supporting both an absolute decrement model and a developmental trend…

  2. Nitric oxide kinetics in the afterglow of a diffuse plasma filament

    NASA Astrophysics Data System (ADS)

    Burnette, D.; Montello, A.; Adamovich, I. V.; Lempert, W. R.

    2014-08-01

A suite of laser diagnostics is used to study kinetics of vibrational energy transfer and plasma chemical reactions in a nanosecond pulse, diffuse filament electric discharge and afterglow in N2 and dry air at 100 Torr. Laser-induced fluorescence of NO and two-photon absorption laser-induced fluorescence of O and N atoms are used to measure absolute, time-resolved number densities of these species after the discharge pulse, and picosecond coherent anti-Stokes Raman spectroscopy is used to measure time-resolved rotational temperature and ground electronic state N2(v = 0-4) vibrational level populations. The plasma filament diameter, determined from plasma emission and NO planar laser-induced fluorescence images, remains nearly constant after the discharge pulse, over a few hundred microseconds, and does not exhibit expansion on a microsecond time scale. The peak temperature in the discharge and the afterglow is low, T ≈ 370 K, in spite of significant vibrational nonequilibrium, with a peak N2 vibrational temperature of Tv ≈ 2000 K. The significant vibrational temperature rise in the afterglow is likely caused by downward N2-N2 vibration-vibration (V-V) energy transfer. Simple kinetic modeling of time-resolved N, O, and NO number densities in the afterglow, on time scales long compared to the relaxation and quenching times of excited species generated in the plasma, is in good agreement with the data. In nitrogen, the N atom density after the discharge pulse is controlled by three-body recombination and radial diffusion. In air, N, NO and O concentrations are dominated by the reverse Zel'dovich reaction, N + NO → N2 + O, and the ozone formation reaction, O + O2 + M → O3 + M, respectively. The effect of vibrationally excited nitrogen molecules and excited N atoms on NO formation kinetics is estimated to be negligible. The results suggest that NO formation in the nanosecond pulse discharge is dominated by reactions of excited electronic states of nitrogen, occurring on a microsecond time scale.

  3. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, the normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at 95 % confidence limits and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which indicates a good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
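The validation metrics named in this abstract (root mean square error, mean absolute error, mean absolute percentage error) can be computed directly; a minimal sketch with illustrative, hypothetical readings, not data from the study:

```python
# Sketch of the error metrics used to validate the time-series model.
# The observed/predicted values below are illustrative, not study data.
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mape(obs, pred):
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

observed  = [7.2, 7.4, 7.1, 7.3]   # e.g. monthly pH readings (hypothetical)
predicted = [7.1, 7.5, 7.2, 7.3]

print(mae(observed, predicted))    # mean absolute error of the toy series
```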

  4. Evaluation of changes in periodontal bacteria in healthy dogs over 6 months using quantitative real-time PCR.

    PubMed

    Maruyama, N; Mori, A; Shono, S; Oda, H; Sako, T

    2018-03-01

Porphyromonas gulae, Tannerella forsythia and Campylobacter rectus are considered dominant periodontal pathogens in dogs. Recently, quantitative real-time PCR (qRT-PCR) methods have been used for absolute quantitative determination of oral bacterial counts. The purpose of the present study was to establish a standardized qRT-PCR procedure to quantify bacterial counts of the three target periodontal bacteria (P. gulae, T. forsythia and C. rectus). Copy numbers of the three target periodontal bacteria were evaluated in 26 healthy dogs. Then, changes in bacterial counts of the three target periodontal bacteria were evaluated for 24 weeks in 7 healthy dogs after periodontal scaling. Analytical evaluation of each self-designed primer indicated acceptable analytical imprecision. All 26 healthy dogs were found to be positive for P. gulae, T. forsythia and C. rectus. Median total bacterial counts (copies/ng) of each target gene were 385.612 for P. gulae, 25.109 for T. forsythia and 5.771 for C. rectus. Significant differences were observed between the copy numbers of the three target periodontal bacteria. Periodontal scaling reduced the median copy numbers of the three target periodontal bacteria in 7 healthy dogs. However, after periodontal scaling, copy numbers of all three periodontal bacteria significantly increased over the 24-week follow-up (p<0.05, Kruskal-Wallis test). In conclusion, our results demonstrated that qRT-PCR can accurately measure periodontal bacteria in dogs. Furthermore, the present study has revealed that the qRT-PCR method can be considered a new objective evaluation system for canine periodontal disease. Copyright© by the Polish Academy of Sciences.
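Absolute quantification by qPCR rests on a standard curve, Cq = intercept + slope · log10(copies), where a slope near -3.32 corresponds to roughly 100% amplification efficiency. A minimal sketch; the slope and intercept values are illustrative assumptions, not the study's calibration:

```python
# Sketch: absolute quantification from a qPCR standard curve,
# Cq = intercept + slope * log10(copies). Slope of -3.32 corresponds to
# ~100% amplification efficiency. All numeric values are illustrative,
# not calibration values from the study.

def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Starting copy number implied by a quantification cycle Cq."""
    return 10 ** ((cq - intercept) / slope)

# Lower Cq (earlier amplification) implies more starting template:
assert copies_from_cq(20.0) > copies_from_cq(30.0)
```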

  5. Intrapair differences in personality and cognitive ability among young monozygotic twins distinguished by chorion type.

    PubMed

    Sokol, D K; Moore, C A; Rose, R J; Williams, C J; Reed, T; Christian, J C

    1995-09-01

We evaluated placentation effects on behavioral resemblance in 44 pairs of monozygotic (MZ) twin children. The twins were tested at ages 4-6; their zygosity and placental type had been determined at delivery. The sample included 23 monochorionic (MC) and 21 dichorionic (DC) MZ twin pairs: DC-MZ twins result from separation of blastomeres within 72 h of ovulation; MC-MZ twins arise from later duplication of the inner cell mass. Twins were individually administered the McCarthy Scales of Cognitive Ability, while their mothers separately rated each cotwin on an individualized 280-item form of the Personality Inventory for Children (PIC). Absolute differences between MC-MZ cotwins were smaller than those between DC-MZ cotwins for all 20 PIC scales, significantly so for 3 of 4 factor scales, 8 of 12 clinical scales, and 2 of 4 validity/screening scales from the PIC; in contrast, no consistent differences in intrapair resemblance of mono- and dichorionic MZ twins were found for the McCarthy Scales. The chorion differences found in the PIC data cannot be due to genetic differences, because all pairs are monozygotes; nor are they associated with differences in parity, gestational age, birth weight, maternal education, palmar dermatoglyphic asymmetry, or maternal knowledge of chorion type. We interpret our findings as suggestive evidence that variation in the timing of embryological division, with effects on MZ twins' placental vasculature, has significant consequences for some dimensions of their behavioral development.

  6. Application of psychometric theory to the measurement of voice quality using rating scales.

    PubMed

    Shrivastav, Rahul; Sapienza, Christine M; Nandur, Vuday

    2005-04-01

    Rating scales are commonly used to study voice quality. However, recent research has demonstrated that perceptual measures of voice quality obtained using rating scales suffer from poor interjudge agreement and reliability, especially in the mid-range of the scale. These findings, along with those obtained using multidimensional scaling (MDS), have been interpreted to show that listeners perceive voice quality in an idiosyncratic manner. Based on psychometric theory, the present research explored an alternative explanation for the poor interlistener agreement observed in previous research. This approach suggests that poor agreement between listeners may result, in part, from measurement errors related to a variety of factors rather than true differences in the perception of voice quality. In this study, 10 listeners rated breathiness for 27 vowel stimuli using a 5-point rating scale. Each stimulus was presented to the listeners 10 times in random order. Interlistener agreement and reliability were calculated from these ratings. Agreement and reliability were observed to improve when multiple ratings of each stimulus from each listener were averaged and when standardized scores were used instead of absolute ratings. The probability of exact agreement was found to be approximately .9 when using averaged ratings and standardized scores. In contrast, the probability of exact agreement was only .4 when a single rating from each listener was used to measure agreement. These findings support the hypothesis that poor agreement reported in past research partly arises from errors in measurement rather than individual differences in the perception of voice quality.
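The gain from standardized scores can be made concrete with a toy example: two hypothetical listeners who rank stimuli identically but use different regions of the rating scale never agree exactly on raw ratings, yet agree perfectly after z-score standardization (all data illustrative):

```python
# Sketch: why standardized scores can raise inter-listener agreement.
# Two hypothetical listeners rank the stimuli identically but use
# different regions of the rating scale; raw ratings never coincide,
# while their z-scores do.
from statistics import mean, pstdev

def zscores(ratings):
    m, s = mean(ratings), pstdev(ratings)
    return [(r - m) / s for r in ratings]

listener_a = [1, 2, 3, 4, 5]
listener_b = [2, 3, 4, 5, 6]   # same ordering, shifted scale use

n = len(listener_a)
raw_agreement = sum(a == b for a, b in zip(listener_a, listener_b)) / n
z_agreement = sum(abs(x - y) < 1e-9
                  for x, y in zip(zscores(listener_a), zscores(listener_b))) / n

print(raw_agreement, z_agreement)  # 0.0 1.0
```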

  7. Orbital structure in oscillating galactic potentials

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Kandrup, Henry E.

    2004-01-01

    Subjecting a galactic potential to (possibly damped) nearly periodic, time-dependent variations can lead to large numbers of chaotic orbits experiencing systematic changes in energy, and the resulting chaotic phase mixing could play an important role in explaining such phenomena as violent relaxation. This paper focuses on the simplest case of spherically symmetric potentials subjected to strictly periodic driving with the aim of understanding precisely why orbits become chaotic and under what circumstances they will exhibit systematic changes in energy. Four unperturbed potentials V0(r) were considered, each subjected to a time dependence of the form V(r, t) =V0(r)(1 +m0 sinωt). In each case, the orbits divide clearly into regular and chaotic, distinctions which appear absolute. In particular, transitions from regularity to chaos are seemingly impossible. Over finite time intervals, chaotic orbits subdivide into what can be termed `sticky' chaotic orbits, which exhibit no large-scale secular changes in energy and remain trapped in the phase-space region where they started; and `wildly' chaotic orbits, which do exhibit systematic drifts in energy as the orbits diffuse to different phase-space regions. This latter distinction is not absolute, transitions corresponding apparently to orbits penetrating a `leaky' phase-space barrier. The three different orbit types can be identified simply in terms of the frequencies for which their Fourier spectra have the most power. An examination of the statistical properties of orbit ensembles as a function of driving frequency ω allows us to identify the specific resonances that determine orbital structure. Attention focuses also on how, for fixed amplitude m0, such quantities as the mean energy shift, the relative measure of chaotic orbits and the mean value of the largest Lyapunov exponent vary with driving frequency ω and how, for fixed ω, the same quantities depend on m0.
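A minimal sketch of the driven potential studied here, V(r, t) = V0(r)(1 + m0 sin ωt); the Plummer-like V0 below is an illustrative stand-in, not one of the paper's four unperturbed potentials:

```python
# Sketch: the periodically driven potential V(r, t) = V0(r)*(1 + m0*sin(w*t)).
# V0 here is a hypothetical Plummer-like spherical potential, chosen only
# for illustration; the paper's four specific V0 choices are not reproduced.
import math

def v0(r):
    return -1.0 / math.sqrt(1.0 + r * r)   # illustrative spherical potential

def v(r, t, m0=0.1, omega=1.0):
    return v0(r) * (1.0 + m0 * math.sin(omega * t))

# The perturbation vanishes at t = 0 and is periodic with period 2*pi/omega:
assert v(1.0, 0.0) == v0(1.0)
assert abs(v(1.0, 3.0) - v(1.0, 3.0 + 2 * math.pi)) < 1e-12
```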

  8. A study of the reaction Li+HCl by the technique of time-resolved laser-induced fluorescence spectroscopy of Li (2 2PJ-2 2S1/2, λ=670.7 nm) between 700 and 1000 K

    NASA Astrophysics Data System (ADS)

    Plane, John M. C.; Saltzman, Eric S.

    1987-10-01

    A kinetic study is presented of the reaction between lithium atoms and hydrogen chloride over the temperature range 700-1000 K. Li atoms are produced in an excess of HCl and He bath gas by pulsed photolysis of LiCl vapor. The concentration of the metal atoms is then monitored in real time by the technique of laser-induced fluorescence of Li atoms at λ=670.7 nm using a pulsed nitrogen-pumped dye laser and box-car integration of the fluorescence signal. Absolute second-order rate constants for this reaction have been measured at T=700, 750, 800, and 900 K. At T=1000 K the reverse reaction is sufficiently fast that equilibrium is rapidly established on the time scale of the experiment. A fit of the data between 700 and 900 K to the Arrhenius form, with 2σ errors calculated from the absolute errors in the rate constants, yields k(T)=(3.8±1.1)×10-10 exp[-(883±218)/T] cm3 molecule-1 s-1. This result is interpreted through a modified form of collision theory which is constrained to take account of the conservation of total angular momentum during the reaction. Thereby we obtain an estimate for the reaction energy threshold, E0=8.2±1.4 kJ mol-1 (where the error arises from uncertainty in the exothermicity of the reaction), in very good agreement with a crossed molecular beam study of the title reaction, and substantially lower than estimates of E0 from both semiempirical and ab initio calculations of the potential energy surface.
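The fitted Arrhenius expression can be evaluated directly; a minimal sketch using the central values from the fit (the ±2σ uncertainties quoted in the abstract are not propagated here):

```python
# Sketch: evaluating the fitted Arrhenius expression from the abstract,
# k(T) = 3.8e-10 * exp(-883/T) cm^3 molecule^-1 s^-1 (central values only).
import math

def k(T):
    """Second-order rate constant for Li + HCl at temperature T (K)."""
    return 3.8e-10 * math.exp(-883.0 / T)

# The rate constant increases with temperature over the measured range:
assert k(700.0) < k(800.0) < k(900.0)
```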

  9. A randomized trial of telemedicine efficacy and safety for nonacute headaches.

    PubMed

    Müller, Kai I; Alstadhaug, Karl B; Bekkelund, Svein I

    2017-07-11

    To evaluate long-term treatment efficacy and safety of one-time telemedicine consultations for nonacute headaches. We randomized, allocated, and consulted nonacute headache patients via telemedicine (n = 200) or in a traditional manner (n = 202) in a noninferiority trial. Efficacy endpoints, assessed by questionnaires at 3 and 12 months, included change from baseline in Headache Impact Test-6 (HIT-6) (primary endpoint) and pain intensity (visual analogue scale [VAS]) (secondary endpoint). The primary safety endpoint, assessed via patient records, was presence of secondary headache within 12 months after consultation. We found no differences between telemedicine and traditional consultations in HIT-6 ( p = 0.84) or VAS ( p = 0.64) over 3 periods. The absolute difference in HIT-6 from baseline was 0.3 (95% confidence interval [CI] -1.26 to 1.82, p = 0.72) at 3 months and 0.2 (95% CI -1.98 to 1.58, p = 0.83) at 12 months. The absolute change in VAS was 0.4 (95% CI -0.93 to 0.22, p = 0.23) after 3 months and 0.3 (95% CI -0.94 to 0.29, p = 0.30) at 12 months. We found one secondary headache in each group at 12 months. The estimated number of consultations needed to miss one secondary headache with the use of telemedicine was 20,200. Telemedicine consultation for nonacute headache is as efficient and safe as a traditional consultation. NCT02270177. This study provides Class III evidence that a one-time telemedicine consultation for nonacute headache is noninferior to a one-time traditional consultation regarding long-term treatment outcome and safety. Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.

  10. Results of Absolute Cavity Pyrgeometer and Infrared Integrating Sphere Comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Ibrahim M; Sengupta, Manajit; Dooraghi, Michael R

Accurate and traceable atmospheric longwave irradiance measurements are required for understanding radiative impacts on the Earth's energy budget. The standard to which pyrgeometers are traceable is the interim World Infrared Standard Group (WISG), maintained at the Physikalisch-Meteorologisches Observatorium Davos (PMOD). The WISG consists of four pyrgeometers that were calibrated using Rolf Philipona's Absolute Sky-scanning Radiometer [1]. The Atmospheric Radiation Measurement (ARM) facility has recently adopted the WISG to maintain the traceability of the calibrations of all Eppley precision infrared radiometer (PIR) pyrgeometers. Subsequently, Julian Grobner [2] developed the infrared interferometer spectrometer and radiometer (IRIS), and Ibrahim Reda [3] developed the absolute cavity pyrgeometer (ACP). The ACP and IRIS were developed to establish a world reference for calibrating pyrgeometers with traceability to the International System of Units (SI). The two radiometers are unwindowed with negligible spectral dependence, and they are traceable to SI units through the temperature scale (ITS-90). The two instruments were compared directly to the WISG three times at PMOD, and twice, at the Southern Great Plains (SGP) facility, to WISG-traceable pyrgeometers. The ACP and IRIS agreed within +/- 1 W/m2 to +/- 3 W/m2 in all comparisons, whereas the WISG references exhibit a 2-5 W/m2 low bias compared to the ACP/IRIS average, depending on the water vapor column, as noted in Grobner et al. [4]. Consequently, a case for changing the current WISG has been made by Grobner and Reda. However, during the five comparisons the column water vapor exceeded 8 mm. Therefore, it is recommended that more ACP and IRIS comparisons be held under different environmental conditions and water vapor column contents to better establish the traceability of these instruments to SI with established uncertainty.

  11. The Dependence of Cloud Property Trend Detection on Absolute Calibration Accuracy of Passive Satellite Sensors

    NASA Astrophysics Data System (ADS)

    Shea, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Zelinka, M. D.

    2016-12-01

Detecting trends in climate variables on global, decadal scales requires highly accurate, stable measurements and retrieval algorithms. Trend uncertainty depends on its magnitude, natural variability, and instrument and retrieval algorithm accuracy and stability. We applied a climate accuracy framework to quantify the impact of absolute calibration on cloud property trend uncertainty. The cloud properties studied were cloud fraction, effective temperature, optical thickness, and effective radius retrieved using the Clouds and the Earth's Radiant Energy System (CERES) Cloud Property Retrieval System, which uses Moderate Resolution Imaging Spectroradiometer (MODIS) measurements. Modeling experiments from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) agree that net cloud feedback is likely positive but disagree regarding its magnitude, mainly due to uncertainty in shortwave cloud feedback. With the climate accuracy framework we determined the time to detect trends for instruments with various calibration accuracies. We estimated a relationship between cloud property trend uncertainty, cloud feedback, and Equilibrium Climate Sensitivity, and also between effective radius trend uncertainty and aerosol indirect effect trends. The direct relationship between instrument accuracy requirements and climate model output provides the level of instrument absolute accuracy needed to reduce climate model projection uncertainty. Different cloud types have varied radiative impacts on the climate system depending on several attributes, such as their thermodynamic phase, altitude, and optical thickness. Therefore, we also conducted these studies by cloud type for a clearer understanding of the instrument accuracy requirements needed to detect changes in their cloud properties.
Combining this information with the radiative impact of different cloud types helps to prioritize among requirements for future satellite sensors and understanding the climate detection capabilities of existing sensors.

  12. Veridical mapping in savant abilities, absolute pitch, and synesthesia: an autism case study

    PubMed Central

    Bouvet, Lucie; Donnadieu, Sophie; Valdois, Sylviane; Caron, Chantal; Dawson, Michelle; Mottron, Laurent

    2014-01-01

    An enhanced role and autonomy of perception are prominent in autism. Furthermore, savant abilities, absolute pitch, and synesthesia are all more commonly found in autistic individuals than in the typical population. The mechanism of veridical mapping has been proposed to account for how enhanced perception in autism leads to the high prevalence of these three phenomena and their structural similarity. Veridical mapping entails functional rededication of perceptual brain regions to higher order cognitive operations, allowing the enhanced detection and memorization of isomorphisms between perceptual and non-perceptual structures across multiple scales. In this paper, we present FC, an autistic individual who possesses several savant abilities in addition to both absolute pitch and synesthesia-like associations. The co-occurrence in FC of abilities, some of them rare, which share the same structure, as well as FC’s own accounts of their development, together suggest the importance of veridical mapping in the atypical range and nature of abilities displayed by autistic people. PMID:24600416

  13. Regional biases in absolute sea-level estimates from tide gauge data due to residual unmodeled vertical land movement

    NASA Astrophysics Data System (ADS)

    King, Matt A.; Keshin, Maxim; Whitehouse, Pippa L.; Thomas, Ian D.; Milne, Glenn; Riva, Riccardo E. M.

    2012-07-01

The only vertical land movement signal routinely corrected for when estimating absolute sea-level change from tide gauge data is that due to glacial isostatic adjustment (GIA). We compare modeled GIA uplift (ICE-5G + VM2) with vertical land movement at ~300 GPS stations located near to a global set of tide gauges, and find regionally coherent differences of commonly ±0.5-2 mm/yr. Reference frame differences and signal due to present-day mass trends cannot reconcile these differences. We examine sensitivity to the GIA Earth model by fitting to a subset of the GPS velocities and find substantial regional sensitivity, but no single Earth model is able to reduce the disagreement in all regions. We suggest errors in ice history and neglected lateral Earth structure dominate model-data differences, and urge caution in the use of modeled GIA uplift alone when interpreting regional- and global-scale absolute (geocentric) sea level from tide gauge data.

  14. MEERCAT: Multiplexed Efficient Cell Free Expression of Recombinant QconCATs For Large Scale Absolute Proteome Quantification*

    PubMed Central

    Takemori, Nobuaki; Takemori, Ayako; Tanaka, Yuki; Endo, Yaeta; Hurst, Jane L.; Gómez-Baena, Guadalupe; Harman, Victoria M.; Beynon, Robert J.

    2017-01-01

A major challenge in proteomics is the accurate absolute quantification of large numbers of proteins. QconCATs, artificial proteins that are concatenations of multiple standard peptides, are well established as an efficient means to generate standards for proteome quantification. Previously, QconCATs have been expressed in bacteria, but we now describe QconCAT expression in a robust, cell-free system. The new expression approach rescues QconCATs that previously could not be expressed in bacteria and can reduce the incidence of proteolytic damage to QconCATs. Moreover, it is possible to cosynthesize QconCATs in a highly multiplexed translation reaction, coexpressing tens or hundreds of QconCATs simultaneously. By obviating bacterial culture and through the gain of high-level multiplexing, it is now possible to generate tens of thousands of standard peptides in a matter of weeks, rendering absolute quantification of a complex proteome highly achievable in a reproducible, broadly deployable system. PMID:29055021

  15. Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes

    USGS Publications Warehouse

    Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.

    2004-01-01

We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas fault, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are on the order of 50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.

  16. Quantifying discipline practices using absolute versus relative frequencies: clinical and research implications for child welfare.

    PubMed

    Lindhiem, Oliver; Shaffer, Anne; Kolko, David J

    2014-01-01

In the parenting intervention outcome literature, discipline practices are generally quantified as absolute frequencies or, less commonly, as relative frequencies. These differences in methodology warrant direct comparison, as they have critical implications for study results and conclusions among treatments targeted at reducing parental aggression and harsh discipline. In this study, we directly compared the absolute frequency method and the relative frequency method for quantifying physically aggressive, psychologically aggressive, and nonaggressive discipline practices. Longitudinal data over a 3-year period came from an existing data set of a clinical trial examining the effectiveness of a psychosocial treatment in reducing parental physical and psychological aggression and improving child behavior (N = 139). Discipline practices (aggressive and nonaggressive) were assessed using the Conflict Tactics Scale. The two methods yielded different patterns of results, particularly for nonaggressive discipline strategies. We suggest that each method makes its own unique contribution to a more complete understanding of the association between parental aggression and intervention effects.
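The distinction between the two quantification schemes is easy to make concrete; a minimal sketch with hypothetical pre/post counts, in a case where aggressive discipline halves in absolute terms while the nonaggressive count is unchanged:

```python
# Sketch: absolute vs relative frequency quantification of discipline acts.
# All counts are hypothetical, for illustration only.
pre  = {"aggressive": 8, "nonaggressive": 12}
post = {"aggressive": 4, "nonaggressive": 12}

def relative(counts):
    """Convert absolute counts to proportions of all discipline acts."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Absolute frequency: aggressive acts halved (8 -> 4).
# Relative frequency: the aggressive share fell from 0.40 to 0.25, even
# though the absolute count of nonaggressive discipline did not change.
print(relative(pre)["aggressive"], relative(post)["aggressive"])
```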

  17. Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker.

    PubMed

    Iasonos, Alexia; Chapman, Paul B; Satagopan, Jaya M

    2016-05-01

Increased interest has been expressed in finding predictive biomarkers that can guide treatment options for both mutation carriers and noncarriers. The statistical assessment of variation in treatment benefit (TB) according to the biomarker carrier status plays an important role in evaluating predictive biomarkers. For time-to-event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a proportional hazards regression model is commonly used as a measure of variation in TB. Although this can be easily obtained using available statistical software packages, the interpretation of HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying the differential in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus noncarriers. We illustrate the use and interpretation of the proposed measures with data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to HR. Clin Cancer Res; 22(9); 2114-20. ©2016 AACR. ©2016 American Association for Cancer Research.
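A minimal sketch of a survival-probability-scale summary of the kind the article proposes: excess absolute risk reduction due to treatment, compared between carriers and noncarriers. All survival probabilities below are illustrative, not trial data:

```python
# Sketch: variation in treatment benefit (TB) expressed on the
# survival-probability scale rather than as an interaction hazard ratio.
# The survival probabilities are hypothetical.

def excess_absolute_risk(s_control, s_treated):
    """Absolute reduction in event risk due to treatment at a fixed time."""
    return (1 - s_control) - (1 - s_treated)

# Hypothetical 2-year survival probabilities:
carriers = excess_absolute_risk(s_control=0.60, s_treated=0.75)      # ~0.15
noncarriers = excess_absolute_risk(s_control=0.60, s_treated=0.65)   # ~0.05
variation_in_benefit = carriers - noncarriers                        # ~0.10

print(variation_in_benefit)
```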

  18. Thermodynamic Temperature of High-Temperature Fixed Points Traceable to Blackbody Radiation and Synchrotron Radiation

    NASA Astrophysics Data System (ADS)

    Wähmer, M.; Anhalt, K.; Hollandt, J.; Klein, R.; Taubert, R. D.; Thornagel, R.; Ulm, G.; Gavrilov, V.; Grigoryeva, I.; Khlevnoy, B.; Sapritsky, V.

    2017-10-01

Absolute spectral radiometry is currently the only established primary thermometric method for the temperature range above 1300 K. Up to now, the ongoing improvements of high-temperature fixed points and their formal implementation into an improved temperature scale with the mise en pratique for the definition of the kelvin rely solely on single-wavelength absolute radiometry traceable to the cryogenic radiometer. Two alternative primary thermometric methods, yielding comparable or possibly even smaller uncertainties, have been proposed in the literature. They use ratios of irradiances to determine the thermodynamic temperature traceable to blackbody radiation and synchrotron radiation. At PTB, a project has been established in cooperation with VNIIOFI to use, for the first time, all three methods simultaneously for the determination of the phase transition temperatures of high-temperature fixed points. For this, a dedicated four-wavelength ratio filter radiometer was developed. With all three thermometric methods performed independently and in parallel, we aim to compare the potential and practical limitations of all three methods, disclose possibly undetected systematic effects of each method, and thereby confirm or improve the previous measurements traceable to the cryogenic radiometer. This will give further and independent confidence in the thermodynamic temperature determination of high-temperature fixed-point phase transitions.

  19. Low-level precipitation sublimation on the coasts of East Antarctica

    NASA Astrophysics Data System (ADS)

    Grazioli, Jacopo; Genthon, Christophe; Madeleine, Jean-Baptiste; Lemonnier, Florentin; Gallée, Hubert; Krinner, Gerhard; Berne, Alexis

    2017-04-01

    The weather of East Antarctica is affected by the peculiar morphology of this large continent and by its isolation from the surroundings. The high-elevation interior of the continent, very dry in absolute terms, originates winds that can reach the coastal areas with very high speed and persistence in time. The absence of topographic barriers and the near-ground temperature inversion allow these density-driven air movements to fall from the continent towards the coasts without excessive interaction and mixing with the atmosphere aloft. Thus, the air remains dry in absolute terms, and very dry in relative terms because of the higher temperatures near the coast and the adiabatic warming due to the descent. The coasts of Antarctica are less isolated and more exposed to incoming moist air masses than the rest of the continent, and precipitation in the form of snowfall more frequently occurs. Through its descent, however, snowfall encounters the layer of dry air coming from the continent and the deficit in humidity can lead to the partial or complete sublimation of the precipitating flux. This phenomenon is named here LPS (Low-level Precipitation Sublimation) and it has been observed by means of ground-based remote sensing instruments (weather radars) and atmospheric radio-sounding balloons records in the framework of the APRES3 campaign (Antarctic Precipitation: REmote Sensing from Surface and Space) in the coastal base of Dumont d' Urville (Terre Adélie), and then examined at the continental scale thanks to numerical weather models. LPS occurs over most of the coastal locations, where the total sublimated snowfall can be a significant percentage of the total snowfall. For example, in Dumont d' Urville the total yearly snowfall at 341 m height is less than 80% of the snowfall at 941 m height (the height of maximum yearly accumulation), and at shorter time scales complete sublimation (i.e. virga) often occurs. 
At the scale of individual precipitation events, LPS is, overall, inversely related to precipitation intensity, because more developed systems can extend farther into the continent and eventually saturate the low levels of the atmosphere. This contribution presents the data, models, and analysis used to characterize LPS over the coastal regions of East Antarctica and discusses the possible implications for predicting climate change in Antarctica.

  20. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    NASA Astrophysics Data System (ADS)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

    Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still-developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions, including extraordinary events like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs that simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which MHD is expected to be a valid approach. At the same time, the models show systematic differences in other parameters, differing especially in their predictions of the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after a north-south interplanetary magnetic field turning. According to the validation results, none of the models emerges as an absolute leader. 
The new approach suggested for evaluating model performance against reality may be used by model users when planning their investigations, as well as by model developers and anyone interested in quantitatively evaluating progress in magnetospheric modeling.

  1. Absolute acceleration measurements on STS-50 from the Orbital Acceleration Research Experiment (OARE)

    NASA Technical Reports Server (NTRS)

    Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.

    1994-01-01

    Orbital Acceleration Research Experiment (OARE) data on Space Transportation System (STS)-50 have been examined in detail during a 2-day time period. Absolute acceleration levels have been derived at the OARE location, the orbiter center-of-gravity, and at the STS-50 spacelab Crystal Growth Facility. During the interval, the tri-axial OARE raw telemetered acceleration measurements have been filtered using a sliding trimmed mean filter in order to remove large acceleration spikes (e.g., thrusters) and reduce the noise. Twelve OARE measured biases in each acceleration channel during the 2-day interval have been analyzed and applied to the filtered data. Similarly, the in situ measured x-axis scale factors in the sensor's most sensitive range were also analyzed and applied to the data. Due to equipment problem(s) on this flight, both y- and z-axis sensitive range scale factors were determined in a separate process using orbiter maneuvers and subsequently applied to the data. All known significant low-frequency corrections at the OARE location (i.e., both vertical and horizontal gravity-gradient, and rotational effects) were removed from the filtered data in order to produce the acceleration components at the orbiter center-of-gravity, which are the aerodynamic signals along each body axis. Results indicate that there is a force being applied to the Orbiter in addition to the aerodynamic forces. The OARE instrument and all known gravitational and electromagnetic forces have been reexamined, but none produces the observed effect. Thus, it is tentatively concluded that the orbiter is creating the environment observed. At least part of this force is thought to be due to the Flash Evaporator System.
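The sliding trimmed-mean filter described above can be sketched as follows; the window size and trim fraction are illustrative assumptions, since the record does not state the filter parameters:

```python
def trimmed_mean(values, trim_fraction=0.2):
    """Mean after discarding the smallest and largest trim_fraction of samples."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    core = ordered[k:len(ordered) - k] if len(ordered) > 2 * k else ordered
    return sum(core) / len(core)

def sliding_trimmed_mean(signal, window=5, trim_fraction=0.2):
    """Centered sliding trimmed mean; windows shrink near the signal edges."""
    half = window // 2
    filtered = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        filtered.append(trimmed_mean(signal[lo:hi], trim_fraction))
    return filtered
```

Because each window's extremes are discarded before averaging, an isolated thruster-like spike is rejected outright instead of being smeared into neighboring samples, as a plain moving average would do.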

  2. Seismic peak amplitude as a predictor of TOC content in shallow marine sediments

    NASA Astrophysics Data System (ADS)

    Neto, Arthur Ayres; Mota, Bruno Bourguignon; Belem, André Luiz; Albuquerque, Ana Luiza; Capilla, Ramsés

    2016-10-01

    Acoustic remote sensing is a highly effective tool for exploring the seafloor of both deep and shallow marine settings. Indeed, the acoustic response depends on several physicochemical factors such as sediment grain size, bulk density, water content, and mineralogy. The objective of the present study is to assess the suitability of seismic peak amplitude as a predictor of total organic carbon (TOC) content in shallow marine sediments, based on data collected in the Cabo Frio mud belt in an upwelling zone off southeastern Brazil. These comprise records of P-wave velocity (VP) along 680 km of high-resolution single-channel seismic surveys, combined with analyses of grain size, wet bulk density, absolute water content and TOC content for four piston-cores. TOC contents of sediments from 13 box-cores served to validate the methodology. The results show well-defined positive correlations between TOC content and mean grain size (phi scale) as well as absolute water content, and negative correlations with VP, wet bulk density, and acoustic impedance. These relationships yield a regression equation by which TOC content can be satisfactorily predicted on the basis of acoustic impedance for this region: y = -4.84 ln(x) + 40.04. Indeed, the derived TOC contents differ by only 5% from those determined by geochemical analysis. After appropriate calibration, acoustic impedance can thus be conveniently used as a predictor of large-scale spatial distributions of organic carbon enrichment in marine sediments. This not only contributes to optimizing scientific project objectives, but also enhances the cost-effectiveness of marine surveys by greatly reducing the ship time commonly required for grid sampling.
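The regression equation above is straightforward to apply; a minimal sketch, with the caveat that the impedance input must be expressed in the same units as the paper's calibration data (the example inputs below are invented for illustration):

```python
import math

def predict_toc(acoustic_impedance):
    """TOC content (%) from acoustic impedance via the regression
    y = -4.84 ln(x) + 40.04; valid only for impedances in the same
    units as the paper's calibration data."""
    return -4.84 * math.log(acoustic_impedance) + 40.04
```

The logarithmic form means equal ratios of impedance map to equal decrements of predicted TOC, consistent with the negative impedance correlation reported above.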

  3. Scale effects on information theory-based measures applied to streamflow patterns in two rural watersheds

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.

    2012-01-01

    Summary Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) to characterize the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly-monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) to elucidate and quantify temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples, including the Little River experimental watershed (LREW) in Tifton, Georgia and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information that leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns. 
Moderate spatial scale effects on streamflow patterns are observed with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when the information theory-based methods are used for performance evaluation and comparison of hydrological models.
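One of the information content measures above, metric entropy, can be sketched in a few lines as the Shannon entropy of overlapping binary words normalized by word length. The threshold-at-the-mean encoding below is an illustrative assumption; the paper's binary encoding scheme may differ:

```python
import math
from collections import Counter

def binary_encode(series):
    """Encode a series as 1 where a value exceeds the series mean, else 0."""
    mean = sum(series) / len(series)
    return [1 if v > mean else 0 for v in series]

def metric_entropy(symbols, word_length):
    """Shannon entropy (bits) of overlapping words, divided by word length."""
    words = [tuple(symbols[i:i + word_length])
             for i in range(len(symbols) - word_length + 1)]
    counts = Counter(words)
    n = len(words)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / word_length
```

For word length 2, a constant series scores 0 and a strictly alternating one about 0.5, while an unstructured series approaches 1; lower values for streamflow than for precipitation reflect the filtering effect described above.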

  4. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate time-series forecast model is developed with Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
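As a highly simplified, standard-library stand-in for the STL + ARIMA pipeline (not the authors' implementation), a seasonal-naive forecaster together with the MAD error yardstick can be sketched as follows; the series and period in the test are illustrative:

```python
def seasonal_naive_forecast(history, period, steps):
    """Forecast each future step as the mean of past observations at the
    same seasonal phase -- a crude proxy for the seasonal component STL
    would extract before ARIMA models the remainder."""
    forecasts = []
    for step in range(steps):
        phase = (len(history) + step) % period
        same_phase = history[phase::period]
        forecasts.append(sum(same_phase) / len(same_phase))
    return forecasts

def mean_absolute_deviation(values):
    """MAD about the mean, used above as the forecast-error yardstick."""
    mean = sum(values) / len(values)
    return sum(abs(v - mean) for v in values) / len(values)
```

On a strictly periodic utilization trace this forecaster is exact, so its errors trivially fall within the MAD of the measurements; real SNMP traces need the full STL + ARIMA treatment described in the abstract.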

  5. New agreement measures based on survival processes

    PubMed Central

    Guo, Ying; Li, Ruosha; Peng, Limin; Manatunga, Amita K.

    2013-01-01

    Summary The need to assess agreement arises in many scenarios in biomedical sciences when measurements were taken by different methods on the same subjects. When the endpoints are survival outcomes, the study of agreement becomes more challenging given the special characteristics of time-to-event data. In this paper, we propose a new framework for assessing agreement based on survival processes that can be viewed as a natural representation of time-to-event outcomes. Our new agreement measure is formulated as the chance-corrected concordance between survival processes. It provides a new perspective for studying the relationship between correlated survival outcomes and offers an appealing interpretation as the agreement between survival times on the absolute distance scale. We provide a multivariate extension of the proposed agreement measure for multiple methods. Furthermore, the new framework enables a natural extension to evaluate time-dependent agreement structure. We develop nonparametric estimation of the proposed new agreement measures. Our estimators are shown to be strongly consistent and asymptotically normal. We evaluate the performance of the proposed estimators through simulation studies and then illustrate the methods using a prostate cancer data example. PMID:23844617

  6. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE PAGES

    Yoo, Wucherl; Sim, Alex

    2016-06-24

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate time-series forecast model is developed with Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.

  7. Absolute Standard Hydrogen Electrode Potential Measured by Reduction of Aqueous Nanodrops in the Gas Phase

    PubMed Central

    Donald, William A.; Leib, Ryan D.; O'Brien, Jeremy T.; Bush, Matthew F.; Williams, Evan R.

    2008-01-01

    In solution, half-cell potentials are measured relative to those of other half cells, thereby establishing a ladder of thermochemical values that are referenced to the standard hydrogen electrode (SHE), which is arbitrarily assigned a value of exactly 0 V. Although there has been considerable interest in, and efforts toward, establishing an absolute electrochemical half-cell potential in solution, there is no general consensus regarding the best approach to obtain this value. Here, ion-electron recombination energies resulting from electron capture by gas-phase nanodrops containing individual [M(NH3)6]3+, M = Ru, Co, Os, Cr, and Ir, and Cu2+ ions are obtained from the number of water molecules that are lost from the reduced precursors. These experimental data combined with nanodrop solvation energies estimated from Born theory and solution-phase entropies estimated from limited experimental data provide absolute reduction energies for these redox couples in bulk aqueous solution. A key advantage of this approach is that solvent effects well past two solvent shells, that are difficult to model accurately, are included in these experimental measurements. By evaluating these data relative to known solution-phase reduction potentials, an absolute value for the SHE of 4.2 ± 0.4 V versus a free electron is obtained. Although not achieved here, the uncertainty of this method could potentially be reduced to below 0.1 V, making this an attractive method for establishing an absolute electrochemical scale that bridges solution and gas-phase redox chemistry. PMID:18288835
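The nanodrop solvation energies above were estimated from Born theory. For orientation, the Born model for a single spherical charge is a one-liner; the 1 nm radius and bulk-water permittivity in the example are illustrative assumptions, not values from the paper:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def born_solvation_energy_ev(charge_number, radius_m, rel_permittivity):
    """Born electrostatic solvation free energy (eV) of a charged sphere:
    dG = -(z^2 e^2 / (8 pi eps0 a)) * (1 - 1/eps_r)."""
    energy_j = -(charge_number ** 2 * E_CHARGE ** 2) \
        / (8 * math.pi * EPS0 * radius_m) * (1 - 1 / rel_permittivity)
    return energy_j / E_CHARGE

# A +1 charge in a 1 nm water sphere (eps_r = 78): roughly -0.7 eV.
example = born_solvation_energy_ev(1, 1e-9, 78.0)
```

The quadratic dependence on charge number is why multiply charged ions such as the [M(NH3)6]3+ precursors are far more strongly stabilized by a nanodrop than singly charged ones.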

  8. An investigation of rotor harmonic noise by the use of small scale wind tunnel models

    NASA Technical Reports Server (NTRS)

    Sternfeld, H., Jr.; Schaffer, E. G.

    1982-01-01

    Noise measurements of small-scale helicopter rotor models were compared with noise measurements of full-scale helicopters to determine what information about the full-scale helicopters could be derived from noise measurements of the models. Comparisons were made of the discrete frequency (rotational) noise for 4 pairs of tests. Areas covered were tip speed effects, isolated rotor, tandem rotor, and main rotor/tail rotor interaction. Results show good agreement of noise trends with configuration and test condition changes, and good agreement of absolute noise measurements with the corrections used, except for the isolated rotor case. Noise measurements of the isolated rotor show a great deal of scatter, reflecting the fact that the rotor in hover is basically unstable.

  9. The effect of latitude on photoperiodic control of gonadal maturation, regression and molt in birds.

    PubMed

    Dawson, Alistair

    2013-09-01

    Photoperiod is the major cue used by birds to time breeding seasons and molt. However, the annual cycle in photoperiod changes with latitude. Within species, for temperate and high latitude species, gonadal maturation and breeding start earlier at lower latitudes but regression and molt both occur at similar times at different latitudes. Earlier gonadal maturation can be explained simply by the fact that considerable maturation occurs before the equinox when photoperiod is longer at lower latitudes - genetic differences between populations are not necessary to explain earlier breeding at lower latitudes. Gonadal regression is caused either by absolute photorefractoriness or, in some species with long breeding seasons, relative photorefractoriness. In either case, the timing of regression and molt cannot be explained by absolute prevailing photoperiod or rate of change in photoperiod - birds appear to be using more subtle cues from the pattern of change in photoperiod. However, there may be no difference between absolute and relative photorefractory species in how they utilise the annual cycle in photoperiod to time regression. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Optimization of KOH etching parameters for quantitative defect recognition in n- and p-type doped SiC

    NASA Astrophysics Data System (ADS)

    Sakwe, S. A.; Müller, R.; Wellmann, P. J.

    2006-04-01

    We have developed a KOH-based defect etching procedure for silicon carbide (SiC) which comprises in situ temperature measurement and control of the melt composition. As a benefit, reproducible etching conditions were established for the first time (calibration plots of etching rate versus temperature and time); the etching procedure is time-independent, i.e. no alteration of the KOH melt composition takes place, and absolute melt temperatures can be set. The paper describes this advanced KOH etching furnace, including the development of a new temperature sensor resistant to molten KOH. We present updated, absolute KOH etching parameters for n-type SiC and new absolute KOH etching parameters for lightly and highly p-type doped SiC, which are used for quantitative defect analysis. The best defect-etching recipes were found to be T=530 °C/5 min (activation energy: 16.4 kcal/mol) for n-type SiC and T=500 °C/5 min (activation energy: 13.5 kcal/mol) for p-type SiC.
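With the quoted activation energies, the Arrhenius law (etch rate proportional to exp(-Ea/RT)) indicates how sensitive each recipe is to melt temperature; comparing the two recommended set points below is an illustration, not a result from the paper:

```python
import math

R_GAS = 8.314        # gas constant, J/(mol K)
KCAL_TO_J = 4184.0   # thermochemical kilocalorie

def arrhenius_rate_ratio(ea_kcal_per_mol, t1_celsius, t2_celsius):
    """Ratio rate(T2)/rate(T1) for a process with activation energy Ea,
    assuming rate ~ exp(-Ea / RT)."""
    ea = ea_kcal_per_mol * KCAL_TO_J
    t1 = t1_celsius + 273.15
    t2 = t2_celsius + 273.15
    return math.exp(-ea / R_GAS * (1 / t2 - 1 / t1))

# For Ea = 16.4 kcal/mol, raising the melt from 500 to 530 degC speeds
# etching by roughly 1.5x, which is why in situ temperature control matters.
ratio = arrhenius_rate_ratio(16.4, 500.0, 530.0)
```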

  11. Absolute IGS antenna phase center model igs08.atx: status and potential improvements

    NASA Astrophysics Data System (ADS)

    Schmid, R.; Dach, R.; Collilieux, X.; Jäggi, A.; Schmitz, M.; Dilssner, F.

    2016-04-01

    On 17 April 2011, all analysis centers (ACs) of the International GNSS Service (IGS) adopted the reference frame realization IGS08 and the corresponding absolute antenna phase center model igs08.atx for their routine analyses. The latter consists of an updated set of receiver and satellite antenna phase center offsets and variations (PCOs and PCVs). An update of the model was necessary due to the difference of about 1 ppb in the terrestrial scale between two consecutive realizations of the International Terrestrial Reference Frame (ITRF2008 vs. ITRF2005), as that parameter is highly correlated with the GNSS satellite antenna PCO components in the radial direction.

  12. Comparison between artificial neural network and multilinear regression models in an evaluation of cognitive workload in a flight simulator.

    PubMed

    Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo

    2008-01-01

    In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 on a visual analog scale, 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
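The figure of merit here is the mean absolute error; a minimal sketch of the metric and of the relative-improvement comparison (the numbers in the usage note are invented, not the study's data):

```python
def mean_absolute_error(truth, estimate):
    """Average absolute difference between true and estimated values."""
    return sum(abs(t - e) for t, e in zip(truth, estimate)) / len(truth)

def relative_improvement_percent(mae_baseline, mae_candidate):
    """How much (%) the candidate's MAE undercuts the baseline's."""
    return 100.0 * (mae_baseline - mae_candidate) / mae_baseline
```

For example, a baseline MAE of 20.0 against a candidate MAE of 15.0 gives a 25% improvement; the abstract's 13-23% range is this same quantity computed between the MLR and ANN estimates.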

  13. An absolute chronology for early Egypt using radiocarbon dating and Bayesian statistical modelling

    PubMed Central

    Dee, Michael; Wengrow, David; Shortland, Andrew; Stevenson, Alice; Brock, Fiona; Girdland Flink, Linus; Bronk Ramsey, Christopher

    2013-01-01

    The Egyptian state was formed prior to the existence of verifiable historical records. Conventional dates for its formation are based on the relative ordering of artefacts. This approach is no longer considered sufficient for cogent historical analysis. Here, we produce an absolute chronology for Early Egypt by combining radiocarbon and archaeological evidence within a Bayesian paradigm. Our data cover the full trajectory of Egyptian state formation and indicate that the process occurred more rapidly than previously thought. We provide a timeline for the First Dynasty of Egypt of generational-scale resolution that concurs with prevailing archaeological analysis and produce a chronometric date for the foundation of Egypt that distinguishes between historical estimates. PMID:24204188

  14. ACCESS: integration and pre-flight performance

    NASA Astrophysics Data System (ADS)

    Kaiser, Mary Elizabeth; Morris, Matthew J.; Aldoroty, Lauren N.; Pelton, Russell; Kurucz, Robert; Peacock, Grant O.; Hansen, Jason; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Wright, Edward L.; Orndorff, Joseph D.; Feldman, Paul D.; Moos, H. Warren; Riess, Adam G.; Gardner, Jonathan P.; Bohlin, Ralph; Deustua, Susana E.; Dixon, W. V.; Sahnow, David J.; Perlmutter, Saul

    2017-09-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35-1.7 μm bandpass. This paper describes the sub-system testing, payload integration, avionics operations, and data transfer for the ACCESS instrument.

  15. Mechanisms underlying the cooling observed within the TTL during the active spells of organized deep convection of the Indian Summer Monsoon with COSMIC RO and In-situ Measurements

    NASA Astrophysics Data System (ADS)

    Rao, Kusuma; Reddy, Narendra

    The climate impact of the Asian monsoon as a tropical phenomenon has been studied for decades for its tropospheric component. However, the effort towards assessing the role of the Asian summer monsoon in the climate system with a focus on the Upper Troposphere and Lower Stratosphere (UTLS) has been addressed only in recent times. Deep convective vertical fluxes of water and other chemical species penetrate and ventilate the TTL, redistributing species into the stratosphere. However, the mechanisms underlying such convective transports are yet to be understood. Our specific goal here is to investigate the impact of organized deep moist convection of the Indian summer monsoon on the thermal structure of the UTLS, and to understand the underlying mechanisms. Since active monsoon spells are manifestations of organized deep convection embedded with overshooting convective elements, it becomes imperative to understand the impact of organized monsoon convection on three time scales, namely, (i) super-synoptic scales of convectively intense active monsoon spells, (ii) synoptic time scales of convectively disturbed conditions, and finally (iii) cloud scales. The impact of deep convection on UTLS processes is examined here based on analysis of COSMIC RO and METEOSAT data for the period 2006-2011, together with the in-situ measurements available from the national programme PRWONAM during 2009-10 over the Indian land region and from the international field programme JASMINE during 1999 over the Bay of Bengal. On all three time scales, i.e. during (i) the active monsoon spells, (ii) the disturbed periods and (iii) the passage of the deep core of MCSs, we inferred that the cold-point tropopause temperatures (CPT) are lower at relatively lower CPT altitudes (CPTA), unlike in the cases determined by normal temperature lapse rates; these unusual cases are described here as 'Unlike Normal' cases. The TTL thickness shrinks during convective conditions. 
During the passage of the deep core of MCSs, the cooling observed within the TTL is significantly greater than the cooling occurring on the other two time scales. The finding that 'Unlike Normal' cases are associated with higher CAPE and higher surface equivalent potential temperatures helps explain the possible mechanisms underlying the CPT cooling at relatively lower altitudes.

  16. Legal and Political Considerations in Large-Scale Adaptive Testing,

    DTIC Science & Technology

    One thing we can be absolutely sure of is that once personnel selection and classification decisions begin to be made using CAT, there will be legal...and to understand enough about legal processes and judgments to 'sell' the benefits of CAT to the courts and the public.

  17. Performance Status and Change--Measuring Education System Effectiveness with Data from PISA 2000-2009

    ERIC Educational Resources Information Center

    Lenkeit, Jenny; Caro, Daniel H.

    2014-01-01

    Reports of international large-scale assessments tend to evaluate and compare education system performance based on absolute scores. And policymakers refer to high-performing and economically prosperous education systems to enhance their own systemic features. But socioeconomic differences between systems compromise the plausibility of those…

  18. Structure elucidation and absolute stereochemistry of isomeric monoterpene chromane esters.

    PubMed

    Batista, João M; Batista, Andrea N L; Mota, Jonas S; Cass, Quezia B; Kato, Massuo J; Bolzani, Vanderlan S; Freedman, Teresa B; López, Silvia N; Furlan, Maysa; Nafie, Laurence A

    2011-04-15

    Six novel monoterpene chromane esters were isolated from the aerial parts of Peperomia obtusifolia (Piperaceae) using chiral chromatography. This is the first time that chiral chromane esters of this kind, ones with a tethered chiral terpene, have been isolated in nature. Due to their structural features, it is not currently possible to assess directly their absolute stereochemistry using any of the standard classical approaches, such as X-ray crystallography, NMR, optical rotation, or electronic circular dichroism (ECD). Herein we report the absolute configuration of these molecules, involving four chiral centers, using vibrational circular dichroism (VCD) and density functional theory (DFT) (B3LYP/6-31G*) calculations. This work further reinforces the capability of VCD to determine unambiguously the absolute configuration of structurally complex molecules in solution, without crystallization or derivatization, and demonstrates the sensitivity of VCD to specify the absolute configuration for just one among a number of chiral centers. We also demonstrate the sufficiency of using the so-called inexpensive basis set 6-31G* compared to the triple-ζ basis set TZVP for absolute configuration analysis of larger molecules using VCD. Overall, this work extends our knowledge of secondary metabolites in plants and provides a straightforward way to determine the absolute configuration of complex natural products involving a chiral parent moiety combined with a chiral terpene adduct.

  19. Super Clausius-Clapeyron scaling of extreme hourly precipitation and its relation to large-scale atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley

    2017-04-01

    Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands the dependency on surface dew point temperature follows two times the CC relation corresponding to 14 % per degree. Our hypothesis - as supported by a simple physical argument presented here - is that this 2CC behaviour arises from the physics of convective clouds. So, we think that this response is due to local feedbacks related to the convective activity, while other large scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events using surface observations combined with a regional re-analysis for the data in The Netherlands. Events are precipitation measurements clustered in time and space derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity on convective activity. Almost no differences in relative humidity and the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near surface humidity, while other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point. 
This increase in large-scale moisture convergence appears to be a consequence of latent heat release due to the convective activity, as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperatures, suggesting potentially strong impacts of climatic warming.
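The quoted rates compound exponentially with dew point temperature. A small worked comparison of the standard Clausius-Clapeyron rate (about 7% per degree) against the 2CC rate of 14% per degree reported above:

```python
def intensity_factor(rate_percent_per_degree, delta_t_deg):
    """Multiplicative change in extreme hourly precipitation intensity for
    a dew point rise of delta_t_deg, given a per-degree percentage rate."""
    return (1.0 + rate_percent_per_degree / 100.0) ** delta_t_deg

cc_factor = intensity_factor(7.0, 2.0)       # CC:  about 1.14x for +2 degC
two_cc_factor = intensity_factor(14.0, 2.0)  # 2CC: about 1.30x for +2 degC
```

Over a 2 degC dew point rise, the 2CC response (about +30%) is roughly double the plain CC response (about +14%), which is why the distinction matters for projections of hourly extremes.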

  20. Patterns in coupled water and energy cycle: Modeling, synthesis with observations, and assessing the subsurface-landsurface interactions

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Kollet, S. J.; Sulis, M.

    2013-12-01

    In the terrestrial hydrological cycle, the atmosphere and the free groundwater table act as the upper and lower boundary conditions, respectively, in the non-linear two-way exchange of mass and energy across the land surface. Identifying and quantifying the interactions among various atmospheric-subsurface-landsurface processes is complicated due to the diverse spatiotemporal scales associated with these processes. In this study, the coupled subsurface-landsurface model ParFlow.CLM was applied over a ~28,000 km2 model domain encompassing the Rur catchment, Germany, to simulate the fluxes of the coupled water and energy cycle. The model was forced by hourly atmospheric data from the COSMO-DE model (the numerical weather prediction system of the German Weather Service) over one year. Following a spinup period, the model results were synthesized with observed river discharge, soil moisture, groundwater table depth, temperature, and landsurface energy flux data at different sites in the Rur catchment. It was shown that the model is able to reasonably reproduce the dynamics, and also the absolute values, of the observed fluxes and state variables without calibration. The spatiotemporal patterns in simulated water and energy fluxes, as well as the interactions, were studied using statistical, geostatistical and wavelet transform methods. While spatial patterns in the mass and energy fluxes can be predicted from atmospheric forcing and power-law scaling in the transition and winter months, it appears that, in the summer months, the spatial patterns are determined by the spatially correlated variability in groundwater table depth. Continuous wavelet transform techniques were applied to study the variability of the catchment-average mass and energy fluxes at varying time scales. From this analysis, the time scales associated with significant interactions among different mass and energy balance components were identified. 
The memory of precipitation variability in subsurface hydrodynamics acts at the 20-30 day time scale, while the groundwater contribution to sustain the long-term variability patterns in evapotranspiration acts at the 40-60 day scale. Diurnal patterns in connection with subsurface hydrodynamics were also detected. Thus, it appears that the subsurface hydrodynamics respond to the temporal patterns in land surface fluxes due to the variability in atmospheric forcing across multiple space and time scales.
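    The continuous wavelet analysis used above to isolate dominant time scales can be sketched with a minimal, numpy-only Morlet transform (Torrence & Compo-style, computed in the Fourier domain). The daily series, its 25-day oscillation, and the noise level below are illustrative assumptions, not the study's data:

```python
import numpy as np

def morlet_cwt_power(signal, scales, dt=1.0, w0=6.0):
    """Wavelet power |W(s, t)|^2 via FFT, with a Morlet mother wavelet."""
    n = len(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)       # angular frequencies
    sig_hat = np.fft.fft(signal)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Fourier-domain Morlet wavelet at scale s (analytic: positive freqs only)
        psi_hat = (np.pi ** -0.25) * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        psi_hat *= np.sqrt(2 * np.pi * s / dt)        # scale normalization
        power[i] = np.abs(np.fft.ifft(sig_hat * psi_hat)) ** 2
    return power

# Illustrative daily "flux" series with a 25-day oscillation plus white noise
rng = np.random.default_rng(0)
days = np.arange(730)
series = np.sin(2 * np.pi * days / 25.0) + 0.3 * rng.standard_normal(days.size)

scales = np.arange(2.0, 61.0)                          # scales in days
power = morlet_cwt_power(series - series.mean(), scales)
dominant = scales[power.mean(axis=1).argmax()]         # peaks near 24 (Fourier period ~ 1.03 * scale)
```

    Averaging the power over time, as in the last line, is how a dominant interaction time scale (e.g., the 20-30 day precipitation memory) would stand out from a flux series.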

  1. Testing the molecular clock using mechanistic models of fossil preservation and molecular evolution

    PubMed Central

    2017-01-01

    Molecular sequence data provide information about relative times only, and fossil-based age constraints are the ultimate source of information about absolute times in molecular clock dating analyses. Thus, fossil calibrations are critical to molecular clock dating, but competing methods are difficult to evaluate empirically because the true evolutionary time scale is never known. Here, we combine mechanistic models of fossil preservation and sequence evolution in simulations to evaluate different approaches to constructing fossil calibrations and their impact on Bayesian molecular clock dating, and the relative impact of fossil versus molecular sampling. We show that divergence time estimation is impacted by the model of fossil preservation, sampling intensity, and tree shape. The addition of sequence data may improve molecular clock estimates, but accuracy and precision are dominated by the quality of the fossil calibrations. Posterior means and medians are poor representatives of true divergence times; posterior intervals provide a much more accurate estimate of divergence times, though they may be wide and often do not have high coverage probability. Our results highlight the importance of increased fossil sampling and improved statistical approaches to generating calibrations, which should incorporate the non-uniform nature of ecological and temporal fossil species distributions. PMID:28637852
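    A toy version of such a preservation model illustrates why minimum-age fossil calibrations systematically under-constrain node ages: under uniform fossil recovery at rate λ per Myr, the oldest find post-dates the true divergence by roughly 1/λ on average. The rate and origin time below are made-up illustration values, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(42)
true_origin = 100.0    # hypothetical true lineage origin (Ma)
rate = 0.05            # assumed uniform fossil recovery rate (finds per Myr)

def oldest_fossil(origin, rate, rng):
    """Poisson-sample fossil horizons on [0, origin]; return the oldest, or None."""
    n = rng.poisson(rate * origin)
    return rng.uniform(0.0, origin, size=n).max() if n else None

draws = (oldest_fossil(true_origin, rate, rng) for _ in range(20000))
oldest = [a for a in draws if a is not None]
mean_gap = true_origin - np.mean(oldest)   # close to 1/rate = 20 Myr
```

    The gap between the oldest fossil and the true origin is what a calibration density (rather than a hard minimum bound) tries to model.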

  2. Ice Roughness and Thickness Evolution on a Swept NACA 0012 Airfoil

    NASA Technical Reports Server (NTRS)

    McClain, Stephen T.; Vargas, Mario; Tsao, Jen-Ching

    2017-01-01

    Several recent studies have been performed in the Icing Research Tunnel (IRT) at NASA Glenn Research Center focusing on the evolution, spatial variations, and proper scaling of ice roughness on airfoils without sweep exposed to icing conditions employed in classical roughness studies. For this study, experiments were performed in the IRT to investigate the ice roughness and thickness evolution on a 91.44-cm (36-in.) chord NACA 0012 airfoil, swept at 30 deg with a 0 deg angle of attack, and exposed to both Appendix C and Appendix O (SLD) icing conditions. The ice accretion event times used in the study were less than the time required to form substantially three-dimensional structures, such as scallops, on the airfoil surface. Following each ice accretion event, the iced airfoils were scanned using a ROMER Absolute Arm laser-scanning system. The resulting point clouds were then analyzed using the self-organizing map approach of McClain and Kreeger to determine the spatial roughness variations along the surfaces of the iced airfoils. The resulting measurements demonstrate linearly increasing roughness and thickness parameters with ice accretion time. Further, when compared to dimensionless or scaled results from unswept airfoil investigations, the results of this investigation indicate that the mechanisms for early stage roughness and thickness formation on swept wings are similar to those for unswept wings.

  3. Star-forming galaxies are predicted to lie on a fundamental plane of mass, star formation rate and α-enhancement

    NASA Astrophysics Data System (ADS)

    Matthee, Jorryt; Schaye, Joop

    2018-05-01

    Observations show that star-forming galaxies reside on a tight three-dimensional plane between mass, gas-phase metallicity and star formation rate (SFR), which can be explained by the interplay between metal-poor gas inflows, SFR and outflows. However, different metals are released on different time-scales, which may affect the slope of this relation. Here, we use central, star-forming galaxies with Mstar = 10^(9.0-10.5) M⊙ from the EAGLE hydrodynamical simulation to examine three-dimensional relations between mass, SFR and chemical enrichment using absolute and relative C, N, O and Fe abundances. We show that the scatter is smaller when gas-phase α-enhancement is used rather than metallicity. A similar plane also exists for stellar α-enhancement, implying that present-day specific SFRs are correlated with long time-scale star formation histories. Between z = 0 and 1, the α-enhancement plane is even more insensitive to redshift than the plane using metallicity. However, it evolves at z > 1 due to lagging iron yields. At fixed mass, galaxies with higher SFRs have star formation histories shifted toward late times, are more α-enhanced and this α-enhancement increases with redshift as observed. These findings suggest that relations between physical properties inferred from observations may be affected by systematic variations in α-enhancements.

  4. A randomized study of rotigotine dose response on 'off' time in advanced Parkinson's disease.

    PubMed

    Nicholas, Anthony P; Borgohain, Rupam; Chaná, Pedro; Surmann, Erwin; Thompson, Emily L; Bauer, Lars; Whitesides, John; Elmer, Lawrence W

    2014-01-01

    Previous phase III studies in patients with advanced Parkinson's disease (PD) not adequately controlled on levodopa demonstrated significant reduction of 'off' time with rotigotine transdermal system up to 16 mg/24 h. However, the minimal effective dose has not been established. This international, randomized, double-blind, placebo-controlled study (SP921; NCT00522379) investigated rotigotine dose response up to 8 mg/24 h. Patients with advanced idiopathic PD (≥2.5 h of daily 'off' time on stable doses of levodopa) were randomized 1:1:1:1:1 to receive rotigotine 2, 4, 6, or 8 mg/24 h or placebo, titrated over 4 weeks and maintained for 12 weeks. The primary efficacy variable was change from baseline to end of maintenance in absolute time spent 'off'. 409/514 (80%) randomized patients completed maintenance. Mean (±SD) baseline daily 'off' times (h/day) were placebo: 6.4 (±2.5), rotigotine 2-8 mg/24 h: 6.4 (±2.6). Rotigotine 8 mg/24 h was the minimal dose to significantly reduce 'off' time versus placebo. LS mean (±SE) absolute change in daily 'off' time (h/day) from baseline was -2.4 (±0.28) with rotigotine 8 mg/24 h, and -1.5 (±0.26) with placebo; absolute change in 'off' time in the 8 mg/24 h group compared with placebo was -0.85 h/day (95% CI -1.59, -0.11; p = 0.024). There was an apparent dose-dependent trend. Adverse events (AEs) reported at a higher incidence in the rotigotine 8 mg/24 h group versus placebo included application site reactions, nausea, dry mouth, and dyskinesia; there was no worsening of insomnia, somnolence, orthostatic hypotension, confusional state or hallucinations, even in patients ≥75 years of age. The minimal statistically significant effective dose of rotigotine to reduce absolute 'off' time was 8 mg/24 h. The AE profile was similar to previous studies.

  5. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  6. Large-scale multiplex absolute protein quantification of drug-metabolizing enzymes and transporters in human intestine, liver, and kidney microsomes by SWATH-MS: Comparison with MRM/SRM and HR-MRM/PRM.

    PubMed

    Nakamura, Kenji; Hirayama-Kurogi, Mio; Ito, Shingo; Kuno, Takuya; Yoneyama, Toshihiro; Obuchi, Wataru; Terasaki, Tetsuya; Ohtsuki, Sumio

    2016-08-01

    The purpose of the present study was to examine simultaneously the absolute protein amounts of 152 membrane and membrane-associated proteins, including 30 metabolizing enzymes and 107 transporters, in pooled microsomal fractions of human liver, kidney, and intestine by means of SWATH-MS with stable isotope-labeled internal standard peptides, and to compare the results with those obtained by MRM/SRM and high resolution (HR)-MRM/PRM. The protein expression levels of 27 metabolizing enzymes, 54 transporters, and six other membrane proteins were quantitated by SWATH-MS; other targets were below the lower limits of quantitation. Most of the values determined by SWATH-MS differed by less than 50% from those obtained by MRM/SRM or HR-MRM/PRM. Various metabolizing enzymes were expressed in liver microsomes more abundantly than in other microsomes. In total, 10, 13, and 8 transporters listed as important for drugs by the International Transporter Consortium were quantified in liver, kidney, and intestinal microsomes, respectively. Our results indicate that SWATH-MS enables large-scale multiplex absolute protein quantification while retaining similar quantitative capability to MRM/SRM or HR-MRM/PRM. SWATH-MS is expected to be a useful methodology in the context of drug development for elucidating the molecular mechanisms of drug absorption, metabolism, and excretion in the human body based on protein profile information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Calibrated Tully-fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.

    2011-01-01

    We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities-- absolute magnitude, stellar mass, and synthetic magnitude (a linear combination of absolute magnitude and color)-- of disk galaxies at z ≈ 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r-band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that span the parameter space-- in absolute magnitude, color, and disk size-- covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectra were obtained from the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass-to-light ratio. The improved rotation velocity estimates have a wide range of scientific applications, and in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.

  8. Population-based absolute risk estimation with survey data

    PubMed Central

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
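    For intuition on the quantity being estimated: with piecewise-constant cause-specific hazards, the absolute risk F1(t) = ∫ h1(u) S(u) du has a closed form on each interval. A minimal sketch follows; the hazard values are invented for illustration, and the survey weighting, relative-risk modeling, and influence-based variance estimation of the paper are not reproduced here:

```python
import numpy as np

def absolute_risk(breaks, h_event, h_compete, t):
    """Absolute risk of the event of interest by time t under piecewise-constant
    cause-specific hazards; `breaks` are increasing interval endpoints from 0."""
    risk, surv = 0.0, 1.0              # surv = all-cause survival at interval start
    for a, b, h1, h2 in zip(breaks[:-1], breaks[1:], h_event, h_compete):
        b = min(b, t)
        if b <= a:
            break
        total = h1 + h2
        # closed-form integral of h1 * S(u) over (a, b] with constant hazards
        risk += surv * (h1 / total) * (1.0 - np.exp(-total * (b - a)))
        surv *= np.exp(-total * (b - a))
    return risk

# Example: event hazard 0.02/yr, competing hazard 0.01/yr, constant over 10 years
r = absolute_risk([0.0, 10.0], [0.02], [0.01], 10.0)   # ≈ 0.173
```

    Note that the competing hazard lowers the absolute risk below the 1 - exp(-0.2) ≈ 0.181 one would get by ignoring competing events.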

  9. Dynamic frequency-domain interferometer for absolute distance measurements with high resolution

    NASA Astrophysics Data System (ADS)

    Weng, Jidong; Liu, Shenggang; Ma, Heli; Tao, Tianjiong; Wang, Xiang; Liu, Cangli; Tan, Hua

    2014-11-01

    A unique dynamic frequency-domain interferometer for absolute distance measurement has been developed recently. This paper presents the working principle of the new interferometric system, which uses a photonic crystal fiber to transmit the wide-spectrum light beams and a high-speed streak camera or frame camera to record the interference stripes. Preliminary measurements of the harmonic vibrations of a speaker driven by a radio, and of the changes in the tip clearance of a rotating gear wheel, show that this new type of interferometer can perform absolute distance measurements with both high time resolution and high distance resolution.
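    The underlying principle of frequency-domain (spectral) interferometry can be sketched numerically: for an optical path difference 2d, the recorded spectrum varies as cos(2kd), so the fringe frequency in wavenumber k encodes the absolute distance d. All numbers below are illustrative assumptions, not the instrument's actual parameters:

```python
import numpy as np

d_true = 150e-6                                   # hypothetical 150 µm gap
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 700e-9, 4096)  # wavenumbers (rad/m)
interferogram = 1.0 + np.cos(2 * k * d_true)      # spectral fringes for OPD = 2*d_true

# cos(2kd) oscillates at d/pi cycles per unit k; locate that peak with an FFT
dk = k[1] - k[0]
amp = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(k.size, d=dk)             # cycles per (rad/m)
d_est = np.pi * freqs[amp.argmax()]               # recovered absolute distance (m)
```

    The distance resolution of this peak-picking estimate is set by the spectral bandwidth, which is why a wide-spectrum source is essential.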

  10. Comparison of the three optical platforms for measurement of cellular respiration.

    PubMed

    Kondrashina, Alina V; Ogurtsov, Vladimir I; Papkovsky, Dmitri B

    2015-01-01

    We compared three optical platforms for measurement of cellular respiration: absolute oxygen consumption rates (OCRs) in hermetically sealed microcuvettes, relative OCRs measured in a 96-well plate with an oil seal, and steady-state oxygenation of cells in an open 96-well plate. Using a mouse embryonic fibroblast cell line, the phosphorescent intracellular O2 probe MitoXpress-Intra, and a time-resolved fluorescence reader, we determined algorithms for conversion of relative OCRs and cell oxygenation into absolute OCRs, thereby allowing simple high-throughput measurement of absolute OCR values. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Acupuncture for peripheral joint osteoarthritis

    PubMed Central

    Manheimer, Eric; Cheng, Ke; Linde, Klaus; Lao, Lixing; Yoo, Junghee; Wieland, Susan; van der Windt, Daniëlle AWM; Berman, Brian M; Bouter, Lex M

    2011-01-01

    Background Peripheral joint osteoarthritis is a major cause of pain and functional limitation. Few treatments are safe and effective. Objectives To assess the effects of acupuncture for treating peripheral joint osteoarthritis. Search strategy We searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2008, Issue 1), MEDLINE, and EMBASE (both through December 2007), and scanned reference lists of articles. Selection criteria Randomized controlled trials (RCTs) comparing needle acupuncture with a sham, another active treatment, or a waiting list control group in people with osteoarthritis of the knee, hip, or hand. Data collection and analysis Two authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We calculated standardized mean differences using the differences in improvements between groups. Main results Sixteen trials involving 3498 people were included. Twelve of the RCTs included only people with OA of the knee, 3 only OA of the hip, and 1 a mix of people with OA of the hip and/or knee. In comparison with a sham control, acupuncture showed statistically significant, short-term improvements in osteoarthritis pain (standardized mean difference -0.28, 95% confidence interval -0.45 to -0.11; 0.9 point greater improvement than sham on 20 point scale; absolute percent change 4.59%; relative percent change 10.32%; 9 trials; 1835 participants) and function (-0.28, -0.46 to -0.09; 2.7 point greater improvement on 68 point scale; absolute percent change 3.97%; relative percent change 8.63%); however, these pooled short-term benefits did not meet our predefined thresholds for clinical relevance (i.e. 1.3 points for pain; 3.57 points for function) and there was substantial statistical heterogeneity. 
Additionally, restriction to sham-controlled trials using shams judged most likely to adequately blind participants to treatment assignment (which were also the same shams judged most likely to have physiological activity), reduced heterogeneity and resulted in pooled short-term benefits of acupuncture that were smaller and non-significant. In comparison with sham acupuncture at the six-month follow-up, acupuncture showed borderline statistically significant, clinically irrelevant improvements in osteoarthritis pain (-0.10, -0.21 to 0.01; 0.4 point greater improvement than sham on 20 point scale; absolute percent change 1.81%; relative percent change 4.06%; 4 trials; 1399 participants) and function (-0.11, -0.22 to 0.00; 1.2 point greater improvement than sham on 68 point scale; absolute percent change 1.79%; relative percent change 3.89%). In a secondary analysis versus a waiting list control, acupuncture was associated with statistically significant, clinically relevant short-term improvements in osteoarthritis pain (-0.96, -1.19 to -0.72; 14.5 point greater improvement than sham on 100 point scale; absolute percent change 14.5%; relative percent change 29.14%; 4 trials; 884 participants) and function (-0.89, -1.18 to -0.60; 13.0 point greater improvement than sham on 100 point scale; absolute percent change 13.0%; relative percent change 25.21%). In the head-on comparisons of acupuncture with the ‘supervised osteoarthritis education’ and the ‘physician consultation’ control groups, acupuncture was associated with clinically relevant short- and long-term improvements in pain and function. In the head-on comparisons of acupuncture with ‘home exercises/advice leaflet’ and ‘supervised exercise’, acupuncture was associated with similar treatment effects as the controls. Acupuncture as an adjuvant to an exercise-based physiotherapy program did not result in any greater improvements than the exercise program alone. 
Information on safety was reported in only 8 trials and even in these trials there was limited reporting and heterogeneous methods. Authors' conclusions Sham-controlled trials show statistically significant benefits; however, these benefits are small, do not meet our pre-defined thresholds for clinical relevance, and are probably due at least partially to placebo effects from incomplete blinding. Waiting list-controlled trials of acupuncture for peripheral joint osteoarthritis suggest statistically significant and clinically relevant benefits, much of which may be due to expectation or placebo effects. PMID:20091527

  12. Variability of rainfall over Lake Kariba catchment area in the Zambezi river basin, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Muchuru, Shepherd; Botai, Joel O.; Botai, Christina M.; Landman, Willem A.; Adeola, Abiodun M.

    2016-04-01

    In this study, average monthly and annual rainfall totals recorded during the period 1970 to 2010 by a network of 13 stations across the Lake Kariba catchment area of the Zambezi river basin were analyzed in order to characterize the spatial-temporal variability of rainfall across the catchment area. In the analysis, the data were subjected to intervention and homogeneity analysis using the Cumulative Summation (CUSUM) technique and to step-change analysis using the rank-sum test. Furthermore, rainfall variability was characterized by trend analysis using the non-parametric Mann-Kendall statistic. Additionally, the rainfall series were decomposed and their spectral characteristics derived using Cross Wavelet Transform (CWT) and Wavelet Coherence (WC) analysis. The advantage of using wavelet-based parameters is that they vary in time and can therefore be used to quantitatively detect time-scale-dependent correlations and phase shifts between rainfall time series at various localized time-frequency scales. The annual and seasonal rainfall series were homogeneous and demonstrated no apparent significant shifts. According to the inhomogeneity classification, the rainfall series recorded across the Lake Kariba catchment area belonged to categories A (useful) and B (doubtful), i.e., zero to one and two absolute tests, respectively, rejected the null hypothesis (at the 5 % significance level). Lastly, the long-term variability of the rainfall series across the Lake Kariba catchment area exhibited non-significant positive and negative trends, with coherent oscillatory modes that are constantly locked in phase in the Morlet wavelet space.
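    The non-parametric Mann-Kendall statistic used for the trend analysis can be sketched in a few lines. This minimal version omits the tie correction that the full test includes, and the example series is invented:

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test without tie correction: returns (S, Z, p)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # S = sum of sign(x_j - x_i) over all pairs i < j
    s = sum(int(np.sign(x[i + 1:] - x[i]).sum()) for i in range(n - 1))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var_s)    # continuity correction
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided normal p-value
    return s, z, p

s, z, p = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])      # strictly rising: S = 45
```

    A non-significant trend, as reported for the Lake Kariba series, corresponds to a p-value above the chosen significance level.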

  13. Variability of temperature sensitivity of extreme precipitation from a regional-to-local impact scale perspective

    NASA Astrophysics Data System (ADS)

    Schroeer, K.; Kirchengast, G.

    2016-12-01

    Relating precipitation intensity to temperature is a popular approach for assessing potential changes in extreme events in a warming climate, motivated by potential increases in extreme-rainfall-induced hazards such as flash flooding. It has not been addressed, however, whether the temperature-precipitation scaling approach is meaningful at the regional-to-local level, where climate and weather risks are actually managed. Substantial variability in the temperature sensitivity of extreme precipitation has been found, resulting from differing methodological assumptions as well as from the varying climatological settings of the study domains. Two aspects are consistently found: first, temperature sensitivities beyond the expected consistency with the Clausius-Clapeyron (CC) equation are a feature of short-duration, convective, sub-daily to sub-hourly high-percentile rainfall intensities at mid-latitudes. Second, exponential growth ceases or reverts at threshold temperatures that vary from region to region, as moisture supply becomes limited. Analyses of pooled data, or of single or dispersed stations over large areas, make it difficult to estimate the consequences in terms of local climate risk. In this study we test the meaningfulness of the scaling approach from an impact-scale perspective. Temperature sensitivities are assessed using quantile regression on hourly and sub-hourly precipitation data from 189 stations in the Austrian south-eastern Alpine region. The observed scaling rates vary substantially, but distinct regional and seasonal patterns emerge. High sensitivity exceeding CC scaling is seen on the 10-minute scale more than on the hourly scale, in storms shorter than two hours, and in shoulder seasons, but it is not necessarily a significant feature of the extremes. To be impact relevant, change rates need to be linked to absolute rainfall amounts. 
We show that high scaling rates occur in lower temperature conditions and thus have smaller effect on absolute precipitation intensities. While reporting of mere percentage numbers can be misleading, scaling studies can add value to process understanding on the local scale, if the factors that influence scaling rates are considered from both a methodological and a physical perspective.
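    The binning-and-quantile estimate behind such scaling rates can be sketched on synthetic data (the study itself uses quantile regression). Here a 7 %/°C generative rate stands in for Clausius-Clapeyron scaling, and the temperatures and intensities are simulated, not the station data:

```python
import numpy as np

rng = np.random.default_rng(1)
cc = 0.07                                         # ~7% intensity growth per °C (CC rate)

# Synthetic sub-hourly intensities whose distribution scales exponentially with T
temp = rng.uniform(5.0, 25.0, 40000)
intensity = np.exp(cc * temp) * rng.lognormal(0.0, 0.5, temp.size)

# 95th percentile of intensity in 2 °C temperature bins
edges = np.arange(5.0, 26.0, 2.0)
mids = 0.5 * (edges[:-1] + edges[1:])
q95 = np.array([np.quantile(intensity[(temp >= lo) & (temp < hi)], 0.95)
                for lo, hi in zip(edges[:-1], edges[1:])])

# Exponential growth rate from a log-linear fit of the binned percentiles
slope = np.polyfit(mids, np.log(q95), 1)[0]
rate_pct = 100.0 * (np.exp(slope) - 1.0)          # recovered scaling, ~7 %/°C
```

    As the abstract stresses, such a percentage rate is only impact relevant once multiplied by the absolute intensities at the temperatures where it applies.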

  14. Angular scale expansion theory and the misperception of egocentric distance in locomotor space.

    PubMed

    Durgin, Frank H

    Perception is crucial for the control of action, but perception need not be scaled accurately to produce accurate actions. This paper reviews evidence for an elegant new theory of locomotor space perception that is based on the dense coding of angular declination so that action control may be guided by richer feedback. The theory accounts for why so much direct-estimation data suggests that egocentric distance is underestimated despite the fact that action measures have been interpreted as indicating accurate perception. Actions are calibrated to the perceived scale of space and thus action measures are typically unable to distinguish systematic (e.g., linearly scaled) misperception from accurate perception. Whereas subjective reports of the scaling of linear extent are difficult to evaluate in absolute terms, study of the scaling of perceived angles (which exist in a known scale, delimited by vertical and horizontal) provides new evidence regarding the perceptual scaling of locomotor space.
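    The geometry at stake is compact: for eye height h, a ground target seen at angular declination θ lies at distance h/tan θ, so any perceptual expansion of θ compresses perceived distance. A sketch with a hypothetical expansion gain (the 1.2 factor and eye height are illustrative values, not the paper's fitted parameters):

```python
import math

h = 1.6                                 # eye height in meters (assumed)
theta = math.radians(10.0)              # actual gaze declination to a ground target
gain = 1.2                              # hypothetical perceptual expansion of declination

actual = h / math.tan(theta)            # true distance, ~9.07 m
perceived = h / math.tan(gain * theta)  # compressed perceived distance
underestimation = 1.0 - perceived / actual
```

    Because actions calibrate to the same expanded scale, this systematic compression can coexist with accurate-looking action measures, which is the theory's central point.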

  15. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters such as size, attitude, and position therefore requires a measurement system with high precision, multiple functions, portability, and other characteristics. However, existing measuring instruments, such as the laser tracker, total station, and photogrammetry system, mostly have a single function, require station moving, and suffer other shortcomings. The laser tracker needs to work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping and can hardly achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve wide-range multi-point measurement, but its measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning along the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies: absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale, high-end equipment.

  16. Communicating cardiovascular disease risk: an interview study of General Practitioners’ use of absolute risk within tailored communication strategies

    PubMed Central

    2014-01-01

    Background Cardiovascular disease (CVD) prevention guidelines encourage assessment of absolute CVD risk - the probability of a CVD event within a fixed time period, based on the most predictive risk factors. However, few General Practitioners (GPs) use absolute CVD risk consistently, and communication difficulties have been identified as a barrier to changing practice. This study aimed to explore GPs’ descriptions of their CVD risk communication strategies, including the role of absolute risk. Methods Semi-structured interviews were conducted with a purposive sample of 25 GPs in New South Wales, Australia. Transcribed audio-recordings were thematically coded, using the Framework Analysis method to ensure rigour. Results GPs used absolute CVD risk within three different communication strategies: ‘positive’, ‘scare tactic’, and ‘indirect’. A ‘positive’ strategy, which aimed to reassure and motivate, was used for patients with low risk, determination to change lifestyle, and some concern about CVD risk. Absolute risk was used to show how they could reduce risk. A ‘scare tactic’ strategy was used for patients with high risk, lack of motivation, and a dismissive attitude. Absolute risk was used to ‘scare’ them into taking action. An ‘indirect’ strategy, where CVD risk was not the main focus, was used for patients with low risk but some lifestyle risk factors, high anxiety, high resistance to change, or difficulty understanding probabilities. Non-quantitative absolute risk formats were found to be helpful in these situations. Conclusions This study demonstrated how GPs use three different communication strategies to address the issue of CVD risk, depending on their perception of patient risk, motivation and anxiety. Absolute risk played a different role within each strategy. Providing GPs with alternative ways of explaining absolute risk, in order to achieve different communication aims, may improve their use of absolute CVD risk assessment in practice. PMID:24885409

  17. Educational inequalities in mortality over four decades in Norway: prospective study of middle aged men and women followed for cause specific mortality, 1960-2000.

    PubMed

    Strand, Bjørn Heine; Grøholt, Else-Karin; Steingrímsdóttir, Olöf Anna; Blakely, Tony; Graff-Iversen, Sidsel; Naess, Øyvind

    2010-02-23

    To determine the extent to which educational inequalities in relation to mortality widened in Norway during 1960-2000 and which causes of death were the main drivers of this disparity. Nationally representative prospective study. Four cohorts of the Norwegian population aged 45-64 years in 1960, 1970, 1980, and 1990 and followed up for mortality over 10 years. 359 547 deaths and 32 904 589 person years. All cause mortality and deaths due to cancer of lung, trachea, or bronchus; other cancer; cardiovascular diseases; suicide; external causes; chronic lower respiratory tract diseases; or other causes. Absolute and relative indices of inequality were used to present differences in mortality by educational level (basic, secondary, and tertiary). Mortality fell from the 1960s to the 1990s in all educational groups. At the same time the proportion of adults in the basic education group, with the highest mortality, decreased substantially. As mortality dropped more among those with the highest level of education, inequalities widened. Absolute inequalities in mortality denoting deaths among the basic education groups minus deaths among the high education groups doubled in men and increased by a third in women. This is equivalent to an increase in the slope index of inequality of 105% in men and 32% in women. Inequalities on a relative scale widened more, from 1.33 to 2.24 among men (P=0.01) and from 1.52 to 2.19 among women (P=0.05). Among men, absolute inequalities mainly increased as a result of cardiovascular diseases, lung cancer, and chronic lower respiratory tract diseases. Among women this was mainly due to lung cancer and chronic lower respiratory tract diseases. Unlike the situation in men, absolute inequalities in deaths due to cardiovascular causes narrowed among women. Chronic lower respiratory tract diseases contributed more to the disparities in inequalities among women than among men. All educational groups showed a decline in mortality. 
Nevertheless, and despite the fact that the Norwegian welfare model is based on an egalitarian ideology, educational inequalities in mortality among middle aged people in Norway are substantial and increased during 1960-2000.
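
    The slope and relative indices of inequality (SII and RII) used in this abstract can be illustrated with a small sketch: mortality rates are regressed on each group's midpoint cumulative population rank (ridit score), weighted by group size. The education shares and rates below are hypothetical round numbers, not the study's data, and the exact RII convention (ratio of predicted extremes) is one of several in use.

```python
def inequality_indices(shares, rates):
    """Slope (SII) and relative (RII) indices of inequality.

    shares: population fraction of each education group, ordered from
            basic to tertiary; rates: mortality rate of each group.
    """
    # midpoint cumulative rank (ridit score) of each group
    ridits, cum = [], 0.0
    for s in shares:
        ridits.append(cum + s / 2.0)
        cum += s
    # population-weighted least-squares regression of rate on rank
    mx = sum(s * x for s, x in zip(shares, ridits))
    my = sum(s * y for s, y in zip(shares, rates))
    cov = sum(s * (x - mx) * (y - my) for s, x, y in zip(shares, ridits, rates))
    var = sum(s * (x - mx) ** 2 for s, x in zip(shares, ridits))
    slope = cov / var                       # negative: mortality falls with rank
    intercept = my - slope * mx
    sii = -slope                            # predicted rate at rank 0 minus rank 1
    rii = intercept / (intercept + slope)   # ratio of the two predicted extremes
    return sii, rii

# hypothetical example: half the population with basic education
sii, rii = inequality_indices([0.5, 0.3, 0.2], [10.0, 6.0, 4.0])
```

An RII above 1 indicates higher mortality at the disadvantaged end of the education spectrum; the SII expresses the same gradient as an absolute rate difference.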

  18. Decomposing health inequality with population-based surveys: a case study in Rwanda.

    PubMed

    Liu, Kai; Lu, Chunling

    2018-05-10

    Ensuring equal access to care and providing financial risk protection are at the center of the global health agenda. While Rwanda has made impressive progress in improving health outcomes, inequalities in medical care utilization and household catastrophic health spending (HCHS) between the impoverished and non-impoverished populations persist. Decomposing inequalities will help us understand the factors contributing to inequalities and design effective policy instruments for reducing them. This study aims to decompose the inequalities in medical care utilization among those reporting illnesses and HCHS between the poverty and non-poverty groups in Rwanda. Using the 2005 and 2010 nationally representative Integrated Living Conditions Surveys, our analysis focuses on measuring contributions to inequalities from poverty status and other sources. We conducted multivariate logistic regression analysis to obtain poverty's contribution to inequalities by controlling for all observed covariates. We used a multivariate nonlinear decomposition method with logistic regression models to partition the relative and absolute contributions from other sources to inequalities due to compositional or response effects. Poverty status accounted for the majority of inequalities in medical care utilization (absolute contribution 0.093 in 2005 and 0.093 in 2010) and HCHS (absolute contribution 0.070 in 2005 and 0.032 in 2010). Health insurance status (absolute contribution 0.0076 in 2005 and 0.0246 in 2010) and travel time to health centers (absolute contribution 0.0025 in 2005 and 0.0014 in 2010) were significant contributors to inequality in medical care utilization. Health insurance status (absolute contribution 0.0021 in 2005 and 0.0011 in 2010), having under-five children (absolute contribution 0.0012 in 2005 and 0.0011 in 2010), and having disabled family members (absolute contribution 0.0002 in 2005 and 0.0001 in 2010) were significant contributors to inequality in HCHS. 
Between 2005 and 2010, the main sources of the inequalities remained unchanged. Expanding insurance coverage and reducing travel time to health facilities for those living in poverty could be used as policy instruments to mitigate inequalities in medical care utilization and HCHS between the poverty and non-poverty groups.
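
    The compositional-versus-response partition described above can be sketched with a simple twofold (Oaxaca-style) decomposition on logit-predicted probabilities. Everything here is hypothetical: the covariates (insurance status, travel time), the coefficient vectors, and the tiny samples stand in for a fitted model on real survey data; the study's own method is more elaborate.

```python
import math

def plogit(x, b):
    # predicted probability from a fitted logistic model (coefficients given)
    return 1.0 / (1.0 + math.exp(-(b[0] + sum(bi * xi for bi, xi in zip(b[1:], x)))))

def twofold_decomposition(X_nonpoor, X_poor, b_nonpoor, b_poor):
    p_np = sum(plogit(x, b_nonpoor) for x in X_nonpoor) / len(X_nonpoor)
    p_p = sum(plogit(x, b_poor) for x in X_poor) / len(X_poor)
    # counterfactual: poor group's characteristics, non-poor group's coefficients
    p_cf = sum(plogit(x, b_nonpoor) for x in X_poor) / len(X_poor)
    composition = p_np - p_cf   # explained by differing covariate distributions
    response = p_cf - p_p       # explained by differing coefficients (returns)
    return p_np - p_p, composition, response

# hypothetical covariates per person: (insured, travel_time_hours)
gap, comp, resp = twofold_decomposition(
    X_nonpoor=[(1, 0.5), (1, 1.0), (0, 0.5)],
    X_poor=[(0, 2.0), (1, 1.5), (0, 1.0)],
    b_nonpoor=(0.2, 1.0, -0.5),
    b_poor=(0.0, 0.8, -0.6),
)
```

By construction the gap equals the sum of the composition and response effects, which is what makes the partition a decomposition rather than just two regressions.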

  19. Cosmological constraints on neutrinos with Planck data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spinelli, M.

    2015-07-15

    Neutrinos take part in the dance of the evolving Universe, influencing its history from leptogenesis, to Big Bang nucleosynthesis, to late-time structure formation. This makes cosmology, and in particular one of its primary observables, the Cosmic Microwave Background (CMB), an unusual but valuable tool for testing neutrino physics. The best measurement to date of full-sky CMB anisotropies comes from the Planck satellite, launched in 2009 by the European Space Agency (ESA) as the successor to COBE and WMAP. Testing Planck data against precise theoretical predictions allows us to shed light on various interesting open questions, such as the value of the absolute scale of neutrino masses or their energy density. We review here the results concerning neutrinos obtained by the Planck Collaboration in the 2013 data release.

  20. Cosmological constraints on neutrinos with Planck data

    NASA Astrophysics Data System (ADS)

    Spinelli, M.

    2015-07-01

    Neutrinos take part in the dance of the evolving Universe, influencing its history from leptogenesis, to Big Bang nucleosynthesis, to late-time structure formation. This makes cosmology, and in particular one of its primary observables, the Cosmic Microwave Background (CMB), an unusual but valuable tool for testing neutrino physics. The best measurement to date of full-sky CMB anisotropies comes from the Planck satellite, launched in 2009 by the European Space Agency (ESA) as the successor to COBE and WMAP. Testing Planck data against precise theoretical predictions allows us to shed light on various interesting open questions, such as the value of the absolute scale of neutrino masses or their energy density. We review here the results concerning neutrinos obtained by the Planck Collaboration in the 2013 data release.

  1. Anomalous mobility of a driven active particle in a steady laminar flow

    NASA Astrophysics Data System (ADS)

    Cecconi, F.; Puglisi, A.; Sarracino, A.; Vulpiani, A.

    2018-07-01

    We study, via extensive numerical simulations, the force–velocity curve of an active particle advected by a steady laminar flow, in the nonlinear response regime. Our model for an active particle relies on a colored noise term that mimics its persistent motion over a time scale τ. We find that the active particle dynamics shows non-trivial effects, such as negative differential and absolute mobility (NDM and ANM, respectively). We explore the space of the model parameters and compare the observed behaviors with those obtained for a passive particle (τ = 0) advected by the same laminar flow. Our results show that the phenomena of NDM and ANM are quite robust with respect to the details of the considered noise: in particular, for finite τ a more complex force–velocity relation can be observed.
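
    The colored-noise driving described above can be sketched with an Euler scheme: an Ornstein-Uhlenbeck process with correlation time tau feeds into an overdamped equation of motion. This is a minimal illustration only; the laminar-flow advection field of the paper, which is what produces the negative-mobility regimes, is omitted, and all parameter values are assumptions.

```python
import math
import random

def mean_velocity(F, tau, D=1.0, dt=1e-3, steps=200_000, seed=1):
    # overdamped particle: dx/dt = F + eta(t), where eta is
    # Ornstein-Uhlenbeck noise with correlation time tau and
    # stationary variance D / tau (white-noise limit as tau -> 0)
    rng = random.Random(seed)
    x, eta = 0.0, 0.0
    for _ in range(steps):
        eta += (-eta / tau) * dt + (math.sqrt(2.0 * D) / tau) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += (F + eta) * dt
    return x / (steps * dt)  # long-time average velocity; mobility ~ v / F

v = mean_velocity(F=1.0, tau=0.5)
```

Without the flow field the noise averages out and the mean velocity tracks the applied force; plotting v against F with the advection term included is what reveals NDM and ANM.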

  2. Performance of Single Friction Pendulum bearing for isolated buildings subjected to seismic actions in Vietnam

    NASA Astrophysics Data System (ADS)

    Nguyen, N. V.; Nguyen, C. H.; Hoang, H. P.; Huong, K. T.

    2018-04-01

    Using structural control technology in earthquake-resistant design of buildings in Vietnam is very limited. In this paper, a performance evaluation of using Single Friction Pendulum (SFP) bearings for seismically isolated buildings under earthquake conditions in Vietnam is presented. A two-dimensional (2-D) model of a 5-storey building subjected to earthquakes is analyzed in the time domain. The model is analyzed for two cases: with and without the SFP bearing. The ground acceleration data is selected and scaled to suit the design acceleration in Hanoi according to Standard TCVN 9386:2012. It is shown that the seismically isolated building meets the performance objectives while achieving a 91% reduction in base shear and a significant decrease in the inter-story drift and absolute acceleration of each story.

  3. Population connectivity of deep-sea corals: Chapter 12

    USGS Publications Warehouse

    Morrison, Cheryl L.; Baco, Amy; Nizinski, Martha S.; Coykendall, D. Katharine; Demopoulos, Amanda W. J.; Cho, Walter; Shank, Tim

    2015-01-01

    Identifying the scale of dispersal among habitats has been a challenge in marine ecology for decades (Grantham et al., 2003; Kinlan & Gaines, 2003; Hixon, 2011). Unlike terrestrial habitats in which barriers to dispersal may be obvious (e.g. mountain ranges, rivers), few absolute barriers to dispersal are recognizable in the sea. Additionally, most marine species have complex life cycles in which juveniles are more mobile than adults. As such, the dynamics of populations may involve processes in distant habitats that are coupled by a transport mechanism. Studies of population connectivity try to quantify the transport, or dispersal of individuals, among geographically separated populations. For benthic marine species, such as corals and demersal fishes, colonization of new populations occurs primarily by dispersal of larvae (Figure 1; Shank, 2010). Successful dispersal and recruitment, followed by maturation and reproduction of these new migrants ensures individuals contribute to the gene pool (Hedgecock, 2007). Thus, successful dispersal links and cohesively maintains spatially separated sub-populations. At shorter time scales (10-100s years), connectivity regulates community structure by influencing the genetic composition, diversity and demographic stability of the population, whereas at longer time scales (1000s years), geographic distributions are affected (McClain and Hardy, 2010). Alternatively, populations may become extinct or speciation may occur if connectivity ceases (Cowen et al., 2007). Therefore, the genetic exchange of individuals between populations is fundamental to the short-term resilience and long-term maintenance of the species. However, for the vast majority of marine species, population connectivity remains poorly understood.

  4. Quantifying predictability in a model with statistical features of the atmosphere

    PubMed Central

    Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya

    2002-01-01

    The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863

  5. Absolute quantification by droplet digital PCR versus analog real-time PCR

    PubMed Central

    Hindson, Christopher M; Chevillet, John R; Briggs, Hilary A; Gallichotte, Emily N; Ruf, Ingrid K; Hindson, Benjamin J; Vessella, Robert L; Tewari, Muneesh

    2014-01-01

    Nanoliter-sized droplet technology paired with digital PCR (ddPCR) holds promise for highly precise, absolute nucleic acid quantification. Our comparison of microRNA quantification by ddPCR and real-time PCR revealed greater precision (coefficients of variation decreased by 37–86%) and improved day-to-day reproducibility (by a factor of seven) of ddPCR but with comparable sensitivity. When we applied ddPCR to serum microRNA biomarker analysis, this translated to superior diagnostic performance for identifying individuals with cancer. PMID:23995387
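
    The absolute quantification that ddPCR provides rests on Poisson statistics over droplet occupancy: from the fraction of positive droplets one recovers the mean number of target copies per droplet without a standard curve. A minimal sketch follows; the droplet volume is an assumed round number for illustration, not an instrument specification.

```python
import math

def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
    # Poisson correction: a droplet reads positive if it holds >= 1 copy,
    # so P(positive) = 1 - exp(-lambda), where lambda = mean copies/droplet
    p = n_positive / n_total
    lam = -math.log(1.0 - p)
    return lam / (droplet_volume_nl * 1e-3)  # nl -> ul

conc = ddpcr_copies_per_ul(10_000, 20_000)  # half the droplets positive
```

The correction matters most at high occupancy: at 50% positive droplets the naive count would understate the concentration by about 28%, since many positive droplets carry more than one copy.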

  6. Can real time location system technology (RTLS) provide useful estimates of time use by nursing personnel?

    PubMed

    Jones, Terry L; Schlegel, Cara

    2014-02-01

    Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to ensure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real time location systems (RTLS), a pilot study was conducted to assess efficacy (method agreement) of RTLS time use; inter-rater reliability of RTLS time-use estimates; and associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted. © 2013 Wiley Periodicals, Inc.

  7. Precision atomic beam density characterization by diode laser absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxley, Paul; Wihbey, Joseph

    2016-09-15

    We provide experimental and theoretical details of a simple technique to determine absolute line-of-sight integrated atomic beam densities based on resonant laser absorption. In our experiments, a thermal lithium beam is chopped on and off while the frequency of a laser crossing the beam at right angles is scanned slowly across the resonance transition. A lock-in amplifier detects the laser absorption signal at the chop frequency from which the atomic density is determined. The accuracy of our experimental method is confirmed using the related technique of wavelength modulation spectroscopy. For beams which absorb of order 1% of the incident laser light, our measurements allow the beam density to be determined to an accuracy better than 5% and with a precision of 3% on a time scale of order 1 s. Fractional absorptions of order 10⁻⁵ are detectable on a one-minute time scale when we employ a double laser beam technique which limits laser intensity noise. For a lithium beam with a thickness of 9 mm, we have measured atomic densities as low as 5 × 10⁴ atoms cm⁻³. The simplicity of our technique and the details we provide should allow our method to be easily implemented in most atomic or molecular beam apparatuses.

  8. Precision atomic beam density characterization by diode laser absorption spectroscopy.

    PubMed

    Oxley, Paul; Wihbey, Joseph

    2016-09-01

    We provide experimental and theoretical details of a simple technique to determine absolute line-of-sight integrated atomic beam densities based on resonant laser absorption. In our experiments, a thermal lithium beam is chopped on and off while the frequency of a laser crossing the beam at right angles is scanned slowly across the resonance transition. A lock-in amplifier detects the laser absorption signal at the chop frequency from which the atomic density is determined. The accuracy of our experimental method is confirmed using the related technique of wavelength modulation spectroscopy. For beams which absorb of order 1% of the incident laser light, our measurements allow the beam density to be determined to an accuracy better than 5% and with a precision of 3% on a time scale of order 1 s. Fractional absorptions of order 10⁻⁵ are detectable on a one-minute time scale when we employ a double laser beam technique which limits laser intensity noise. For a lithium beam with a thickness of 9 mm, we have measured atomic densities as low as 5 × 10⁴ atoms cm⁻³. The simplicity of our technique and the details we provide should allow our method to be easily implemented in most atomic or molecular beam apparatuses.
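
    The conversion from a measured absorption fraction to a line-of-sight integrated density follows the Beer-Lambert law. A minimal sketch, assuming a known resonant absorption cross section; the cross-section value below is a placeholder, not the lithium resonance value, and saturation and line-shape effects are ignored.

```python
import math

def column_density(absorbed_fraction, sigma_cm2):
    # Beer-Lambert: I/I0 = exp(-sigma * N)  =>  N = -ln(1 - f) / sigma
    return -math.log(1.0 - absorbed_fraction) / sigma_cm2

def volume_density(absorbed_fraction, sigma_cm2, path_cm):
    # mean number density along a beam of thickness path_cm
    return column_density(absorbed_fraction, sigma_cm2) / path_cm

# 1% absorption through a 0.9 cm thick beam, placeholder cross section
n = volume_density(0.01, 1.0e-13, 0.9)
```

For weak absorption the logarithm is nearly linear in the absorbed fraction, which is why percent-level absorption measurements translate almost directly into percent-level density uncertainties.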

  9. How regularity representations of short sound patterns that are based on relative or absolute pitch information establish over time: An EEG study.

    PubMed

    Bader, Maria; Schröger, Erich; Grimm, Sabine

    2017-01-01

    The recognition of sound patterns in speech or music (e.g., a melody that is played in different keys) requires knowledge about pitch relations between successive sounds. We investigated the formation of regularity representations for sound patterns in an event-related potential (ERP) study. A pattern, which consisted of six concatenated 50 ms tone segments differing in fundamental frequency, was presented 1, 2, 3, 6, or 12 times and then replaced by another pattern by randomly changing the pitch of the tonal segments (roving standard paradigm). In an absolute repetition condition, patterns were repeated identically, whereas in a transposed condition, only the pitch relations of the tonal segments of the patterns were repeated, while the entire patterns were shifted up or down in pitch. During ERP measurement participants were not informed about the pattern repetition rule, but were instructed to discriminate rarely occurring targets of lower or higher sound intensity. ERPs for pattern changes (mismatch negativity, MMN; and P3a) and for pattern repetitions (repetition positivity, RP) revealed that the auditory system is able to rapidly extract regularities from unfamiliar complex sound patterns even when absolute pitch varies. Yet, enhanced RP and P3a amplitudes, and improved behavioral performance measured in a post-hoc test, in the absolute as compared with the transposed condition suggest that it is more difficult to encode patterns without absolute pitch information. This is explained by dissociable processing of standards and deviants as well as a back propagation mechanism to early sensory processing stages, which is effective after fewer repetitions of a standard stimulus for absolute pitch.

  10. How regularity representations of short sound patterns that are based on relative or absolute pitch information establish over time: An EEG study

    PubMed Central

    Schröger, Erich; Grimm, Sabine

    2017-01-01

    The recognition of sound patterns in speech or music (e.g., a melody that is played in different keys) requires knowledge about pitch relations between successive sounds. We investigated the formation of regularity representations for sound patterns in an event-related potential (ERP) study. A pattern, which consisted of six concatenated 50 ms tone segments differing in fundamental frequency, was presented 1, 2, 3, 6, or 12 times and then replaced by another pattern by randomly changing the pitch of the tonal segments (roving standard paradigm). In an absolute repetition condition, patterns were repeated identically, whereas in a transposed condition, only the pitch relations of the tonal segments of the patterns were repeated, while the entire patterns were shifted up or down in pitch. During ERP measurement participants were not informed about the pattern repetition rule, but were instructed to discriminate rarely occurring targets of lower or higher sound intensity. ERPs for pattern changes (mismatch negativity, MMN; and P3a) and for pattern repetitions (repetition positivity, RP) revealed that the auditory system is able to rapidly extract regularities from unfamiliar complex sound patterns even when absolute pitch varies. Yet, enhanced RP and P3a amplitudes, and improved behavioral performance measured in a post-hoc test, in the absolute as compared with the transposed condition suggest that it is more difficult to encode patterns without absolute pitch information. This is explained by dissociable processing of standards and deviants as well as a back propagation mechanism to early sensory processing stages, which is effective after fewer repetitions of a standard stimulus for absolute pitch. PMID:28472146

  11. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Note on Two-Phase Phenomena in Financial Markets

    NASA Astrophysics Data System (ADS)

    Jiang, Shi-Mei; Cai, Shi-Min; Zhou, Tao; Zhou, Pei-Ling

    2008-06-01

    The two-phase behaviour in financial markets actually means the bifurcation phenomenon, which represents the change of the conditional probability from an unimodal to a bimodal distribution. We investigate the bifurcation phenomenon in the Hang-Seng index. It is observed that the bifurcation phenomenon in a financial index is not universal, but emerges only under certain conditions. For the Hang-Seng index and randomly generated time series, the phenomenon emerges only when the power-law exponent of the absolute increment distribution is between 1 and 2 with an appropriate period. Simulations on a randomly generated time series suggest that the bifurcation phenomenon itself is governed by the statistics of the absolute increments, and thus may not reflect essential financial behaviours. However, even under the same distribution of absolute increments, the range where the bifurcation phenomenon occurs differs considerably between the real market and the artificial data, which may reflect certain market information.

  12. The value of the internship for radiation oncology training: results of a survey of current and recent trainees.

    PubMed

    Baker, Stephen R; Romero, Michelle J; Geannette, Christian; Patel, Amish

    2009-07-15

    Although a 12-month clinical internship is the traditional precursor to a radiation oncology residency, the continuance of this mandated training sequence has been questioned. This study was performed to evaluate the perceptions of current radiation oncology residents with respect to the value of their internship experience. A survey was sent to all US radiation oncology residents. Each was queried about whether they considered the internship to be a necessary prerequisite for a career as a radiation oncologist and as a physician. Preferences were rated on a Likert scale (1 = not at all necessary to 5 = absolutely necessary). Seventy-one percent considered the internship year mostly (Likert scale 4) or absolutely necessary (Likert scale 5) for their development as a radiation oncologist, whereas 19.1% answered hardly or not at all (Likert scales 2 and 1, respectively). With respect to their collective considerations about the impact of the internship year on their development as a physician, 89% had a positive response, 5.8% had a negative response, and 4.7% had no opinion. Although both groups viewed the preliminary year favorably, affirmative answers were more frequent among erstwhile internal medicine interns than among former transitional program interns. A majority of radiation oncology residents positively acknowledged their internship for their development as a specialist, and an even greater majority valued it for their development as a physician. This affirmative opinion was registered more frequently by those completing an internal medicine internship compared with a transitional internship.

  13. Inequalities in mortality during and after restructuring of the New Zealand economy: repeated cohort studies

    PubMed Central

    2008-01-01

    Objectives To determine whether disparities between income and mortality changed during a period of major structural and macroeconomic reform and to estimate the changing contribution of different diseases to these disparities. Design Repeated cohort studies. Data sources 1981, 1986, 1991, 1996, and 2001 censuses linked to mortality data. Population Total New Zealand population, ages 1-74 years. Methods Mortality rates standardised for age and ethnicity were calculated for each census cohort by level of household income. Standardised rate differences and rate ratios, and slope and relative indices of inequality (SII and RII), were calculated to measure disparities on both absolute and relative scales. Results All cause mortality rates declined over the 25 year study period in all groups stratified by sex, age, and income, except for 25-44 year olds of both sexes on low incomes among whom there was little change. In all age groups pooled, relative inequalities increased from 1981-4 to 1996-9 (RIIs increased from 1.85 (95% confidence interval 1.67 to 2.04) to 2.54 (2.29 to 2.82) for males and from 1.54 (1.35 to 1.76) to 2.12 (1.88 to 2.39) for females), then stabilised in 2001-4 (RIIs of 2.60 (2.34 to 2.89) and 2.18 (1.93 to 2.45), respectively). Absolute inequalities were stable over time, with a possible fall from 1996-9 to 2001-4. Cardiovascular disease was the major contributor to the observed disparities between income and mortality but decreased in importance from 45% in 1981-4 to 33% in 2001-4 for males and from 50% to 29% for females. The corresponding contribution of cancer increased from 16% to 22% for males and from 12% to 25% for females. Conclusions During and after restructuring of the economy disparities in mortality between income groups in New Zealand increased in relative terms (but not in absolute terms), but it is difficult to confidently draw a causal link with structural reforms. 
The contribution of different causes of death to this inequality changed over time, indicating a need to re-prioritise health policy accordingly. PMID:18218998

  14. Seasonality of absolute humidity explains seasonality of influenza-like illness in Vietnam.

    PubMed

    Thai, Pham Quang; Choisy, Marc; Duong, Tran Nhu; Thiem, Vu Dinh; Yen, Nguyen Thu; Hien, Nguyen Tran; Weiss, Daniel J; Boni, Maciej F; Horby, Peter

    2015-12-01

    Experimental and ecological studies have shown the role of climatic factors in driving the epidemiology of influenza. In particular, low absolute humidity (AH) has been shown to increase influenza virus transmissibility and has been identified to explain the onset of epidemics in temperate regions. Here, we aim to study the potential climatic drivers of influenza-like illness (ILI) epidemiology in Vietnam, a tropical country characterized by a high diversity of climates. We specifically focus on quantifying and explaining the seasonality of ILI. We used 18 years (1993-2010) of monthly ILI notifications aggregated by province (52) and monthly climatic variables (minimum, mean, maximum temperatures, absolute and relative humidities, rainfall and hours of sunshine) from 67 weather stations across Vietnam. Seasonalities were quantified from global wavelet spectra, using the value of the power at the period of 1 year as a measure of the intensity of seasonality. The 7 climatic time series were characterized by 534 summary statistics which were entered into a regression tree to identify factors associated with the seasonality of AH. Results were extrapolated to the global scale using simulated climatic times series from the NCEP/NCAR project. The intensity of ILI seasonality in Vietnam is best explained by the intensity of AH seasonality. We find that ILI seasonality is weak in provinces experiencing weak seasonal fluctuations in AH (annual power <17.6), whereas ILI seasonality is strongest in provinces with pronounced AH seasonality (power >17.6). In Vietnam, AH and ILI are positively correlated. Our results identify a role for AH in driving the epidemiology of ILI in a tropical setting. However, in contrast to temperate regions, high rather than low AH is associated with increased ILI activity. Fluctuation in AH may be the climate factor that underlies and unifies the seasonality of ILI in both temperate and tropical regions. 
Alternatively, the mechanism of action of AH on disease transmission may be different in cold-dry versus hot-humid settings. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
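
    The intensity-of-seasonality measure described above (spectral power at the one-year period) can be approximated with a plain Fourier component when the record spans an integer number of years; the wavelet machinery of the study is replaced here by a single DFT coefficient, and the normalization is an arbitrary choice for illustration.

```python
import math

def annual_power(series, samples_per_year=12):
    # power of the discrete Fourier component at one cycle per year,
    # a simple stand-in for wavelet power at the 1-year period
    n = len(series)
    mean = sum(series) / n
    k = n // samples_per_year  # frequency index of one cycle per year
    re = sum((series[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = sum((series[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
    return (re * re + im * im) / n

# strongly seasonal vs flat monthly series over five years
seasonal = [math.sin(2 * math.pi * t / 12.0) for t in range(60)]
flat = [5.0] * 60
strong, weak = annual_power(seasonal), annual_power(flat)
```

Ranking provinces by such a statistic, computed for both ILI counts and absolute humidity, is the kind of comparison that underlies the seasonality threshold reported in the abstract.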

  15. Towards an Absolute Chronology for the Aegean Iron Age: New Radiocarbon Dates from Lefkandi, Kalapodi and Corinth

    PubMed Central

    Toffolo, Michael B.; Fantalkin, Alexander; Lemos, Irene S.; Felsch, Rainer C. S.; Niemeier, Wolf-Dietrich; Sanders, Guy D. R.; Finkelstein, Israel; Boaretto, Elisabetta

    2013-01-01

    The relative chronology of the Aegean Iron Age is robust. It is based on minute stylistic changes in the Submycenaean, Protogeometric and Geometric styles and their sub-phases. Yet, the absolute chronology of the time-span between the final stages of Late Helladic IIIC in the late second millennium BCE and the archaic colonization of Italy and Sicily toward the end of the 8th century BCE lacks archaeological contexts that can be directly related to events carrying absolute dates mentioned in Egyptian/Near Eastern historical sources, or to well-dated Egyptian/Near Eastern rulers. The small number of radiocarbon dates available for this time span is not sufficient to establish an absolute chronological sequence. Here we present a new set of short-lived radiocarbon dates from the sites of Lefkandi, Kalapodi and Corinth in Greece. We focus on the crucial transition from the Submycenaean to the Protogeometric periods. This transition is placed in the late 11th century BCE according to the Conventional Aegean Chronology and in the late 12th century BCE according to the High Aegean Chronology. Our results place it in the second half of the 11th century BCE. PMID:24386150

  16. Especially for High School Teachers

    NASA Astrophysics Data System (ADS)

    Howell, J. Emory

    1998-01-01

    Secondary School Feature Articles * Heat Capacity, Body Temperature, and Hypothermia, by Doris Kimbrough, p 48. * The Electromotive Series and Other Non-Absolute Scales, by Gavin Peckham, p 49. * Demonstrations on Paramagnetism with an Electronic Balance, by Adolf Cortel, p 61. * Toward More Performance Evaluation in Chemistry, by Sharon Rasp, p 64. A Wealth of Useful Information

  17. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use (1) dimensional analysis and (2) a pulse-based stochastic model for the simulation of synthetic aquifer structures to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
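
    The idea of characterizing the absolute estimation error as a function of standardized distance to the nearest observation can be illustrated with a 1-D toy version: a synthetic, exponentially correlated "head" field is sampled at regular intervals, values in between are linearly interpolated, and the absolute errors are binned by distance normalized with the correlation length. Every model choice here (AR(1) field, linear interpolation, parameter values) is a simplification for illustration, not the paper's method.

```python
import math
import random

def abs_error_vs_distance(n=4000, spacing=40, corr=10.0, seed=2):
    rng = random.Random(seed)
    # exponentially correlated synthetic field via an AR(1) recursion
    rho = math.exp(-1.0 / corr)
    h = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        h.append(rho * h[-1] + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0))
    # observe every `spacing` cells, interpolate elsewhere, bin |error|
    bins = {}
    for i in range(n):
        j0 = (i // spacing) * spacing
        j1 = min(j0 + spacing, n - 1)
        w = (i - j0) / (j1 - j0) if j1 > j0 else 0.0
        est = (1.0 - w) * h[j0] + w * h[j1]
        d = round(min(i - j0, j1 - i) / corr, 1)  # standardized distance
        bins.setdefault(d, []).append(abs(h[i] - est))
    return {d: sum(v) / len(v) for d, v in sorted(bins.items())}

out = abs_error_vs_distance()
```

The mean absolute error is zero at the observation points and grows toward the midpoint between them, mimicking how the full 2-D analysis conditions the error distribution on standardized distance.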

  18. DAQ Software Contributions, Absolute Scale Energy Calibration and Background Evaluation for the NOvA Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flumerfelt, Eric Lewis

    2015-08-01

    The NOvA (NuMI Off-axis νe Appearance) Experiment is a long-baseline accelerator neutrino experiment currently in its second year of operations. NOvA uses the Neutrinos from the Main Injector (NuMI) beam at Fermilab, and there are two main off-axis detectors: a Near Detector at Fermilab and a Far Detector 810 km away at Ash River, MN. The work reported herein is in support of the NOvA Experiment, through contributions to the development of data acquisition software, providing an accurate, absolute-scale energy calibration for electromagnetic showers in NOvA detector elements, crucial to the primary electron neutrino search, and through an initial evaluation of the cosmic background rate in the NOvA Far Detector, which is situated on the surface without significant overburden. Additional support work for the NOvA Experiment is also detailed, including DAQ server administration duties and a study of NOvA’s sensitivity to neutrino oscillations into a “sterile” state.

  19. Attitude Toward Ambiguity: Empirically Robust Factors in Self-Report Personality Scales.

    PubMed

    Lauriola, Marco; Foschi, Renato; Mosca, Oriana; Weller, Joshua

    2016-06-01

    Two studies were conducted to examine the factor structure of attitude toward ambiguity, a broad personality construct that refers to personal reactions to perceived ambiguous stimuli in a variety of contexts and situations. Using samples from two countries, Study 1 mapped the hierarchical structure of 133 items from seven tolerance-intolerance of ambiguity scales (N = 360, Italy; N = 306, United States). Three major factors-Discomfort with Ambiguity, Moral Absolutism/Splitting, and Need for Complexity and Novelty-were recovered in each country with high replicability coefficients across samples. In Study 2 (N = 405, Italian community sample; N = 366, English native speakers sample), we carried out a confirmatory analysis on selected factor markers. A bifactor model had an acceptable fit for each sample and reached construct-level invariance for general and group factors. Convergent validity with related traits was assessed in both studies. We conclude that attitude toward ambiguity is best represented as a multidimensional construct involving affective (Discomfort with Ambiguity), cognitive (Moral Absolutism/Splitting), and epistemic (Need for Complexity and Novelty) components. © The Author(s) 2015.

  20. In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; et al.

    2012-01-01

    The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron- plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between approx. 6 and approx. 13 GeV with an estimated uncertainty of approx. 2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
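
    The calibration described above amounts to comparing cutoff energies measured by the instrument with those predicted by particle tracing. A minimal sketch of that comparison follows; the numeric values and the simple ratio-averaging are illustrative assumptions (the actual analysis fits spectral shapes), not LAT data:

```python
def energy_scale(measured_cutoffs, predicted_cutoffs):
    """Average energy-scale factor from geomagnetic-cutoff calibration
    points: the ratio of each cutoff energy measured by the instrument
    to the value predicted by tracing charged particles in the
    geomagnetic field, averaged over geomagnetic positions."""
    ratios = [m / p for m, p in zip(measured_cutoffs, predicted_cutoffs)]
    return sum(ratios) / len(ratios)

# Hypothetical calibration points between ~6 and ~13 GeV (invented numbers)
s = energy_scale([6.1, 8.2, 13.1], [6.0, 8.0, 13.0])
# A factor near 1 means the absolute energy scale is nearly correct.
```

A measured scale factor of this kind is what lets the instrument team correct, or bound the systematic uncertainty of, the absolute energy scale.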

  1. In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; /Stanford U., HEPL /SLAC /KIPAC, Menlo Park; Ajello, M.

    The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between ~6 and ~13 GeV with an estimated uncertainty of ~2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.

  2. Absolute Thermal SST Measurements over the Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Good, W. S.; Warden, R.; Kaptchen, P. F.; Finch, T.; Emery, W. J.

    2010-12-01

    Climate monitoring and natural disaster rapid assessment require baseline measurements that can be tracked over time to distinguish anthropogenic versus natural changes to the Earth system. Disasters like the Deepwater Horizon Oil Spill require constant monitoring to assess the potential environmental and economic impacts. Absolute calibration and validation of Earth-observing sensors is needed to allow for comparison of temporally separated data sets and provide accurate information to policy makers. The Ball Experimental Sea Surface Temperature (BESST) radiometer was designed and built by Ball Aerospace to provide a well-calibrated measure of sea surface temperature (SST) from an unmanned aerial system (UAS). Currently, emissive skin SST observed by satellite infrared radiometers is validated by shipborne instruments that are expensive to deploy and can only take a few data samples along the ship track to overlap within a single satellite pixel. Implementation on a UAS will allow BESST to map the full footprint of a satellite pixel and perform averaging to remove any local variability due to the difference in footprint size of the instruments. It also enables the capability to study this sub-pixel variability to determine if smaller scale effects need to be accounted for in models to improve forecasting of ocean events. In addition to satellite sensor validation, BESST can distinguish meter scale variations in SST which could be used to remotely monitor and assess thermal pollution in rivers and coastal areas as well as study diurnal and seasonal changes to bodies of water that impact the ocean ecosystem. BESST was recently deployed on a conventional Twin Otter airplane for measurements over the Gulf of Mexico to assess the thermal properties of the ocean surface being affected by the oil spill.
Results of these measurements will be presented along with ancillary sensor data used to eliminate false signals including UV and Synthetic Aperture Radar (SAR) information. Spatial variations and day-to-day changes in the visible oil concentration on the surface of the water were observed in performing these measurements. An assessment of the thermal imagery variation will be made based on the absolute calibration of the sensor to determine if the visible variation was due to properties of the reflected light or of the actual oil composition. Comparisons with satellite data (both SAR and thermal infrared images) and buoy data will also be included.

  3. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    USGS Publications Warehouse

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  4. Yet another time about time … Part I: An essay on the phenomenology of physical time.

    PubMed

    Simeonov, Plamen L

    2015-12-01

    This paper presents yet another personal reflection on one of the most important concepts in both science and the humanities: time. This elusive notion has bothered philosophers since Plato and Aristotle, and it runs throughout human history, embracing all analytical and creative (anthropocentric) disciplines. Time has been a central theme in the physical and life sciences, philosophy, psychology, music, art and many more. A vast body of knowledge addresses this theme across different theories and categories. What has been explored concerns its nature (rational, irrational, arational), appearances/qualia, degrees, and the dimensions and scales of its conceptualization (internal, external, fractal, discrete, continuous, mechanical, quantum, local, global, etc.). Of particular interest have been parameters of time such as duration ranges, resolutions, modes (present, now, past, future), varieties of tenses (e.g. present perfect, present progressive) and some intuitive, but also fancy, phenomenological characteristics such as "arrow", "stream", "texture", "width", "depth", "density", even "scent". Perhaps the most distinct characteristic of this fundamental concept is absolute time constituting the flow of consciousness according to Husserl, the reflection of pure (human) nature without the distinction between exo and endo. This essay is a personal reflection upon time in modern physics and phenomenological philosophy. Copyright © 2015. Published by Elsevier Ltd.

  5. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting because they can solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network was compared with that of a classical time series forecasting method, namely seasonal autoregressive integrated moving average models, using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used for data preprocessing.
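
    The three accuracy measures named in the abstract are simple functions of the forecast errors. A self-contained sketch (the gold-price numbers below are invented for illustration, not the study's data):

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Mean absolute deviation (MAD), root mean square error (RMSE),
    and mean absolute percentage error (MAPE) of a forecast."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err = actual - forecast
    mad = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mape = float(100.0 * np.mean(np.abs(err / actual)))  # actuals must be nonzero
    return mad, rmse, mape

# Illustrative values only
actual = [1200.0, 1210.0, 1195.0, 1230.0]
forecast = [1190.0, 1215.0, 1200.0, 1220.0]
mad, rmse, mape = forecast_errors(actual, forecast)
```

Note that MAPE is scale-free (a percentage), which is why studies of this kind often report it alongside the scale-dependent MAD and RMSE.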

  6. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    NASA Astrophysics Data System (ADS)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter was developed. This removes frequency and/or polarization mixing, and the strict requirement on the polarization of the laser source is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated, so their optical paths do not overlap. The main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  7. Absolute measurement of undulator radiation in the extreme ultraviolet

    NASA Astrophysics Data System (ADS)

    Maezawa, H.; Mitani, S.; Suzuki, Y.; Kanamori, H.; Tamamushi, S.; Mikuni, A.; Kitamura, H.; Sasaki, T.

    1983-04-01

    The spectral brightness of undulator radiation emitted by the model PMU-1 incorporated in the SOR-RING, the dedicated synchrotron radiation source in Tokyo, has been studied in the extreme ultraviolet region from 21.6 to 72.9 eV as a function of the electron energy γ, the field parameter K, and the angle of observation θ, on an absolute scale. A series of measurements covering the first and second harmonic components of undulator radiation was compared with the fundamental formula λ_n = λ_0/(2nγ^2) · (1 + K^2/2 + γ^2θ^2), and the effects of finite emittance were studied. The brightness at the first peak was smaller than the theoretical value, while an enhanced second harmonic component was observed.
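
    The undulator equation quoted in the abstract is straightforward to evaluate numerically. The parameters below are illustrative assumptions, not the PMU-1 values:

```python
def undulator_wavelength(lam0, n, gamma, K, theta):
    """Wavelength of the n-th undulator harmonic,
    lambda_n = lam0 / (2*n*gamma^2) * (1 + K^2/2 + (gamma*theta)^2),
    where lam0 is the undulator period, gamma the Lorentz factor of the
    electrons, K the field (deflection) parameter, theta the observation
    angle relative to the axis."""
    return lam0 / (2.0 * n * gamma ** 2) * (1.0 + K ** 2 / 2.0 + (gamma * theta) ** 2)

# Hypothetical example: 6 cm period, gamma = 600, K = 1, on axis (theta = 0)
lam1 = undulator_wavelength(0.06, 1, 600.0, 1.0, 0.0)  # fundamental, in meters
```

The formula makes the two experimental handles explicit: raising K (stronger field) redshifts each harmonic, while observing off axis (theta > 0) also shifts the spectrum to longer wavelengths.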

  8. Renormalization group flow of the Higgs potential

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Sondenheimer, René

    2018-01-01

    We summarize results for local and global properties of the effective potential for the Higgs boson obtained from the functional renormalization group, which allows one to describe the effective potential as a function of both scalar field amplitude and renormalization group scale. This sheds light onto the limitations of standard estimates which rely on the identification of the two scales and helps in clarifying the origin of a possible property of meta-stability of the Higgs potential. We demonstrate that the inclusion of higher-dimensional operators induced by an underlying theory at a high scale (GUT or Planck scale) can relax the conventional lower bound on the Higgs mass derived from the criterion of absolute stability. This article is part of the Theo Murphy meeting issue `Higgs cosmology'.

  9. Measurement of Absolute Concentrations of Individual Compounds in Metabolite Mixtures by Gradient-Selective Time-Zero 1H-13C HSQC (gsHSQC0) with Two Concentration References and Fast Maximum Likelihood Reconstruction Analysis

    PubMed Central

    Hu, Kaifeng; Ellinger, James J.; Chylla, Roger A.; Markley, John L.

    2011-01-01

    Time-zero 2D 13C HSQC (HSQC0) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC0 spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero 1H-13C HSQC0 in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant time mode. Semi-automatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semi-automated gsHSQC0 with those obtained by the original manual phase-cycled HSQC0 approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture. PMID:22029275
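
    The extrapolation step described above (measuring peak volumes at increasing repetition counts, then extrapolating back to zero) can be sketched as a log-linear fit. The numbers here are synthetic, assuming a hypothetical attenuation of 20% per repetition:

```python
import numpy as np

def hsqc0_volume(increments, volumes):
    """Extrapolate peak volumes measured at repetition indices 1, 2, 3, ...
    back to 'time zero'. If the attenuation per repetition is constant,
    ln(volume) is linear in the repetition index, and the intercept gives
    the HSQC0 volume, which is directly proportional to concentration."""
    slope, intercept = np.polyfit(increments, np.log(volumes), 1)
    return float(np.exp(intercept))

# Synthetic data: a true time-zero volume of 100, attenuated 20% per repetition
v0 = hsqc0_volume([1, 2, 3], [80.0, 64.0, 51.2])
```

With the extrapolated volume in hand, dividing by the volume of an internal reference of known concentration converts it to an absolute concentration, which is why the abstract's two-reference scheme improves accuracy across the concentration range.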

  10. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  11. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  12. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187

  13. Method For Detecting The Presence Of A Ferromagnetic Object

    DOEpatents

    Roybal, Lyle G.

    2000-11-21

    A method for detecting a presence or an absence of a ferromagnetic object within a sensing area may comprise the steps of sensing, during a sample time, a magnetic field adjacent the sensing area; producing surveillance data representative of the sensed magnetic field; determining an absolute value difference between a maximum datum and a minimum datum comprising the surveillance data; and determining whether the absolute value difference has a positive or negative sign. The absolute value difference and the corresponding positive or negative sign thereof forms a representative surveillance datum that is indicative of the presence or absence in the sensing area of the ferromagnetic material.
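
    One plausible reading of the patent's steps, sketched in code. The baseline and threshold handling are assumptions not specified in the abstract:

```python
def surveillance_datum(samples, baseline=0.0, threshold=1.0):
    """Reduce one sample window of magnetic-field readings to the
    representative surveillance datum described in the abstract: the
    absolute difference between the maximum and minimum data, plus a
    sign. Here the sign is taken from whichever extreme deviates more
    from the baseline field, and presence is declared when the
    difference exceeds a threshold (both choices are assumptions)."""
    hi, lo = max(samples), min(samples)
    magnitude = abs(hi - lo)
    sign = 1 if abs(hi - baseline) >= abs(lo - baseline) else -1
    present = magnitude > threshold
    return magnitude, sign, present

# Quiet background vs. a window containing a magnetic disturbance
quiet = surveillance_datum([0.1, -0.1, 0.05, -0.02])
event = surveillance_datum([0.1, -0.1, 3.2, 0.4])
```

Reducing each window to a single signed magnitude keeps the detector's decision simple and insensitive to a static background field, since only the spread within the window matters.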

  14. How do we know about Earth's history? Constructing the story of Earth's geologic history by collecting and interpreting evidence based scenarios.

    NASA Astrophysics Data System (ADS)

    Ruthford, Steven; DeBari, Susan; Linneman, Scott; Boriss, Miguel; Chesbrough, John; Holmes, Randall; Thibault, Allison

    2013-04-01

    Beginning in 2003, faculty from Western Washington University, Skagit Valley Community College, local public school teachers, and area tribal college members created an innovative, inquiry based undergraduate geology curriculum. The curriculum, titled "Energy and Matter in Earth's Systems," was supported through various grants and partnerships, including Math and Science Partnership and Noyce Teacher Scholarship grants from the National Science Foundation. During 2011, the authors wrote a geologic time unit for the curriculum. The unit is titled, "How Do We Know About Earth's History?" and has students actively investigate the concepts related to geologic time and methods for determining age. Starting with reflection and assessment of personal misconceptions called "Initial Ideas," students organize a series of events into a timeline. The unit then focuses on the concepts of relative dating, biostratigraphy, and historical attempts at absolute dating, including uniformitarianism, catastrophism, Halley and Joly's Salinity hypothesis, and Kelvin's Heat Loss model. With limited lecture and text, students then dive into current understandings of the age of the Earth, which include radioactive decay rates and radiometric dating. Finally, using their newfound understanding, students investigate a number of real world scenarios and create a timeline of events related to the geologic history of the Earth. The unit concludes with activities that reinforce the Earth's absolute age and direct students to summarize what they have learned by reorganizing the timeline from the "Initial Ideas" and sharing with the class. This presentation will include the lesson materials and findings from one activity titled, "The Earth's Story." The activity is located midway through the unit and begins with reflection on the question, "What are the major events in the Earth's history and when did they happen?" 
Students are directed to revisit the timeline of events from the "Initial Ideas" activity at the beginning of the unit. After the review and reflection, students collect and interpret six evidence-based scenarios dealing with the absolute ages of various rocks, including moon and meteorite samples, microfossil data, banded iron formations, plant and animal fossils, tectonic movement, extinction events, and human migration. The scenarios are dated and allow students to have a more complete view of geologic time. Using this more complete view, students are prompted to revisit and reorganize the timeline from the "Initial Ideas." By the end of the lesson, students will demonstrate a more complete understanding of the age of the Earth, the geologic time scale, and the role of biotic factors in Earth's systems.

  15. A High-Resolution View of Global Seismicity

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2014-12-01

    We present high-precision earthquake relocation results from our global-scale re-analysis of the combined seismic archives of parametric data for the years 1964 to present from the International Seismological Centre (ISC), the USGS's Earthquake Data Report (EDR), and selected waveform data from IRIS. We employed iterative, multistep relocation procedures that initially correct for large location errors present in standard global earthquake catalogs, followed by a simultaneous inversion of delay times formed from regional and teleseismic arrival times of first and later arriving phases. An efficient multi-scale double-difference (DD) algorithm is used to solve for relative event locations to the precision of a few km or less, while incorporating information on absolute hypocenter locations from catalogs such as EHB and GEM. We run the computations on both a 40-core cluster geared towards HTC problems (data processing) and a 500-core HPC cluster for data inversion. Currently, we are incorporating waveform correlation delay time measurements available for events in selected regions, but are continuously building up a comprehensive, global correlation database for densely distributed events recorded at stations with a long history of high-quality waveforms. The current global DD catalog includes nearly one million earthquakes, equivalent to approximately 70% of the number of events in the ISC/EDR catalogs initially selected for relocation. The relocations sharpen the view of seismicity in most active regions around the world, in particular along subduction zones where event density is high, but also along mid-ocean ridges where existing hypocenters are especially poorly located. The new data offers the opportunity to investigate earthquake processes and fault structures along entire plate boundaries at the ~km scale, and provides a common framework that facilitates analysis and comparisons of findings across different plate boundary systems.
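
    The core quantity in a double-difference relocation is the residual between observed and predicted differential arrival times for a pair of nearby events at a common station. A minimal sketch of forming one such residual; the event and station names, and all times, are invented for illustration:

```python
def dd_residual(obs, calc, ev1, ev2, sta):
    """Double-difference residual for an event pair (ev1, ev2) at a common
    station: the observed differential arrival time minus the differential
    time predicted from the current hypocenters. Because common path and
    station effects cancel in the difference, minimizing these residuals
    constrains the *relative* locations of nearby events very precisely.
    `obs` and `calc` map (event, station) -> arrival time in seconds."""
    return (obs[(ev1, sta)] - obs[(ev2, sta)]) - (calc[(ev1, sta)] - calc[(ev2, sta)])

obs = {("A", "STA1"): 12.40, ("B", "STA1"): 12.55}
calc = {("A", "STA1"): 12.42, ("B", "STA1"): 12.50}
r = dd_residual(obs, calc, "A", "B", "STA1")
```

In a full relocation, thousands of such residuals from catalog picks and, where available, waveform cross-correlation delay times are inverted simultaneously for adjustments to all hypocenters.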

  16. Effects of task and age on the magnitude and structure of force fluctuations: insights into underlying neuro-behavioral processes.

    PubMed

    Vieluf, Solveig; Temprado, Jean-Jacques; Berton, Eric; Jirsa, Viktor K; Sleimen-Malkoun, Rita

    2015-03-13

    The present study aimed at characterizing the effects of increasing (relative) force level and aging on isometric force control. To achieve this objective, and to infer changes in the underlying control mechanisms, measures of information transmission, as well as of the magnitude and time-frequency structure of behavioral variability, were applied to force time-series. Older adults were found to be weaker, more variable, and less efficient than young participants. As a function of force level, efficiency followed an inverted-U shape in both groups, suggesting a similar organization of the force control system. The time-frequency structure of force output fluctuations was only significantly affected by task conditions. Specifically, a narrower spectral distribution with more long-range correlations and an inverted-U pattern of complexity changes were observed with increasing force level. Although not significant, older participants displayed on average less complex behavior for low and intermediate force levels. The changes in the force signal's regularity presented a strong dependence on time-scales, which significantly interacted with age and condition. An inverted-U profile was only observed for the time-scale relevant to the sensorimotor control process. However, in both groups the peak was not aligned with the optimum of efficiency. Our results support the view that behavioral variability, in terms of magnitude and structure, has a functional meaning and affords non-invasive markers of the adaptations of the sensorimotor control system to various constraints. The measures of efficiency and variability ought to be considered complementary, since they convey specific information on the organization of control processes. The reported weak age effect on variability and complexity measures suggests that the behavioral expression of the loss-of-complexity hypothesis is not as straightforward as conventionally admitted. 
However, group differences did not completely vanish, which suggests that age differences can be more or less apparent depending on task properties and whether difficulty is scaled in relative or absolute terms.
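    The time-scale dependence of regularity described above is commonly quantified with multiscale sample entropy: the signal is coarse-grained at successive scales and sample entropy is computed at each. The sketch below is a generic illustration of that technique on synthetic signals, not the authors' analysis pipeline or data; `m`, `r_factor`, and the test signals are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template matches of length m
    and A counts matches of length m+1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(length):
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance between template i and all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < r))
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

def coarse_grain(x, scale):
    """Non-overlapping window averages, as in multiscale entropy analysis."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n]).reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(0)
t = np.arange(600)
regular = np.sin(2 * np.pi * t / 20)   # highly regular signal: low entropy
noisy = rng.uniform(size=600)          # irregular signal: high entropy
for scale in (1, 2, 3):
    print(scale,
          round(sample_entropy(coarse_grain(regular, scale)), 3),
          round(sample_entropy(coarse_grain(noisy, scale)), 3))
```

    A regular signal yields lower sample entropy than noise at every scale; how the entropy curve varies across scales is what carries the time-scale-dependent information discussed in the abstract.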

  17. The hydrogen anomaly problem in neutron Compton scattering

    NASA Astrophysics Data System (ADS)

    Karlsson, Erik B.

    2018-03-01

    Neutron Compton scattering (also called ‘deep inelastic scattering of neutrons’, DINS) is a method used to study momentum distributions of light atoms in solids and liquids. It has been employed extensively since the start-up of intense pulsed neutron sources about 25 years ago. The information lies primarily in the width and shape of the Compton profile, not in the absolute intensity of the Compton peaks. It was therefore not immediately recognized that the relative intensities of Compton peaks arising from scattering on different isotopes did not always agree with the values expected from standard neutron cross-section tables. The discrepancies were particularly large for scattering on protons, a phenomenon that became known as ‘the hydrogen anomaly problem’. The present paper is a review of the discovery, of the experimental tests carried out to prove or disprove the existence of the hydrogen anomaly, and of the discussions concerning its origin. It covers a twenty-year-long history of experimentation, theoretical treatments and discussion. The problem is of fundamental interest, since it involves quantum phenomena on the subfemtosecond time scale, which are not visible in conventional thermal neutron scattering but become important in Compton scattering, where the neutron energies are two orders of magnitude higher. Different H-containing systems show different cross-section deficiencies, and when the scattering processes are followed on the femtosecond time scale, the cross-section losses disappear on a characteristic time scale specific to each H environment. The last section of this review reproduces results from published papers based on quantum interference in scattering on identical particles (proton or deuteron pairs or clusters), which have given a quantitative theoretical explanation of both the H cross-section reduction and its time dependence.
Some new explanations are added and the concluding chapter summarizes the conditions for observing the specific quantum phenomena observed in neutron Compton scattering on protons and deuterons in condensed systems.

  18. Time-resolved production and detection of reactive atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grossman, L. W.; Hurst, G. S.

    1977-09-01

    Cesium iodide in the presence of a buffer gas was dissociated with a pulsed ultraviolet laser, which will be referred to as the source laser. This created a population of atoms at a well defined time and in a compact, well defined volume. A second pulsed laser, with a beam that completely surrounded that of the first, photoionized the cesium after a known time delay. This laser will be referred to as the detector laser. It was determined that for short time delays, all of the cesium atoms were easily ionized. When focused, the source laser generated an extremely intense fluence. By accounting for the beam intensity profile it was shown that all of the molecules in the central portion of the beam can be dissociated and detected. Besides proving the feasibility of single-molecule detection, this enabled a determination of the absolute photodissociation cross section as a function of wavelength. Initial studies of the time decay of the cesium signal at low argon pressures indicated a non-exponential decay. This was consistent with a diffusion mechanism transporting cesium atoms out of the laser beam. Therefore, it was desired to conduct further experiments using a tightly focused source beam, passing along the axis of the detector beam. The theoretical behavior of this simple geometry accounting for diffusion and reaction is easily calculated. A diffusion coefficient can then be extracted by data fitting. If reactive decay is due to impurities constituting a fixed percentage of the buffer gas, then two-body reaction rates will scale linearly with pressure and three-body reaction rates will scale quadratically. Also, the diffusion coefficient will scale inversely with pressure. At low pressures it is conceivable that decay due to diffusion would be sufficiently rapid that all other processes can be neglected. Extraction of a diffusion coefficient would then be quite direct. Finally, study of the reaction of cesium and oxygen was undertaken.
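    The pressure scalings stated above (diffusive loss ∝ 1/p, two-body reactions ∝ p, three-body reactions ∝ p²) mean the three contributions to the total decay rate can be separated by an ordinary linear least-squares fit. The sketch below uses made-up coefficients and pressures, not data from this experiment:

```python
import numpy as np

# Assumed model for the total loss rate versus buffer-gas pressure p:
#   lambda(p) = a/p + b*p + c*p**2
# (diffusion, two-body and three-body terms respectively).
a_true, b_true, c_true = 50.0, 2.0, 0.05        # hypothetical coefficients
p = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])  # pressures (arb. units)
rates = a_true / p + b_true * p + c_true * p**2  # synthetic "measured" rates

# Design matrix with one column per physical term; least squares separates them.
X = np.column_stack([1.0 / p, p, p**2])
(a_fit, b_fit, c_fit), *_ = np.linalg.lstsq(X, rates, rcond=None)
print(a_fit, b_fit, c_fit)
```

    In practice the diffusion coefficient would then follow from the 1/p term, given the beam geometry; with noisy data the fit would return estimates rather than the exact coefficients recovered here.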

  19. Ontogenetic scaling of burrowing forces in the earthworm Lumbricus terrestris.

    PubMed

    Quillin, K J

    2000-09-01

    In hydrostatic skeletons, it is the internal fluid under pressure, surrounded by a body wall in tension (rather than a rigid lever), that enables the stiffening of the organism, the antagonism of muscles and the transmission of force from the muscles to the environment. This study examined the ontogenetic effects of body size on force production by an organism supported by a hydrostatic skeleton. The earthworm Lumbricus terrestris burrows by forcefully enlarging crevices in the soil. I built a force-measuring apparatus that measured the radial forces as earthworms of different sizes crawled through and enlarged pre-formed soil burrows. I also built an apparatus that measured the radial and axial forces as earthworms of different sizes attempted to elongate a dead-end burrow. Earthworms ranging in body mass m_b from hatchlings (0.012 g) to adults (8.9 g) exerted maximum forces (F, in N) during active radial expansion of their burrows (F = 0.32m_b^0.43) and comparable forces during axial elongation of the burrow (F = 0.26m_b^0.47). Both these forces were almost an order of magnitude greater than the radial anchoring forces during normal peristalsis within burrows (F = 0.04m_b^0.45). All radial and axial forces scaled as body mass raised to approximately the 2/5 power rather than the 2/3 power expected from geometric similarity, indicating that large worms exert greater forces than small worms on an absolute scale, but the difference is less than predicted by scaling considerations. When forces were normalized by body weight, hatchlings could push 500 times their own body weight, while large adults could push only 10 times their own body weight.
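    The reported force-to-weight ratios follow directly from the fitted allometry. A quick check of the arithmetic, using the radial-expansion fit from the abstract (mass in grams, force in newtons; the helper names are mine):

```python
# Reported fit for radial burrow expansion: F = 0.32 * m_b**0.43 (N, m_b in g).
G = 9.81  # gravitational acceleration, m/s^2

def radial_force(m_grams):
    return 0.32 * m_grams**0.43

def force_to_weight(m_grams):
    weight = (m_grams / 1000.0) * G   # grams -> kg -> newtons
    return radial_force(m_grams) / weight

# Since F ~ m^0.43 but weight ~ m^1, the ratio falls as m^-0.57:
# hatchlings push hundreds of times their weight, adults only about ten.
print(force_to_weight(0.012))   # 0.012 g hatchling
print(force_to_weight(8.9))     # 8.9 g adult
```

    The computed ratios (roughly 400x for the hatchling, roughly 9x for the adult) match the abstract's rounded figures of 500x and 10x.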

  20. Big data driven cycle time parallel prediction for production planning in wafer manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris

    2018-07-01

    Cycle time forecasting (CTF) is one of the most crucial issues in production planning for maintaining high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots from large datasets. First, a density-peak-based radial basis function network (DP-RBFN) is designed to forecast the CT from the diverse and agglomerative CT data. Second, a network learning method based on a clustering technique is proposed to determine the density peaks. Third, a parallel computing approach to network training is proposed in order to speed up the training process on large-scale CT data. Finally, an experiment on an SWFS is presented, demonstrating that the proposed CTF system not only speeds up the training of the model but also outperforms CTF methods based on the radial basis function network, the back-propagation network and multivariate regression in terms of mean absolute deviation and standard deviation.
