Science.gov

Sample records for absolute peak magnitudes

  1. Absolute magnitudes of trans-neptunian objects

    NASA Astrophysics Data System (ADS)

    Duffard, R.; Alvarez-candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Morales, N.; Santos-Sanz, P.; Thirouin, A.

    2015-10-01

    Accurate measurements of diameters of trans-Neptunian objects are extremely complicated to obtain. Radiometric techniques applied to thermal measurements can provide good results, but precise absolute magnitudes are needed to constrain diameters and albedos. Our objective is to measure accurate absolute magnitudes for a sample of trans-Neptunian objects, many of which have been observed and modelled by the "TNOs are cool" team, one of the Herschel Space Observatory key projects granted ~400 hours of observing time. We observed 56 objects in the V and R filters, where possible. These data, along with data available in the literature, were used to obtain phase curves and to measure absolute magnitudes by assuming a linear trend of the phase curves and accounting for magnitude variability due to the rotational light curve. In total we obtained 234 new magnitudes for the 56 objects, 6 of them with no previously reported measurements. Including the data from the literature, we report a total of 109 absolute magnitudes.
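
    A minimal sketch of the linear phase-curve fit described above: apparent magnitudes are reduced to unit helio- and geocentric distances and regressed against phase angle, with the intercept giving the absolute magnitude and the slope the phase coefficient. The numbers and variable names are illustrative, and the rotational light-curve correction discussed in the abstract is omitted.

```python
import numpy as np

def linear_phase_fit(m_obs, r_au, delta_au, alpha_deg):
    """Fit an absolute magnitude H and linear phase coefficient beta.

    m_obs     : apparent magnitudes (e.g. V band)
    r_au      : heliocentric distances [AU]
    delta_au  : geocentric distances [AU]
    alpha_deg : solar phase angles [deg]
    Returns (H, beta) such that m_red(alpha) ~= H + beta * alpha.
    """
    m_obs, r_au, delta_au, alpha_deg = map(np.asarray, (m_obs, r_au, delta_au, alpha_deg))
    # Reduce to unit helio- and geocentric distance.
    m_red = m_obs - 5.0 * np.log10(r_au * delta_au)
    # Straight-line fit: slope = phase coefficient, intercept = absolute magnitude.
    beta, H = np.polyfit(alpha_deg, m_red, 1)
    return H, beta

# Example with made-up numbers: four observations of one object.
H, beta = linear_phase_fit(
    m_obs=[22.10, 22.18, 22.25, 22.31],
    r_au=[43.1, 43.1, 43.2, 43.2],
    delta_au=[42.2, 42.4, 42.6, 42.9],
    alpha_deg=[0.3, 0.6, 0.9, 1.2],
)
print(f"H = {H:.2f}, beta = {beta:.2f} mag/deg")
```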

  2. Absolute-magnitude distributions of supernovae

    SciTech Connect

    Richardson, Dean; Wright, John; Jenkins III, Robert L.; Maddox, Larry

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < -21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > -15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of -19.25. The IIP distribution was the dimmest at -16.75.

  3. Asteroid absolute magnitudes and slope parameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1991-01-01

    A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; this same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values of the current list were derived from fits of data at the V band. All observations were reduced in the same fashion using, where appropriate, a single default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids were of sufficiently high quality to permit derivation of their H and G. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.
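
    For context, the H, G system referred to above predicts the reduced V magnitude as a function of phase angle. The sketch below uses the standard two-term exponential approximation to the phase functions (Bowell et al. 1989); the example H value is illustrative only.

```python
import numpy as np

def reduced_magnitude_HG(H, G, alpha_deg):
    """Reduced V magnitude V(alpha) in the IAU H, G system, using the
    common two-term approximation to the phase functions."""
    alpha = np.radians(alpha_deg)
    phi1 = np.exp(-3.33 * np.tan(alpha / 2.0) ** 0.63)
    phi2 = np.exp(-1.87 * np.tan(alpha / 2.0) ** 1.22)
    return H - 2.5 * np.log10((1.0 - G) * phi1 + G * phi2)

# With the default slope parameter G = 0.15 used in the listing:
print(reduced_magnitude_HG(H=12.0, G=0.15, alpha_deg=10.0))
```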

  4. THE ABSOLUTE MAGNITUDES OF TYPE Ia SUPERNOVAE IN THE ULTRAVIOLET

    SciTech Connect

    Brown, Peter J.; Roming, Peter W. A.; Ciardullo, Robin; Gronwall, Caryl; Hoversten, Erik A.; Pritchard, Tyler; Milne, Peter; Bufano, Filomena; Mazzali, Paolo; Elias-Rosa, Nancy; Filippenko, Alexei V.; Li Weidong; Foley, Ryan J.; Hicken, Malcolm; Kirshner, Robert P.; Gehrels, Neil; Holland, Stephen T.; Immler, Stefan; Phillips, Mark M.; Still, Martin

    2010-10-01

    We examine the absolute magnitudes and light-curve shapes of 14 nearby (redshift z = 0.004-0.027) Type Ia supernovae (SNe Ia) observed in the ultraviolet (UV) with the Swift Ultraviolet/Optical Telescope. Colors and absolute magnitudes are calculated using both a standard Milky Way extinction law and one for the Large Magellanic Cloud that has been modified by circumstellar scattering. We find very different behavior in the near-UV filters (uvw1_rc, covering ~2600-3300 Å after removing optical light, and u, ~3000-4000 Å) compared to a mid-UV filter (uvm2, ~2000-2400 Å). The uvw1_rc - b colors show a scatter of ~0.3 mag while uvm2 - b scatters by nearly 0.9 mag. Similarly, while the scatter in colors between neighboring filters is small in the optical and somewhat larger in the near-UV, the large scatter in the uvm2 - uvw1 colors implies significantly larger spectral variability below 2600 Å. We find that in the near-UV the absolute magnitudes at peak brightness of normal SNe Ia in our sample are correlated with the optical decay rate with a scatter of 0.4 mag, comparable to that found for the optical in our sample. However, in the mid-UV the scatter is larger, ~1 mag, possibly indicating differences in metallicity. We find no strong correlation between either the UV light-curve shapes or the UV colors and the UV absolute magnitudes. With larger samples, the UV luminosity might be useful as an additional constraint to help determine distance, extinction, and metallicity in order to improve the utility of SNe Ia as standardized candles.

  5. Absolute magnitudes and phase coefficients of trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Alvarez-Candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Duffard, R.; Morales, N.; Santos-Sanz, P.; Thirouin, A.; Silva, J. S.

    2016-02-01

    Context. Accurate measurements of diameters of trans-Neptunian objects (TNOs) are extremely difficult to obtain. Thermal modeling can provide good results, but accurate absolute magnitudes are needed to constrain the thermal models and derive diameters and geometric albedos. The absolute magnitude, HV, is defined as the magnitude of the object reduced to unit helio- and geocentric distances and a zero solar phase angle and is determined using phase curves. Phase coefficients can also be obtained from phase curves. These are related to surface properties, but only a few are known. Aims: Our objective is to measure accurate V-band absolute magnitudes and phase coefficients for a sample of TNOs, many of which have been observed and modeled within the program "TNOs are cool", which is one of the Herschel Space Observatory key projects. Methods: We observed 56 objects using the V and R filters. These data, along with those available in the literature, were used to obtain phase curves and measure V-band absolute magnitudes and phase coefficients by assuming a linear trend of the phase curves and considering a magnitude variability that is due to the rotational light-curve. Results: We obtained 237 new magnitudes for the 56 objects, six of which had no previously reported measurements. Including the data from the literature, we report a total of 110 absolute magnitudes with their respective phase coefficients. The average value of HV is 6.39, bracketed by a minimum of 14.60 and a maximum of -1.12. For the phase coefficients we report a median value of 0.10 mag per degree and a very large dispersion, ranging from -0.88 up to 1.35 mag per degree.
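
    In symbols, the reduction described above (unit heliocentric and geocentric distances, zero phase angle, linear phase law) can be written compactly, with r and Δ in AU, α the phase angle, and β the linear phase coefficient:

```latex
% Reduced magnitude and V-band absolute magnitude with a linear phase law
m_V(1,1,\alpha) = m_V - 5\log_{10}(r\,\Delta), \qquad
m_V(1,1,\alpha) \simeq H_V + \beta\,\alpha
```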

  6. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
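
    A stripped-down illustration of the maximum-likelihood idea sketched above (fitting a mean absolute magnitude and intrinsic dispersion directly in parallax space, so low-accuracy and negative parallaxes can be used without inverting them) follows. It omits the selection-bias and censoring corrections that are central to the paper; all function names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like(params, m_app, plx_obs, plx_err):
    """-log L for a population with mean absolute magnitude M0 and intrinsic
    dispersion sigma_M, evaluated directly in parallax space."""
    M0, log_sigma_M = params
    sigma_M = np.exp(log_sigma_M)
    # Grid over the true absolute magnitude, weighted by the population model.
    M_grid = np.linspace(M0 - 5 * sigma_M, M0 + 5 * sigma_M, 201)
    dM = M_grid[1] - M_grid[0]
    w = norm.pdf(M_grid, M0, sigma_M)
    total = 0.0
    for m, p, e in zip(m_app, plx_obs, plx_err):
        # Parallax (arcsec) implied by apparent magnitude m and true M.
        plx_pred = 10.0 ** (-0.2 * (m - M_grid + 5.0))
        like = np.sum(norm.pdf(p, plx_pred, e) * w) * dM
        total += np.log(max(like, 1e-300))
    return -total

# Toy usage with simulated stars; negative observed parallaxes are handled
# naturally because the model is compared to the data in parallax space.
rng = np.random.default_rng(1)
n, M_true, sig_true = 200, 5.0, 0.3
m_app = rng.uniform(8.0, 12.0, n)
plx_true = 10.0 ** (-0.2 * (m_app - rng.normal(M_true, sig_true, n) + 5.0))
plx_obs = plx_true + rng.normal(0.0, 0.004, n)
res = minimize(neg_log_like, x0=[4.0, np.log(0.5)],
               args=(m_app, plx_obs, np.full(n, 0.004)), method="Nelder-Mead")
print("M0 =", round(res.x[0], 2), " sigma_M =", round(np.exp(res.x[1]), 2))
```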

  7. Morphology and Absolute Magnitudes of the SDSS DR7 QSOs

    NASA Astrophysics Data System (ADS)

    Coelho, B.; Andrei, A. H.; Antón, S.

    2014-10-01

    The ESA mission Gaia will furnish a complete census of the Milky Way, delivering astrometric, dynamical, and astrophysical information for 1 billion stars. Operating in an all-sky repeated survey mode, Gaia will also provide measurements of extra-galactic objects. Among the latter there will be at least 500,000 QSOs, which will be used to build the reference frame upon which the several independent observations will be combined and interpreted. Not all QSOs are equally suited to fulfill this role of fundamental, fiducial grid points. Brightness, morphology, and variability define the astrometric error budget for each object. We made use of 3 morphological parameters based on the PSF sharpness, circularity, and gaussianity, which enable us to distinguish the "real point-like" QSOs. These parameters are being explored on the spectroscopically certified QSOs of the SDSS DR7, to compare the performance against other morphology classification schemes, as well as to derive properties of the host galaxy. We present a new method, based on the Gaia quasar database, to derive absolute magnitudes in the SDSS filter domain. The method can be extrapolated over the whole optical window, including the Gaia filters. We discuss colors derived from SDSS apparent magnitudes and colors based on absolute magnitudes that we obtained taking into account corrections for dust extinction, either intergalactic or from the QSO host, and for the Lyman α forest. In the future we want to further discuss properties of the host galaxies, comparing, for example, the obtained morphological classification with the color, the apparent and absolute magnitudes, and the redshift distributions.

  8. STANDARDIZING TYPE Ia SUPERNOVA ABSOLUTE MAGNITUDES USING GAUSSIAN PROCESS DATA REGRESSION

    SciTech Connect

    Kim, A. G.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Nordin, J.; Thomas, R. C.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Kerschhaggl, M.; Kowalski, M.; Chotard, N.; Copin, Y.; Gangler, E.; and others

    2013-04-01

    We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SEDs) and absolute magnitudes: they are each modeled as stochastic functions described by Gaussian processes. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series from the Nearby Supernova Factory. Absolute magnitudes at peak B brightness are calibrated to 0.13 mag in the g band and to as low as 0.09 mag in the z = 0.25 blueshifted i band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to simultaneously standardize supernovae together with fitting cosmological parameters.
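
    A schematic of Gaussian-process regression applied to a light curve (here a single band as a function of phase), in the spirit of the stochastic-function modeling described above but far simpler; the kernel choice, data points, and noise level are placeholders, not the paper's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy light curve: phase (days from B maximum) and absolute magnitudes.
phase = np.array([-8, -4, 0, 3, 7, 12, 18, 25], dtype=float)[:, None]
mag = np.array([-18.6, -19.1, -19.3, -19.2, -18.9, -18.4, -17.8, -17.2])
mag_err = 0.05

# Squared-exponential kernel; observational noise enters through `alpha`.
gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=10.0),
                              alpha=mag_err ** 2, normalize_y=True)
gp.fit(phase, mag)

# The regression gives a mean function and its uncertainty at any phase,
# e.g. the magnitude at peak (phase = 0) with an error estimate.
mu, sigma = gp.predict(np.array([[0.0]]), return_std=True)
print(f"M(peak) = {mu[0]:.2f} +/- {sigma[0]:.2f}")
```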

  9. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    SciTech Connect

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = -1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, -209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, -42.0, -27.3) km s⁻¹ relative to the Sun, with dispersions (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  10. The absolute magnitude distribution of Kuiper Belt objects

    SciTech Connect

    Fraser, Wesley C.; Brown, Michael E.; Morbidelli, Alessandro; Parker, Alex; Batygin, Konstantin

    2014-02-20

    Here we measure the absolute magnitude distributions (H-distribution) of the dynamically excited and quiescent (hot and cold) Kuiper Belt objects (KBOs), and test if they share the same H-distribution as the Jupiter Trojans. From a compilation of all useable ecliptic surveys, we find that the KBO H-distributions are well described by broken power laws. The cold population has a bright-end slope, α1 = 1.5 (+0.4/-0.2), and break magnitude, HB = 6.9 (+0.1/-0.2) (r' band). The hot population has a shallower bright-end slope of α1 = 0.87 (+0.07/-0.2), and break magnitude HB = 7.7 (+1.0/-0.5). Both populations share similar faint-end slopes of α2 ∼ 0.2. We estimate the masses of the hot and cold populations are ∼0.01 and ∼3 × 10⁻⁴ M⊕, respectively. The broken power-law fit to the Trojan H-distribution has α1 = 1.0 ± 0.2, α2 = 0.36 ± 0.01, and HB = 8.3. The Kolmogorov-Smirnov test reveals that the probability that the Trojans and cold KBOs share the same parent H-distribution is less than 1 in 1000. When the bimodal albedo distribution of the hot objects is accounted for, there is no evidence that the H-distributions of the Trojans and hot KBOs differ. Our findings are in agreement with the predictions of the Nice model in terms of both mass and H-distribution of the hot and Trojan populations. Wide-field survey data suggest that the brightest few hot objects, with Hr' ≲ 3, do not fall on the steep power-law slope of fainter hot objects. Under the standard hierarchical model of planetesimal formation, it is difficult to account for the similar break diameters of the hot and cold populations given the low mass of the cold belt.
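
    A broken (knee) power law of the form described above can be written down directly; the sketch below uses the cold-population parameter values quoted in the abstract purely as an illustration.

```python
import numpy as np

def broken_power_law(H, alpha1, alpha2, H_break, A=1.0):
    """Differential H-distribution dN/dH proportional to 10**(alpha*H),
    with slope alpha1 brightward of H_break and alpha2 faintward of it,
    continuous at the break."""
    H = np.asarray(H, dtype=float)
    bright = A * 10.0 ** (alpha1 * H)
    # Match the two branches at H_break so dN/dH is continuous.
    A2 = A * 10.0 ** ((alpha1 - alpha2) * H_break)
    faint = A2 * 10.0 ** (alpha2 * H)
    return np.where(H <= H_break, bright, faint)

# Cold-population-like parameters quoted in the abstract (r' band).
H = np.linspace(4.0, 9.0, 6)
print(broken_power_law(H, alpha1=1.5, alpha2=0.2, H_break=6.9))
```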

  11. Trends in flood peaks' magnitude and seasonality in European transects

    NASA Astrophysics Data System (ADS)

    Diamantini, Elena; Mallucci, Stefano; Allamano, Paola; Claps, Pierluigi; Laio, Francesco; Viglione, Alberto; Hall, Julia; Blöschl, Günter

    2015-04-01

    Over the last decade, floods appear to have affected more and more of the European population, so more accurate studies of flood-event tendencies are needed. We present a work in which trends in flood peaks' magnitude and seasonality (in time and space) are analyzed at the European scale: in total, 2055 and 4340 stations, respectively for magnitude and seasonality, are considered along transect lines including entire nations, ranging typically from north to south of Europe. The work is part of the ERC Project "Deciphering River Flood Change". Trend analysis of flood magnitudes is applied to time series longer than 40 years. We find that there is a cluster of stations with negative trends in flood magnitude around the alpine and perialpine area. Positive trends are more frequent in the valleys of mid Europe. We also use quantile regressions to investigate the behaviour of the highest quantiles, corresponding to floods with the highest return periods. The original database is further divided into four classes based on station elevation; the group of catchments between 500 and 1000 m a.s.l. has the most positive trends for the large quantiles. The analysis is further developed by considering the coefficient of variation (CV) in 10-year time windows covering the data and investigating the possible presence of trends in the CV. The results show a general prevalence of positive trends in the CVs, in particular for stations between 500 and 1000 m a.s.l., indicating a tendency toward an increase of very large (and possibly very small) annual maxima. A different branch of this study
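
    For a single station's annual flood-peak series, a nonparametric trend test plus a robust slope estimate of the kind used in such analyses can be sketched as follows (Kendall's tau and a Theil-Sen slope here); this is an illustration with synthetic data, not the project's exact procedure.

```python
import numpy as np
from scipy import stats

def flood_peak_trend(years, peaks):
    """Kendall's tau test for a monotonic trend plus a Theil-Sen slope
    estimate for an annual flood-peak series."""
    tau, p_value = stats.kendalltau(years, peaks)
    slope, intercept, lo, hi = stats.theilslopes(peaks, years)
    return {"tau": tau, "p_value": p_value,
            "slope_per_year": slope, "slope_95pct_CI": (lo, hi)}

# Toy series: 45 years of annual maxima with a weak upward drift.
rng = np.random.default_rng(0)
years = np.arange(1970, 2015)
peaks = 300 + 1.5 * (years - 1970) + rng.normal(0, 60, years.size)
print(flood_peak_trend(years, peaks))
```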

  12. The blue and visual absolute magnitude distributions of Type IA supernovae

    NASA Astrophysics Data System (ADS)

    Vaughan, Thomas E.; Branch, David; Miller, Douglas L.; Perlmutter, Saul

    1995-02-01

    Tully-Fisher (TF), surface brightness fluctuation (SBF), and Hubble law distances to the parent galaxies of Type Ia supernovae (SNe Ia) are used to study the SN Ia blue and visual peak absolute magnitude (M_B and M_V) distributions. We propose two objective cuts, each of which produces a subsample with small intrinsic dispersion in M. One cut, which can be applied to either band, distinguishes between a subsample of bright events and a smaller subsample of dim events, some of which were extinguished in the parent galaxy and some of which were intrinsically subluminous. The bright events are found to be distributed with an observed dispersion of 0.3 ≲ σ_obs ≲ 0.4 about a mean absolute magnitude (M̄_B or M̄_V). Each of the dim SNe was spectroscopically peculiar and/or had a red B-V color; this motivates the adoption of an alternative cut based on B-V rather than on M. To wit, SNe Ia that are both known to have -0.25 < B-V < +0.25 and not known to be spectroscopically peculiar show an observational dispersion of only σ_obs(M_B) = σ_obs(M_V) = 0.3. Because characteristic observational errors produce σ_err(M) > 0.2, the intrinsic dispersion among such SNe Ia is σ_int(M) ≲ 0.2. The small observational dispersion indicates that SNe Ia, the TF relation, and SBFs all give good relative distances to those galaxies that produce SNe Ia. The conflict between those who use SNe Ia to determine the value of the Hubble constant (H0) and those who use TF and SBF distances to determine H0 results from discrepant calibrations.

  13. Spectrophotometry of Wolf-Rayet stars - Intrinsic colors and absolute magnitudes

    NASA Technical Reports Server (NTRS)

    Torres-Dodgen, Ana V.; Massey, Philip

    1988-01-01

    Absolute spectrophotometry at about 10-Å resolution in the range 3400-7300 Å has been obtained for southern Wolf-Rayet stars, and line-free magnitudes and colors have been constructed. The emission-line contamination in the narrow-band ubvr systems of Westerlund (1966) and Smith (1968) is shown to be small for most WN stars, but to be quite significant for WC stars. It is suggested that the more severe differences in intrinsic color from star to star of the same spectral subtype noted at shorter wavelengths are due to differences in atmospheric extent. True continuum absolute visual magnitudes and intrinsic colors are obtained for the LMC WR stars. The most visually luminous WN6-WN7 stars are found to be located in the core of the 30 Doradus region.

  14. Absolute magnitude estimation and relative judgement approaches to subjective workload assessment

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.; Tsang, Pamela S.

    1987-01-01

    Two rating-scale techniques employing an absolute magnitude estimation method were compared to a relative judgment method for assessing subjective workload. One of the absolute estimation techniques used was a unidimensional overall workload scale and the other was the multidimensional NASA-Task Load Index technique. Thomas Saaty's Analytic Hierarchy Process was the unidimensional relative judgment method used. These techniques were used to assess the subjective workload of various single- and dual-tracking conditions. The validity of the techniques was defined as their ability to detect the same phenomena observed in the tracking performance. Reliability was assessed by calculating test-retest correlations. Within the context of the experiment, the Saaty Analytic Hierarchy Process was found to be superior in validity and reliability. These findings suggest that the relative judgment method would be an effective addition to the currently available subjective workload assessment techniques.

  15. The visual surface brightness relation and the absolute magnitudes of RR Lyrae stars. I - Theory

    NASA Technical Reports Server (NTRS)

    Manduca, A.; Bell, R. A.

    1981-01-01

    A theoretical relation analogous to the Barnes-Evans relation between stellar surface brightness and V-R color is derived which is applicable to the temperatures and gravities appropriate to RR Lyrae stars. Values of the visual surface brightness and V-R colors are calculated for model stellar atmospheres with effective temperatures between 6000 and 8000 K, log surface gravities from 2.2 to 3.5, and A/H abundance ratios from -0.5 to -3.0. The resulting relation is found to be in reasonable agreement with the empirical relation of Barnes, Evans and Moffet (1978), with, however, small sensitivities to gravity and metal abundance. The relation may be used to derive stellar angular diameters from (V,R) photometry and to derive radii, distances, and absolute magnitudes for variable stars when combined with a radial velocity curve. The accuracies of the radii and distances (within 10%) and absolute magnitudes (within 0.25 magnitudes) compare favorably with those of the Baade-Wesselink method currently in use.

  16. Absolute magnitudes and slope parameters of Pan-STARRS PS1 asteroids --- preliminary results

    NASA Astrophysics Data System (ADS)

    Vereš, P.; Jedicke, R.; Fitzsimmons, A.; Denneau, L.; Bolin, B.; Wainscoat, R.; Tonry, J.

    2014-07-01

    We present a study of the absolute magnitudes (H) and slope parameters (G) of 170,000 asteroids observed by the Pan-STARRS1 telescope during a period of 15 months within its 3-year all-sky survey mission. The exquisite photometry, with photometric errors below 0.04 mag, and the well-defined filter and photometric system allowed us to derive H and G with statistical and systematic errors. Our new approach relies on a Monte Carlo technique that simulates rotation periods, amplitudes, and colors, and derives the most likely H and G and their systematic errors. Comparison of H derived with Muinonen's phase function (Muinonen et al., 2010) against the Minor Planet Center database revealed a negative offset of 0.22±0.29, meaning that the Pan-STARRS1 asteroids are fainter. We showed that the absolute magnitude derived with Muinonen's function is systematically larger than Bowell's absolute magnitude (Bowell et al., 1989), on average by 0.14±0.29, and by 0.30±0.16 when assuming fixed slope parameters (G=0.15, G12=0.53). We also derived slope parameters of asteroids of known spectral types and showed good agreement with previous studies within the derived uncertainties. However, our systematic errors on G and G12 are significantly larger than in previous work, which is caused by the poor temporal and phase coverage of the vast majority of the detected asteroids. This disadvantage will vanish when the full survey data become available and the ongoing extended and enhanced mission provides new data.

  17. Uvby-beta photometry of visual double stars - Absolute magnitudes of intrinsically bright stars

    NASA Astrophysics Data System (ADS)

    Olsen, E. H.

    1982-05-01

    Individual absolute visual magnitudes M(v) are derived for intrinsically bright stars and evolved stars. The results are collected for 106 objects believed to be members of binary systems. uvby-beta photometry was empirically calibrated in terms of M(v) for main sequence stars and photoelectrically determined apparent magnitudes. The derived M(v) values are not significantly different from those stated in the Wilson catalogue (1976). Binary systems with main sequence primaries and secondary components off the main sequence were also investigated. Several systems in which at least one component may be in the pre-main sequence contraction stage are pointed out. A wide variety of comments and derived data are given individually for 136 double stars, including metallicities, distance moduli, and masses.

  18. Analysis of the Magnitude and Frequency of Peak Discharge and Maximum Observed Peak Discharge in New Mexico and Surrounding Areas

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2008-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges, culverts, and open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the fitted probability density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent
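
    The core of a log-Pearson Type III frequency analysis of the kind described above can be sketched as follows: fit the distribution to the logarithms of the annual peaks by the method of moments and read off quantiles for the selected recurrence intervals. The sketch omits the low-discharge threshold, regional skew weighting, and historical adjustments discussed in the report, and uses synthetic data.

```python
import numpy as np
from scipy import stats

def lp3_quantiles(annual_peaks_cfs, recurrence_years=(2, 5, 10, 25, 50, 100, 500)):
    """Fit a log-Pearson Type III distribution to annual peak discharges
    (method of moments in log10 space) and return discharge quantiles."""
    logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)          # station skew only
    out = {}
    for T in recurrence_years:
        p = 1.0 - 1.0 / T                        # non-exceedance probability
        logq_T = stats.pearson3.ppf(p, skew, loc=mean, scale=std)
        out[T] = 10.0 ** logq_T
    return out

# Toy record of ~40 annual peaks (cubic feet per second).
rng = np.random.default_rng(42)
peaks = 10 ** rng.normal(3.0, 0.35, 40)
print({T: round(q) for T, q in lp3_quantiles(peaks).items()})
```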

  19. Distance and absolute magnitudes of the brightest stars in the dwarf galaxy Sextans A

    NASA Technical Reports Server (NTRS)

    Sandage, A.; Carlson, G.

    1982-01-01

    In an attempt to improve present bright-star calibration, data were gathered for the brightest red and blue stars and the Cepheids in the Im V dwarf galaxy Sextans A. On the basis of a magnitude sequence measured to V and B values of about 22 and 23, respectively, the mean magnitudes of the three brightest blue stars are V=17.98 and B=17.88. The three brightest red supergiants have V=18.09 and B=20.14. The periods and magnitudes measured for five Cepheids yield an apparent blue distance modulus of 25.67 ± 0.2, via the P-L relation, and the mean absolute magnitudes of V=-7.56 and B=-5.53 for the red supergiants provide additional calibration of the brightest red stars as distance indicators. If Sextans A were placed at the distance of the Virgo cluster, it would appear to have a surface brightness of 23.5 mag/sq arcsec. This, together with the large angular diameter, would make such a galaxy easily discoverable in the Virgo cluster by means of ground-based surveys.

  20. Metrological activity determination of 133Ba by sum-peak absolute method

    NASA Astrophysics Data System (ADS)

    da Silva, R. L.; de Almeida, M. C. M.; Delgado, J. U.; Poledna, R.; Santos, A.; de Veras, E. V.; Rangel, J.; Trindade, O. L.

    2016-07-01

    The National Laboratory for Metrology of Ionizing Radiation provides gamma-emitting radionuclide sources standardized in activity with reduced uncertainties. Relative methods require standards to determine the sample activity, whereas absolute methods, such as the sum-peak method, do not: the activity is obtained directly with good accuracy and low uncertainties. 133Ba is used in research laboratories and in the calibration of detectors for analyses in different fields. Classical absolute methods cannot standardize 133Ba because of its complex decay scheme. The sum-peak method, using gamma-ray spectrometry with a germanium detector, was used to standardize 133Ba samples. Uncertainties lower than 1% in the activity results were obtained.

  1. Analysis of the magnitude and frequency of peak discharge and maximum observed peak discharge in New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1996-01-01

    Equations for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years were updated for New Mexico. The equations represent flood response for eight distinct physiographic regions of New Mexico. Additionally, a regional equation was developed for basins less than 10 square miles and below 7,500 feet in mean basin elevation. Flood-frequency relations were updated for 201 gaging stations on unregulated streams in New Mexico and the bordering areas of adjacent States. The analysis described in this report used data collected through 1993. A low-discharge threshold was applied to frequency analysis of 140 gaging stations. Inclusion of these low peak flows affects the fitting of the lower tail and the upper tail of the distribution. Peak discharges can be estimated at an ungaged site on a stream that has a gaging station upstream or downstream. These estimates are derived using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region. Flood-frequency estimates for 201 gaged sites were weighted by estimates from the regional regression equation. The observed, predicted, and weighted flood-frequency data were computed for each gaging station. A maximum observed peak discharge as related to drainage area was determined for eight physiographic regions in New Mexico. Peak-discharge data collected at 201 gaging stations were used to develop a maximum peak-discharge relation as an alternative method of estimating the peak discharge of an extreme event.
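
    The drainage-area-ratio transfer mentioned above is a one-line calculation; a sketch with hypothetical numbers and a hypothetical regional exponent is shown below.

```python
def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, area_exponent):
    """Estimate peak discharge at an ungaged site on the same stream from a
    nearby gaged site, scaling by the drainage-area ratio raised to the
    drainage-area exponent of the regional regression equation."""
    return q_gaged * (area_ungaged / area_gaged) ** area_exponent

# Hypothetical numbers: a 100-year peak of 5,000 ft3/s at a 120 mi2 gage,
# transferred to a 75 mi2 ungaged site with a regional exponent of 0.55.
print(transfer_peak_discharge(5000.0, 120.0, 75.0, 0.55))
```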

  2. Magnitude and frequency of peak discharges for Mississippi River Basin Flood of 1993

    USGS Publications Warehouse

    Thomas, W.O., Jr.; Eash, D.A.

    1995-01-01

    The magnitude and frequency of the 1993 peak discharges in the upper Mississippi River Basin are characterized by applying Bulletin 17B and L-moment methods to annual peak discharges at 115 unregulated watersheds in the basin. The analysis indicated that the 1993 flood was primarily a 50-year or less event on unregulated watersheds less than about 50,000 km2 (20,000 mi2). Of the 115 stations analyzed, the Bulletin 17B and L-moment methods were used to identify 89 and 84 stations, respectively, having recurrence intervals of 50 years or less, and 31 and 26 stations, respectively, having recurrence intervals greater than 50 years for the 1993 peak discharges. The 1993 flood in the upper Mississippi River Basin was significant in terms of (a) peak discharges with recurrence intervals greater than 50 years at approximately 25 percent of the stations analyzed, (b) peak discharges of record at 33 of the 115 stations analyzed, (c) extreme magnitude, duration, and areal extent of precipitation, (d) flood volumes with recurrence intervals greater than 100 years at many stations, and (e) extreme flood damage and loss of lives. Furthermore, peak discharges on several larger, regulated watersheds also exceeded the 100-year recurrence interval. However, for about 75 percent of the 115 unregulated stations in the analysis, the frequency of the 1993 peak discharges was less than a 50-year event.

  3. The photometric properties of brightest cluster galaxies. I - Absolute magnitudes in 116 nearby Abell clusters

    NASA Technical Reports Server (NTRS)

    Hoessel, J. G.; Gunn, J. E.; Thuan, T. X.

    1980-01-01

    Two-color aperture photometry of the brightest galaxies in a complete sample of nearby Abell clusters is presented. The results are used to anchor the bright end of the Hubble diagram; essentially the entire formal error for this method is then due to the sample of distant clusters used. New determinations of the systematic trend of galaxy absolute magnitude with the cluster properties of richness and Bautz-Morgan type are derived. When these new results are combined with the Gunn and Oke (1975) data on high-redshift clusters, a formal value (without accounting for any evolution) of q0 = -0.55 ± 0.45 (1 standard deviation) is found.

  4. Earthquake magnitude calculation without saturation from the scaling of peak ground displacement

    NASA Astrophysics Data System (ADS)

    Melgar, Diego; Crowell, Brendan W.; Geng, Jianghui; Allen, Richard M.; Bock, Yehuda; Riquelme, Sebastian; Hill, Emma M.; Protti, Marino; Ganas, Athanassios

    2015-07-01

    GPS instruments are noninertial and directly measure displacements with respect to a global reference frame, while inertial sensors are affected by systematic offsets—primarily tilting—that adversely impact integration to displacement. We study the magnitude scaling properties of peak ground displacement (PGD) from high-rate GPS networks at near-source to regional distances (~10-1000 km), from earthquakes between Mw6 and 9. We conclude that real-time GPS seismic waveforms can be used to rapidly determine magnitude, typically within the first minute of rupture initiation and in many cases before the rupture is complete. While slower than earthquake early warning methods that rely on the first few seconds of P wave arrival, our approach does not suffer from the saturation effects experienced with seismic sensors at large magnitudes. Rapid magnitude estimation is useful for generating rapid earthquake source models, tsunami prediction, and ground motion studies that require accurate information on long-period displacements.
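
    The scaling described above is commonly written as a regression of log PGD on magnitude and distance, which can then be inverted for magnitude given a single observation. The coefficients below are placeholders for illustration only, not the published regression values.

```python
import numpy as np

# Assumed scaling form:  log10(PGD) = A + B*Mw + C*Mw*log10(R)
# with PGD in cm and R a source-to-station distance in km.  The coefficients
# are illustrative placeholders, not the coefficients derived in the paper.
A, B, C = -5.0, 1.0, -0.15

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert the assumed PGD scaling law for moment magnitude."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(dist_km))

# One station 150 km away recording 5 cm of peak ground displacement:
print(f"Mw ~ {magnitude_from_pgd(5.0, 150.0):.1f}")
```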

  5. Methods for Estimating Magnitude and Frequency of Peak Flows for Natural Streams in Utah

    USGS Publications Warehouse

    Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.

    2007-01-01

    Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.

  6. Technique for estimating magnitude and frequency of peak flows in Maryland

    USGS Publications Warehouse

    Dillow, Jonathan J.A.

    1996-01-01

    Methods are presented for estimating peak-flow magnitudes of selected frequencies for drainage basins in Maryland. The methods were developed by generalized least-squares regression techniques using data from 219 streamflow-gaging stations in and near Maryland, and apply to peak flows with recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years. The State is divided into five hydrologic regions: the Appalachian Plateaus and Allegheny Ridges region, the Blue Ridge and Great Valley region, the Piedmont region, the Western Coastal Plain region, and the Eastern Coastal Plain region. Sets of equations for calculating peak discharges based on physical basin characteristics and their associated standard errors of prediction are provided for each of the five hydrologic regions. Basin characteristics and flood-frequency characteristics are tabulated for 236 streamflow- gaging stations in Maryland and surrounding States. Methods of estimating peak flows at sites in Maryland for ungaged and gaged sites are presented.

  7. Predicting the Timing and Magnitude of Tropical Mosquito Population Peaks for Maximizing Control Efficiency

    PubMed Central

    Yang, Guo-Jing; Brook, Barry W.; Bradshaw, Corey J. A.

    2009-01-01

    The transmission of mosquito-borne diseases is strongly linked to the abundance of the host vector. Identifying the environmental and biological precursors which herald the onset of peaks in mosquito abundance would give health and land-use managers the capacity to predict the timing and distribution of the most efficient and cost-effective mosquito control. We analysed a 15-year time series of monthly abundance of Aedes vigilax, a tropical mosquito species from northern Australia, to determine periodicity and drivers of population peaks (high-density outbreaks). Two sets of density-dependent models were used to examine the correlation between mosquito abundance peaks and the environmental drivers of peaks or troughs (low-density periods). The seasonal peaks of reproduction (r) and abundance occur at the beginning of September and early November, respectively. The combination of low mosquito abundance and a low frequency of a high tide exceeding 7 m in the previous low-abundance (trough) period were the most parsimonious predictors of a peak's magnitude, with this model explaining over 50% of the deviance in peak magnitude. Model weights, estimated using AICc, were also relatively high for those including monthly maximum tide height, monthly accumulated tide height, or total rainfall per month in the trough, with high values in the trough correlating negatively with the onset of a high-abundance peak. These findings illustrate that basic environmental monitoring data can be coupled with relatively simple density feedback models to predict the timing and magnitude of mosquito abundance peaks. Decision-makers can use these methods to determine optimal levels of control (i.e., least-cost measures yielding the largest decline in mosquito abundance) and so reduce the risk of disease outbreaks in human populations. PMID:19238191
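
    Akaike weights of the kind quoted above ("Model weights, estimated using AICc") follow directly from the AICc values of the candidate models; a minimal sketch with hypothetical log-likelihoods is shown below.

```python
import numpy as np

def aicc(log_likelihood, k, n):
    """Small-sample corrected Akaike information criterion for a model with
    k parameters fitted to n observations."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1.0)) / (n - k - 1.0)

def akaike_weights(aicc_values):
    """Relative support for each candidate model given its AICc."""
    a = np.asarray(aicc_values, dtype=float)
    delta = a - a.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical: three density-feedback models fitted to n = 180 months.
print(akaike_weights([aicc(-420.1, 3, 180),
                      aicc(-418.7, 4, 180),
                      aicc(-425.9, 2, 180)]))
```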

  8. The orbit of Phi Cygni measured with long-baseline optical interferometry - Component masses and absolute magnitudes

    NASA Technical Reports Server (NTRS)

    Armstrong, J. T.; Hummel, C. A.; Quirrenbach, A.; Buscher, D. F.; Mozurkewich, D.; Vivekanand, M.; Simon, R. S.; Denison, C. S.; Johnston, K. J.; Pan, X.-P.

    1992-01-01

    The orbit of the double-lined spectroscopic binary Phi Cygni, the distance to the system, and the masses and absolute magnitudes of its components are determined from measurements with the Mark III Optical Interferometer. On the basis of a reexamination of the spectroscopic data of Rach & Herbig (1961), values and uncertainties are adopted for the period and the projected semimajor axes from the present fit to the spectroscopic data, and for the remaining elements from the present fit to the Mark III data. The elements of the true orbit are derived, and the masses and absolute magnitudes of the components and the distance to the system are calculated.

  9. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and M_Top (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
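
    The proportionality quoted above (Mw proportional to 2 log Top) amounts to a one-parameter calibration; a sketch follows, with the offset fit from a hypothetical set of reference events rather than the paper's catalog.

```python
import numpy as np

def calibrate_offset(top_seconds, mw_reference):
    """Fit c in  Mw ~= 2*log10(Top) + c  by least squares (the mean residual)."""
    top = np.asarray(top_seconds, dtype=float)
    mw = np.asarray(mw_reference, dtype=float)
    return np.mean(mw - 2.0 * np.log10(top))

def magnitude_from_top(top_seconds, c):
    """Magnitude estimate from the peak-amplitude arrival time Top."""
    return 2.0 * np.log10(top_seconds) + c

# Hypothetical calibration events (Top in seconds, catalog Mw).
c = calibrate_offset([4.0, 9.0, 20.0, 45.0, 120.0], [5.2, 5.9, 6.6, 7.3, 8.1])
print(f"c = {c:.2f}; Mw(Top=100 s) ~ {magnitude_from_top(100.0, c):.1f}")
```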

  10. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  11. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
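
    The structure of the method, a K-correction expressed as a quadratic in a single observed color with redshift-dependent coefficients, can be sketched as below. The coefficient table and input values are placeholders for illustration; they are not the published parameter tables.

```python
import numpy as np

def k_correction(color, z, coeff_table):
    """K-correction modeled as a quadratic in one observed color,
    K(z, c) = a(z) + b(z)*c + d(z)*c**2, with coefficients interpolated
    in redshift from a lookup table."""
    zs = coeff_table["z"]
    a = np.interp(z, zs, coeff_table["a"])
    b = np.interp(z, zs, coeff_table["b"])
    d = np.interp(z, zs, coeff_table["d"])
    return a + b * color + d * color ** 2

def absolute_magnitude(m_app, dist_modulus, color, z, coeff_table):
    """M = m - DM(z) - K(z, color)."""
    return m_app - dist_modulus - k_correction(color, z, coeff_table)

# Placeholder coefficient table (illustrative values only).
table = {"z": [0.0, 0.25, 0.5],
         "a": [0.00, 0.05, 0.15],
         "b": [0.00, 0.30, 0.70],
         "d": [0.00, 0.02, 0.05]}
print(absolute_magnitude(m_app=18.4, dist_modulus=40.1, color=0.9, z=0.3,
                         coeff_table=table))
```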

  12. Estimating flood-peak discharge magnitudes and frequencies for rural streams in Illinois

    USGS Publications Warehouse

    Soong, David T.; Ishii, Audrey; Sharpe, Jennifer B.; Avery, Charles F.

    2004-01-01

    Flood-peak discharge magnitudes and frequencies at streamflow-gaging sites were developed with the annual maximum series (AMS) and the partial duration series (PDS) in this study. Regional equations for both flood series were developed for estimating flood-peak discharge magnitudes at specified recurrence intervals of rural Illinois streams. The regional equations are techniques for estimating flood quantiles at ungaged sites or for improving estimated flood quantiles at gaged sites with short records or unrepresentative data. Besides updating at-site flood-frequency estimates using flood data up to water year 1999, this study updated the generalized skew coefficients for Illinois to be used with the log-Pearson Type III probability distribution for analyzing the AMS, developed a program for analyzing the partial duration series with the Generalized Pareto probability distribution, and applied the BASINSOFT program with digital datasets in soil, topography, land cover, and precipitation to develop a set of basin characteristics. Multiple regression analysis was used to develop the regional equations with subsets of the basin characteristics and the updated at-site flood frequencies. Seven hydrologic regions were delineated using physiographic and hydrologic characteristics of drainage basins of Illinois. The seven hydrologic regions were used for both the AMS and PDS analyses. Examples are presented to illustrate the use of the AMS regional equations to estimate flood quantiles at an ungaged site and to improve flood-quantile estimates at and near a gaged site. Flood-quantile estimates in four regulated channel reaches of Illinois also are approximated by linear interpolation. Documentation of the flood data preparation and evaluation, procedures for determining the flood quantiles, basin characteristics, generalized skew coefficients, hydrologic region delineations, and the multiple regression analyses used to determine the regional equations are presented in the

  13. Estimating magnitude and frequency of floods using the PeakFQ 7.0 program

    USGS Publications Warehouse

    Veilleux, Andrea G.; Cohn, Timothy A.; Flynn, Kathleen M.; Mason, Jr., Robert R.; Hummel, Paul R.

    2014-01-01

    Flood-frequency analysis provides information about the magnitude and frequency of flood discharges based on records of annual maximum instantaneous peak discharges collected at streamgages. The information is essential for defining flood-hazard areas, for managing floodplains, and for designing bridges, culverts, dams, levees, and other flood-control structures. Bulletin 17B (B17B) of the Interagency Advisory Committee on Water Data (IACWD, 1982) codifies the standard methodology for conducting flood-frequency studies in the United States. B17B specifies that annual peak-flow data are to be fit to a log-Pearson Type III distribution. Specific methods are also prescribed for improving skew estimates using regional skew information, tests for high and low outliers, adjustments for low outliers and zero flows, and procedures for incorporating historical flood information. The authors of B17B identified various needs for methodological improvement and recommended additional study. In response to these needs, the Advisory Committee on Water Information (ACWI, successor to IACWD; http://acwi.gov/), Subcommittee on Hydrology (SOH), Hydrologic Frequency Analysis Work Group (HFAWG), has recommended modest changes to B17B. These changes include adoption of a generalized method-of-moments estimator denoted the Expected Moments Algorithm (EMA) (Cohn and others, 1997) and a generalized version of the Grubbs-Beck test for low outliers (Cohn and others, 2013). The SOH requested that the USGS implement these changes in a user-friendly, publicly accessible program.

  14. Flux of optical meteors down to M(pg) = +12 [photographic absolute magnitude]

    NASA Technical Reports Server (NTRS)

    Cook, A. F.; Weekes, T. C.; Williams, J. T.; Omongain, E.

    1980-01-01

    Observations of the flux of optical meteors down to photographic magnitude +12 are reported. The meteors were detected by photometry using a 10-m optical reflector from December 12-15, 1974, during the Geminid shower. A total of 2222 light pulses is identified as coming from meteors within the 1 deg field of view of the detector; most correspond to sporadic meteors traversing the detector beam at various angles and velocities, and the rate does not differ with the date, indicating that the Geminid contribution at faint luminosities is small compared to the sporadic contribution. A rate of 1.1 to 3.3 × 10^-12 meteors/sq cm per sec is obtained, together with a power-law meteor spectrum which is used to derive a relationship between cumulative meteor flux and magnitude that is linear for magnitudes from -2.4 through +12. Expressions for the cumulative flux upon the Earth's atmosphere and at a test surface at 1 AU far from the Earth as a function of magnitude are also obtained, along with an estimate of the cumulative number density of particles.

  15. A simple approach to estimate earthquake magnitude from the arrival time of the peak acceleration amplitude

    NASA Astrophysics Data System (ADS)

    Noda, S.; Yamamoto, S.

    2014-12-01

    In order for Earthquake Early Warning (EEW) to be effective, rapid determination of magnitude (M) is important. At present there are no methods that can accurately determine M for extremely large events (ELEs) quickly enough for EEW, although a number of methods have been suggested. To address this problem, we use a simple approach based on the fact that the time difference (Top) from the onset of the body wave to the arrival time of its peak acceleration amplitude scales with M. To test this approach, we first use 15,172 accelerograms of regional earthquakes (most of them M4-7 events) from K-NET. Top is defined by analyzing the S wave in this step. The S onsets are calculated by adding the theoretical S-P times to the P onsets, which are picked manually. As a result, we confirm that log Top correlates strongly with Mw, especially for the higher-frequency band (>2 Hz). The RMS of the residuals between Mw and the M estimated in this step is less than 0.5. In the case of the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of ELE data, we then add teleseismic high-frequency P-wave records to the analysis. According to the results of various back-projection analyses, we consider the teleseismic P waves to contain information on the entire rupture process. The BHZ channel data of the Global Seismographic Network for 24 events are used in this step. 2-4 Hz data from stations in the epicentral distance range of 30-85 degrees are used following the method of Hara [2007]. All P onsets are picked manually. Top obtained from the teleseismic data shows good correlation with Mw, complementing that obtained from the regional data. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for ELEs.

  16. The Fast Declining Type Ia Supernova 2003gs, and Evidence for a Significant Dispersion in Near-Infrared Absolute Magnitudes of Fast Decliners at Maximum Light

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin; Marion, G. H.; Suntzeff, Nicholas B.; Blanc, Guillaume; Bufano, Filomena; Candia, Pablo; Cartier, Regis; Elias-Rosa, Nancy; Espinoza, Juan; Gonzalez, David; Gonzalez, Luis; Gonzalez, Sergio; Gooding, Samuel D.; Hamuy, Mario; Knox, Ethan A.; Milne, Peter A.; Morrell, Nidia; Phillips, Mark M.; Stritzinger, Maximilian; Thomas-Osip, Joanna

    2009-12-01

    We obtained optical photometry of SN 2003gs on 49 nights, from 2 to 494 days after T(B max). We also obtained near-IR photometry on 21 nights. SN 2003gs was the first fast-declining Type Ia SN that has been well observed since SN 1999by. While it was subluminous in optical bands compared to more slowly declining Type Ia SNe, it was not subluminous at maximum light in the near-IR bands. There appears to be a bimodal distribution in the near-IR absolute magnitudes of Type Ia SNe at maximum light. Those that peak in the near-IR after T(B max) are subluminous in all bands. Those that peak in the near-IR prior to T(B max), such as SN 2003gs, have effectively the same near-IR absolute magnitudes at maximum light regardless of the decline rate Δm15(B). Near-IR spectral evidence suggests that opacities in the outer layers of SN 2003gs are reduced much earlier than for normal Type Ia SNe. That may allow γ rays that power the luminosity to escape more rapidly and accelerate the decline rate. This conclusion is consistent with the photometric behavior of SN 2003gs in the IR, which indicates a faster than normal decline from approximately normal peak brightness. Based in part on observations taken at the Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation. The near-IR photometry from La Silla and Paranal was obtained by the European Supernova Collaboration (ESC).

  17. Light curves and absolute magnitudes of four recent fast LMC novae

    NASA Astrophysics Data System (ADS)

    Hearnshaw, J. B.; Livingston, C. M.; Gilmore, A. C.; Kilmartin, P. M.

    2004-05-01

    The light curves of four recent fast LMC novae (Nova LMC 1988a, 1992, 1995, 2000) have been analysed to obtain the parameter t2, the time for a two-magnitude decline below maximum light. Using the calibration of Della Valle & Livio (1995), values of MV at maximum are obtained. The weighted mean distance modulus to the LMC based on these novae is 18.89 ± 0.16. This differs significantly from the distance modulus of 18.50 adopted by Della Valle & Livio, but differs only at the 1σ level from Feast's (1999) value of 18.70 ± 0.10. The evidence based on these novae suggests that either (i) an LMC distance modulus of 18.50 places the LMC too close; or (ii) some novae in the LMC, including these four, are significantly underluminous at maximum light compared with those in M31, by about 0.4 mag. This could be a metallicity effect, given that more metal-rich M31 novae were predominantly used by Della Valle & Livio to obtain their calibration.
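
    The parameter t2 used above is simply the time for the light curve to fade two magnitudes below maximum; a minimal sketch of measuring it from a sampled light curve, and of turning a calibrated peak absolute magnitude into a distance modulus, is given below. The light-curve values are made up, and the Della Valle & Livio calibration itself is not reproduced here.

```python
import numpy as np

def t2_from_light_curve(days, mags):
    """Time (days after maximum) at which a nova has faded 2 mag below
    maximum light, by linear interpolation between observations."""
    days = np.asarray(days, dtype=float)
    mags = np.asarray(mags, dtype=float)
    i_max = np.argmin(mags)                  # brightest point (smallest magnitude)
    target = mags[i_max] + 2.0
    post = slice(i_max, None)
    # First crossing of the 2-mag-decline level after maximum.
    return np.interp(target, mags[post], days[post]) - days[i_max]

def distance_modulus(m_max, M_max):
    """mu = m(max) - M(max): apparent minus calibrated absolute peak magnitude."""
    return m_max - M_max

# Toy fast-nova light curve (days, V magnitudes).
days = [0, 2, 4, 6, 8, 12, 16]
mags = [10.8, 11.2, 11.9, 12.5, 13.1, 14.0, 14.8]
print("t2 =", round(t2_from_light_curve(days, mags), 1), "days")
print("mu =", distance_modulus(m_max=10.8, M_max=-8.0))
```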

  18. THE DYNAMICAL DISTANCE, RR LYRAE ABSOLUTE MAGNITUDE, AND AGE OF THE GLOBULAR CLUSTER NGC 6266

    SciTech Connect

    McNamara, Bernard J.; McKeever, Jean

    2011-11-15

    The internal proper motion dispersion of NGC 6266 was measured using Hubble Space Telescope Wide Field Planetary Camera 2 images with an epoch difference of eight years. The dispersion was found to be 0.041 ± 0.001 arcsec century⁻¹. This value was then equated to the cluster's radial velocity dispersion of 13.7 ± 1.1 km s⁻¹ to yield a distance to NGC 6266 of 7054 ± 583 pc. Based on this distance, we find that the NGC 6266 RR Lyrae stars have M_V = 0.51 ± 0.18 mag. This magnitude is in good agreement with that predicted by the M_V versus [Fe/H] relation found by Benedict et al. Using an average [Fe/H] of -1.25 for NGC 6266, their relation predicts M_V = 0.49 ± 0.06. Based on the RR Lyrae M_V versus age relation determined by Chaboyer et al., we estimate that NGC 6266 has an age of 11.4 ± 2.2 Gyr.
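
    The distance follows from equating the proper-motion dispersion (converted to a transverse-velocity dispersion) to the radial-velocity dispersion; the sketch below reproduces the quoted numbers to within rounding.

```python
def dynamical_distance_pc(sigma_v_kms, sigma_pm_arcsec_per_century):
    """Distance at which a proper-motion dispersion matches a radial-velocity
    dispersion:  d [pc] = sigma_v [km/s] / (4.74 * sigma_mu [arcsec/yr])."""
    sigma_pm_arcsec_per_yr = sigma_pm_arcsec_per_century / 100.0
    return sigma_v_kms / (4.74 * sigma_pm_arcsec_per_yr)

# Values quoted in the abstract: 13.7 km/s and 0.041 arcsec per century.
print(round(dynamical_distance_pc(13.7, 0.041)))   # ~7,050 pc
```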

  19. OSSOS. II. A Sharp Transition in the Absolute Magnitude Distribution of the Kuiper Belt’s Scattering Population

    NASA Astrophysics Data System (ADS)

    Shankman, C.; Kavelaars, JJ.; Gladman, B. J.; Alexandersen, M.; Kaib, N.; Petit, J.-M.; Bannister, M. T.; Chen, Y.-T.; Gwyn, S.; Jakubik, M.; Volk, K.

    2016-02-01

    We measure the absolute magnitude (H) distribution, dN(H) ∝ 10^(αH), of the scattering trans-Neptunian objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around Hg ∼ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys (the Canada-France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds), combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4-8.3) × 10^5 for Hr < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while predicting a large enough scattering TNO population to act as the sole supply of the Jupiter-family comets.

  20. Methods for estimating the magnitude and frequency of peak discharges of rural, unregulated streams in Virginia

    USGS Publications Warehouse

    Bisese, James A.

    1995-01-01

    Methods are presented for estimating the peak discharges of rural, unregulated streams in Virginia. A Pearson Type III distribution was fitted to the logarithms of annual peak-discharge records from 363 stream-gaging stations in Virginia to estimate the peak discharge at these stations for recurrence intervals of 2 to 500 years. Peak-discharge characteristics for 284 stations were regressed on potential explanatory variables, including drainage area, main channel length, main channel slope, mean basin elevation, percentage of forest cover, mean annual precipitation, and maximum rainfall intensity, by using generalized least-squares multiple-regression analysis. Stations were grouped into eight peak-discharge regions based on the five physiographic provinces in the State, and equations are presented for each region. Alternative equations using drainage area only are also presented for each region. Methods and sample computations are provided to estimate peak discharges for recurrence intervals of 2 to 500 years at gaged and ungaged sites in Virginia, and to adjust the regression estimates for sites where nearby gaged-site data are available.
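
    As an illustration of the station-by-station step, the sketch below fits a Pearson Type III distribution to the logarithms of a hypothetical annual peak series and reads off quantiles for selected recurrence intervals. A real Bulletin 17-style analysis adds regional skew weighting, outlier tests, and historical-peak adjustments, all of which are omitted here.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual peak-discharge series (cfs), for illustration only.
    peaks = np.array([1200., 2300., 980., 4100., 1750., 2600., 3300.,
                      1500., 2900., 2100., 5400., 1300., 1900., 2450.])

    logq = np.log10(peaks)
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)          # station skew only, no regional weighting

    # Peak discharge for recurrence interval T (annual exceedance probability 1/T)
    for T in (2, 10, 100, 500):
        q = stats.pearson3.ppf(1 - 1.0 / T, skew, loc=mean, scale=std)
        print(f"{T:>3}-year peak ~ {10**q:,.0f} cfs")
    ```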

  1. Estimating magnitude and frequency of peak discharges for rural, unregulated, streams in West Virginia

    USGS Publications Warehouse

    Wiley, J.B.; Atkins, John T.; Tasker, Gary D.

    2000-01-01

    Multiple and simple least-squares regression models for the log10-transformed 100-year discharge with independent variables describing the basin characteristics (log10-transformed and untransformed) for 267 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions of the State, designated East, North, and South. Exploratory data analysis procedures identified 31 gaging stations at which discharges are different from what would be expected for West Virginia. Regional equations for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak discharges were determined by generalized least-squares regression using data from 236 gaging stations. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracy of the estimating equations is quantified by the average prediction error (from 27.7 to 44.7 percent) and equivalent years of record (from 1.6 to 20.0 years).

  2. Color excesses, intrinsic colors, and absolute magnitudes of Galactic and Large Magellanic Cloud Wolf-Rayet stars

    NASA Technical Reports Server (NTRS)

    Vacca, William D.; Torres-Dodgen, Ana V.

    1990-01-01

    A new method of determining the color excesses of WR stars in the Galaxy and the LMC has been developed and is used to determine the excesses for 44 Galactic and 32 LMC WR stars. The excesses are combined with line-free, narrow-band spectrophotometry to derive intrinsic colors of the WR stars of nearly all spectral subtypes. No correlation of UV spectral index or intrinsic colors with spectral subtype is found for the samples of single WN or WC stars. There is evidence that early WN stars in the LMC have flatter UV continua and redder intrinsic colors than early WN stars in the Galaxy. No separation is found between the values derived for Galactic WC stars and those obtained for LMC WC stars. The intrinsic colors are compared with those calculated from model atmospheres of WR stars and generally good agreement is found. Absolute magnitudes are derived for WR stars in the LMC and for those Galactic WR stars located in clusters and associations for which there are reliable distance estimates.

  3. Absolute magnitudes and slope parameters for 250,000 asteroids observed by Pan-STARRS PS1 - Preliminary results

    NASA Astrophysics Data System (ADS)

    Vereš, Peter; Jedicke, Robert; Fitzsimmons, Alan; Denneau, Larry; Granvik, Mikael; Bolin, Bryce; Chastel, Serge; Wainscoat, Richard J.; Burgett, William S.; Chambers, Kenneth C.; Flewelling, Heather; Kaiser, Nick; Magnier, Eugen A.; Morgan, Jeff S.; Price, Paul A.; Tonry, John L.; Waters, Christopher

    2015-11-01

    We present the results of a Monte Carlo technique to calculate the absolute magnitudes (H) and slope parameters (G) of ∼240,000 asteroids observed by the Pan-STARRS1 telescope during the first 15 months of its 3-year all-sky survey mission. The system's exquisite photometry, with photometric errors ≲ 0.04 mag, and well-defined filter and photometric system allowed us to derive accurate H and G even with a limited number of observations and a restricted range of phase angles. Our Monte Carlo method simulates each asteroid's rotation period, amplitude and color to derive the most likely H and G, but its major advantage is in estimating realistic statistical and systematic uncertainties on each parameter. The method was tested by comparison with the well-established and accurate results for about 500 asteroids provided by Pravec et al. (Pravec, P. et al. [2012]. Icarus 221, 365-387) and then applied to determining H and G for the Pan-STARRS1 asteroids using both the Muinonen et al. (Muinonen, K. et al. [2010]. Icarus 209, 542-555) and Bowell et al. (Bowell, E. et al. [1989]. Asteroids II, chapter "Application of Photometric Models to Asteroids". University of Arizona Press, pp. 524-555) phase functions. Our results confirm the bias in MPC photometry discovered by Jurić et al. (Jurić, M. et al. [2002]. Astron. J. 124, 1776-1787).
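
    For reference, the sketch below evaluates the standard two-parameter (H, G) phase function of Bowell et al. (1989) to recover H from a reduced magnitude at a given phase angle. The Φ1 and Φ2 coefficients are the usual published values, and the example numbers are purely illustrative; this is not the paper's Monte Carlo machinery.

    ```python
    import numpy as np

    def absolute_from_reduced(V_reduced, alpha_deg, G=0.15):
        # Two-parameter (H, G) phase function: the reduced magnitude is
        # V(1,1,alpha) = H - 2.5*log10[(1-G)*Phi1 + G*Phi2], so H follows by inversion.
        a = np.radians(alpha_deg)
        phi1 = np.exp(-3.33 * np.tan(a / 2.0) ** 0.63)
        phi2 = np.exp(-1.87 * np.tan(a / 2.0) ** 1.22)
        return V_reduced + 2.5 * np.log10((1.0 - G) * phi1 + G * phi2)

    # Example: reduced magnitude 15.3 observed at 12 deg phase angle, default slope G = 0.15
    print(f"H ~ {absolute_from_reduced(15.3, 12.0):.2f}")
    ```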

  4. Using A New Model for Main Sequence Turnoff Absolute Magnitudes to Measure Stellar Streams in the Milky Way Halo

    NASA Astrophysics Data System (ADS)

    Weiss, Jake; Newberg, Heidi Jo; Arsenault, Matthew; Bechtel, Torrin; Desell, Travis; Newby, Matthew; Thompson, Jeffery M.

    2016-01-01

    Statistical photometric parallax is a method that uses the distribution of absolute magnitudes of stellar tracers to statistically recover the underlying density distribution of those tracers. In previous work, statistical photometric parallax was used to trace the Sagittarius Dwarf tidal stream, the so-called bifurcated piece of the Sagittarius stream, and the Virgo Overdensity through the Milky Way. We use an improved knowledge of this distribution in a new algorithm that accounts for the changes in the stellar population of color-selected stars near the photometric limit of the Sloan Digital Sky Survey (SDSS). Although we select bluer main sequence turnoff (MSTO) stars as tracers, large color errors near the survey limit cause many stars to be scattered out of our selection box and many fainter, redder stars to be scattered into our selection box. We show that we are able to recover parameters for analogues of these streams in simulated data using a maximum likelihood optimization on MilkyWay@home. We also present preliminary results of fitting the density distribution of major Milky Way tidal streams in SDSS data. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.

  5. The absolute magnitude of K0V stars from Hipparcos data using an analytical treatment of the Malmquist bias

    NASA Astrophysics Data System (ADS)

    Butkevich, A. G.; Berdyugin, A. V.; Teerikorpi, P.

    2005-06-01

    We calculate the average absolute magnitude for Hipparcos single K0V stars, using a theoretical curve for the distance-dependent Malmquist bias in the data. This method is shown to be well applicable to stellar data with good parallaxes and gives results in agreement with a previous study that used another treatment of the bias developed in extragalactic astronomy (finding the "unbiased plateau"). In particular, we point out that such a fit, which uses stars in the biased part of the sample, may be less vulnerable to the fluctuations in the unbiased plateau whose definition is also somewhat subjective. We found for K0V stars M_0=5.8, with the spread of the luminosity function being 0.3 mag. It is shown that inclusion of Hipparcos non-single stars may underestimate M0 by about 0.05-0.1 mag. During the study it was found that about 20% of the sample may be mis-classified K0IV stars.
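
    The classical, distance-independent form of the Malmquist correction (uniform space density, Gaussian luminosity function of dispersion σ) gives a sense of the size of the effect discussed above; this is textbook material, not the distance-dependent treatment used in the paper.

    ```latex
    \langle M \rangle_m = M_0 - \frac{3\ln 10}{5}\,\sigma^2 \approx M_0 - 1.382\,\sigma^2
    ```

    With σ ≈ 0.3 mag this amounts to a brightening of the magnitude-limited mean by only about 0.12 mag.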

  6. Estimating the magnitude of peak discharges for selected flood frequencies on small streams in South Carolina (1975)

    USGS Publications Warehouse

    Whetstone, B.H.

    1982-01-01

    A program to collect and analyze flood data from small streams in South Carolina was conducted from 1967-75, as a cooperative research project with the South Carolina Department of Highways and Public Transportation and the Federal Highway Administration. As a result of that program, a technique is presented for estimating the magnitude and frequency of floods on small streams in South Carolina with drainage areas ranging in size from 1 to 500 square miles. Peak-discharge data from 74 stream-gaging stations (25 small streams were synthesized, whereas 49 stations had long-term records) were used in multiple regression procedures to obtain equations for estimating magnitude of floods having recurrence intervals of 10, 25, 50, and 100 years on small natural streams. The significant independent variable was drainage area. Equations were developed for the three physiographic provinces of South Carolina (Coastal Plain, Piedmont, and Blue Ridge) and can be used for estimating floods on small streams. (USGS)

  7. Analysis of the Magnitude and Frequency of Peak Discharges for the Navajo Nation in Arizona, Utah, Colorado, and New Mexico

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2006-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimating peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, adding 13 years of peak-discharge data to a 1997 investigation that used gaging-station data through 1986. The equations for estimating peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then was applied to the same

  8. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells

    PubMed Central

    Gerencser, Akos A.; Mookerjee, Shona A.; Jastroch, Martin; Brand, Martin D.

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions. PMID:27404273

  9. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells.

    PubMed

    Gerencser, Akos A; Mookerjee, Shona A; Jastroch, Martin; Brand, Martin D

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions. PMID:27404273

  10. Interference peak detection based on FPGA for real-time absolute distance ranging with dual-comb lasers

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Dong, Hao; Zhou, Qian; Xu, Mingfei; Li, Xinghui; Wu, Guanhao

    2015-08-01

    Absolute distance measurement using dual femtosecond comb lasers can achieve higher accuracy and faster measurement speed, which makes it increasingly attractive. The data processing flow consists of four steps: interference peak detection, fast Fourier transform (FFT), phase fitting, and compensation for the index of refraction. A real-time data processing system based on a Field-Programmable Gate Array (FPGA) for dual-comb ranging has been newly developed. The design and implementation of the interference peak detection algorithm in FPGA and the Verilog language is introduced in this paper; it is viewed as the most complicated part and an important guarantee of system precision and reliability. An adaptive sliding window is used to scan for peaks. In the process of detection, the algorithm stores 16 sample data as a detection unit and calculates the average of each unit. The average is used to determine the vertical center height of the sliding window. The algorithm also estimates the noise intensity of each detection unit and then averages the noise strength over 128 successive units. This noise average is used to calculate the signal-to-noise ratio of the current working environment, which in turn adjusts the height of the sliding window. The adaptive sliding window helps to eliminate false peaks caused by noise. The whole design is pipelined, which improves the real-time throughput of the peak detection module. Its execution speed is up to 140 MHz in the FPGA, and a peak can be detected within 16 clock cycles of its appearance.
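
    A software sketch of the adaptive sliding-window idea described above is given below; the actual system is a pipelined Verilog design on the FPGA. The 16-sample unit and 128-unit noise average follow the abstract, while the threshold factor and the simple per-unit noise estimate are illustrative choices, not the published parameters.

    ```python
    import numpy as np

    def detect_peaks(samples, unit=16, noise_units=128, k_sigma=4.0):
        # Group samples into 16-sample detection units, use the unit mean as the
        # vertical centre of the window, and scale the window height by a running
        # noise average over the last `noise_units` units (adaptive threshold).
        samples = np.asarray(samples, dtype=float)
        n_units = len(samples) // unit
        units = samples[:n_units * unit].reshape(n_units, unit)

        unit_means = units.mean(axis=1)     # vertical centre per detection unit
        unit_noise = units.std(axis=1)      # crude per-unit noise estimate

        peaks = []
        for i in range(n_units):
            lo = max(0, i - noise_units + 1)
            noise_avg = unit_noise[lo:i + 1].mean()
            threshold = unit_means[i] + k_sigma * noise_avg   # adaptive window height
            j = units[i].argmax()
            if units[i, j] > threshold:
                peaks.append(i * unit + j)  # sample index of the detected peak
        return peaks
    ```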

  11. Slowed oxygen uptake kinetics in hypoxia correlate with the transient peak and reduced spatial distribution of absolute skeletal muscle deoxygenation.

    PubMed

    Bowen, T Scott; Rossiter, Harry B; Benson, Alan P; Amano, Tatsuro; Kondo, Narihiko; Kowalchuk, John M; Koga, Shunsaku

    2013-11-01

    It remains unclear whether an overshoot in skeletal muscle deoxygenation (HHb; reflecting a microvascular kinetic mismatch of O2 delivery to consumption) contributes to the slowed adjustment of oxidative energy provision at the onset of exercise. We progressively reduced the fractional inspired O2 concentration (F(I,O2)) to investigate the relationship between slowed pulmonary O2 uptake (V(O2)) kinetics and the dynamics and spatial distribution of absolute [HHb]. Seven healthy men performed 8 min cycling transitions during normoxia (F(I,O2) = 0.21), moderate hypoxia (F(I,O2) = 0.16) and severe hypoxia (F(I,O2) = 0.12). V(O2) was measured using a flowmeter and gas analyser system. Absolute [HHb] was quantified by multichannel, time-resolved near-infrared spectroscopy from the rectus femoris and vastus lateralis (proximal and distal regions), and corrected for adipose tissue thickness. The phase II V(O2) time constant was slowed (P < 0.05) as F(I,O2) decreased (normoxia, 17 ± 3 s; moderate hypoxia, 22 ± 4 s; and severe hypoxia, 29 ± 9 s). The [HHb] overshoot was unaffected by hypoxia, but the transient peak [HHb] increased with the reduction in F(I,O2) (P < 0.05). Slowed V(O2) kinetics in hypoxia were positively correlated with the increased peak [HHb] in the transient (r(2) = 0.45; P < 0.05), but poorly related to the [HHb] overshoot. A reduction of spatial heterogeneity in peak [HHb] was inversely correlated with slowed V(O2) kinetics (r(2) = 0.49; P < 0.05). These data suggest that aerobic energy provision at the onset of exercise may be limited by the following factors: (i) the absolute ratio (i.e. peak [HHb]) rather than the kinetic ratio (i.e. the [HHb] overshoot) of microvascular O2 delivery to consumption; and (ii) a reduced spatial distribution in the ratio of microvascular O2 delivery to consumption across the muscle. PMID:23851917

  12. Evaluation of six methods for estimating magnitude and frequency of peak discharges on urban streams in New York

    USGS Publications Warehouse

    Stedfast, D.A.

    1986-01-01

    Six methods of estimating peak discharges of urban streams were compared and evaluated for applicability to urban streams in New York. Discharge and frequency values developed from a series of synthesized annual flood records were compared with values obtained from the six methods. The synthesized flood records were computed from rainfall-runoff models of 11 urban basins in three counties across the State. Four of these basins had a sufficient period of record to enable rainfall-runoff modeling of two different 5-year periods, so that increases in peak flow due to increased urbanization could also be used for comparison of the six methods. A graphical analysis and three types of mathematical analyses were made to evaluate the closeness of fit and bias of the methods. All methods showed a tendency to overestimate synthetic urban flood-magnitude values, but the two methods that adjust rural flood-frequency estimates on a nationwide basis showed the smallest standard errors of estimate and bias. The standard errors for these two methods ranged from 44 to 57 percent over the six recurrence intervals (2, 5, 10, 25, 50, and 100 years), and the bias ranged from +28 to +53 percent. The bias, however, is probably due to errors inherent in using synthetic records and in applying the New York rural flood-frequency equations to urban basins with small drainage areas. (USGS)

  13. The dependence of peak horizontal acceleration on magnitude, distance, and site effects for small-magnitude earthquakes in California and eastern North America

    USGS Publications Warehouse

    Campbell, K.W.

    1989-01-01

    One hundred and ninety free-field accelerograms recorded on deep soil (>10 m deep) were used to study the near-source scaling characteristics of peak horizontal acceleration for 91 earthquakes (2.5 ≤ ML ≤ 5.0) located primarily in California. An analysis of residuals based on an additional 171 near-source accelerograms from 75 earthquakes indicated that accelerograms recorded in building basements sited on deep soil have 30 per cent lower accelerations, and that free-field accelerograms recorded on shallow soil (≤10 m deep) have 82 per cent higher accelerations than free-field accelerograms recorded on deep soil. An analysis of residuals based on 27 selected strong-motion recordings from 19 earthquakes in Eastern North America indicated that near-source accelerations associated with frequencies less than about 25 Hz are consistent with predictions based on attenuation relationships derived from California. -from Author

  14. Galactic model parameters of cataclysmic variables: Results from a new absolute magnitude calibration with 2MASS and WISE

    NASA Astrophysics Data System (ADS)

    Özdönmez, A.; Ak, T.; Bilir, S.

    2015-01-01

    In order to determine the spatial distribution, Galactic model parameters and luminosity function of cataclysmic variables (CVs), a J-band magnitude limited sample of 263 CVs has been established using a newly constructed period-luminosity-colours (PLCs) relation which includes the J-, Ks- and W1-band magnitudes in the 2MASS and WISE photometric systems, and the orbital periods of the systems. This CV sample is assumed to be homogeneous with regard to distances, as the new PLCs relation is calibrated with new or re-measured trigonometric parallaxes. Our analysis shows that the scaleheight of CVs increases towards shorter periods, although selection effects for periods shorter than 2.25 h dramatically decrease the scaleheight: the scaleheight of the systems increases from 192 pc to 326 pc as the orbital period decreases from 12 to 2.25 h. The z-distribution of all CVs in the sample is well fitted by an exponential function with a scaleheight of 213 (+11, -10) pc. However, we suggest that the scaleheight of CVs in the Solar vicinity should be ∼300 pc and that scaleheights derived using the sech^2 function should also be considered in population synthesis models. The space density of CVs in the Solar vicinity is found to be 5.58 (±1.35) × 10^-6 pc^-3, which is in the range of previously derived space densities and not in agreement with the predictions of the population models. Comparisons of the luminosity function of white dwarfs with the luminosity function of CVs in this study show that the best fits are obtained by dividing the luminosity function of white dwarfs by a factor of 350-450.

  15. A method for establishing absolute full-energy peak efficiency and its confidence interval for HPGe detectors

    NASA Astrophysics Data System (ADS)

    Rizwan, U.; Chester, A.; Domingo, T.; Starosta, K.; Williams, J.; Voss, P.

    2015-12-01

    A method is proposed for establishing the absolute efficiency calibration of an HPGe detector, including its confidence interval, in the energy range of 79.6-3451.2 keV. The calibrations were accomplished with 133Ba, 60Co, 56Co and 152Eu point-like radioactive sources, with only the 60Co source being activity calibrated, to an accuracy of 2% at the 90% confidence level. All data sets measured from activity-calibrated and uncalibrated sources were fit simultaneously using the linearized least-squares method. The proposed fit function accounts for scaling of the data taken with activity-uncalibrated sources to the data taken with the high-accuracy activity-calibrated source. The confidence interval for the fit was found analytically using the covariance matrix. The accuracy of the fit was below 3.5% at the 90% confidence level in the 79.6-3451.2 keV energy range.
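
    The flavour of the fit-plus-confidence-band step can be illustrated as follows: a weighted polynomial fit of log efficiency versus log energy, with the band taken from the covariance matrix. The energies, efficiencies, fit order, and the omission of the floating scale factors for the uncalibrated sources are all simplifications of the published procedure, not a reproduction of it.

    ```python
    import numpy as np

    # Hypothetical full-energy-peak efficiency points (energy in keV), for illustration.
    energy = np.array([121.8, 344.3, 778.9, 1173.2, 1332.5, 1408.0, 2598.5])
    eff = np.array([0.012, 0.0061, 0.0032, 0.0023, 0.0021, 0.0020, 0.0012])
    eff_err = 0.03 * eff

    x, y, w = np.log(energy), np.log(eff), eff / eff_err   # weights = 1/sigma in log space
    coeff, cov = np.polyfit(x, y, deg=3, w=w, cov=True)    # linearized least squares

    def efficiency_with_band(e_kev, nsig=1.645):           # ~90% band, assuming Gaussian errors
        xx = np.log(e_kev)
        X = np.vander([xx], 4)                             # design row for a cubic in log E
        yhat = X @ coeff
        half = nsig * np.sqrt(np.diag(X @ cov @ X.T))
        return np.exp(yhat), np.exp(yhat - half), np.exp(yhat + half)

    print(efficiency_with_band(661.7))                     # efficiency and band at 661.7 keV
    ```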

  16. Estimating the magnitude of annual peak discharges with recurrence intervals between 1.1 and 3.0 years for rural, unregulated streams in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.; Atkins, John T.; Newell, Dawn A.

    2002-01-01

    Multiple and simple least-squares regression models for the log10-transformed 1.5- and 2-year recurrence intervals of peak discharges with independent variables describing the basin characteristics (log10-transformed and untransformed) for 236 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions in West Virginia designated as East, North, and South. Regional equations for the 1.1-, 1.2-, 1.3-, 1.4-, 1.5-, 1.6-, 1.7-, 1.8-, 1.9-, 2.0-, 2.5-, and 3-year recurrence intervals of peak discharges were determined by generalized least-squares regression. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracies of estimating equations are quantified by measuring the average prediction error (from 27.4 to 52.4 percent) and equivalent years of record (from 1.1 to 3.4 years).

  17. Peak strain magnitudes and rates in the tibia exceed greatly those in the skull: An in vivo study in a human subject

    PubMed Central

    Hillam, Richard A; Goodship, Allen E; Skerry, Tim M

    2015-01-01

    Bone mass and architecture are the result of a genetically determined baseline structure, modified by the effect of internal hormonal/biochemical regulators and the effect of mechanical loading. Bone strain is thought to drive a feedback mechanism that regulates bone formation and resorption to maintain an optimal, but not excessive, mass and organisation of material at each skeletal location. Because every site in the skeleton has different functions, we have measured bone strains induced by physiological and more unusual activities at two different sites, the tibia and cranium, of a young human male in vivo. During the most vigorous activities, tibial strains were shown to exceed 0.2%, when ground reaction force exceeded 5 times body weight. In the skull, however, the highest strains recorded were during heading of a heavy medicine/exercise ball, where parietal strains were up to 0.0192%. Interestingly, parietal strains during more physiological activities were much lower, often below 0.01%. Strains during biting were not dependent upon bite force, but could be induced by facial contortions of similar appearance without contact between the teeth. Rates of strain change at the two sites were also very different: peak tibial strain rate exceeded that in the parietal bone by more than fivefold. These findings suggest that the skull and tibia are subject to quite different regulatory influences, as strains that would be normal in the human skull would be likely to lead to profound bone loss by disuse in the long bones. PMID:26232812

  18. TriCalm® hydrogel is significantly superior to 2% diphenhydramine and 1% hydrocortisone in reducing the peak intensity, duration, and overall magnitude of cowhage-induced itch

    PubMed Central

    Papoiu, Alexandru DP; Chaudhry, Hunza; Hayes, Erin C; Chan, Yiong-Huak; Herbst, Kenneth D

    2015-01-01

    Background Itch is one of the most frequent skin complaints and its treatment is challenging. From a neurophysiological perspective, two distinct peripheral and spinothalamic pathways have been described for itch transmission: a histaminergic pathway and a nonhistaminergic pathway mediated by protease-activated receptors (PAR)2 and 4. The nonhistaminergic itch pathway can be activated exogenously by spicules of cowhage, a tropical plant that releases a cysteine protease named mucunain that binds to and activates PAR2 and PAR4. Purpose This study was conducted to assess the antipruritic effect of a novel over-the-counter (OTC) steroid-free topical hydrogel formulation, TriCalm®, in reducing itch intensity and duration, when itch was induced with cowhage, and compared it with two other commonly used OTC anti-itch drugs. Study participants and methods This double-blinded, vehicle-controlled, randomized, crossover study recorded itch intensity and duration in 48 healthy subjects before and after skin treatment with TriCalm hydrogel, 2% diphenhydramine, 1% hydrocortisone, and hydrogel vehicle, used as a vehicle control. Results TriCalm hydrogel significantly reduced the peak intensity and duration of cowhage-induced itch when compared to the control itch curve, and was significantly superior to the two other OTC antipruritic agents and its own vehicle in antipruritic effect. TriCalm hydrogel was eight times more effective than 1% hydrocortisone and almost six times more effective than 2% diphenhydramine in antipruritic action, as evaluated by the reduction of area under the curve. Conclusion TriCalm hydrogel has a robust antipruritic effect against nonhistaminergic pruritus induced via the PAR2 pathway, and therefore it could represent a promising treatment option for itch. PMID:25941445

  19. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  20. Discovery of Cepheids in NGC 5253: Absolute peak brightness of SN Ia 1895B and SN Ia 1972E and the value of H(sub 0)

    NASA Technical Reports Server (NTRS)

    Saha, A.; Sandage, Allan; Labhardt, Lukas; Schwengeler, Hans; Tammann, G. A.; Panagia, N.; Macchetto, F. D.

    1995-01-01

    Observations with the Hubble Space Telescope (HST) between 1993 May 31 and 1993 July 19, in 20 epochs in the F555W passband and five epochs in the F785LP passband, have led to the discovery of 14 Cepheids in the Amorphous galaxy NGC 5253. The apparent V distance modulus is (m-M)(sub AV) = 28.08 +/- 0.10, determined from the 12 Cepheids with normal amplitudes. The distance modulus using the F785LP data is consistent with the V value to within the errors. Five methods used to determine the internal reddening are consistent with zero differential reddening, accurate to a level of E(B-V) less than 0.05 mag, over the region occupied by the Cepheids and the two supernovae (SNe) produced by NGC 5253. The apparent magnitudes at maximum for the two SNe in NGC 5253 are adopted as B(sub max) = 8.33 +/- 0.2 mag for SN 1895B, and B(sub max) = 8.56 +/- 0.1 and V(sub max) = 8.60 +/- 0.1 for SN 1972E, which is a prototype SN of Type Ia. The apparent magnitude system used by Walker (1923) for SN 1895B has been corrected to the modern B scale and zero point to determine its adopted B(sub max) value.

  1. Investigation of Faint Galactic Carbon Stars from the First Byurakan Spectral sky Survey. Optical Variability. I. N-Type AGB Carbon Stars. K-band Absolute Magnitudes and Distances

    NASA Astrophysics Data System (ADS)

    Gigoyan, K. S.; Sarkissian, A.; Russeil, D.; Mauron, N.; Kostandyan, G.; Vartanian, R.; Abrahamyan, H. V.; Paronyan, G. M.

    2014-12-01

    The goal of this paper is to present an optical variability study of the comparatively faint carbon (C) stars that have been discovered by searching the First Byurakan Survey (FBS) low-resolution (lr) spectral plates at high Galactic latitudes, using recent wide-area variability databases. The light curves from the Catalina Sky Survey (CSS) and Northern Sky Variability Survey (NSVS) databases were exploited to study their variability nature. In this paper, the first in this series, variability classes are presented for 54 N-type Asymptotic Giant Branch (AGB) C stars. We find that 9 stars belong to the Mira-type group, 43 are Semi-Regular (SR), and 2 are Irregular (Irr)-type variables. The variability types of 27 objects have been established for the first time. K-band absolute magnitudes, distances, and heights above the Galactic plane were estimated for all of them. We aim to better understand the nature of the selected C stars through spectroscopy, 2MASS photometric colors, and variability data. Most of the tools used in this study were developed within the framework of the Astronomical Virtual Observatory.

  2. Absolute Rovibrational Intensities for the Chi(sup 1)Sigma(sup +) v=3 <-- 0 Band of (12)C(16)O Obtained with Kitt Peak and BOMEM FTS Instruments

    NASA Technical Reports Server (NTRS)

    Chackerian, Charles, Jr.; Kshirsagar, R. J.; Giver, L. P.; Brown, L. R.; Condon, Estelle P. (Technical Monitor)

    1999-01-01

    This work was initiated to compare absolute line intensities retrieved with the Kitt Peak FTS (Fourier Transform Spectrometer) and the Ames BOMEM FTS. Since thermal contamination can be a problem with the BOMEM instrument if proper precautions are not taken, it was thought that measurements made at 6300 cm(exp -1) would more easily result in satisfactory intercomparisons. Very recent measurements of the CO 3 <-- 0 band line intensities confirm the results reported here that the intensities listed in HITRAN (High Resolution Molecular Absorption Database) for this band are on the order of six to seven percent too low. All of the infrared intensities in the current HITRAN tabulation are based on the electric dipole moment function reported fifteen years ago. The latter in turn was partly based on intensities for the 3 <-- 0 band reported thirty years ago. We have, therefore, redetermined the electric dipole moment function of ground electronic state CO.

  3. Absolute Identification by Relative Judgment

    ERIC Educational Resources Information Center

    Stewart, Neil; Brown, Gordon D. A.; Chater, Nick

    2005-01-01

    In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative…

  4. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen, as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here, and we hope interested teachers will draw their students’ attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  5. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do. .

  6. The UBV Color Evolution of Classical Novae. II. Color–Magnitude Diagram

    NASA Astrophysics Data System (ADS)

    Hachisu, Izumi; Kato, Mariko

    2016-04-01

    We have examined the outburst tracks of 40 novae in the color–magnitude diagram (intrinsic B ‑ V color versus absolute V magnitude). After reaching optical maximum, each nova generally evolves blueward, from the upper right to the lower left, and then turns back toward the right. The 40 tracks are categorized into one of six templates: the very fast nova V1500 Cyg; the fast novae V1668 Cyg, V1974 Cyg, and LV Vul; the moderately fast nova FH Ser; and the very slow nova PU Vul. These templates are located from the left (blue) to the right (red) in this order, depending on the envelope mass and nova speed class. A bluer nova has a less massive envelope and a faster nova speed class. In novae with multiple peaks, the track of the first decay is redder than that of the second (or third) decay, because a large part of the envelope mass has already been ejected during the first peak. Thus, our newly obtained tracks in the color–magnitude diagram provide useful information for understanding the physics of classical novae. We also found that the absolute magnitude at the beginning of the nebular phase is similar among various novae. We are able to determine the absolute magnitude (or distance modulus) by fitting the track of a target nova to the track of a nova of the same classification with a known distance. This method for determining nova distances has been applied to some recurrent novae, and their distances have been recalculated.
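
    A toy version of the track-fitting distance determination described above: shift a template track of known absolute magnitude vertically until it matches the target nova's apparent-magnitude track at the same intrinsic colours. The interpolation, the median offset, and the example arrays are simplifications and assumptions, not the authors' fitting procedure.

    ```python
    import numpy as np

    def distance_modulus_from_track(bv_target, v_target, bv_template, Mv_template):
        # Interpolate the template (intrinsic B-V vs absolute V) at the target's
        # colours; the median apparent-minus-absolute offset is the distance modulus.
        # Assumes dereddened colours and a monotonically increasing template colour axis.
        Mv_at_target = np.interp(bv_target, bv_template, Mv_template)
        return np.median(np.asarray(v_target) - Mv_at_target)

    # Illustrative arrays only:
    mu = distance_modulus_from_track([0.1, 0.0, -0.1], [9.5, 10.1, 10.8],
                                     [-0.2, 0.0, 0.2], [-7.0, -6.2, -5.4])
    print(f"(m - M)_V ~ {mu:.1f}")
    ```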

  7. Distances and absolute magnitudes of RR Lyrae variable stars

    SciTech Connect

    Jones, R.V.

    1987-01-01

    Complete optical and infrared photometry were acquired for three field RR Lyrae variables: the very metal-poor RRab-type variable X Arietis, the moderately metal-rich RRab star SW Draconis, and the moderately metal-rich RRc-type star DH Pegasi. Radial velocities with typical accuracies of 1 km sec/sup -1/ were also obtained for these stars nearly simultaneously with the photometry. Three versions of the Baade-Wesselink method were applied to these stars, and it was determined that only combinations involving the use of the V-K color index yielded self-consistent results, due to the occurrence of a redistribution of flux during the expansion phase of the pulsation cycle such that there is an excess of flux in the blue optical region. Results for the three program stars indicate that there is no dependence of the value of (M/sub V/) of RR Lyrae stars upon their metallicity, although this is not definitive due to the unknown evolutionary status of these stars. The results imply that metal-poor globular clusters are older than metal-rich clusters, and that the oldest globular clusters may have an age of about 23 billion years.

  8. The distances and absolute magnitudes of RR Lyrae variable stars

    NASA Astrophysics Data System (ADS)

    Jones, Rodney Vaughn

    Complete optical and infrared photometry have been acquired for three field RR Lyrae variables: the very metal-poor RRab-type variable X Arietis, the moderately metal-rich RRab star SW Draconis, and the moderately metal-rich RRc-type star DH Pegasi. Radial velocities with typical accuracies of 1 km/sec have also been obtained for these stars nearly simultaneously with the photometry. Three versions of the Baade-Wesselink method were applied, and it was determined that only combinations involving the use of the V-K color index yielded self-consistent results, due to the occurrence of a redistribution of flux during the expansion phase of the pulsation cycle such that there is an excess of flux in the blue optical region. The study of the three program stars shows that there is no dependence of the value of (MV) of RR Lyrae stars on their metallicity, although this is not definitive due to their unclear evolutionary status. If these stars are indeed typical of RR Lyrae variables, then the results of (MV) = +0.80 +/- 0.14 mag and (Mbol) = +0.85 +/- 0.15 mag for the program stars indicate that stars of this class may be less luminous than previously thought. The results imply that metal-poor globular clusters are older than metal-rich clusters, and that the oldest globular clusters may have an age of about 23 billion years.

  9. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  10. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  11. Automaticity of Conceptual Magnitude.

    PubMed

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object's conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggesting that different types of magnitude processing and representation share the same core system. PMID:26879153

  12. Automaticity of Conceptual Magnitude

    PubMed Central

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object’s conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggesting that different types of magnitude processing and representation share the same core system. PMID:26879153

  13. Eosinophil count - absolute

    MedlinePlus

    Eosinophils; Absolute eosinophil count ... the white blood cell count to give the absolute eosinophil count. ... than 500 cells per microliter (cells/mcL). Normal value ranges may vary slightly among different laboratories. Talk ...

  14. Absolute neutrino mass measurements

    NASA Astrophysics Data System (ADS)

    Wolf, Joachim

    2011-10-01

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass, of 2.2 eV, have been set by two experiments in Mainz and Troitsk, using tritium as the beta emitter. The next-generation tritium β-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  15. Absolute neutrino mass measurements

    SciTech Connect

    Wolf, Joachim

    2011-10-06

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0{nu}2{beta}) searches, single {beta}-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass, of 2.2 eV, have been set by two experiments in Mainz and Troitsk, using tritium as the beta emitter. The next-generation tritium {beta}-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope ({sup 187}Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R and D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0{nu}2{beta} decay and single {beta}-decay.

  16. Absolute response of Fuji imaging plate detectors to picosecond-electron bunches.

    PubMed

    Zeil, K; Kraft, S D; Jochmann, A; Kroll, F; Jahr, W; Schramm, U; Karsch, L; Pawelke, J; Hidding, B; Pretzler, G

    2010-01-01

    The characterization of the absolute number of electrons generated by laser wakefield acceleration often relies on absolutely calibrated FUJI imaging plates (IP), although their validity in the regime of extreme peak currents is untested. Here, we present an extensive study of the sensitivity of BAS-SR and BAS-MS IPs to picosecond electron bunches of varying charge up to 60 pC, performed at the electron accelerator ELBE, making use of about three orders of magnitude higher peak intensity than in prior studies. We demonstrate that the response of the IPs shows no saturation effect and that the BAS-SR IP sensitivity of 0.0081 photostimulated luminescence per electron confirms surprisingly well data from previous works. However, the use of the identical readout system and handling procedures turned out to be crucial and, if unnoticed, may be an important error source. PMID:20113093

  17. Absolute response of Fuji imaging plate detectors to picosecond-electron bunches

    SciTech Connect

    Zeil, K.; Kraft, S. D.; Jochmann, A.; Kroll, F.; Jahr, W.; Schramm, U.; Karsch, L.; Pawelke, J.; Hidding, B.; Pretzler, G.

    2010-01-15

    The characterization of the absolute number of electrons generated by laser wakefield acceleration often relies on absolutely calibrated FUJI imaging plates (IP), although their validity in the regime of extreme peak currents is untested. Here, we present an extensive study of the sensitivity of BAS-SR and BAS-MS IPs to picosecond electron bunches of varying charge up to 60 pC, performed at the electron accelerator ELBE, making use of about three orders of magnitude higher peak intensity than in prior studies. We demonstrate that the response of the IPs shows no saturation effect and that the BAS-SR IP sensitivity of 0.0081 photostimulated luminescence per electron confirms surprisingly well data from previous works. However, the use of the identical readout system and handling procedures turned out to be crucial and, if unnoticed, may be an important error source.

  18. Solar investigation at Terskol Peak

    NASA Astrophysics Data System (ADS)

    Burlov-Vasiljev, K. A.; Vasiljeva, I. E.

    2003-04-01

    During 1980--1990, regular observations of the solar disk spectrum and active solar structures were carried out with the SEF-1 and ATsU-26 telescopes at Terskol Peak in the framework of the program ``Energy distribution in the solar spectrum in absolute energy units''. In order to refine the fine structure of telluric lines, observations with the ATsU-26 telescope were carried out in parallel. This telescope was also used for the investigation of solar active structures. In this paper the observational technique is described. The obtained results and the energy distribution at the solar disk center in absolute energy units are presented.

  19. Absolute Timing Calibration of the USA Experiment Using Pulsar Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Lyne, A.; Jordon, C.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2003-03-01

    We update the status of the absolute time calibration of the USA Experiment as determined by observations of X-ray emitting rotation-powered pulsars. The brightest such source is the Crab Pulsar and we have obtained observations of the Crab at radio, IR, optical, and X-ray wavelengths. We directly compare arrival time determinations for 2--10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the ``peak'' of the light curve since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. We will also include time comparison results for other pulsars, such as PSR B1509-58 and PSR B1821-24. Once the absolute time calibrations are well understood, comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.
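
    One of the corrections mentioned above, the dispersion-measure delay applied when comparing radio arrival times with X-ray and optical ones, can be sketched as follows; the Crab DM value is approximate and given only for illustration.

    ```python
    def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
        # Cold-plasma dispersion delay relative to infinite frequency;
        # 4.149 ms is the standard dispersion constant (per pc cm^-3 at 1 GHz).
        return 4.149 * dm_pc_cm3 * freq_ghz ** -2

    # Approximate Crab pulsar DM, observed at 1.4 GHz (illustrative numbers):
    print(f"radio delay ~ {dispersion_delay_ms(56.8, 1.4):.1f} ms")
    ```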

  20. PEAK READING VOLTMETER

    DOEpatents

    Dyer, A.L.

    1958-07-29

    An improvement in peak reading voltmeters is described, which provides for storing an electrical charge representative of the magnitude of a transient voltage pulse and thereafter measuring the stored charge, drawing only negligible energy from the storage element. The incoming voltage is rectified and stored in a condenser. The voltage of the capacitor is applied across a piezoelectric crystal between two parallel plates. Any change in the voltage of the capacitor is reflected in a change in the dielectric constant of the crystal, and the capacitance between a second pair of plates affixed to the crystal is altered. The latter capacitor forms part of the frequency-determining circuit of an oscillator, and means are provided for indicating the frequency deviation, which is a measure of the peak voltage applied to the voltmeter.
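
    The readout principle, a capacitance change shifting the resonant frequency of the oscillator's tank circuit, can be sketched with the ideal LC relation below; the component values are arbitrary illustrations, not taken from the patent.

    ```python
    import math

    def lc_resonant_freq_hz(L_henry, C_farad):
        # Ideal (lossless) LC tank resonance: f = 1 / (2*pi*sqrt(L*C)).
        return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

    # Arbitrary illustrative values: a 1% capacitance change produced by the
    # crystal shifts the oscillator frequency, which is what the meter reads out.
    f0 = lc_resonant_freq_hz(1e-3, 100e-12)
    f1 = lc_resonant_freq_hz(1e-3, 101e-12)
    print(f"f0 = {f0/1e3:.1f} kHz, shift = {f0 - f1:.0f} Hz")
    ```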

  1. Are Earthquake Magnitudes Clustered?

    SciTech Connect

    Davidsen, Joern; Green, Adam

    2011-03-11

    The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid.100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.

  2. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  3. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  4. Misconceptions about astronomical magnitudes

    NASA Astrophysics Data System (ADS)

    Schulman, Eric; Cox, Caroline V.

    1997-10-01

    The present system of astronomical magnitudes was created as an inverse scale by Claudius Ptolemy in about 140 A.D. and was defined to be logarithmic in 1856 by Norman Pogson, who believed that human eyes respond logarithmically to the intensity of light. Although scientists have known for some time that the response is instead a power law, astronomers continue to use the Pogson magnitude scale. The peculiarities of this system make it easy for students to develop numerous misconceptions about how and why to use magnitudes. We present a useful exercise in the use of magnitudes to derive a cosmologically interesting quantity (the mass-to-light ratio for spiral galaxies), with potential pitfalls pointed out and explained.

  5. The secular variation of cometary magnitude

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.; Daniels, P. A.

    1983-03-01

    This paper calculates the mean variation in absolute magnitude per perihelion passage, ΔH10, for short-period comets from the data of Vsekhsvyatskii and finds a value of 0.30 ± 0.06. Other mechanisms used for estimating cometary decay are reviewed and it is concluded that a more probable value for ΔH10 is about 0.002. Reasons for the discrepancy between these two values are given.

  6. Telescopic limiting magnitudes

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1990-01-01

    The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction algorithm problem is attacked with two complementary approaches: (1) First, a theoretical algorithm was developed based on physiological data for the sensitivity of the eye. This algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) Second, 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula does accurately predict the average observed limiting magnitudes under all conditions.

  7. Should Astronomy Abolish Magnitudes?

    NASA Astrophysics Data System (ADS)

    Brecher, K.

    2001-12-01

    Astronomy is riddled with a number of anachronistic and counterintuitive practices. Among these are: plotting increasing stellar temperature from right to left in the H-R diagram; giving the distances to remote astronomical objects in parsecs; and reporting the brightness of astronomical objects in magnitudes. Historical accident and observational technique, respectively, are the bases for the first two practices, and they will undoubtedly persist in the future. However, the use of magnitudes is especially egregious when essentially linear optical detectors like CCDs are used for measuring brightness, which are then reported in a logarithmic (base 2.512 deg!) scale. The use of magnitudes has its origin in three historical artifacts: Ptolemy's method of reporting the brightness of stars in the "Almagest"; the 19th century need for a photographic photometry scale; and the 19th century studies by psychophysicists E. H. Weber and G. T. Fechner on the response of the human eye to light. The latter work sought to uncover the relationship between the subjective response of the human eye and brain to the objective brightness of external optical stimuli. The resulting Fechner-Weber law states that this response is logarithmic: that is, that the eye essentially takes the logarithm of the incoming optical signal. However, after more than a century of perceptual studies, most intensively by S. S. Stevens, it is now well established that this relation is not logarithmic. For naked eye detection of stars from the first to sixth magnitudes, it can be reasonably well fit by a power law with index of about 0.3. Therefore, the modern experimental studies undermine the physiological basis for the use of magnitudes in astronomy. Should the historical origins of magnitudes alone be reason enough for their continued use? Probably not, since astronomical magnitudes are based on outdated studies of human perception; make little sense in an era of linear optical detection; and provide a
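
    The contrast between the Pogson logarithmic scale and a power-law response can be made concrete with a short calculation. The sketch below is illustrative only: the factor-of-100 flux range and the ~0.3 exponent come from the abstract, while everything else is assumed.

        import numpy as np

        # Pogson's definition: m = -2.5 * log10(F / F0), so 5 magnitudes = a factor of 100 in flux.
        def pogson_magnitude(flux, flux_zero=1.0):
            return -2.5 * np.log10(flux / flux_zero)

        # Stevens-type power-law response; the ~0.3 exponent is the value quoted above.
        def power_law_response(flux, exponent=0.3):
            return flux ** exponent

        fluxes = np.logspace(0, -2, 6)                                # a factor-100 range of fluxes
        mags = pogson_magnitude(fluxes, flux_zero=fluxes[0]) + 1.0    # label the brightest star magnitude 1
        for f, m in zip(fluxes, mags):
            print(f"relative flux {f:6.4f} -> magnitude {m:3.1f}, power-law response {power_law_response(f):5.3f}")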

  8. Absolute biological needs.

    PubMed

    McLeod, Stephen

    2014-07-01

    Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses. PMID:23586876

  9. Landslide seismic magnitude

    NASA Astrophysics Data System (ADS)

    Lin, C. H.; Jan, J. C.; Pu, H. C.; Tu, Y.; Chen, C. C.; Wu, Y. M.

    2015-11-01

    Landslides have become one of the most deadly natural disasters on earth, not only due to a significant increase in extreme climate events caused by global warming, but also due to rapid economic development in areas of topographic relief. How to detect landslides using a real-time system has become an important question for reducing possible landslide impacts on human society. However, traditional detection of landslides, either through direct surveys in the field or remote sensing images obtained via aircraft or satellites, is highly time consuming. Here we analyze very long period seismic signals (20-50 s) generated by large landslides, such as those triggered by Typhoon Morakot, which passed through Taiwan in August 2009. In addition to successfully locating 109 large landslides, we define landslide seismic magnitude based on an empirical formula: Lm = log(A) + 0.55 log(Δ) + 2.44, where A is the maximum displacement (μm) recorded at one seismic station and Δ is its distance (km) from the landslide. We conclude that both the location and seismic magnitude of large landslides can be rapidly estimated from broadband seismic networks for both academic and applied purposes, similar to earthquake monitoring. We suggest a real-time algorithm be set up for routine monitoring of landslides in places where they pose a frequent threat.
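
    The empirical formula quoted above is straightforward to apply. The following Python sketch evaluates it for illustrative single-station values that are not data from the study:

        import math

        def landslide_magnitude(max_displacement_um, distance_km):
            """Landslide seismic magnitude from the empirical formula quoted above:
            Lm = log10(A) + 0.55 * log10(D) + 2.44, with A in micrometres and D in km."""
            return math.log10(max_displacement_um) + 0.55 * math.log10(distance_km) + 2.44

        # Illustrative values for a single station (not from the study).
        print(f"Lm = {landslide_magnitude(max_displacement_um=12.0, distance_km=85.0):.2f}")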

  10. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  11. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  12. Magnitude correlations in global seismicity

    SciTech Connect

    Sarlis, N. V.

    2011-08-15

    By employing natural time analysis, we analyze the worldwide seismicity and study the existence of correlations between earthquake magnitudes. We find that global seismicity exhibits nontrivial magnitude correlations for earthquake magnitudes greater than Mw 6.5.

  13. Solar Variability Magnitudes and Timescales

    NASA Astrophysics Data System (ADS)

    Kopp, Greg

    2015-08-01

    The Sun’s net radiative output varies on timescales of minutes to many millennia. The former are directly observed as part of the on-going 37-year long total solar irradiance climate data record, while the latter are inferred from solar proxy and stellar evolution models. Since the Sun provides nearly all the energy driving the Earth’s climate system, changes in the sunlight reaching our planet can have - and have had - significant impacts on life and civilizations.Total solar irradiance has been measured from space since 1978 by a series of overlapping instruments. These have shown changes in the spatially- and spectrally-integrated radiant energy at the top of the Earth’s atmosphere from timescales as short as minutes to as long as a solar cycle. The Sun’s ~0.01% variations over a few minutes are caused by the superposition of convection and oscillations, and even occasionally by a large flare. Over days to weeks, changing surface activity affects solar brightness at the ~0.1% level. The 11-year solar cycle has comparable irradiance variations with peaks near solar maxima.Secular variations are harder to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Proxy models of the Sun based on cosmogenic isotope records and inferred from Earth climate signatures indicate solar brightness changes over decades to millennia, although the magnitude of these variations depends on many assumptions. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities.In this talk I will summarize the Sun’s variability magnitudes over different temporal ranges, showing examples relevant for climate studies as well as detections of exo-solar planets transiting Sun-like stars.

  14. Reward Value Effects on Timing in the Peak Procedure

    ERIC Educational Resources Information Center

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2009-01-01

    Three experiments examined the effect of motivational variables on timing in the peak procedure. In Experiment 1, rats received a 60-s peak procedure that was coupled with long-term, between-phase changes in reinforcer magnitude. Increases in reinforcer magnitude produced a leftward shift in the peak that persisted for 20 sessions of training. In…

  15. Portable peak flow meters.

    PubMed

    McNaughton, J P

    1997-02-01

    There are several portable peak flow meters available. These instruments vary in construction and performance. Guidelines are recommended for minimum performance and testing of portable peak flow meters, with the aim of establishing a procedure for standardizing all peak flow meters. Future studies to clarify the usefulness of mechanical test apparatus and clinical trials of peak flow meters are also recommended. PMID:9098706

  16. Magnitude and frequency of floods in Alabama

    USGS Publications Warehouse

    Atkins, J. Brian

    1996-01-01

    Methods of estimating flood magnitudes for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years are described for rural streams in Alabama that are not affected by regulation or urbanization. Flood-frequency characteristics are presented for 198 gaging stations in Alabama having 10 or more years of record through September 1991 that are used in the regional analysis. Regression relations were developed using generalized least-squares regression techniques to estimate flood magnitude and frequency on ungaged streams as a function of the drainage area of a basin. Flood estimates for sites on gaged streams should be weighted with the gaging station data that are presented in the report. Graphical relations of peak discharges to drainage areas are also presented for sites along the Alabama, Black Warrior, Cahaba, Choctawhatchee, Conecuh, and Tombigbee Rivers. Equations for estimating flood magnitudes on ungaged urban streams (taken from a previous report) that use drainage area and percentage of impervious cover as independent variables also are given.
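
    As a simplified illustration of the kind of regional relation described above, the sketch below fits a log-log relation between the 100-year peak discharge and drainage area for hypothetical gaging stations; the report itself uses generalized least-squares regression and, for urban streams, impervious cover as well, so this is only a stand-in.

        import numpy as np

        # Hypothetical station data (not from the report): drainage area (sq mi) and 100-year peak flow (cfs).
        area_sq_mi = np.array([12.0, 45.0, 88.0, 150.0, 320.0, 610.0])
        q100_cfs = np.array([1800.0, 4900.0, 8200.0, 12500.0, 22000.0, 36000.0])

        # Fit log10(Q100) = a + b * log10(area) by ordinary least squares.
        b, a = np.polyfit(np.log10(area_sq_mi), np.log10(q100_cfs), 1)

        def q100_estimate(area):
            """Estimated 100-year peak discharge for an ungaged basin of the given drainage area."""
            return 10 ** (a + b * np.log10(area))

        print(f"Q100 ~ {10 ** a:.0f} * A^{b:.2f}; estimate for a 200 sq mi basin: {q100_estimate(200.0):.0f} cfs")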

  17. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrival of two phases, and differential time data can be used to constrain the relative location of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include the use of differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement error of 0.25 seconds and 0.01 second, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events. Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
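
    The benefit of combining the two data types can be seen in a deliberately simplified toy problem. The sketch below is not the Bayesloc algorithm (a Bayesian multiple-event formulation sampled with Markov-Chain Monte Carlo); it is a 1-D, two-event, constant-velocity least-squares illustration that reuses the 0.25 s and 0.01 s noise levels quoted above.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        v = 6.0                                            # assumed constant velocity, km/s
        stations = np.array([50.0, 80.0, 120.0, 200.0])    # station positions (km), all on one side
        x_true = np.array([10.0, 12.5])                    # true event positions (km)

        # Absolute arrival times with 0.25 s errors; station-wise differential times with 0.01 s errors.
        t_abs = (stations[None, :] - x_true[:, None]) / v + rng.normal(0.0, 0.25, (2, 4))
        t_diff = (x_true[0] - x_true[1]) / v + rng.normal(0.0, 0.01, 4)   # event-2 minus event-1 arrivals

        def residuals(x):
            # Each residual is normalized by its measurement precision.
            r_abs = (((stations[None, :] - x[:, None]) / v - t_abs) / 0.25).ravel()
            r_diff = ((x[0] - x[1]) / v - t_diff) / 0.01
            return np.concatenate([r_abs, r_diff])

        sol = least_squares(residuals, x0=np.zeros(2))
        print("estimated positions (km):", np.round(sol.x, 2), " true:", x_true)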

  18. ABSOLUTE POLARIMETRY AT RHIC.

    SciTech Connect

    OKADA; BRAVAR, A.; BUNCE, G.; GILL, R.; HUANG, H.; MAKDISI, Y.; NASS, A.; WOOD, J.; ZELENSKI, Z.; ET AL.

    2007-09-10

    Precise and absolute beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by the beam polarization, the normalization uncertainty contributes directly to the final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of ΔP_beam/P_beam < 5%. The absolute polarimeter consists of a Polarized Atomic Hydrogen Gas Jet Target and left-right pairs of silicon strip detectors and was installed in the RHIC ring in 2004. This system features proton-proton elastic scattering in the Coulomb nuclear interference (CNI) region. Precise measurements of the analyzing power A_N of this process have allowed us to achieve ΔP_beam/P_beam = 4.2% in 2005 for the first long spin-physics run. In this report, we describe the entire setup and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of A_N in the CNI region (four-momentum transfer squared 0.001 < -t < 0.032 (GeV/c)^2) are also discussed. We point out the current issues and the expected optimum accuracy in 2006 and the future.

  19. Asteroid magnitudes, UBV colors, and IRAS albedos and diameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1989-01-01

    This paper lists absolute magnitudes and slope parameters for known asteroids numbered through 3318. The values presented are those used in reducing asteroid IR flux data obtained with the IRAS. U-B colors are given for 938 asteroids, and B-V colors are given for 945 asteroids. The IRAS albedos and diameters are tabulated for 1790 asteroids.

  20. Two classes of speculative peaks

    NASA Astrophysics Data System (ADS)

    Roehner, Bertrand M.

    2001-10-01

    Speculation not only occurs in financial markets but also in numerous other markets, e.g. commodities, real estate, collectibles, and so on. Such speculative movements result in price peaks which share many common characteristics: same order of magnitude of duration with respect to amplitude, same shape (the so-called sharp-peak pattern). Such similarities suggest (at least as a first approximation) a common speculative behavior. However, a closer examination shows that in fact there are (at least) two distinct classes of speculative peaks. For the first, referred to as class U, (i) the amplitude of the peak is negatively correlated with the price at the start of the peak and (ii) the ensemble coefficient of variation exhibits a trough. Opposite results are observed for the second class that we refer to as class S. Once these empirical observations have been made we try to understand how they should be interpreted. First, we show that the two properties are in fact related in the sense that the second is a consequence of the first. Secondly, by listing a number of cases belonging to each class we observe that the markets in the S-class offer a collection of items from which investors can select those they prefer. On the contrary, U-markets consist of undifferentiated products for which a selection cannot be made in the same way. All prices considered in the paper are real (i.e., deflated) prices.

  1. The Effects of Reinforcer Magnitude on Timing in Rats

    ERIC Educational Resources Information Center

    Ludvig, Elliot A.; Conover, Kent; Shizgal, Peter

    2007-01-01

    The relation between reinforcer magnitude and timing behavior was studied using a peak procedure. Four rats received multiple consecutive sessions with both low and high levels of brain stimulation reward (BSR). Rats paused longer and had later start times during sessions when their responses were reinforced with low-magnitude BSR. When estimated…

  2. Absolute oral bioavailability of ciprofloxacin.

    PubMed

    Drusano, G L; Standiford, H C; Plaisance, K; Forrest, A; Leslie, J; Caldwell, J

    1986-09-01

    We evaluated the absolute bioavailability of ciprofloxacin, a new quinoline carboxylic acid, in 12 healthy male volunteers. Doses of 200 mg were given to each of the volunteers in a randomized, crossover manner 1 week apart orally and as a 10-min intravenous infusion. Half-lives (mean +/- standard deviation) for the intravenous and oral administration arms were 4.2 +/- 0.77 and 4.11 +/- 0.74 h, respectively. The serum clearance rate averaged 28.5 +/- 4.7 liters/h per 1.73 m2 for the intravenous administration arm. The renal clearance rate accounted for approximately 60% of the corresponding serum clearance rate and was 16.9 +/- 3.0 liters/h per 1.73 m2 for the intravenous arm and 17.0 +/- 2.86 liters/h per 1.73 m2 for the oral administration arm. Absorption was rapid, with peak concentrations in serum occurring at 0.71 +/- 0.15 h. Bioavailability, defined as the ratio of the area under the curve from 0 h to infinity for the oral to the intravenous dose, was 69 +/- 7%. We conclude that ciprofloxacin is rapidly absorbed and reliably bioavailable in these healthy volunteers. Further studies with ciprofloxacin should be undertaken in target patient populations under actual clinical circumstances. PMID:3777908

  3. Implants as absolute anchorage.

    PubMed

    Rungcharassaeng, Kitichai; Kan, Joseph Y K; Caruso, Joseph M

    2005-11-01

    Anchorage control is essential for successful orthodontic treatment. Each tooth has its own anchorage potential as well as propensity to move when force is applied. When teeth are used as anchorage, the untoward movements of the anchoring units may result in prolonged treatment time and an unpredictable or less-than-ideal outcome. To maximize tooth-related anchorage, techniques such as differential torque, placing roots into the cortex of the bone, and the use of various intraoral devices and/or extraoral appliances have been implemented. Implants, as they are in direct contact with bone, do not possess a periodontal ligament. As a result, they do not move when orthodontic/orthopedic force is applied, and therefore can be used as "absolute anchorage." This article describes different types of implants that have been used as orthodontic anchorage. Their clinical applications and limitations are also discussed. PMID:16463910

  4. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  5. Peak flow meter (image)

    MedlinePlus

    A peak flow meter is commonly used by a person with asthma to measure the amount of air that can be ... become narrow or blocked due to asthma, peak flow values will drop because the person cannot blow ...

  6. The measurement of absolute absorption of millimeter radiation in gases - The absorption of CO and O2

    NASA Technical Reports Server (NTRS)

    Read, William G.; Cohen, Edward A.; Pickett, Herbert M.; Hillig, Kurt W., II

    1988-01-01

    An apparatus is described that will measure absolute absorption of millimeter radiation in gases. The method measures the change in the quality factor of a Fabry-Perot resonator with and without gas present. The magnitude of the change is interpreted in terms of the absorption of the lossy medium inside the resonator. Experiments have been performed on the 115-GHz CO line and the 119-GHz O2 line at two different temperatures to determine the linewidth parameter and the peak absorption value. These numbers can be combined to give the integrated intensity which can be accurately calculated from results of spectroscopy measurements. The CO results are within 2 percent of theoretically predicted values. Measurements on O2 have shown that absorption can be measured as accurately as 0.5 dB/km with this technique. Results have been obtained for oxygen absolute absorption in the 60-80-GHz region.
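
    The step from a change in cavity quality factor to an absorption coefficient follows the standard resonator-loss relation, alpha = (2*pi*f/c) * (1/Q_gas - 1/Q_empty). The sketch below uses that textbook relation with illustrative numbers; it is not the actual data reduction or the measured values of the paper.

        import math

        def absorption_from_q(freq_hz, q_empty, q_gas):
            """Absorption coefficient (1/m) of the gas filling a Fabry-Perot resonator,
            inferred from the drop in quality factor when the gas is admitted."""
            c = 2.998e8
            return (2.0 * math.pi * freq_hz / c) * (1.0 / q_gas - 1.0 / q_empty)

        # Illustrative numbers only: a 115 GHz resonance whose Q drops from 300000 to 250000.
        alpha = absorption_from_q(115e9, q_empty=3.0e5, q_gas=2.5e5)
        print(f"alpha = {alpha:.2e} 1/m  ({alpha * 1000.0 * 10.0 / math.log(10):.1f} dB/km)")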

  7. Magnitude and frequency of floods in Washington

    USGS Publications Warehouse

    Cummans, J.E.; Collings, Michael R.; Nasser, Edmund George

    1975-01-01

    Relations are provided to estimate the magnitude and frequency of floods on Washington streams. Annual-peak-flow data from stream gaging stations on unregulated streams having 10 years or more of record were used to determine a log-Pearson Type III frequency curve for each station. Flood magnitudes having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were then related to physical and climatic indices of the drainage basins by multiple-regression analysis using the Biomedical Computer Program BMDO2R. These regression relations are useful for estimating flood magnitudes of the specified recurrence intervals at ungaged or short-record sites. Separate sets of regression equations were defined for western and eastern parts of the State, and the State was further subdivided into 12 regions in which the annual floods exhibit similar flood characteristics. Peak flows are related most significantly in western Washington to drainage-area size and mean annual precipitation. In eastern Washington, they are related most significantly to drainage-area size, mean annual precipitation, and percentage of forest cover. Standard errors of estimate of the estimating relations range from 25 to 129 percent, and the smallest errors are generally associated with the more humid regions.

  8. The Effects of Reinforcer Magnitude on Timing in Rats

    PubMed Central

    Ludvig, Elliot A; Conover, Kent; Shizgal, Peter

    2007-01-01

    The relation between reinforcer magnitude and timing behavior was studied using a peak procedure. Four rats received multiple consecutive sessions with both low and high levels of brain stimulation reward (BSR). Rats paused longer and had later start times during sessions when their responses were reinforced with low-magnitude BSR. When estimated by a symmetric Gaussian function, peak times also were earlier; when estimated by a better-fitting asymmetric Gaussian function or by analyzing individual trials, however, these peak-time changes were determined to reflect a mixture of large effects of BSR on start times and no effect on stop times. These results pose a significant dilemma for three major theories of timing (SET, MTS, and BeT), which all predict no effects for chronic manipulations of reinforcer magnitude. We conclude that increased reinforcer magnitude influences timing in two ways: through larger immediate after-effects that delay responding and through anticipatory effects that elicit earlier responding. PMID:17465312

  9. Integrated Circuit Stellar Magnitude Simulator

    ERIC Educational Resources Information Center

    Blackburn, James A.

    1978-01-01

    Describes an electronic circuit which can be used to demonstrate the stellar magnitude scale. Six rectangular light-emitting diodes with independently adjustable duty cycles represent stars of magnitudes 1 through 6. Experimentally verifies the logarithmic response of the eye. (Author/GA)
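
    The duty-cycle scaling such a simulator needs follows directly from the magnitude definition: each magnitude step corresponds to a flux factor of 100**(1/5), roughly 2.512. A minimal sketch follows; the article's actual component values are not given here.

        # Duty cycles for six LEDs representing stars of magnitudes 1 through 6, with
        # perceived brightness proportional to flux; magnitude 1 is assigned 100% duty.
        duty_cycles = {m: 100 ** ((1 - m) / 5) for m in range(1, 7)}
        for m, d in duty_cycles.items():
            print(f"magnitude {m}: duty cycle {d * 100:6.2f}%")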

  10. Statistical models for seismic magnitude

    NASA Astrophysics Data System (ADS)

    Christoffersson, Anders

    1980-02-01

    In this paper some statistical models in connection with seismic magnitude are presented. Two main situations are treated. The first deals with the estimation of magnitude for an event, using a fixed network of stations and taking into account the detection and bias properties of the individual stations. The second treats the problem of estimating seismicity, and detection and bias properties of individual stations. The models are applied to analyze the magnitude bias effects for an earthquake aftershock sequence from Japan, as recorded by a hypothetical network of 15 stations. It is found that network magnitudes computed by the conventional averaging technique are considerably biased, and that a maximum likelihood approach using instantaneous noise-level estimates for non-detecting stations gives the most consistent magnitude estimates. Finally, the models are applied to evaluate the detection characteristics and associated seismicity as recorded by three VELA arrays: UBO (Uinta Basin), TFO (Tonto Forest) and WMO (Wichita Mountains).
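
    One standard way to avoid the bias of simple averaging is a censored-data maximum-likelihood estimate, in which non-detecting stations contribute the probability that their reading fell below the instantaneous noise level. The sketch below illustrates that general idea with hypothetical numbers; it is not Christoffersson's specific formulation.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        detected = np.array([4.1, 4.3, 3.9, 4.2])    # station magnitudes at detecting stations
        thresholds = np.array([4.4, 4.6, 4.5])       # noise-level thresholds at non-detecting stations
        sigma = 0.3                                  # assumed station scatter (magnitude units)

        def neg_log_likelihood(m):
            # Detections contribute Gaussian terms; non-detections contribute the probability
            # that the station reading fell below its detection threshold.
            ll = norm.logpdf(detected, loc=m, scale=sigma).sum()
            ll += norm.logcdf(thresholds, loc=m, scale=sigma).sum()
            return -ll

        fit = minimize_scalar(neg_log_likelihood, bounds=(2.0, 7.0), method="bounded")
        print(f"maximum-likelihood network magnitude: {fit.x:.2f} "
              f"(plain average of detections: {detected.mean():.2f})")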

  11. Process and Object Interpretations of Vector Magnitude Mediated by Use of the Graphics Calculator.

    ERIC Educational Resources Information Center

    Forster, Patricia

    2000-01-01

    Analyzes the development of one student's understanding of vector magnitude and how her problem solving was mediated by use of the absolute value graphics calculator function. (Contains 35 references.) (Author/ASK)

  12. Scaling relations of moment magnitude, local magnitude, and duration magnitude for earthquakes originated in northeast India

    NASA Astrophysics Data System (ADS)

    Bora, Dipok K.

    2016-06-01

    In this study, we aim to improve the scaling between the moment magnitude (M_W), local magnitude (M_L), and duration magnitude (M_D) for 162 earthquakes in the Shillong-Mikir plateau and its adjoining region of northeast India by extending the M_W estimates to lower-magnitude earthquakes using spectral analysis of P-waves from vertical-component seismograms. The M_W-M_L and M_W-M_D relationships are determined by linear regression analysis. It is found that M_W values can be considered consistent with M_L and M_D, within 0.1 and 0.2 magnitude units respectively, in 90% of the cases. The scaling relationships investigated comply well with similar relationships in other regions of the world and in other seismogenic areas of the northeast India region.
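
    The regression step described above can be illustrated with ordinary least squares on hypothetical (M_L, M_W) pairs; the study's actual data and fitted coefficients are not reproduced here.

        import numpy as np

        # Hypothetical magnitude pairs standing in for the 162-event catalogue.
        ml = np.array([2.8, 3.1, 3.5, 3.9, 4.2, 4.6, 5.0])
        mw = np.array([3.0, 3.2, 3.5, 3.8, 4.1, 4.4, 4.8])

        slope, intercept = np.polyfit(ml, mw, 1)              # fit M_W = slope * M_L + intercept
        residuals = mw - (slope * ml + intercept)
        within_0p1 = np.mean(np.abs(residuals) <= 0.1) * 100  # consistency check of the kind quoted above

        print(f"M_W = {slope:.2f} M_L + {intercept:.2f}; {within_0p1:.0f}% of events within 0.1 magnitude units")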

  13. Peak Experience Project

    ERIC Educational Resources Information Center

    Scott, Daniel G.; Evans, Jessica

    2010-01-01

    This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…

  14. Be Resolute about Absolute Value

    ERIC Educational Resources Information Center

    Kidd, Margaret L.

    2007-01-01

    This article explores how conceptualization of absolute value can start long before it is introduced. The manner in which absolute value is introduced to students in middle school has far-reaching consequences for their future mathematical understanding. It begins to lay the foundation for students' understanding of algebra, which can change…

  15. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake because only a few stations have triggered and the recorded seismic waveforms are short. One feasible way to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the rupture of the causative fault. Instead of adopting amplitudes in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude within a time window of a few seconds and then update the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes: the pure P-wave time window (PTW) and the whole-wave time window after the P-wave arrival (WTW). The peak displacement amplitudes in the vertical component were adopted from 1- to 10-s-long PTW and WTW, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-second time window. Therefore, for magnitude estimation we suggest that the EEW system progressively adopt peak displacement amplitudes from the 2- to 10-s WTW.
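
    An estimate of this kind is obtained by evaluating an empirical peak-displacement relation as the time window grows. The sketch below uses a relation of the usual Pd form with hypothetical coefficients (the coefficients actually fitted in this study are not given in the abstract) and shows how the estimate updates upward as the whole-wave window lengthens.

        import math

        # Hypothetical coefficients of a Pd scaling relation of the usual form:
        #   M = c1 * log10(Pd [cm]) + c2 * log10(R [km]) + c3
        C1, C2, C3 = 1.2, 1.4, 5.3

        def magnitude_from_pd(peak_disp_cm, hypo_dist_km):
            """Magnitude estimate from the peak vertical displacement observed so far."""
            return C1 * math.log10(peak_disp_cm) + C2 * math.log10(hypo_dist_km) + C3

        # As the whole-wave time window (WTW) grows, the observed peak displacement can only
        # increase, so the estimate is revised upward without saturating.
        for window_s, pd_cm in [(2, 0.08), (4, 0.21), (10, 0.55)]:
            print(f"{window_s:2d} s window: Pd = {pd_cm:.2f} cm -> M ~ {magnitude_from_pd(pd_cm, 60.0):.1f}")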

  16. Fast Regional Magnitude Determination at INGV

    NASA Astrophysics Data System (ADS)

    Michelini, A.; Lomax, A.; Bono, A.; Amato, A.

    2006-12-01

    The recent, very large earthquakes in the Indian Ocean and Indonesia have shown the importance of rapid magnitude determination for tsunami warning. In the Mediterranean region, destructive tsunamis have occurred repeatedly in the past; however, because of the proximity of the tsunami sources to populated coasts, very rapid analysis is necessary for effective warning. Reliable estimates of the earthquake location and size should be available within tens of seconds after the first arriving P-waves are recorded at local and regional distances. Currently in Europe there is no centralized agency such as the PTWC for the Pacific Ocean dedicated to issuing tsunami warnings, though recent initiatives, such as the NEAMTWS (North-East Atlantic and Mediterranean Tsunami Warning System), aim toward the establishment of such an agency. Thus, established seismic monitoring centers, such as INGV, Rome, are currently relied upon for rapid earthquake analysis and information dissemination. In this study, we describe the recent, experimental implementation at the INGV seismic center of a procedure for rapid magnitude determination at regional distances based on the Mwp methodology (Tsuboi et al., 1995), which exploits information in the P-wave train. For our Mwp determinations, we have implemented an automatic procedure that windows the relevant part of the seismograms and picks the amplitudes of the first two largest peaks, providing within seconds after each P arrival an estimate of earthquake size. Manual revision is completed using interactive software that presents an analysis with the seismograms, amplitude picks and magnitude estimates. We have compared our Mwp magnitudes for recent earthquakes within the Mediterranean region with Mw determined through the Harvard CMT procedure. For the majority of the events, the Mwp and Mw magnitudes agree closely, indicating that the rapid Mwp estimates form a useful tool for effective tsunami warning on a regional scale.

  17. Bidirectional Modulation of Numerical Magnitude.

    PubMed

    Arshad, Qadeer; Nigmatullina, Yuliya; Nigmatullin, Ramil; Asavarut, Paladd; Goga, Usman; Khan, Sarah; Sander, Kaija; Siddiqui, Shuaib; Roberts, R E; Cohen Kadosh, Roi; Bronstein, Adolfo M; Malhotra, Paresh A

    2016-05-01

    Numerical cognition is critical for modern life; however, the precise neural mechanisms underpinning numerical magnitude allocation in humans remain obscure. Based upon previous reports demonstrating the close behavioral and neuro-anatomical relationship between number allocation and spatial attention, we hypothesized that these systems would be subject to similar control mechanisms, namely dynamic interhemispheric competition. We employed a physiological paradigm, combining visual and vestibular stimulation, to induce interhemispheric conflict and subsequent unihemispheric inhibition, as confirmed by transcranial direct current stimulation (tDCS). This allowed us to demonstrate the first systematic bidirectional modulation of numerical magnitude toward either higher or lower numbers, independently of either eye movements or spatial attention mediated biases. We incorporated both our findings and those from the most widely accepted theoretical framework for numerical cognition to present a novel unifying computational model that describes how numerical magnitude allocation is subject to dynamic interhemispheric competition. That is, numerical allocation is continually updated in a contextual manner based upon relative magnitude, with the right hemisphere responsible for smaller magnitudes and the left hemisphere for larger magnitudes. PMID:26879093

  18. Bidirectional Modulation of Numerical Magnitude

    PubMed Central

    Arshad, Qadeer; Nigmatullina, Yuliya; Nigmatullin, Ramil; Asavarut, Paladd; Goga, Usman; Khan, Sarah; Sander, Kaija; Siddiqui, Shuaib; Roberts, R. E.; Cohen Kadosh, Roi; Bronstein, Adolfo M.; Malhotra, Paresh A.

    2016-01-01

    Numerical cognition is critical for modern life; however, the precise neural mechanisms underpinning numerical magnitude allocation in humans remain obscure. Based upon previous reports demonstrating the close behavioral and neuro-anatomical relationship between number allocation and spatial attention, we hypothesized that these systems would be subject to similar control mechanisms, namely dynamic interhemispheric competition. We employed a physiological paradigm, combining visual and vestibular stimulation, to induce interhemispheric conflict and subsequent unihemispheric inhibition, as confirmed by transcranial direct current stimulation (tDCS). This allowed us to demonstrate the first systematic bidirectional modulation of numerical magnitude toward either higher or lower numbers, independently of either eye movements or spatial attention mediated biases. We incorporated both our findings and those from the most widely accepted theoretical framework for numerical cognition to present a novel unifying computational model that describes how numerical magnitude allocation is subject to dynamic interhemispheric competition. That is, numerical allocation is continually updated in a contextual manner based upon relative magnitude, with the right hemisphere responsible for smaller magnitudes and the left hemisphere for larger magnitudes. PMID:26879093

  19. A reevaluation of the 20-micron magnitude system

    NASA Technical Reports Server (NTRS)

    Tokunaga, A. T.

    1984-01-01

    The 20-micron infrared magnitude system is reexamined by observing primary infrared standards and seven A V stars. The purpose is to determine whether Alpha Lyr has colors consistent with the average of A0 stars and to determine the relative magnitude of the primary standards to that of Alpha Lyr. The data presented are consistent with the interpretation that the spectrum of Alpha Lyr is a blackbody and that it is a viable flux standard at 10 and 20 microns. The absolute flux density scale, the physical quantity of interest, is found to be consistent with an extrapolation of the Alpha Lyr spectrum from the near infrared on the basis of the comparison of stars to Mars and asteroids. Adoption of a 0.0 magnitude for Alpha Lyr requires that the magnitudes given by Morrison and Simon (1973) and by Simon et al. (1972) be revised downward by 0.14 mag.

  20. Peak power ratio generator

    DOEpatents

    Moyer, R.D.

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  1. Peak power ratio generator

    DOEpatents

    Moyer, Robert D.

    1985-01-01

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  2. Peak Oil, Peak Coal and Climate Change

    NASA Astrophysics Data System (ADS)

    Murray, J. W.

    2009-05-01

    Research on future climate change is driven by the family of scenarios developed for the IPCC assessment reports. These scenarios create projections of future energy demand using different story lines consisting of government policies, population projections, and economic models. None of these scenarios consider resources to be limiting. In many of these scenarios oil production is still increasing to 2100. Resource limitation (in a geological sense) is a real possibility that needs more serious consideration. The concept of 'Peak Oil' has been discussed since M. King Hubbert proposed in 1956 that US oil production would peak in 1970. His prediction was accurate. This concept is about production rate not reserves. For many oil producing countries (and all OPEC countries) reserves are closely guarded state secrets and appear to be overstated. Claims that the reserves are 'proven' cannot be independently verified. Hubbert's Linearization Model can be used to predict when half the ultimate oil will be produced and what the ultimate total cumulative production (Qt) will be. US oil production can be used as an example. This conceptual model shows that 90% of the ultimate US oil production (Qt = 225 billion barrels) will have occurred by 2011. This approach can then be used to suggest that total global production will be about 2200 billion barrels and that the half way point will be reached by about 2010. This amount is about 5 to 7 times less than assumed by the IPCC scenarios. The decline of Non-OPEC oil production appears to have started in 2004. Of the OPEC countries, only Saudi Arabia may have spare capacity, but even that is uncertain, because of lack of data transparency. The concept of 'Peak Coal' is more controversial, but even the US National Academy Report in 2007 concluded only a small fraction of previously estimated reserves in the US are actually minable reserves and that US reserves should be reassessed using modern methods. British coal production can be

  3. The Effects of Exercise Intensity vs. Metabolic State on the Variability and Magnitude of Left Ventricular Twist Mechanics during Exercise

    PubMed Central

    Armstrong, Craig; Samuel, Jake; Yarlett, Andrew; Cooper, Stephen-Mark; Stembridge, Mike; Stöhr, Eric J.

    2016-01-01

    Increased left ventricular (LV) twist and untwisting rate (LV twist mechanics) are essential responses of the heart to exercise. However, previously a large variability in LV twist mechanics during exercise has been observed, which complicates the interpretation of results. This study aimed to determine some of the physiological sources of variability in LV twist mechanics during exercise. Sixteen healthy males (age: 22 ± 4 years, V˙O2peak: 45.5 ± 6.9 ml∙kg-1∙min-1, range of individual anaerobic threshold (IAT): 32–69% of V˙O2peak) were assessed at rest and during exercise at: i) the same relative exercise intensity, 40%peak, ii) at 2% above IAT, and, iii) at 40%peak with hypoxia (40%peak+HYP). LV volumes were not significantly different between exercise conditions (P > 0.05). However, the mean margin of error of LV twist was significantly lower (F2,47 = 2.08, P < 0.05) during 40%peak compared with IAT (3.0 vs. 4.1 degrees). Despite the same workload and similar LV volumes, hypoxia increased LV twist and untwisting rate (P < 0.05), but the mean margin of error remained similar to that during 40%peak (3.2 degrees, P > 0.05). Overall, LV twist mechanics were linearly related to rate pressure product. During exercise, the intra-individual variability of LV twist mechanics is smaller at the same relative exercise intensity compared with IAT. However, the absolute magnitude (degrees) of LV twist mechanics appears to be associated with the prevailing rate pressure product. Exercise tests that evaluate LV twist mechanics should be standardised by relative exercise intensity and rate pressure product be taken into account when interpreting results. PMID:27100099

  4. Understanding Magnitudes to Understand Fractions

    ERIC Educational Resources Information Center

    Gabriel, Florence

    2016-01-01

    Fractions are known to be difficult to learn and difficult to teach, yet they are vital for students to have access to further mathematical concepts. This article uses evidence to support teachers employing teaching methods that focus on the conceptual understanding of the magnitude of fractions.

  5. Make peak flow a habit!

    MedlinePlus

    Asthma - make peak flow a habit; Reactive airway disease - peak flow; Bronchial asthma - peak flow ... your airways are narrowed and blocked due to asthma, your peak flow values drop. You can check ...

  6. Hale Central Peak

    NASA Technical Reports Server (NTRS)

    2004-01-01

    19 September 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the mountains that make up the central peak region of Hale Crater, located near 35.8oS, 36.5oW. Dark, smooth-surfaced sand dunes are seen to be climbing up the mountainous slopes. The central peak of a crater consists of rock brought up during the impact from below the crater floor. This autumn image is illuminated from the upper left and covers an area approximately 3 km (1.9 mi) across.

  7. Stress magnitudes in the crust: constraints from stress orientation and relative magnitude data

    USGS Publications Warehouse

    Zoback, M.L.; Magee, M.

    1991-01-01

    The World Stress Map Project is a global cooperative effort to compile and interpret data on the orientation and relative magnitudes of the contemporary in situ tectonic stress field in the Earth's lithosphere. The intraplate stress field in both the oceans and continents is largely compressional with one or both of the horizontal stresses greater than the vertical stress. The regionally uniform horizontal intraplate stress orientations are generally consistent with either relative or absolute plate motions indicating that plate-boundary forces dominate the stress distribution within the plates. Current models of stresses due to whole mantle flow inferred from seismic tomography models predict a general compressional stress state within continents but do not match the broad-scale horizontal stress orientations. The broad regionally uniform intraplate stress orientations are best correlated with compressional plate-boundary forces and the geometry of the plate boundaries. -from Authors

  8. An Integrated Model of Choices and Response Times in Absolute Identification

    ERIC Educational Resources Information Center

    Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew

    2008-01-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…

  9. Color Magnitude Diagrams for Quasars Using SDSS, GALEX, and WISE Data

    NASA Astrophysics Data System (ADS)

    Curtis, Wendy; Gorjian, V.; Thompson, P.; Doyle, T.; Blackwell, J.; Llamas, J.; Mauduit, J.; Chanda, R.; Glidden, A.; Gruen, A. E.; Laurence, C.; McGeeney, M.; Majercik, Z.; Mikel, T.; Mohamud, A.; Neilson, A.; Payamps, A.; Robles, R.; Uribe, G.

    2013-01-01

    Data from the Galaxy Evolution Explorer (GALEX), the Wide-Field Infrared Survey Explorer (WISE), and the Sloan Digital Sky Survey (SDSS) were used to construct color-magnitude diagrams for Type I quasars at redshift values of 0.1 and higher, plotting colors against absolute magnitude at a variety of wavelengths, from near ultraviolet to infrared. No tight correlations were found when comparing any of the UV or optical colors to the various infrared absolute magnitudes. However, a relationship was found using the NUV (GALEX) - z band (SDSS) color vs NUV (GALEX) absolute magnitude.
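
    Converting the catalogued apparent magnitudes to the absolute magnitudes used in such diagrams requires a distance modulus computed from the redshift. A minimal sketch using astropy follows; the cosmological parameters are assumed, and K-corrections are ignored.

        from astropy.cosmology import FlatLambdaCDM

        cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

        def absolute_magnitude(apparent_mag, z):
            """Absolute magnitude via the distance modulus; K-corrections are neglected here."""
            return apparent_mag - cosmo.distmod(z).value

        # Hypothetical NUV apparent magnitudes for low-redshift Type I quasars.
        for m_nuv, z in [(18.2, 0.12), (19.0, 0.25), (19.8, 0.45)]:
            print(f"z = {z:.2f}: M_NUV = {absolute_magnitude(m_nuv, z):.2f}")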

  10. Impact Crater with Peak

    NASA Technical Reports Server (NTRS)

    2002-01-01

    (Released 14 June 2002) The Science This THEMIS visible image shows a classic example of a martian impact crater with a central peak. Central peaks are common in large, fresh craters on both Mars and the Moon. This peak formed during the extremely high-energy impact cratering event. In many martian craters the central peak has been either eroded or buried by later sedimentary processes, so the presence of a peak in this crater indicates that the crater is relatively young and has experienced little degradation. Observations of large craters on the Earth and the Moon, as well as computer modeling of the impact process, show that the central peak contains material brought from deep beneath the surface. The material exposed in these peaks will provide an excellent opportunity to study the composition of the martian interior using THEMIS multi-spectral infrared observations. The ejecta material around the crater is well preserved, again indicating relatively little modification of this landform since its initial creation. The inner walls of this approximately 18 km diameter crater show complex slumping that likely occurred during the impact event. Since that time there has been some downslope movement of material to form the small chutes and gullies that can be seen on the inner crater wall. Small (50-100 m) mega-ripples composed of mobile material can be seen on the floor of the crater. Much of this material may have come from the walls of the crater itself, or may have been blown into the crater by the wind. The Story When a meteor smacked into the surface of Mars with extremely high energy, pow! Not only did it punch an 11-mile-wide crater in the smoother terrain, it created a central peak in the middle of the crater. This peak forms kind of on the 'rebound.' You can see this same effect if you drop a single drop of milk into a glass of milk. With craters, in the heat and fury of the impact, some of the land material can even liquefy. Central peaks like the one

  11. Absolute Timing of the Crab Pulsar: X-ray, Radio, and Optical Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2002-12-01

    We report on multiwavelength observations of the Crab Pulsar and compare the pulse arrival time at radio, IR, optical, and X-ray wavelengths. Comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Absolute time calibration of each observing system is of paramount importance for these observations and we describe how this is done for each system. We directly compare arrival time determinations for 2--10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the ``peak'' of the light curve since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.
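
    The dispersion measure correction mentioned above shifts the radio arrival times by the standard cold-plasma delay, t = 4.149e3 s x DM / f_MHz^2. A small sketch follows; the Crab dispersion measure shown is an illustrative value, since it varies with epoch.

        def dispersion_delay_s(dm_pc_cm3, freq_mhz):
            """Cold-plasma dispersion delay (seconds) relative to infinite frequency."""
            return 4.149e3 * dm_pc_cm3 / freq_mhz ** 2

        dm_crab = 56.8   # pc cm^-3, illustrative epoch-dependent value for the Crab pulsar
        for f_mhz in (610.0, 1400.0):
            print(f"{f_mhz:6.0f} MHz: dispersion delay = {dispersion_delay_s(dm_crab, f_mhz) * 1e3:7.1f} ms")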

  12. Error magnitude estimation in model-reference adaptive systems

    NASA Technical Reports Server (NTRS)

    Colburn, B. K.; Boland, J. S., III

    1975-01-01

    A second order approximation is derived from a linearized error characteristic equation for Lyapunov designed model-reference adaptive systems and is used to estimate the maximum error between the model and plant states, and the time to reach this peak following a plant perturbation. The results are applicable in the analysis of plants containing magnitude-dependent nonlinearities.

  13. INDIAN PEAKS WILDERNESS, COLORADO.

    USGS Publications Warehouse

    Pearson, Robert C.; Speltz, Charles N.

    1984-01-01

    The Indian Peaks Wilderness northwest of Denver is partly within the Colorado Mineral Belt, and the southeast part of it contains all the geologic characteristics associated with the several nearby mining districts. Two deposits have demonstrated mineral resources, one of copper and the other of uranium; both are surrounded by areas with probable potential. Two other areas have probable resource potential for copper, gold, and possibly molybdenum. Detailed gravity and magnetic studies in the southeast part of the Indian Peaks Wilderness might detect igneous bodies in the subsurface that may be mineralized. Physical exploration such as drilling would be necessary to determine more precisely the copper resources at the Roaring Fork locality and uranium resources at Wheeler Basin.

  14. Peak of Desire

    PubMed Central

    Huang, Julie Y.; Bargh, John A.

    2008-01-01

    In three studies, we explore the existence of an evolved sensitivity to the peak as consistent with the evolutionary origins of many of our basic preferences. Activating the evolved motive of mating activates related adaptive mechanisms, including a general sensitivity to cues of growth and decay associated with determining mate value in human courtship. We establish that priming the mating goal also activates an evaluative bias that influences how people evaluate cues of growth. Specifically, living kinds that are immature or past their prime are devalued, whereas living kinds at their peak become increasingly valued. Study 1 establishes this goal-driven effect for human stimuli indirectly related to the mating goal. Studies 2 and 3 establish that the evaluative bias produced by the active mating goal extends to living kinds but not artifacts. PMID:18578847

  15. PEAK LIMITING AMPLIFIER

    DOEpatents

    Goldsworthy, W.W.; Robinson, J.B.

    1959-03-31

    A peak voltage amplitude limiting system adapted for use with a cascade type amplifier is described. In its detailed aspects, the invention includes an amplifier having at least a first triode tube and a second triode tube, the cathode of the second tube being connected to the anode of the first tube. A peak limiter triode tube has its control grid coupled to the anode of the second tube and its anode connected to the cathode of the second tube. The operation of the limiter is controlled by a bias voltage source connected to the control grid of the limiter tube, and the output of the system is taken from the anode of the second tube.

  16. DIAMOND PEAK WILDERNESS, OREGON.

    USGS Publications Warehouse

    Sherrod, David R.; Moyle, Phillip R.

    1984-01-01

    No metallic mineral resources were identified during a mineral survey of the Diamond Peak Wilderness in Oregon. Cinder cones within the wilderness contain substantial cinder resources, but similar deposits that are more accessible occur outside the wilderness. The area could have geothermal resources, but available data are insufficient to evaluate their potential. Several deep holes could be drilled in areas of the High Cascades outside the wilderness, from which extrapolations of the geothermal potential of the several Cascade wildernesses could be made.

  17. The representation of numerical magnitude

    PubMed Central

    Brannon, Elizabeth M

    2006-01-01

    The combined efforts of many fields are advancing our understanding of how number is represented. Researchers studying numerical reasoning in adult humans, developing humans and non-human animals are using a suite of behavioral and neurobiological methods to uncover similarities and differences in how each population enumerates and compares quantities, and to identify the neural substrates of numerical cognition. An important picture emerging from this research is that adult humans share with non-human animals a system for representing number as language-independent mental magnitudes and that this system emerges early in development. PMID:16546373

  18. Long-Period Ground Motion Prediction Equations for Relative, Pseudo-Relative and Absolute Velocity Response Spectra in Japan

    NASA Astrophysics Data System (ADS)

    Dhakal, Y. P.; Kunugi, T.; Suzuki, W.; Aoi, S.

    2014-12-01

    Many empirical ground motion prediction equations (GMPEs), also known as attenuation relations, have been developed for absolute acceleration or pseudo-relative velocity response spectra. For a small damping, pseudo and absolute acceleration response spectra are nearly identical and hence interchangeable. It is generally known that the relative and pseudo-relative velocity response spectra differ considerably at very short or very long periods, and the two are often considered similar at intermediate periods. However, observations show that the period range over which the two spectra become comparable differs from site to site. Also, the relationship of the above two types of velocity response spectra with absolute velocity response spectra is not discussed well in the literature. The absolute velocity response spectra are the peak values of time histories obtained by adding the ground velocities to the relative velocity response time histories at individual natural periods. Many tall buildings exist on huge and deep sedimentary basins such as the Kanto basin, and the number of such buildings is growing. Recently, the Japan Meteorological Agency (JMA) has proposed four classes of long-period ground motion intensity (http://www.data.jma.go.jp/svd/eew/data/ltpgm/) based on absolute velocity response spectra, which correlate with the difficulty of movement of people in tall buildings. As researchers use various types of response spectra for long-period ground motions, it is important to understand the relationships between them to take appropriate measures for disaster prevention applications. In this paper, we therefore obtain and discuss empirical attenuation relationships, using the same functional forms, for the three types of velocity response spectra computed from observed strong motion records from moderate to large earthquakes, in relation to JMA magnitude, hypocentral distance, sediment depth, and AVS30 as predictor variables at periods between
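
    As an illustration of the three spectral quantities being compared, the sketch below integrates a single-degree-of-freedom oscillator over a toy acceleration record and takes the peak relative velocity (SV), the pseudo-relative velocity (natural frequency times peak relative displacement, PSV), and the peak of relative-plus-ground velocity (absolute velocity, ASV). The 5% damping, the RK4 integrator, and the synthetic record are assumptions for demonstration only, not part of the study.

    ```python
    import numpy as np

    def velocity_spectra(ag, dt, periods, zeta=0.05):
        """Relative (SV), pseudo-relative (PSV) and absolute (ASV) velocity
        response spectra of a ground-acceleration record `ag` sampled at `dt`.
        A sketch only: the 1-DOF oscillator is integrated with a simple RK4 scheme.
        """
        vg = np.cumsum(ag) * dt                     # ground velocity (crude integration)
        sv, psv, asv = [], [], []
        for T in periods:
            w = 2.0 * np.pi / T
            def f(state, a_ground):                 # relative motion of the oscillator
                x, v = state
                return np.array([v, -2.0 * zeta * w * v - w**2 * x - a_ground])
            x = np.zeros(2)
            xs = np.zeros(len(ag))
            vs = np.zeros(len(ag))
            for i in range(len(ag) - 1):
                a0, a1 = ag[i], ag[i + 1]
                am = 0.5 * (a0 + a1)                # midpoint forcing (linear interp.)
                k1 = f(x, a0)
                k2 = f(x + 0.5 * dt * k1, am)
                k3 = f(x + 0.5 * dt * k2, am)
                k4 = f(x + dt * k3, a1)
                x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
                xs[i + 1], vs[i + 1] = x
            sv.append(np.max(np.abs(vs)))           # relative velocity spectrum
            psv.append(w * np.max(np.abs(xs)))      # pseudo-relative velocity
            asv.append(np.max(np.abs(vs + vg)))     # absolute velocity spectrum
        return np.array(sv), np.array(psv), np.array(asv)

    # Toy record: band-limited noise standing in for a strong-motion trace
    rng = np.random.default_rng(0)
    dt, n = 0.01, 4000
    ag = rng.standard_normal(n) * 0.5
    periods = np.linspace(1.0, 10.0, 10)            # long-period range of interest
    print(velocity_spectra(ag, dt, periods))
    ```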

  19. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    Use of a gas-driven shock tube to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  20. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and the isoplanatic patch of the atmosphere.

  1. Hitting the target: relatively easy, yet absolutely difficult.

    PubMed

    Mapp, Alistair P; Ono, Hiroshi; Khokhotva, Mykola

    2007-01-01

    It is generally agreed that absolute-direction judgments require information about eye position, whereas relative-direction judgments do not. The source of this eye-position information, particularly during monocular viewing, is a matter of debate. It may be either binocular eye position, or the position of the viewing-eye only, that is crucial. Using more ecologically valid stimulus situations than the traditional LED in the dark, we performed two experiments. In experiment 1, observers threw darts at targets that were fixated either monocularly or binocularly. In experiment 2, observers aimed a laser gun at targets while fixating either the rear or the front gunsight monocularly, or the target either monocularly or binocularly. We measured the accuracy and precision of the observers' absolute- and relative-direction judgments. We found that (a) relative-direction judgments were precise and independent of phoria, and (b) monocular absolute-direction judgments were inaccurate, and the magnitude of the inaccuracy was predictable from the magnitude of phoria. These results confirm that relative-direction judgments do not require information about eye position. Moreover, they show that binocular eye-position information is crucial when judging the absolute direction of both monocular and binocular targets. PMID:17972479

  2. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in the cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift in the cavity mode is measured at the interferometer output to estimate the angular velocity of absolute rotation. We derive an analytic expression to estimate the minimum detectable rotation rate in our scheme for a given optomechanical cavity. Temperature dependence of the rotation detection sensitivity is studied.

  3. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^(-3) and Compton distortion y < 10^(-6). We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  4. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
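
    A minimal numeric sketch of the two-measurement subtraction described above, with synthetic arrays standing in for the extracted PSDI phase maps; the array sizes and noise levels are arbitrary assumptions.

    ```python
    import numpy as np

    # Synthetic stand-ins for the two extracted phase maps (units: nm)
    rng = np.random.default_rng(4)
    aux_error = rng.normal(0.0, 2.0, size=(256, 256))          # auxiliary optic alone
    flat_error_true = rng.normal(0.0, 1.0, size=(256, 256))    # flat under test

    measurement_1 = flat_error_true + aux_error   # flat in place: combined errors
    measurement_2 = aux_error                     # flat removed, fibers recombined

    flat_error = measurement_1 - measurement_2    # absolute flatness error of the flat
    print(np.allclose(flat_error, flat_error_true))
    ```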

  5. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
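
    A rough sketch of the comparison described above, assuming an unbounded Gutenberg-Richter distribution with b = 1, the standard Hanks-Kanamori moment-to-magnitude conversion, and a purely hypothetical relation between injected volume and induced event count; none of these numbers come from the study.

    ```python
    import numpy as np

    # Compare the largest magnitude expected from Gutenberg-Richter (GR) sampling
    # with a deterministic McGarr-style cap M0_max = G * dV.
    rng = np.random.default_rng(1)
    b = 1.0                      # assumed GR b-value
    G = 3.0e10                   # shear modulus [Pa]

    def moment_to_mw(m0):
        """Moment magnitude from seismic moment in N*m (Hanks & Kanamori form)."""
        return (2.0 / 3.0) * (np.log10(m0) - 9.1)

    def gr_max_magnitude(n_events, m_min=0.0, n_trials=200):
        """Median largest magnitude when n_events are drawn from an
        unbounded GR distribution above m_min."""
        u = rng.random((n_trials, n_events))
        mags = m_min - np.log10(1.0 - u) / b      # inverse-CDF sampling
        return np.median(mags.max(axis=1))

    for dv in [1e4, 1e5, 1e6]:                    # injected volume [m^3]
        n_induced = int(10 * dv**0.5)             # hypothetical event count vs dV
        m_cap = moment_to_mw(G * dv)              # deterministic cap
        m_gr = gr_max_magnitude(n_induced)        # statistical expectation
        print(f"dV={dv:.0e} m^3  cap Mw={m_cap:.2f}  GR max Mw={m_gr:.2f}")
    ```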

  6. Effects of Numerical Versus Foreground-Only Icon Displays on Understanding of Risk Magnitudes.

    PubMed

    Stone, Eric R; Gabard, Alexis R; Groves, Aislinn E; Lipkus, Isaac M

    2015-01-01

    The aim of this work is to advance knowledge of how to measure gist and verbatim understanding of risk magnitude information and to apply this knowledge to address whether graphics that focus on the number of people affected (the numerator of the risk ratio, i.e., the foreground) are effective displays for increasing (a) understanding of absolute and relative risk magnitudes and (b) risk avoidance. In 2 experiments, the authors examined the effects of a graphical display that used icons to represent the foreground information on measures of understanding (Experiments 1 and 2) and on perceived risk, affect, and risk aversion (Experiment 2). Consistent with prior findings, this foreground-only graphical display increased perceived risk and risk aversion; however, it also led to decreased understanding of absolute (although not relative) risk magnitudes. Methodologically, this work shows the importance of distinguishing understanding of absolute risk from understanding of relative risk magnitudes, and the need to assess gist knowledge of both types of risk. Substantively, this work shows that although using foreground-only graphical displays is an appealing risk communication strategy to increase risk aversion, doing so comes at the cost of decreased understanding of absolute risk magnitudes. PMID:26065633

  7. The AFGL absolute gravity program

    NASA Technical Reports Server (NTRS)

    Hammond, J. A.; Iliff, R. L.

    1978-01-01

    A brief discussion of the AFGL's (Air Force Geophysics Laboratory) program in absolute gravity is presented. Support of outside work and in-house studies relating to gravity instrumentation are discussed. A description of the current transportable system is included and the latest results are presented. These results show good agreement with measurements at the AFGL site by an Italian system. The accuracy obtained by the transportable apparatus is better than 0.1 μm/s² (10 microgal), and agreement with previous measurements is within the combined uncertainties of the measurements.

  8. Astronomical Limiting Magnitude at Langkawi Observatory

    NASA Astrophysics Data System (ADS)

    Zainuddin, Mohd. Zambri; Loon, Chin Wei; Harun, Saedah

    2010-07-01

    Astronomical limiting magnitude is an indicator used by astronomers when planning astronomical measurements at a particular site. It gives astronomers at that site an idea of what magnitude of celestial object can be measured. Langkawi National Observatory (LNO) is situated at Bukit Malut, at latitude 6°18' 25'' North and longitude 99°46' 52'' East, on Langkawi Island. Sky brightness measurement has been performed at this site using the standard astronomical technique. The value of the limiting magnitude measured is V = 18.6+/-1.0 magnitude. This indicates that astronomical measurements at Langkawi observatory can only be made for celestial objects brighter than V = 18.6 magnitudes.

  9. Make peak flow a habit!

    MedlinePlus

    Checking your peak flow is one of the best ways to control your asthma and to keep it from getting worse. Asthma attacks ... Most times, they build slowly. Checking your peak flow can tell you if an attack is coming, ...

  10. Climate Absolute Radiance and Refractivity Observatory (CLARREO)

    NASA Technical Reports Server (NTRS)

    Leckey, John P.

    2015-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a mission, led and developed by NASA, that will measure a variety of climate variables with an unprecedented accuracy to quantify and attribute climate change. CLARREO consists of three separate instruments: an infrared (IR) spectrometer, a reflected solar (RS) spectrometer, and a radio occultation (RO) instrument. The mission will contain orbiting radiometers with sufficient accuracy, including on orbit verification, to calibrate other space-based instrumentation, increasing their respective accuracy by as much as an order of magnitude. The IR spectrometer is a Fourier Transform spectrometer (FTS) working in the 5 to 50 microns wavelength region with a goal of 0.1 K (k = 3) accuracy. The FTS will achieve this accuracy using phase change cells to verify thermistor accuracy and heated halos to verify blackbody emissivity, both on orbit. The RS spectrometer will measure the reflectance of the atmosphere in the 0.32 to 2.3 microns wavelength region with an accuracy of 0.3% (k = 2). The status of the instrumentation packages and potential mission options will be presented.

  11. Magnitude correlations and dynamical scaling for seismicity

    SciTech Connect

    Godano, Cataldo; Lippiello, Eugenio; De Arcangelis, Lucilla

    2007-12-06

    We analyze the experimental seismic catalog of Southern California and show the existence of correlations between earthquake magnitudes. We propose a dynamical scaling hypothesis relating time and magnitude as the physical mechanism responsible for the observed magnitude correlations. We show that the experimental distributions in size and time originate naturally and solely from this scaling hypothesis. Furthermore, we generate a synthetic catalog reproducing the organization in time and magnitude of the experimental data.

  12. Influence of storm magnitude and watershed size on runoff nonlinearity

    NASA Astrophysics Data System (ADS)

    Lee, Kwan Tun; Huang, Jen-Kuo

    2016-06-01

    The inherent nonlinear characteristics of the watershed runoff process related to storm magnitude and watershed size are discussed in detail in this study. The first type of nonlinearity refers to the rainfall-runoff dynamic process, and the second type concerns a power-law relation between peak discharge and upstream drainage area. The dynamic nonlinearity induced by storm magnitude was first demonstrated by inspecting rainfall-runoff records at three watersheds in Taiwan. Then, derivation of the watershed unit hydrograph (UH) using two linear hydrological models shows that the peak discharge and time to peak discharge that characterize the shape of the UH vary from event to event. Hence, the intention of deriving a unique and universal UH for all rainfall-runoff simulation cases is questionable. In contrast, the UHs from the other two adopted nonlinear hydrological models were responsive to rainfall intensity without relying on the linear proportion principle, and are well suited to representing dynamic nonlinearity. Based on a two-segment regression, the scaling nonlinearity between peak discharge and drainage area was investigated by analyzing the variation of the power-law exponent. The results demonstrate that the scaling nonlinearity is particularly significant for a watershed having a larger area and subject to a small-size storm. For the three study watersheds, a large tributary that contributes relatively great drainage area or inflow is found to cause a transition break in the scaling relationship and convert the scaling relationship from linearity to nonlinearity.
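
    A small sketch of the scaling analysis described above: fit the exponent of Qp = c·A^theta separately on two segments of synthetic peak-discharge data with an assumed break area. The synthetic data, break location, and exponents are placeholders, not the Taiwan observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    area = np.logspace(0, 4, 60)                        # drainage area [km^2]
    c_upper = 5.0 * 100.0**0.3                          # keeps the synthetic curve continuous
    qp = np.where(area < 100.0, 5.0 * area**0.9, c_upper * area**0.6)
    qp = qp * rng.lognormal(sigma=0.1, size=area.size)  # observational scatter

    def powerlaw_exponent(a, q):
        """Least-squares slope of log(Qp) vs log(A), i.e. the scaling exponent theta."""
        slope, intercept = np.polyfit(np.log10(a), np.log10(q), 1)
        return slope, 10.0**intercept

    break_area = 100.0                                  # assumed segment break
    lower = area < break_area
    print("lower segment:", powerlaw_exponent(area[lower], qp[lower]))
    print("upper segment:", powerlaw_exponent(area[~lower], qp[~lower]))
    ```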

  13. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  14. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  15. Magnitude and sign correlations in heartbeat fluctuations

    NASA Technical Reports Server (NTRS)

    Ashkenazy, Y.; Ivanov, P. C.; Havlin, S.; Peng, C. K.; Goldberger, A. L.; Stanley, H. E.

    2001-01-01

    We propose an approach for analyzing signals with long-range correlations by decomposing the signal increment series into magnitude and sign series and analyzing their scaling properties. We show that signals with identical long-range correlations can exhibit different time organization for the magnitude and sign. We find that the magnitude series relates to the nonlinear properties of the original time series, while the sign series relates to the linear properties. We apply our approach to the heartbeat interval series and find that the magnitude series is long-range correlated, while the sign series is anticorrelated and that both magnitude and sign series may have clinical applications.
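
    The decomposition itself is straightforward to sketch; the snippet below splits an increment series into its magnitude and sign parts (the subsequent scaling analysis, e.g. detrended fluctuation analysis, is not shown). The synthetic interbeat-interval series is an assumption for illustration only.

    ```python
    import numpy as np

    def magnitude_sign_series(x):
        """Decompose a signal's increment series into magnitude and sign series,
        the first step toward analysing their scaling properties separately."""
        dx = np.diff(np.asarray(x, dtype=float))   # increment series
        magnitude = np.abs(dx)                      # carries the nonlinear information
        sign = np.sign(dx)                          # carries the linear information
        return magnitude, sign

    # Toy example: a synthetic "interbeat interval" series (seconds)
    rng = np.random.default_rng(3)
    rr = 0.8 + 0.01 * np.cumsum(rng.standard_normal(1000))
    mag, sgn = magnitude_sign_series(rr)
    print(mag[:5], sgn[:5])
    ```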

  16. Absolute charge calibration of scintillating screens for relativistic electron detection

    SciTech Connect

    Buck, A.; Popp, A.; Schmid, K.; Karsch, S.; Krausz, F.; Zeil, K.; Jochmann, A.; Kraft, S. D.; Sauerbrey, R.; Cowan, T.; Schramm, U.; Hidding, B.; Kudyakov, T.; Sears, C. M. S.; Veisz, L.; Pawelke, J.

    2010-03-15

    We report on new charge calibrations and linearity tests with high-dynamic range for eight different scintillating screens typically used for the detection of relativistic electrons from laser-plasma based acceleration schemes. The absolute charge calibration was done with picosecond electron bunches at the ELBE linear accelerator in Dresden. The lower detection limit in our setup for the most sensitive scintillating screen (KODAK Biomax MS) was 10 fC/mm². The screens showed a linear photon-to-charge dependency over several orders of magnitude. An onset of saturation effects starting around 10-100 pC/mm² was found for some of the screens. Additionally, a constant light source was employed as a luminosity reference to simplify the transfer of a one-time absolute calibration to different experimental setups.

  17. Absolute charge calibration of scintillating screens for relativistic electron detection

    NASA Astrophysics Data System (ADS)

    Buck, A.; Zeil, K.; Popp, A.; Schmid, K.; Jochmann, A.; Kraft, S. D.; Hidding, B.; Kudyakov, T.; Sears, C. M. S.; Veisz, L.; Karsch, S.; Pawelke, J.; Sauerbrey, R.; Cowan, T.; Krausz, F.; Schramm, U.

    2010-03-01

    We report on new charge calibrations and linearity tests with high-dynamic range for eight different scintillating screens typically used for the detection of relativistic electrons from laser-plasma based acceleration schemes. The absolute charge calibration was done with picosecond electron bunches at the ELBE linear accelerator in Dresden. The lower detection limit in our setup for the most sensitive scintillating screen (KODAK Biomax MS) was 10 fC/mm². The screens showed a linear photon-to-charge dependency over several orders of magnitude. An onset of saturation effects starting around 10-100 pC/mm² was found for some of the screens. Additionally, a constant light source was employed as a luminosity reference to simplify the transfer of a one-time absolute calibration to different experimental setups.

  18. The discovery and comparison of symbolic magnitudes.

    PubMed

    Chen, Dawn; Lu, Hongjing; Holyoak, Keith J

    2014-06-01

    Humans and other primates are able to make relative magnitude comparisons, both with perceptual stimuli and with symbolic inputs that convey magnitude information. Although numerous models of magnitude comparison have been proposed, the basic question of how symbolic magnitudes (e.g., size or intelligence of animals) are derived and represented in memory has received little attention. We argue that symbolic magnitudes often will not correspond directly to elementary features of individual concepts. Rather, magnitudes may be formed in working memory based on computations over more basic features stored in long-term memory. We present a model of how magnitudes can be acquired and compared based on BARTlet, a representationally simpler version of Bayesian Analogy with Relational Transformations (BART; Lu, Chen, & Holyoak, 2012). BARTlet operates on distributions of magnitude variables created by applying dimension-specific weights (learned with the aid of empirical priors derived from pre-categorical comparisons) to more primitive features of objects. The resulting magnitude distributions, formed and maintained in working memory, are sensitive to contextual influences such as the range of stimuli and polarity of the question. By incorporating psychological reference points that control the precision of magnitudes in working memory and applying the tools of signal detection theory, BARTlet is able to account for a wide range of empirical phenomena involving magnitude comparisons, including the symbolic distance effect and the semantic congruity effect. We discuss the role of reference points in cognitive and social decision-making, and implications for the evolution of relational representations. PMID:24531498

  19. Magnitude systems in old star catalogues

    NASA Astrophysics Data System (ADS)

    Fujiwara, Tomoko; Yamaoka, Hitoshi

    2005-06-01

    The current system of stellar magnitudes originally introduced by Hipparchus was strictly defined by Norman Pogson in 1856. He based his system on Ptolemy's star catalogue, the Almagest, recorded in about AD 137, and defined the magnitude-intensity relationship on a logarithmic scale. Stellar magnitudes observed with the naked eye recorded in seven old star catalogues were analyzed in order to examine the visual magnitude systems. Although psychophysicists have proposed that human visual sensitivity follows a power-law scale, it is shown here that the degree of agreement is far better for a logarithmic scale than for a power-law scale. It is also found that light ratios in each star catalogue are nearly equal to 2.512, if the brightest (1st magnitude) and the faintest (6th magnitude and dimmer) stars are excluded from the study. This means that the visual magnitudes in the old star catalogues agree fully with Pogson's logarithmic scale.
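
    For reference, a short sketch of Pogson's logarithmic definition mentioned above: a five-magnitude difference corresponds to an intensity ratio of exactly 100, so one magnitude step corresponds to a light ratio of 100^(1/5), about 2.512.

    ```python
    import numpy as np

    def magnitude_difference(i1, i2):
        """Magnitude difference of two stars from their intensity ratio (Pogson, 1856)."""
        return -2.5 * np.log10(i1 / i2)

    step_ratio = 100.0 ** 0.2                               # light ratio of one magnitude
    print(f"one-magnitude light ratio: {step_ratio:.3f}")   # ~2.512
    print(magnitude_difference(100.0, 1.0))                 # -5.0 magnitudes
    ```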

  20. Decoupling approximation design using the peak to peak gain

    NASA Astrophysics Data System (ADS)

    Sultan, Cornel

    2013-04-01

    Linear system design for accurate decoupling approximation is examined using the peak-to-peak gain of the error system. The design problem consists of finding values of system parameters that ensure this gain is small. For this purpose a computationally inexpensive upper bound on the peak-to-peak gain, namely the star norm, is minimized using a stochastic method. Examples of the methodology's application to tensegrity structures design are presented. Connections between the accuracy of the approximation, the damping matrix, and the natural frequencies of the system are examined, as well as decoupling in the context of open and closed loop control.
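
    As a sketch of the quantity being bounded, the snippet below estimates the peak-to-peak (L-infinity induced) gain of a stable SISO transfer function via the L1 norm of its impulse response; the toy second-order "error system" and its coefficients are assumptions for illustration, not the tensegrity models of the paper.

    ```python
    import numpy as np
    from scipy import signal

    def peak_to_peak_gain(num, den, t_final=50.0, n=20000):
        """Numerical estimate of the peak-to-peak gain of a stable SISO transfer
        function num/den: for such systems it equals the L1 norm of the impulse
        response, approximated here by quadrature over a long time window."""
        t = np.linspace(0.0, t_final, n)
        _, h = signal.impulse((num, den), T=t)
        return np.trapz(np.abs(h), t)

    # Toy "error system": lightly damped second-order dynamics
    num = [1.0]
    den = [1.0, 0.4, 4.0]          # s^2 + 2*zeta*wn*s + wn^2 with wn = 2, zeta = 0.1
    print(peak_to_peak_gain(num, den))
    ```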

  1. The magnitude-redshift relation in a realistic inhomogeneous universe

    SciTech Connect

    Hada, Ryuichiro; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2014-12-01

    The light rays from a source are subject to a local inhomogeneous geometry generated by the inhomogeneous matter distribution as well as the existence of collapsed objects. In this paper we investigate the effect of inhomogeneities and the existence of collapsed objects on the propagation of light rays and evaluate changes in the magnitude-redshift relation from the standard relationship found in a homogeneous FRW universe. We give the expression for the correlation function and the variance of the perturbation of apparent magnitude, and calculate it numerically by using the non-linear matter power spectrum. We use the lognormal probability distribution function for the density contrast and the spherical collapse model to truncate the power spectrum in order to estimate the blocking effect by collapsed objects. We find that the uncertainty in Ω_m is ∼ 0.02, and that in w is ∼ 0.04. We also discuss a possible method to extract these effects from real data, which contain intrinsic ambiguities associated with the absolute magnitude.

  2. Determination of absolute internal conversion coefficients using the SAGE spectrometer

    NASA Astrophysics Data System (ADS)

    Sorri, J.; Greenlees, P. T.; Papadakis, P.; Konki, J.; Cox, D. M.; Auranen, K.; Partanen, J.; Sandzelius, M.; Pakarinen, J.; Rahkila, P.; Uusitalo, J.; Herzberg, R.-D.; Smallcombe, J.; Davies, P. J.; Barton, C. J.; Jenkins, D. G.

    2016-03-01

    A non-reference-based method to determine internal conversion coefficients using the SAGE spectrometer is applied to transitions in the nuclei 154Sm, 152Sm and 166Yb. The Normalised-Peak-to-Gamma method is in general an efficient tool to extract internal conversion coefficients. However, in many cases the required well-known reference transitions are not available. The data analysis steps required to determine absolute internal conversion coefficients with the SAGE spectrometer are presented. In addition, several background suppression methods are introduced and an example of how ancillary detectors can be used to select specific reaction products is given. The results obtained for ground-state band E2 transitions show that absolute internal conversion coefficients can be extracted using the methods described with reasonable accuracy. In some cases of less intense transitions, only an upper limit for the internal conversion coefficient could be given.
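
    The basic quantity being extracted can be sketched directly from efficiency-corrected peak areas; the function and numbers below are illustrative placeholders rather than SAGE data or the paper's analysis chain.

    ```python
    def conversion_coefficient(n_electron, eff_electron, n_gamma, eff_gamma):
        """Internal conversion coefficient alpha for one transition:
        alpha = (N_e / eps_e) / (N_gamma / eps_gamma), i.e. the ratio of
        efficiency-corrected electron and gamma-ray peak areas."""
        return (n_electron / eff_electron) / (n_gamma / eff_gamma)

    # Hypothetical peak areas and detection efficiencies for one E2 transition
    alpha = conversion_coefficient(n_electron=1200, eff_electron=0.05,
                                   n_gamma=90000, eff_gamma=0.01)
    print(f"alpha = {alpha:.4f}")
    ```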

  3. Absolute Gravity Datum in the Age of Cold Atom Gravimeters

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Eckl, M. C.

    2014-12-01

    The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network were measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic absolute meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since this time, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric absolute gravimeters. This latest development is anticipated to provide improvement in the range of two orders of magnitude compared to the measurement accuracy of the technology utilized to develop IGSN-71. In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned with establishing gravity control to support and maintain high-order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure, National Requirements for a Shared Resource", National Academy of Sciences, 2010. The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. Absolute gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple absolute gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e., yield a sufficiently consistent datum when measured in disparate locales). New atomic interferometric gravimeters promise a significant

  4. Improving HST Pointing & Absolute Astrometry

    NASA Astrophysics Data System (ADS)

    Lallo, Matthew; Nelan, E.; Kimmer, E.; Cox, C.; Casertano, S.

    2007-05-01

    Accurate absolute astrometry is becoming increasingly important in an era of multi-mission archives and virtual observatories. Hubble Space Telescope's (HST's) Guidestar Catalog II (GSC2) has reduced coordinate error to around 0.25 arcsecond, a factor of 2 or more improvement over GSC1. With this reduced catalog error, special attention must be given to calibrating and maintaining the alignments of the Fine Guidance Sensors (FGSs) and Science Instruments (SIs) in HST to a level well below this, in order to ensure that the accuracy of science product astrometry keywords and target positioning is limited only by the catalog errors. After HST Servicing Mission 4, the improvement in "blind" pointing accuracy from such calibrations will allow for more efficient COS acquisitions. Multiple SIs and FGSs each have their own footprints in the spatially shared HST focal plane. It is the small changes over time in primarily the whole-body positions and orientations of these instruments and guiders relative to one another that is addressed by this work. We describe the HST Cycle 15 program CAL/OTA 11021 which, along with future variants of it, determines and maintains positions and orientations of the SIs and FGSs to better than 50 milliarcseconds and 0.04 to 0.004 degrees of roll, putting errors associated with the alignment sufficiently below GSC2 errors. We present recent alignment results and assess their errors, illustrate trends, and describe where and how the observer benefits from these calibrations when using HST.

  5. Forecasting magnitude, time, and location of aftershocks for aftershock hazard

    NASA Astrophysics Data System (ADS)

    Chen, K.; Tsai, Y.; Huang, M.; Chang, W.

    2011-12-01

    In this study we investigate the spatial and temporal seismicity parameters of the aftershock sequence accompanying the 17:47 20 September 1999 (UTC) magnitude 7.45 Chi-Chi earthquake, Taiwan. Dividing the epicentral zone into north of the epicenter, at the epicenter, and south of the epicenter, it is found that immediately after the earthquake the area close to the epicenter had a lower value than both the northern and southern sections. This pattern suggests that at the time of the Chi-Chi earthquake, the area close to the epicenter remained prone to large magnitude aftershocks and strong shaking. However, with time the value increases. An increasing value indicates a reduced likelihood of large magnitude aftershocks. The study also shows that the value is higher at the southern section of the epicentral zone, indicating a faster rate of decay in this section. The primary purpose of this paper is to design a predictive model for forecasting the magnitude, time, and location of aftershocks to large earthquakes. The developed model is presented and applied to the 17:47 20 September 1999 (UTC) magnitude 7.45 Chi-Chi, Taiwan, the 09:32 5 November 2009 (UTC) magnitude 6.19 Nantou, and the 00:18 4 March 2010 (UTC) magnitude 6.49 Jiashian earthquake sequences. In addition, peak ground acceleration trends for the Nantou and Jiashian aftershock sequences are predicted and compared to actual trends. The results of the estimated peak ground acceleration are remarkably similar to calculations from recorded magnitudes in both trend and level. To improve the predictive skill of the model for occurrence time, we use an empirical relation to forecast the time of aftershocks. The empirical relation improves time prediction over that of random processes. The results will be of interest to seismic mitigation specialists and rescue crews. We also apply the parameters and empirical relation from the Chi-Chi aftershocks of Taiwan to forecast aftershocks with magnitude M > 6.0 of the 05:46 11 March 2011 (UTC) Tohoku 9

  6. Absolute Instability in Coupled-Cavity TWTs

    NASA Astrophysics Data System (ADS)

    Hung, D. M. H.; Rittersdorf, I. M.; Zhang, Peng; Lau, Y. Y.; Simon, D. H.; Gilgenbach, R. M.; Chernin, D.; Antonsen, T. M., Jr.

    2014-10-01

    This paper will present results of our analysis of absolute instability in a coupled-cavity traveling wave tube (TWT). The structure modes at the lower and upper band edges are each approximated by a hyperbola in the (omega, k) plane. When the Briggs-Bers criterion is applied, a threshold current for onset of absolute instability is observed at the upper band edge, but not the lower band edge. The nonexistence of absolute instability at the lower band edge is mathematically similar to the nonexistence of absolute instability that we recently demonstrated for a dielectric TWT. The existence of absolute instability at the upper band edge is mathematically similar to the existence of absolute instability in a gyrotron traveling wave amplifier. These interesting observations will be discussed, and the practical implications will be explored. This work was supported by AFOSR, ONR, and L-3 Communications Electron Devices.

  7. The spatial resolution of epidemic peaks.

    PubMed

    Mills, Harriet L; Riley, Steven

    2014-04-01

    The emergence of novel respiratory pathogens can challenge the capacity of key health care resources, such as intensive care units, that are constrained to serve only specific geographical populations. An ability to predict the magnitude and timing of peak incidence at the scale of a single large population would help to accurately assess the value of interventions designed to reduce that peak. However, current disease-dynamic theory does not provide a clear understanding of the relationship between: epidemic trajectories at the scale of interest (e.g. city); population mobility; and higher resolution spatial effects (e.g. transmission within small neighbourhoods). Here, we used a spatially-explicit stochastic meta-population model of arbitrary spatial resolution to determine the effect of resolution on model-derived epidemic trajectories. We simulated an influenza-like pathogen spreading across theoretical and actual population densities and varied our assumptions about mobility using Latin-Hypercube sampling. Even though, by design, cumulative attack rates were the same for all resolutions and mobilities, peak incidences were different. Clear thresholds existed for all tested populations, such that models with resolutions lower than the threshold substantially overestimated population-wide peak incidence. The effect of resolution was most important in populations which were of lower density and lower mobility. With the expectation of accurate spatial incidence datasets in the near future, our objective was to provide a framework for how to use these data correctly in a spatial meta-population model. Our results suggest that there is a fundamental spatial resolution for any pathogen-population pair. If underlying interactions between pathogens and spatially heterogeneous populations are represented at this resolution or higher, accurate predictions of peak incidence for city-scale epidemics are feasible. PMID:24722420

  8. A rise in peak performance age in female athletes.

    PubMed

    Elmenshawy, Ahmed R; Machin, Daniel R; Tanaka, Hirofumi

    2015-06-01

    It was reported in the 1980s that the ages at which peak performance was observed had remained remarkably stable over the past century, although absolute levels of athletic performance increased dramatically over the same time span. The emergence of older (masters) athletes in the past few decades has changed the demographics and age-spectrum of Olympic athletes. The primary aim of the present study was to determine whether the ages at which peak performance was observed had increased in recent decades. The data spanning 114 years from the first Olympics (1898) to the most recent Olympics (2014) were collected using publicly available data. In the present study, ages at which Olympic medals (gold, silver, and bronze) were won were used as the indicators of peak performance age. Track and field, swimming, rowing, and ice skating events were analyzed. In men, peak performance age did not change significantly in most of the sporting events (except in 100 m sprint running). In contrast, peak performance ages in women have increased significantly since the 1980s and consistently in all the athletic events examined. Interestingly, as women's peak performance age increased, they became similar to men's peak ages in many events. In the last 20-30 years, ages at which peak athletic performance is observed have increased in women but not in men. PMID:26022534

  9. Absolute negative mobility of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Ou, Ya-li; Hu, Cai-tian; Wu, Jian-chun; Ai, Bao-quan

    2015-12-01

    Transport of interacting Brownian particles in a periodic potential is investigated in the presence of an ac force and a dc force. From Brownian dynamics simulations, we find that both the interaction between particles and the thermal fluctuations play key roles in the absolute negative mobility (the particle noisily moves backwards against a small constant bias). When there is no interaction, there is only one region where the absolute negative mobility occurs. In the presence of the interaction, the absolute negative mobility may appear in multiple regions. A weak interaction can be helpful for the absolute negative mobility, while a strong interaction has a destructive impact on it.

  10. Absolute stress measurements at the rangely anticline, Northwestern Colorado

    USGS Publications Warehouse

    de la Cruz, R. V.; Raleigh, C.B.

    1972-01-01

    Five different methods of measuring absolute state of stress in rocks in situ were used at sites near Rangely, Colorado, and the results compared. For near-surface measurements, overcoring of the borehole-deformation gage is the most convenient and rapid means of obtaining reliable values for the magnitude and direction of the state of stress in rocks in situ. The magnitudes and directions of the principal stresses are compared to the geologic features of the different areas of measurement. The in situ stresses are consistent in orientation with the stress direction inferred from the earthquake focal-plane solutions and existing joint patterns but inconsistent with stress directions likely to have produced the Rangely anticline. © 1972.

  11. Measuring non-local Lagrangian peak bias

    NASA Astrophysics Data System (ADS)

    Biagetti, Matteo; Chan, Kwan Chuen; Desjacques, Vincent; Paranjape, Aseem

    2014-06-01

    We investigate non-local Lagrangian bias contributions involving gradients of the linear density field, for which we have predictions from the excursion set peak formalism. We begin by writing down a bias expansion which includes all the bias terms, including the non-local ones. Having checked that the model furnishes a reasonable fit to the halo mass function, we develop a one-point cross-correlation technique to measure bias factors associated with χ²-distributed quantities. We validate the method with numerical realizations of peaks of Gaussian random fields before we apply it to N-body simulations. We focus on the lowest (quadratic) order non-local contributions -2χ_10 (k_1·k_2) and χ_01 [3(k_1·k_2)² - k_1²k_2²], where k_1, k_2 are wave modes. We can reproduce our measurement of χ_10 if we allow for an offset between the Lagrangian halo centre-of-mass and the peak position. The sign and magnitude of χ_10 is consistent with Lagrangian haloes sitting near linear density maxima. The resulting contribution to the halo bias can safely be ignored for M = 10^13 M⊙ h^-1, but could become relevant at larger halo masses. For the second non-local bias χ_01, however, we measure a much larger magnitude than predicted by our model. We speculate that some of this discrepancy might originate from non-local Lagrangian contributions induced by non-spherical collapse.

  12. How to use your peak flow meter

    MedlinePlus

    Peak flow meter - how to use; Asthma - peak flow meter; Reactive airway disease - peak flow meter; Bronchial asthma - peak flow meter ... your airways are narrowed and blocked due to asthma, your peak flow values drop. You can check ...

  13. Numerical Magnitude Representations Influence Arithmetic Learning

    ERIC Educational Resources Information Center

    Booth, Julie L.; Siegler, Robert S.

    2008-01-01

    This study examined whether the quality of first graders' (mean age = 7.2 years) numerical magnitude representations is correlated with, predictive of, and causally related to their arithmetic learning. The children's pretest numerical magnitude representations were found to be correlated with their pretest arithmetic knowledge and to be…

  14. Reward Magnitude Effects on Temporal Discrimination

    ERIC Educational Resources Information Center

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2010-01-01

    Changes in reward magnitude or value have been reported to produce effects on timing behavior, which have been attributed to changes in the speed of an internal pacemaker in some instances and to attentional factors in other cases. The present experiments therefore aimed to clarify the effects of reward magnitude on timing processes. In Experiment…

  15. Representations of the Magnitudes of Fractions

    ERIC Educational Resources Information Center

    Schneider, Michael; Siegler, Robert S.

    2010-01-01

    We tested whether adults can use integrated, analog, magnitude representations to compare the values of fractions. The only previous study on this question concluded that even college students cannot form such representations and instead compare fraction magnitudes by representing numerators and denominators as separate whole numbers. However,…

  16. Local magnitudes of small contained explosions.

    SciTech Connect

    Chael, Eric Paul

    2009-12-01

    The relationship between explosive yield and seismic magnitude has been extensively studied for underground nuclear tests larger than about 1 kt. For monitoring smaller tests over local ranges (within 200 km), we need to know whether the available formulas can be extrapolated to much lower yields. Here, we review published information on amplitude decay with distance, and on the seismic magnitudes of industrial blasts and refraction explosions in the western U. S. Next we measure the magnitudes of some similar shots in the northeast. We find that local magnitudes ML of small, contained explosions are reasonably consistent with the magnitude-yield formulas developed for nuclear tests. These results are useful for estimating the detection performance of proposed local seismic networks.

  17. Reward magnitude effects on temporal discrimination

    PubMed Central

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2014-01-01

    Changes in reward magnitude or value have been reported to produce effects on timing behavior, which have been attributed to changes in the speed of an internal pacemaker in some instances and to attentional factors in other cases. The present experiments therefore aimed to clarify the effects of reward magnitude on timing processes. In Experiment 1, rats were trained to discriminate a short (2 s) vs. a long (8 s) signal followed by testing with intermediate durations. Then, the reward on short or long trials was increased from 1 to 4 pellets in separate groups. Experiment 2 measured the effect of different reward magnitudes associated with the short vs. long signals throughout training. Finally, Experiment 3 controlled for satiety effects during the reward magnitude manipulation phase. A general flattening of the psychophysical function was evident in all three experiments, suggesting that unequal reward magnitudes may disrupt attention to duration. PMID:24965705

  18. Reward magnitude effects on temporal discrimination

    PubMed Central

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2016-01-01

    Changes in reward magnitude or value have been reported to produce effects on timing behavior, which have been attributed to changes in the speed of an internal pacemaker in some instances and to attentional factors in other cases. The present experiments therefore aimed to clarify the effects of reward magnitude on timing processes. In Experiment 1, rats were trained to discriminate a short (2 s) vs. a long (8 s) signal followed by testing with intermediate durations. Then, the reward on short or long trials was increased from 1 to 4 pellets in separate groups. Experiment 2 measured the effect of different reward magnitudes associated with the short vs. long signals throughout training. Finally, Experiment 3 controlled for satiety effects during the reward magnitude manipulation phase. A general flattening of the psychophysical function was evident in all three experiments, suggesting that unequal reward magnitudes may disrupt attention to duration.

  19. A model of peak production in oil fields

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel M.; Wiener, Richard J.

    2010-01-01

    We developed a model for oil production on the basis of simple physical considerations. The model provides a basic understanding of Hubbert's empirical observation that the production rate for an oil-producing region reaches its maximum when approximately half the recoverable oil has been produced. According to the model, the oil production rate at a large field must peak before drilling peaks. We use the model to investigate the effects of several drilling strategies on oil production. Despite the model's simplicity, predictions for the timing and magnitude of peak production match data on oil production from major oil fields throughout the world.
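
    A compact sketch of the Hubbert picture referenced above, assuming a logistic cumulative-production curve: the production rate dQ/dt then peaks exactly when half the ultimate recoverable resource has been produced. Parameter values are illustrative, not fitted to any field.

    ```python
    import numpy as np

    def hubbert_rate(t, urr=1.0, k=0.1, t_peak=50.0):
        """Production rate of a logistic (Hubbert) cumulative-production curve."""
        q = urr / (1.0 + np.exp(-k * (t - t_peak)))       # cumulative production Q(t)
        return k * q * (1.0 - q / urr)                     # dQ/dt

    t = np.linspace(0.0, 100.0, 1001)
    rate = hubbert_rate(t)
    i_max = np.argmax(rate)
    q_frac = 1.0 / (1.0 + np.exp(-0.1 * (t[i_max] - 50.0)))   # Q/URR at the peak
    print(f"rate peaks at t = {t[i_max]:.1f}, cumulative fraction = {q_frac:.2f}")
    ```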

  20. Determination of the total absorption peak in an electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Cheng, Jia-Hua; Wang, Zhe; Lebanowski, Logan; Lin, Guey-Lin; Chen, Shaomin

    2016-08-01

    A physically motivated function was developed to accurately determine the total absorption peak in an electromagnetic calorimeter and to overcome biases present in many commonly used methods. The function is the convolution of a detector resolution function with the sum of a delta function, which represents the complete absorption of energy, and a tail function, which describes the partial absorption of energy and depends on the detector materials and structures. Its performance was tested with the simulation of three typical cases. The accuracy of the extracted peak value, resolution, and peak area was improved by an order of magnitude on average, relative to the Crystal Ball function.
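
    A numerical sketch of the peak model described above: a detector resolution function (taken here to be Gaussian) convolved with a delta function at the full-absorption energy plus a low-energy tail. The exponential tail shape and all parameter values are assumptions for illustration, not the paper's fit function.

    ```python
    import numpy as np

    def absorption_peak(e, e0=662.0, sigma=5.0, tail_frac=0.2, tail_slope=0.05):
        """Spectrum model on energy grid `e` [keV]: Gaussian resolution convolved
        with (delta at e0) + (exponential low-energy tail), normalized to unit area."""
        de = e[1] - e[0]
        ideal = np.zeros_like(e)
        ideal[np.argmin(np.abs(e - e0))] = 1.0 - tail_frac            # full absorption
        tail = np.where(e < e0, np.exp(tail_slope * (e - e0)), 0.0)   # partial absorption
        tail *= tail_frac / (tail.sum() * de)
        resolution = np.exp(-0.5 * ((e - e.mean()) / sigma) ** 2)
        resolution /= resolution.sum() * de
        return np.convolve(ideal / de + tail, resolution, mode="same") * de

    e = np.linspace(500.0, 750.0, 2001)
    spectrum = absorption_peak(e)
    print("peak near", e[np.argmax(spectrum)], "keV; area =", np.trapz(spectrum, e))
    ```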

  1. Son preference in Indian families: absolute versus relative wealth effects.

    PubMed

    Gaudin, Sylvestre

    2011-02-01

    The desire for male children is prevalent in India, where son preference has been shown to affect fertility behavior and intrahousehold allocation of resources. Economic theory predicts less gender discrimination in wealthier households, but demographers and sociologists have argued that wealth can exacerbate bias in the Indian context. I argue that these apparently conflicting theories can be reconciled and simultaneously tested if one considers that they are based on two different notions of wealth: one related to resource constraints (absolute wealth), and the other to notions of local status (relative wealth). Using cross-sectional data from the 1998-1999 and 2005-2006 National Family and Health Surveys, I construct measures of absolute and relative wealth by using principal components analysis. A series of statistical models of son preference is estimated by using multilevel methods. Results consistently show that higher absolute wealth is strongly associated with lower son preference, and the effect is 20%-40% stronger when the household's community-specific wealth score is included in the regression. Coefficients on relative wealth are positive and significant although lower in magnitude. Results are robust to using different samples, alternative groupings of households in local areas, different estimation methods, and alternative dependent variables. PMID:21302027

  2. Inequalities, Absolute Value, and Logical Connectives.

    ERIC Educational Resources Information Center

    Parish, Charles R.

    1992-01-01

    Presents an approach to the concept of absolute value that alleviates students' problems with the traditional definition and the use of logical connectives in solving related problems. Uses a model that maps numbers from a horizontal number line to a vertical ray originating from the origin. Provides examples solving absolute value equations and…

  3. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of the developments in the field of high-accuracy absolute optical metrology with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  4. Monolithically integrated absolute frequency comb laser system

    DOEpatents

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  5. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…

  6. Investigating Absolute Value: A Real World Application

    ERIC Educational Resources Information Center

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  7. Absolute Income, Relative Income, and Happiness

    ERIC Educational Resources Information Center

    Ball, Richard; Chernova, Kateryna

    2008-01-01

    This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…

  8. Absolute uniqueness of phase retrieval with random illumination

    NASA Astrophysics Data System (ADS)

    Fannjiang, Albert

    2012-07-01

    Random illumination is proposed to enforce absolute uniqueness and resolve all types of ambiguity, trivial or nontrivial, in phase retrieval. Almost sure irreducibility is proved for any complex-valued object whose support set has rank ⩾ 2. While the new irreducibility result can be viewed as a probabilistic version of the classical result by Bruck, Sodin and Hayes, it provides a novel perspective and an effective method for phase retrieval. In particular, almost sure uniqueness, up to a global phase, is proved for complex-valued objects under general two-point conditions. Under a tight sector constraint absolute uniqueness is proved to hold with probability exponentially close to unity as the object sparsity increases. Under a magnitude constraint with random amplitude illumination, uniqueness modulo global phase is proved to hold with probability exponentially close to unity as object sparsity increases. For general complex-valued objects without any constraint, almost sure uniqueness up to global phase is established with two sets of Fourier magnitude data under two independent illuminations. Numerical experiments suggest that random illumination essentially alleviates most, if not all, numerical problems commonly associated with the standard phasing algorithms.

  9. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  10. The Magnitude and Energy of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2003-12-01

    Several magnitudes were introduced to quantify large earthquakes better and more comprehensively than Ms: Mw (moment magnitude; Kanamori, 1977), ME (strain energy magnitude; Purcaru and Berckhemer, 1978), Mt (tsunami magnitude; Abe, 1979), Mm (mantle magnitude; Okal and Talandier, 1985), Me (seismic energy magnitude; Choy and Boatwright, 1995). Although these magnitudes are still subject to different uncertainties, various kinds of earthquakes can now be better understood in terms of combinations of them. They can also be viewed as mappings of basic source parameters: seismic moment, strain energy, seismic energy, stress drop, under certain assumptions or constraints. We studied a set of about 90 large earthquakes (shallow and deeper) that occurred in different tectonic regimes, with more reliable source parameters, and compared them in terms of the above magnitudes. We found large differences between the strain energy (mapped to ME) and seismic energy (mapped to Me), and between ME of events with about the same Mw. This confirms that no 1-to-1 correspondence exists between these magnitudes (Purcaru, 2002). One major cause of differences for "normal" earthquakes is the level of the stress drop over asperities which release and partition the strain energy. We quantify the energetic balance of earthquakes in terms of strain energy Est and its components (fracture (Eg), friction (Ef) and seismic (Es) energy) using an extended Hamilton's principle. The earthquakes are thrust-interplate, strike slip, shallow in-slab, slow/tsunami, deep and continental. The (scaled) strain energy equation we derived is Est/M0 = (1 + e(g,s)) (Es/M0), with e(g,s) = Eg/Es, assuming complete stress drop, using the (static) stress drop variability, and that Est and Es are not in a 1-to-1 correspondence. With all uncertainties, our analysis reveals, for a given seismic moment, a large variation of earthquakes in terms of energies, even in the same seismic region. In view of these, for further understanding
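    The magnitude scales named above are, at root, logarithmic mappings of source parameters. As a hedged illustration (constants quoted from memory for a standard Kanamori-style Mw and a Choy-and-Boatwright-style Me, not taken from this abstract), the sketch below converts a seismic moment and a radiated energy into magnitudes; the input values are hypothetical.

```python
import math

def moment_magnitude(M0_Nm):
    """Moment magnitude Mw from seismic moment M0 in N*m (Kanamori-style scaling)."""
    return (2.0 / 3.0) * (math.log10(M0_Nm) - 9.1)

def energy_magnitude(Es_J):
    """Energy magnitude Me from radiated seismic energy in joules
    (Choy & Boatwright-style scaling; constant quoted from memory)."""
    return (2.0 / 3.0) * math.log10(Es_J) - 2.9

# Hypothetical source parameters for two events with the same moment but
# different radiated energies (different stress drop over asperities):
M0 = 1.1e21          # N*m  -> Mw ~ 8.0
print(moment_magnitude(M0))
print(energy_magnitude(3.0e16), energy_magnitude(3.0e15))  # Me differs by ~0.7
```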

  11. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data

    SciTech Connect

    N. Seth Carpenter; Suzette J. Payne; Annette L. Schafer

    2012-06-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range, U.S.A. faults. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths (Lseg) where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements (D) along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e. M ~ Lseg, should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating Lseg with surface rupture length (SRL). Many large historical events yielded secondary and/or sympathetic faulting (e.g. 1983 Borah Peak, Idaho earthquake) which are included in the measurement of SRL and used to derive empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude and the M ~ Lseg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude (Mw) and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ Lseg results are strikingly consistent
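    For readers who want to see the kind of empirical relationship at issue, the sketch below applies Wells-and-Coppersmith-style log-linear regressions for magnitude versus rupture length and versus displacement. The coefficients are quoted from memory for "all slip types" and the input values are hypothetical; the point is only to show how a length-based and a displacement-based estimate for the same segment can disagree.

```python
import math

# Wells & Coppersmith (1994)-style regressions; coefficients quoted from
# memory and used here only as an illustration:
#   M from surface rupture length (km):  M = 5.08 + 1.16 * log10(SRL)
#   M from maximum displacement (m):     M = 6.69 + 0.74 * log10(MD)
def mag_from_length(length_km):
    return 5.08 + 1.16 * math.log10(length_km)

def mag_from_max_displacement(md_m):
    return 6.69 + 0.74 * math.log10(md_m)

# Hypothetical segment: a 20 km segment length treated as the rupture length,
# versus a 2.5 m paleoseismic displacement on the same segment.
print(mag_from_length(20.0))            # ~6.6
print(mag_from_max_displacement(2.5))   # ~7.0
```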

  12. Influence of menopause and Type 2 diabetes on pulmonary oxygen uptake kinetics and peak exercise performance during cycling.

    PubMed

    Kiely, Catherine; Rocha, Joel; O'Connor, Eamonn; O'Shea, Donal; Green, Simon; Egaña, Mikel

    2015-10-15

    We investigated if the magnitude of the Type 2 diabetes (T2D)-induced impairments in peak oxygen uptake (V̇O2) and V̇O2 kinetics was affected by menopausal status. Twenty-two women with T2D (8 premenopausal, 14 postmenopausal), and 22 nondiabetic (ND) women (11 premenopausal, 11 postmenopausal) matched by age (range = 30-59 yr) were recruited. Participants completed four bouts of constant-load cycling at 80% of their ventilatory threshold for the determination of V̇O2 kinetics. Cardiac output (CO) (inert gas rebreathing) was recorded at rest and at 30 s and 240 s during two additional bouts. Peak V̇O2 was significantly (P < 0.05) reduced in both groups with T2D compared with ND counterparts (premenopausal, 1.79 ± 0.16 vs. 1.55 ± 0.32 l/min; postmenopausal, 1.60 ± 0.30 vs. 1.45 ± 0.24 l/min). The time constant of phase II of the V̇O2 response was slowed (P < 0.05) in both groups with T2D compared with healthy counterparts (premenopausal, 29.1 ± 11.2 vs. 43.0 ± 12.2 s; postmenopausal, 33.0 ± 9.1 vs. 41.8 ± 17.7 s). At rest and during submaximal exercise absolute CO responses were lower, but the "gains" in CO larger (both P < 0.05) in both groups with T2D. Our results suggest that the magnitude of T2D-induced impairments in peak V̇O2 and V̇O2 kinetics is not affected by menopausal status in participants younger than 60 yr of age. PMID:26269520
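    The phase II V̇O2 time constant reported here is conventionally obtained by fitting a mono-exponential to the breath-by-breath response to constant-load exercise. Below is a minimal fitting sketch with synthetic data; the model form (baseline, amplitude, time delay, tau) is the standard one, and all numbers are illustrative rather than taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def phase2_vo2(t, baseline, amplitude, delay, tau):
    """Mono-exponential (phase II) VO2 response to constant-load exercise."""
    t = np.asarray(t, dtype=float)
    rise = amplitude * (1.0 - np.exp(-(t - delay) / tau))
    return baseline + np.where(t >= delay, rise, 0.0)

# Hypothetical breath-by-breath data (time in s, VO2 in L/min).
rng = np.random.default_rng(1)
t = np.arange(0.0, 360.0, 10.0)
vo2 = phase2_vo2(t, 0.6, 0.9, 20.0, 35.0) + rng.normal(0.0, 0.03, t.size)

popt, _ = curve_fit(phase2_vo2, t, vo2, p0=[0.5, 1.0, 15.0, 30.0])
print("fitted phase II time constant: %.1f s" % popt[3])
```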

  13. Measuring radon source magnitude in residential buildings

    SciTech Connect

    Nazaroff, W.W.; Boegel, M.L.; Nero, A.V.

    1981-08-01

    A description is given of procedures used in residences for rapid grab-sample and time-dependent measurements of the air-exchange rate and radon concentration. The radon source magnitude is calculated from the results of simultaneous measurements of these parameters. Grab-sample measurements in three survey groups comprising 101 US houses showed the radon source magnitude to vary approximately log-normally with a geometric mean of 0.37 and a range of 0.01 to 6.0 pCi l⁻¹ h⁻¹. Successive measurements in six houses in the northeastern United States showed considerable variability in source magnitude within a given house. In two of these houses the source magnitude showed a strong correlation with the air-exchange rate, suggesting that soil gas influx can be an important transport process for indoor radon.
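    Under a simple steady-state mass balance, and neglecting radioactive decay (a reasonable simplification because the radon decay constant is small compared with typical air-exchange rates), the source magnitude is the indoor concentration multiplied by the air-exchange rate. The sketch below is this one-line calculation; the simplification and the example numbers are mine, not the survey's.

```python
def radon_source_magnitude(concentration_pCi_per_L, air_exchange_per_h):
    """Steady-state source magnitude S = C * lambda_v (in pCi L^-1 h^-1),
    neglecting radioactive decay, which is slow compared with typical
    air-exchange rates (a simplifying assumption)."""
    return concentration_pCi_per_L * air_exchange_per_h

# Hypothetical example: 1.2 pCi/L indoor radon at 0.5 air changes per hour.
print(radon_source_magnitude(1.2, 0.5))   # 0.6 pCi L^-1 h^-1
```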

  14. Determination of the Meteor Limiting Magnitude

    NASA Technical Reports Server (NTRS)

    Kingery, A.; Blaauw, R.; Cooke, W. J.

    2016-01-01

    The limiting meteor magnitude of a meteor camera system will depend on the camera hardware and software, sky conditions, and the location of the meteor radiant. Some of these factors are constants for a given meteor camera system, but many change between meteor showers or sporadic sources and on both long and short timescales. Since the limiting meteor magnitude ultimately gets used to calculate the limiting meteor mass for a given data set, it is important to have an understanding of these factors and to monitor how they change throughout the night, as a 0.5 magnitude uncertainty in limiting magnitude translates to an uncertainty in limiting mass by a factor of two.

  15. Absolute and relative bioavailability of oral acetaminophen preparations.

    PubMed

    Ameer, B; Divoll, M; Abernethy, D R; Greenblatt, D J; Shargel, L

    1983-08-01

    Eighteen healthy volunteers received single 650-mg doses of acetaminophen by 5-min intravenous infusion, in tablet form by mouth in the fasting state, and in elixir form orally in the fasting state in a three-way crossover study. An additional eight subjects received two 325-mg tablets from two commercial vendors in a randomized crossover fashion. Concentrations of acetaminophen in multiple plasma samples collected during the 12-hr period after each dose were determined by high-performance liquid chromatography. Following a lag time averaging 3-4 min, absorption of oral acetaminophen was first order, with apparent absorption half-life values averaging 8.4 (elixir) and 11.4 (tablet) min. The mean time-to-peak concentration was significantly longer after tablet (0.75 hr) than after elixir (0.48 hr) administration. Peak plasma concentrations and elimination half-lives were similar following both preparations. Absolute systemic availability of the elixir (87%) was significantly greater than for the tablets (79%). Two commercially available tablet formulations did not differ significantly in peak plasma concentrations, time-to-peak, or total area under the plasma concentration curve and therefore were judged to be bioequivalent. PMID:6688635
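    The absorption and elimination behaviour described here is the classic one-compartment oral model (first-order absorption after a lag, first-order elimination). The sketch below reproduces that model with the abstract's lag time, tablet absorption half-life, and 79% tablet bioavailability, plus an assumed elimination half-life and volume of distribution, to show how Tmax and Cmax fall out; it is an illustration, not the study's analysis.

```python
import numpy as np

def oral_concentration(t_h, dose_mg, F, ka_per_h, ke_per_h, Vd_L, tlag_h):
    """One-compartment model with first-order absorption after a lag time
    (Bateman equation)."""
    t = np.clip(np.asarray(t_h, dtype=float) - tlag_h, 0.0, None)
    return (F * dose_mg * ka_per_h / (Vd_L * (ka_per_h - ke_per_h))
            * (np.exp(-ke_per_h * t) - np.exp(-ka_per_h * t)))

t = np.linspace(0.0, 12.0, 1000)
ka = np.log(2) / (11.4 / 60.0)   # from the 11.4-min tablet absorption half-life
ke = np.log(2) / 2.5             # assumed ~2.5-h elimination half-life
c = oral_concentration(t, dose_mg=650, F=0.79, ka_per_h=ka, ke_per_h=ke,
                       Vd_L=60.0, tlag_h=0.06)   # Vd and lag are assumptions
i = int(np.argmax(c))
print(f"Tmax = {t[i]:.2f} h, Cmax = {c[i]:.2f} mg/L")
```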

  16. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data: A New Approach for Predicting Earthquake Magnitudes from Fault Segment Lengths

    NASA Astrophysics Data System (ADS)

    Carpenter, N. S.; Payne, S. J.; Schafer, A. L.

    2011-12-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range faults in the Intermountain Seismic Belt, U.S.A. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths, Lseg, where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements, D, along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e. M ~ Lseg, should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating Lseg with surface rupture length, SRL. Many large historical events yielded secondary and/or sympathetic faulting (e.g. 1983 Borah Peak, Idaho earthquake) which are included in the measurement of SRL and used to derive empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude and the M ~ Lseg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude, Mw, and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ Lseg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for

  17. Absolute optical instruments without spherical symmetry

    NASA Astrophysics Data System (ADS)

    Tyc, Tomáš; Dao, H. L.; Danner, Aaron J.

    2015-11-01

    Until now, the known set of absolute optical instruments has been limited to those containing high levels of symmetry. Here, we demonstrate a method of mathematically constructing refractive index profiles that result in asymmetric absolute optical instruments. The method is based on the analogy between geometrical optics and classical mechanics and employs Lagrangians that separate in Cartesian coordinates. In addition, our method can be used to construct the index profiles of most previously known absolute optical instruments, as well as infinitely many different ones.

  18. Hubbert's Peak: A Physicist's View

    NASA Astrophysics Data System (ADS)

    McDonald, Richard

    2011-11-01

    Oil and its by-products, as used in manufacturing, agriculture, and transportation, are the lifeblood of today's 7 billion-person population and our $65 trillion world economy. Despite this importance, estimates of future oil production seem dominated by wishful thinking rather than quantitative analysis. Better studies are needed. In 1956, Dr. M. King Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Thus, the peak of oil production is referred to as "Hubbert's Peak." Prof. Al Bartlett extended this work in publications and lectures on population and oil. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. This paper extends this line of work to include analyses of individual countries, inclusion of multiple Gaussian peaks, and analysis of reserves data. While this is not strictly a predictive theory, we will demonstrate a "closed" story connecting production, oil-in-place, and reserves. This gives us the "most likely" estimate of future oil availability. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
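    Hubbert-style analyses fit a logistic curve to cumulative production, so that the production rate is a bell-shaped peak whose area equals the ultimately recoverable resource. A minimal sketch of that calculation is below; the URR, growth rate, and peak year are hypothetical placeholders, and the paper's extension to sums of multiple Gaussian peaks would simply add terms.

```python
import numpy as np

def hubbert_rate(t, urr, k, t_peak):
    """Annual production for a logistic (Hubbert) cumulative-production curve;
    urr is the ultimately recoverable resource."""
    q = urr / (1.0 + np.exp(-k * (t - t_peak)))   # cumulative production
    return k * q * (1.0 - q / urr)                # production rate dQ/dt

years = np.arange(1900, 2101)
# Hypothetical placeholder parameters, not an actual estimate.
rate = hubbert_rate(years, urr=2.2e12, k=0.05, t_peak=2010)
print("peak year:", years[np.argmax(rate)], "- peak rate: %.2e bbl/yr" % rate.max())
```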

  19. The effect of background galaxy contamination on the absolute magnitude and light curve speed class of type Ia supernovae

    NASA Technical Reports Server (NTRS)

    Boisseau, John R.; Wheeler, J. Craig

    1991-01-01

    Observational data are presented in support of the hypothesis that background galaxy contamination is present in the photometric data of Type Ia supernovae and that this effect can account for the observed dispersion in the light-curve speeds of most Type Ia supernovae. The implication is that the observed dispersion in beta is artificial and that most Type Ia supernovae have nearly homogeneous light curves. The result supports the notion that Type Ia supernovae are good standard candles.

  20. Is an absolute level of cortical beta suppression required for proper movement? Magnetoencephalographic evidence from healthy aging.

    PubMed

    Heinrichs-Graham, Elizabeth; Wilson, Tony W

    2016-07-01

    Previous research has connected a specific pattern of beta oscillatory activity to proper motor execution, but no study to date has directly examined how resting beta levels affect motor-related beta oscillatory activity in the motor cortex. Understanding this relationship is imperative to determining the basic mechanisms of motor control, as well as the impact of pathological beta oscillations on movement execution. In the current study, we used magnetoencephalography (MEG) and a complex movement paradigm to quantify resting beta activity and movement-related beta oscillations in the context of healthy aging. We chose healthy aging as a model because preliminary evidence suggests that beta activity is elevated in older adults, and thus by examining older and younger adults we were able to naturally vary resting beta levels. To this end, healthy younger and older participants were recorded during motor performance and at rest. Using beamforming, we imaged the peri-movement beta event-related desynchronization (ERD) and extracted virtual sensors from the peak voxels, which enabled absolute and relative beta power to be assessed. Interestingly, absolute beta power during the pre-movement baseline was much stronger in older relative to younger adults, and older adults also exhibited proportionally large beta desynchronization (ERD) responses during motor planning and execution compared to younger adults. Crucially, we found a significant relationship between spontaneous (resting) beta power and beta ERD magnitude in both primary motor cortices, above and beyond the effects of age. A similar link was found between beta ERD magnitude and movement duration. These findings suggest a direct linkage between beta reduction during movement and spontaneous activity in the motor cortex, such that as spontaneous beta power increases, a greater reduction in beta activity is required to execute movement. We propose that, on an individual level, the primary motor cortices have an

  1. Magnitudes and timescales of total solar irradiance variability

    NASA Astrophysics Data System (ADS)

    Kopp, Greg

    2016-07-01

    The Sun's net radiative output varies on timescales of minutes to gigayears. Direct measurements of the total solar irradiance (TSI) show changes in the spatially- and spectrally-integrated radiant energy on timescales as short as minutes to as long as a solar cycle. Variations of ~0.01% over a few minutes are caused by the ever-present superposition of convection and oscillations with very large solar flares on rare occasion causing slightly-larger measurable signals. On timescales of days to weeks, changing photospheric magnetic activity affects solar brightness at the ~0.1% level. The 11-year solar cycle shows variations of comparable magnitude with irradiances peaking near solar maximum. Secular variations are more difficult to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Historical reconstructions of the Sun's irradiance based on indicators of solar-surface magnetic activity, such as sunspots, faculae, and cosmogenic isotope records, suggest solar brightness changes over decades to millennia, although the magnitudes of these variations have high uncertainties due to the indirect historical records on which they rely. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities. In this manuscript I summarize the Sun's variability magnitudes over different temporal regimes and discuss the irradiance record's relevance for solar and climate studies as well as for detections of exo-solar planets transiting Sun-like stars.

  2. Stochastic acceleration in peaked spectrum

    SciTech Connect

    Zasenko, V.; Zagorodny, A.; Weiland, J.

    2005-06-15

    Diffusion in velocity space of test particles subject to external random electric fields with spectra varying from low-intensity and broad to high-intensity and narrow (peaked) is considered. It is shown that, to achieve consistency between simulation and the prediction of the microscopic model, which is reduced to a Fokker-Planck-type equation, it is necessary in the case of a peaked spectrum to account for the temporal variation of the diffusion coefficient occurring in the early stage. An analytical approximation for the solution of the Fokker-Planck equation with time- and velocity-dependent diffusion coefficients is proposed.

  3. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigations) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.
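    A continuous-wavelet-transform peak finder in the same spirit is sketched below using SciPy's find_peaks_cwt on a noisy sum of Lorentzians. Note that SciPy's default Ricker wavelet stands in for the paper's custom Lorentzian-shaped biorthogonal filters, so this is an analogous generic approach, not the authors' filter bank.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def lorentzian(x, x0, gamma):
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

x = np.linspace(0.0, 100.0, 2000)
rng = np.random.default_rng(0)
signal = (lorentzian(x, 30.0, 2.0) + 0.6 * lorentzian(x, 62.0, 3.0)
          + rng.normal(0.0, 0.05, x.size))

# find_peaks_cwt correlates the data with wavelets over a range of widths
# (in samples); the detected indices are converted back to x positions.
peak_idx = find_peaks_cwt(signal, widths=np.arange(10, 80))
print(x[peak_idx])
```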

  4. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

    A five-step procedure is provided to help students make the assignment of absolute configuration less bothersome. Examples for both single (2-butanol) and multi-chiral carbon (3-chloro-2-butanol) molecules are included. (JN)

  5. WEAK-LENSING PEAK FINDING: ESTIMATORS, FILTERS, AND BIASES

    SciTech Connect

    Schmidt, Fabian

    2011-07-10

    Large catalogs of shear-selected peaks have recently become a reality. In order to properly interpret the abundance and properties of these peaks, it is necessary to take into account the effects of the clustering of source galaxies, among themselves and with the lens. In addition, the preferred selection of magnified galaxies in a flux- and size-limited sample leads to fluctuations in the apparent source density that correlate with the lensing field. In this paper, we investigate these issues for two different choices of shear estimators that are commonly in use today: globally normalized and locally normalized estimators. While in principle equivalent, in practice these estimators respond differently to systematic effects such as magnification and cluster member dilution. Furthermore, we find that the answer to the question of which estimator is statistically superior depends on the specific shape of the filter employed for peak finding; suboptimal choices of the estimator+filter combination can result in a suppression of the number of high peaks by orders of magnitude. Magnification and size bias generally act to increase the signal-to-noise ν of shear peaks; for high peaks the boost can be as large as Δν ~ 1-2. Due to the steepness of the peak abundance function, these boosts can result in a significant increase in the observed abundance of shear peaks. A companion paper investigates these same issues within the context of stacked weak-lensing mass estimates.

  6. A statistical measure of financial crises magnitude

    NASA Astrophysics Data System (ADS)

    Negrea, Bogdan

    2014-03-01

    This paper postulates the concept of financial market energy and provides a statistical measure of the financial market crisis magnitude based on an analogy between earthquakes and market crises. The financial energy released by the market is expressed in terms of trading volume and stock market index returns. A financial “earthquake” occurs if the financial energy released by the market exceeds the estimated threshold of market energy called critical energy. Similar to the Richter scale which is used in seismology in order to measure the magnitude of an earthquake, we propose a financial Gutenberg-Richter relation in order to capture the crisis magnitude and we show that the statistical pattern of the financial market crash is given by two statistical regimes, namely Pareto and Wakeby distributions.

  7. Measuring Your Peak Flow Rate

    MedlinePlus

    ... meter. Proper cleaning with mild detergent in hot water will keep your peak flow meter working accurately and may keep you healthier.

  8. Peak Stress Testing Protocol Framework

    EPA Science Inventory

    Treatment of peak flows during wet weather is a common challenge across the country for municipal wastewater utilities with separate and/or combined sewer systems. Increases in wastewater flow resulting from infiltration and inflow (I/I) during wet weather events can result in op...

  9. Hubbert's Peak -- A Physicist's View

    NASA Astrophysics Data System (ADS)

    McDonald, Richard

    2011-04-01

    Oil, as used in agriculture and transportation, is the lifeblood of modern society. It is finite in quantity and will someday be exhausted. In 1956, Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Bartlett extended this work in publications and lectures on the finite nature of oil and its production peak and depletion. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. Central to these analyses are estimates of total "oil in place" obtained from engineering studies of oil reservoirs, as this quantity determines the area under Hubbert's Peak. Knowing the production history and the total oil in place allows us to make estimates of reserves, and therefore future oil availability. We will then examine reserves data for various countries, in particular OPEC countries, and see if these data tell us anything about the future availability of oil. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.

  10. Global flood hazard mapping using statistical peak flow estimates

    NASA Astrophysics Data System (ADS)

    Herold, C.; Mouton, F.

    2011-01-01

    Our aim is to produce a world map of flooded areas for a 100-year return period, using a method based on peak flow estimates for large rivers derived from mean monthly discharge time series. The map is therefore meant to represent flooding that affects large river floodplains, but not events triggered by specific conditions such as coastal or flash flooding. We first generate for each basin a set of hydromorphometric, land cover, and climatic variables. Where a discharge record station is available at the basin outlet, we base the hundred-year peak flow estimate on the corresponding time series. Peak flow magnitude for basin outlets without gauging stations is estimated by statistical means, performing several regressions on the basin variables. These peak flow estimates enable the computation of the corresponding flooded areas using hydrologic GIS processing on a digital elevation model.

  11. Complete identification of the Parkes half-Jansky sample of GHz peaked spectrum radio galaxies

    NASA Astrophysics Data System (ADS)

    de Vries, N.; Snellen, I. A. G.; Schilizzi, R. T.; Lehnert, M. D.; Bremer, M. N.

    2007-03-01

    Context: Gigahertz Peaked Spectrum (GPS) radio galaxies are generally thought to be the young counterparts of classical extended radio sources. Statistically complete samples of GPS sources are vital for studying the early evolution of radio-loud AGN and the trigger of their nuclear activity. The "Parkes half-Jansky" sample of GPS radio galaxies is such a sample, representing the southern counterpart of the 1998 Stanghellini sample of bright GPS sources. Aims: As a first step of the investigation of the sample, the host galaxies need to be identified and their redshifts determined. Methods: Deep R-band VLT-FORS1 and ESO 3.6 m EFOSC II images and long slit spectra have been taken for the unidentified sources in the sample. Results: We have identified all twelve previously unknown host galaxies of the radio sources in the sample. Eleven have host galaxies in the range 21.0 < RC < 23.0, while one object, PKS J0210+0419, is identified in the near infrared with a galaxy with Ks = 18.3. The redshifts of 21 host galaxies have been determined in the range 0.474 < z < 1.539, bringing the total number of redshifts to 39 (80%). Analysis of the absolute magnitudes of the GPS host galaxies shows that at z > 1 they are on average a magnitude fainter than classical 3C radio galaxies, as found in earlier studies. However their restframe UV luminosities indicate that there is an extra light contribution from the AGN, or from a population of young stars. Based on observations collected at the European Southern Observatory Very Large Telescope, Paranal, Chile (ESO prog. ID No. 073.B-0289(B)) and the European Southern Observatory 3.6 m Telescope, La Silla, Chile (prog. ID No. 073.B-0289(A)). Appendices are only available in electronic form at http://www.aanda.org

  12. Delta Scorpii unusual brightening to first magnitude

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    2016-01-01

    The Be star delta Scorpii, with a range of variability between 2.35 and 1.65 in visible light, is undergoing an unusual brightening to magnitude mV = 0.8, as measured on 31 Jan 2016 at 3:56 UT and 5:36 UT from Lanciano, Italy.

  13. On the statistical analysis of maximal magnitude

    NASA Astrophysics Data System (ADS)

    Holschneider, M.; Zöller, G.; Hainzl, S.

    2012-04-01

    We show how the maximum expected magnitude within a time horizon [0, T] may be estimated from earthquake catalog data within the context of truncated Gutenberg-Richter statistics. We present the results in a frequentist and in a Bayesian setting. Instead of deriving point estimates of this parameter and reporting its performance in terms of expectation value and variance, we focus on the calculation of confidence intervals based on an imposed level of confidence α. We present an estimate of the maximum magnitude within an observational time interval T in the future, given a complete earthquake catalog for a time period Tc in the past and optionally some paleoseismic events. We argue that from a statistical point of view the maximum magnitude in a time window is a reasonable parameter for probabilistic seismic hazard assessment, while the commonly used maximum possible magnitude for all times does almost certainly not allow the calculation of useful (i.e. non-trivial) confidence intervals. In the context of an unbounded GR law we show that Jeffreys' invariant prior distribution yields normalizable posteriors. The predictive distribution based on this prior is explicitly computed.

  14. Lamp modulator provides signal magnitude indication

    NASA Technical Reports Server (NTRS)

    Zeman, J. R.

    1970-01-01

    Lamp modulator provides visible indication of presence and magnitude of an audio signal carrying voice or data. It can be made to reflect signal variations of up to 32 decibels. Lamp life is increased by use of a bypass resistor to prevent filament failure.

  15. Global survey of star clusters in the Milky Way. V. Integrated JHKS magnitudes and luminosity functions

    NASA Astrophysics Data System (ADS)

    Kharchenko, N. V.; Piskunov, A. E.; Schilbach, E.; Röser, S.; Scholz, R.-D.

    2016-01-01

    Aims: In this study we determine absolute integrated magnitudes in the J,H,KS passbands for Galactic star clusters from the Milky Way Star Clusters survey. In the wide solar neighbourhood, we derive the open cluster luminosity function (CLF) for different cluster ages. Methods: The integrated magnitudes are based on uniform cluster membership derived from the 2MAst catalogue (a merger of the PPMXL and 2MASS) and are computed by summing up the individual luminosities of the most reliable cluster members. We discuss two different techniques of constructing the CLF, a magnitude-limited and a distance-limited approach. Results: Absolute J,H,KS integrated magnitudes are obtained for 3061 open clusters, and 147 globular clusters. The integrated magnitudes and colours are accurate to about 0.8 and 0.2 mag, respectively. Based on the sample of open clusters we construct the general cluster luminosity function in the solar neighbourhood in the three passbands. In each passband the CLF shows a linear part covering a range of 6 to 7 mag at the bright end. The CLFs reach their maxima at an absolute magnitude of -2 mag, then drop by one order of magnitude. During cluster evolution, the CLF changes its slope within tight, but well-defined limits. The CLF of the youngest clusters has a steep slope of about 0.4 at bright magnitudes and a quasi-flat portion for faint clusters. For the oldest population, we find a flatter function with a slope of about 0.2. The CLFs at Galactocentric radii smaller than that of the solar circle differ from those in the direction of the Galactic anti-centre. The CLF in the inner area is flatter and the cluster surface density higher than the local one. In contrast, the CLF is somewhat steeper than the local one in the outer disk, and the surface density is lower. The corresponding catalogue of integrated magnitudes is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc
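    The integrated magnitudes described here come from summing the luminosities of the most reliable cluster members and converting back to a magnitude. A minimal version of that bookkeeping is sketched below; the member magnitudes are a toy example, not survey data.

```python
import numpy as np

def integrated_magnitude(member_abs_mags):
    """Absolute integrated magnitude of a cluster from the absolute
    magnitudes of its members: sum the luminosities (fluxes at a fixed
    distance), then convert back to a magnitude."""
    m = np.asarray(member_abs_mags, dtype=float)
    total_flux = np.sum(10.0 ** (-0.4 * m))
    return -2.5 * np.log10(total_flux)

# Toy cluster: a few bright members dominate the integrated light.
print(integrated_magnitude([-1.5, -0.8, 0.2, 1.0, 1.7, 2.3]))   # ~ -2.2
```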

  16. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed. PMID:19831037

  17. Maximum magnitude in the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry

    2014-05-01

    Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants that should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories, and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. In a next step, we will compute hazard maps for different return periods based on the
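    The simulation idea, generating synthetic Gutenberg-Richter catalogs and recording the largest magnitude seen in a window of given length, can be sketched in a few lines. The b-value, event rate, and window length below are hypothetical, not the Lower Rhine Graben values.

```python
import numpy as np

def synthetic_max_magnitudes(b, m_min, rate_per_yr, window_yr, n_sims, seed=0):
    """Maximum magnitude in each of n_sims synthetic Gutenberg-Richter
    catalogs of length window_yr (unbounded GR law, Poisson occurrence)."""
    rng = np.random.default_rng(seed)
    maxima = []
    for _ in range(n_sims):
        n = rng.poisson(rate_per_yr * window_yr)
        if n == 0:
            continue
        u = 1.0 - rng.random(n)                 # uniform on (0, 1]
        mags = m_min - np.log10(u) / b          # inverse-CDF sampling
        maxima.append(mags.max())
    return np.array(maxima)

# Hypothetical parameters (not the Lower Rhine Graben values).
mx = synthetic_max_magnitudes(b=0.9, m_min=2.0, rate_per_yr=2.0,
                              window_yr=400, n_sims=2000)
print("median of simulated maximum magnitudes: %.1f" % np.median(mx))
```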

  18. Peak expiratory flow at increased barometric pressure: comparison of peak flow meters and volumetric spirometer.

    PubMed

    Thomas, P S; Ng, C; Bennett, M

    2000-01-01

    Increasing numbers of patients are receiving hyperbaric oxygen therapy as an intensive care treatment, some of whom have pre-existing airway obstruction. Spirometers are the ideal instruments for measuring airway obstruction, but peak flow meters are useful and versatile devices. The behaviour of both types of device was therefore studied in a hyperbaric unit under conditions of increased pressure. It is important to have a non-electrical indicator of airway obstruction, to minimize the fire risk in the hyperoxic environment. The hypothesis was tested that, assuming that dynamic resistance is unchanged, both the Wright's standard and mini-peak flow meters would over-read peak expiratory flow (PEF) under increased pressure when compared with a volumetric spirometer, as the latter is unaffected by air density. It was postulated that a correction factor could be derived so that PEF meters could be used in this setting. Seven normal subjects performed volume-dependent spirometry to derive PEF, and manoeuvres using both standard and mini PEF meters at sea level, under hyperbaric conditions at 303, 253 and 152 kPa (3, 2.5 and 1.5 atmospheres respectively; 1 atmosphere absolute = 101.08 kPa), and again at sea level. There was a progressive and significant decline in PEF with increasing pressure as measured by the spirometer (69.46 ± 0.8% baseline at 303 kPa compared with 101 kPa), while the PEF meters showed a progressive increase in their readings (an increase of 7.86 ± 1.69% at 303 kPa with the mini PEF meter). Using these data points, a correction factor was derived which allows appropriate values to be calculated from the Wright's meter readings under these conditions. PMID:10600666

  19. METHOD OF PEAK CURRENT MEASUREMENT

    DOEpatents

    Baker, G.E.

    1959-01-20

    The measurement and recording of peak electrical currents are described, and a method for utilizing the magnetic field of the current to erase a portion of an alternating constant-frequency and constant-amplitude signal from a magnetic medium, such as a magnetic tape, is presented. A portion of the flux from the current-carrying conductor is concentrated into a magnetic path of defined area on the tape. After the current has been recorded, the tape is played back. The amplitude of the signal from the portion of the tape immediately adjacent the defined flux area and the amplitude of the signal from the portion of the tape within the area are compared with the amplitude of the signal from an unerased portion of the tape to determine the percentage of signal erasure, and thereby obtain the peak value of currents flowing in the conductor.

  20. Probability of inducing given-magnitude earthquakes by perturbing finite volumes of rocks

    NASA Astrophysics Data System (ADS)

    Shapiro, Serge A.; Krüger, Oliver S.; Dinske, Carsten

    2013-07-01

    Fluid-induced seismicity results from an activation of finite rock volumes. The finiteness of perturbed volumes influences frequency-magnitude statistics. Previously we observed that induced large-magnitude events at geothermal and hydrocarbon reservoirs are frequently underrepresented in comparison with the Gutenberg-Richter law. This is an indication that the events are more probable on rupture surfaces contained within the stimulated volume. Here we theoretically and numerically analyze this effect. We consider different possible scenarios of event triggering: rupture surfaces located completely within or intersecting only the stimulated volume. We approximate the stimulated volume by an ellipsoid or cuboid and derive the statistics of induced events from the statistics of random thin flat discs modeling rupture surfaces. We derive lower and upper bounds of the probability to induce a given-magnitude event. The bounds depend strongly on the minimum principal axis of the stimulated volume. We compare the bounds with data on seismicity induced by fluid injections in boreholes. Fitting the bounds to the frequency-magnitude distribution provides estimates of a largest expected induced magnitude and a characteristic stress drop, in addition to improved estimates of the Gutenberg-Richter a and b parameters. The observed frequency-magnitude curves seem to follow mainly the lower bound. However, in some case studies there are individual large-magnitude events clearly deviating from this statistic. We propose that such events can be interpreted as triggered ones, in contrast to the absolute majority of the induced events following the lower bound.

  1. SPANISH PEAKS PRIMITIVE AREA, MONTANA.

    USGS Publications Warehouse

    Calkins, James A.; Pattee, Eldon C.

    1984-01-01

    A mineral survey of the Spanish Peaks Primitive Area, Montana, disclosed a small low-grade deposit of demonstrated chromite and asbestos resources. The chances for discovery of additional chrome resources are uncertain and the area has little promise for the occurrence of other mineral or energy resources. A reevaluation, sampling at depth, and testing for possible extensions of the Table Mountain asbestos and chromium deposit should be undertaken in the light of recent interpretations regarding its geologic setting.

  2. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official sciences, especially the natural sciences, respect in their research the principle of methodic naturalism, i.e., they consider all phenomena as entirely natural and therefore never adduce or cite supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e., the entire Nature as a Whole, that justifies the scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered as the own scientifically justified Natural Absolute of Science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e., most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, as well as all other sentient and conscious individuals. At the turn of the 20th century, science began to look for a theory of everything, a final theory, a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will step by step supplant the traditional supernatural personal Absolute.

  3. Absolute Radiation Measurements in Earth and Mars Entry Conditions

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.

    2014-01-01

    This paper reports on the measurement of radiative heating for shock heated flows which simulate conditions for Mars and Earth entries. Radiation measurements are made in NASA Ames' Electric Arc Shock Tube at velocities from 3-15 km/s in mixtures of N2/O2 and CO2/N2/Ar. The technique and limitations of the measurement are summarized in some detail. The absolute measurements will be discussed in regards to spectral features, radiative magnitude and spatiotemporal trends. Via analysis of spectra it is possible to extract properties such as electron density, and rotational, vibrational and electronic temperatures. Relaxation behind the shock is analyzed to determine how these properties relax to equilibrium and are used to validate and refine kinetic models. It is found that, for some conditions, some of these values diverge from non-equilibrium indicating a lack of similarity between the shock tube and free flight conditions. Possible reasons for this are discussed.

  4. Absolute properties of the eclipsing binary star IM Persei

    SciTech Connect

    Lacy, Claud H. Sandberg; Torres, Guillermo; Fekel, Francis C.; Muterspaugh, Matthew W.; Southworth, John

    2015-01-01

    IM Per is a detached A7 eccentric eclipsing binary star. We have obtained extensive measurements of the light curve (28,225 differential magnitude observations) and radial velocity curve (81 spectroscopic observations) which allow us to fit orbits and determine the absolute properties of the components very accurately: masses of 1.7831 ± 0.0094 and 1.7741 ± 0.0097 solar masses, and radii of 2.409 ± 0.018 and 2.366 ± 0.017 solar radii. The orbital period is 2.25422694(15) days and the eccentricity is 0.0473(26). A faint third component was detected in the analysis of the light curves, and also directly observed in the spectra. The observed rate of apsidal motion is consistent with theory (U = 151.4 ± 8.4 year). We determine a distance to the system of 566 ± 46 pc.

  5. Reduction in peak oxygen uptake after prolonged bed rest

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. E.; Kozlowski, S.

    1982-01-01

    The hypothesis that the magnitude of the reduction in peak oxygen uptake (VO2) after bed rest is directly proportional to the level of pre-bed rest peak VO2 is tested. Complete pre- and post-bed rest working capacity and body weight data were obtained from studies involving 24 men (19-24 years old) and 8 women (23-34 years old) who underwent bed rest for 14-20 days with no remedial treatments. Results of regression analyses of the percent change in post-bed rest peak VO2 on pre-bed rest peak VO2 with 32 subjects show correlation coefficients of -0.03 (NS) for data expressed in l/min and -0.17 for data expressed in ml/min-kg. In addition, significant correlations are found that support the hypothesis only when peak VO2 data are analyzed separately from studies that utilized the cycle ergometer, particularly with subjects in the supine position, as opposed to data obtained from treadmill peak VO2 tests. It is concluded that orthostatic factors, associated with the upright body position and relatively high levels of physical fitness from endurance training, appear to increase the variability of pre- and particularly post-bed rest peak VO2 data, which would lead to rejection of the hypothesis.

  6. Absolute versus relative ascertainment of pedophilia in men.

    PubMed

    Blanchard, Ray; Kuban, Michael E; Blak, Thomas; Cantor, James M; Klassen, Philip E; Dickey, Robert

    2009-12-01

    There are at least two different criteria for assessing pedophilia in men: absolute ascertainment (their sexual interest in children is intense) and relative ascertainment (their sexual interest in children is greater than their interest in adults). The American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, 3rd edition (DSM-III) used relative ascertainment in its diagnostic criteria for pedophilia; this was abandoned and replaced by absolute ascertainment in the DSM-III-R and all subsequent editions. The present study was conducted to demonstrate the continuing need for relative ascertainment, particularly in the laboratory assessment of pedophilia. A total of 402 heterosexual men were selected from a database of patients referred to a specialty clinic. These had undergone phallometric testing, a psychophysiological procedure in which their penile blood volume was monitored while they were presented with a standardized set of laboratory stimuli depicting male and female children, pubescents, and adults. The 130 men selected for the Teleiophilic Profile group responded substantially to prepubescent girls but even more to adult women; the 272 men selected for the Pedophilic Profile group responded weakly to prepubescent girls but even less to adult women. In terms of absolute magnitude, every patient in the Pedophilic Profile group had a lesser penile response to prepubescent girls than every patient in the Teleiophilic Profile group. Nevertheless, the Pedophilic Profile group had a significantly greater number of known sexual offenses against prepubescent girls, indicating that they contained a higher proportion of true pedophiles. These results dramatically demonstrate the utility, or perhaps necessity, of relative ascertainment in the laboratory assessment of erotic age-preference. PMID:19901237

  7. Tracing multiple scattering patterns in absolute (e,2e) cross sections for H2 and He over a 4π solid angle

    SciTech Connect

    Ren, X.; Senftleben, A.; Pflueger, T.; Dorn, A.; Ullrich, J.; Colgan, J.; Pindzola, M. S.; Al-Hagan, O.; Madison, D. H.; Bray, I.; Fursa, D. V.

    2010-09-15

    Absolutely normalized (e,2e) measurements for H2 and He covering the full solid angle of one ejected electron are presented for 16 eV sum energy of both final state continuum electrons. For both targets rich cross-section structures in addition to the binary and recoil lobes are identified and studied as a function of the fixed electron's emission angle and the energy sharing among both electrons. For H2 their behavior is consistent with multiple scattering of the projectile as discussed before [Al-Hagan et al., Nature Phys. 5, 59 (2009)]. For He the binary and recoil lobes are significantly larger than for H2 and partly cover the multiple scattering structures. To highlight these patterns we propose an alternative representation of the triply differential cross section. Nonperturbative calculations are in good agreement with the He results and show discrepancies for H2 in the recoil peak region. For H2 a perturbative approach reasonably reproduces the cross-section shape but deviates in absolute magnitude.

  8. Analysis of the magnitude and frequency of floods in Colorado

    USGS Publications Warehouse

    Vaill, J.E.

    2000-01-01

    Regionalized flood-frequency relations need to be updated on a regular basis (about every 10 years). The latest study on regionalized flood-frequency equations for Colorado used data collected through water year 1981. A study was begun in 1994 by the U.S. Geological Survey, in cooperation with the Colorado Department of Transportation and the Bureau of Land Management, to include streamflow data collected since water year 1981 in the regionalized flood-frequency relations for Colorado. Longer periods of streamflow data and improved statistical analysis methods were used to define regression relations for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for unregulated streams in Colorado. The regression relations can be applied to sites of interest on gaged and ungaged streams. Ordinary least-squares regression was used to determine the best explanatory basin or climatic characteristic variables for each peak-discharge characteristic, and generalized least-squares regression was used to determine the best regression relation. Drainage-basin area, mean annual precipitation, and mean basin slope were determined to be statistically significant explanatory variables in the regression relations. Separate regression relations were developed for each of five distinct hydrologic regions in the State. The mean standard errors of estimate and average standard error of prediction associated with the regression relations generally ranged from 40 to 80 percent, except for one hydrologic region where the errors ranged from about 200 to 300 percent. Methods are presented for determining the magnitude of peak discharges for sites located at gaging stations, for sites located near gaging stations on the same stream when the ratio of drainage-basin areas is between about 0.5 and 1.5, and for sites where the drainage basin crosses a flood-region boundary or a State boundary. Methods are presented for determining the magnitude of peak
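    The report's regression relations take the form of a log-linear model in drainage area, mean annual precipitation, and mean basin slope. The sketch below fits such a relation by ordinary least squares to a handful of hypothetical gauge records (the report itself uses generalized least squares and real station data), just to show the shape of the estimating equation.

```python
import numpy as np

# Hypothetical gauge data: 100-year peak discharge Q100 (ft^3/s), drainage
# area A (mi^2), mean annual precipitation P (in), mean basin slope S (ft/ft).
Q100 = np.array([1200.0, 450.0, 3300.0, 800.0, 150.0, 2100.0])
A    = np.array([  85.0,  30.0,  240.0,  60.0,  12.0,  150.0])
P    = np.array([  18.0,  14.0,   25.0,  20.0,  11.0,   22.0])
S    = np.array([ 0.040, 0.060,  0.030, 0.050, 0.080,  0.035])

# Ordinary least squares on log10-transformed variables gives a relation of
# the form log10(Q100) = a + b*log10(A) + c*log10(P) + d*log10(S).
X = np.column_stack([np.ones_like(A), np.log10(A), np.log10(P), np.log10(S)])
coef, *_ = np.linalg.lstsq(X, np.log10(Q100), rcond=None)
a, b, c, d = coef
print("Q100 ~ 10^%.2f * A^%.2f * P^%.2f * S^%.2f" % (a, b, c, d))
```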

  9. Absolute and relative quantification of RNA modifications via biosynthetic isotopomers

    PubMed Central

    Kellner, Stefanie; Ochel, Antonia; Thüring, Kathrin; Spenkuch, Felix; Neumann, Jennifer; Sharma, Sunny; Entian, Karl-Dieter; Schneider, Dirk; Helm, Mark

    2014-01-01

    In the resurging field of RNA modifications, quantification is a bottleneck blocking many exciting avenues. With currently over 150 known nucleoside alterations, detection and quantification methods must encompass multiple modifications for a comprehensive profile. LC–MS/MS approaches offer a perspective for comprehensive parallel quantification of all the various modifications found in total RNA of a given organism. By feeding 13C-glucose as sole carbon source, we have generated a stable isotope-labeled internal standard (SIL-IS) for bacterial RNA, which facilitates relative comparison of all modifications. While conventional SIL-IS approaches require the chemical synthesis of single modifications in weighable quantities, this SIL-IS consists of a nucleoside mixture covering all detectable RNA modifications of Escherichia coli, yet in small and initially unknown quantities. For absolute in addition to relative quantification, those quantities were determined by a combination of external calibration and sample spiking of the biosynthetic SIL-IS. For each nucleoside, we thus obtained a very robust relative response factor, which permits direct conversion of the MS signal to absolute amounts of substance. The application of the validated SIL-IS allowed highly precise quantification with standard deviations <2% during a 12-week period, and a linear dynamic range that was extended by two orders of magnitude. PMID:25129236
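    Once the relative response factor of a nucleoside has been calibrated, converting an MS signal to an absolute amount reduces to a ratio against the co-injected SIL-IS. A minimal sketch of that conversion is below; the exact form of the correction and the example numbers are assumptions for illustration, not the paper's calibration.

```python
def absolute_amount(area_sample, area_sil, amount_sil_fmol, response_factor=1.0):
    """Convert an MS peak area to an absolute amount using a co-injected
    stable-isotope-labeled internal standard (SIL-IS):
    amount = (area_sample / area_SIL) * amount_SIL / relative_response_factor."""
    return area_sample / area_sil * amount_sil_fmol / response_factor

# Example: modified-nucleoside peak area 4.2e5, SIL-IS peak area 3.5e5,
# 50 fmol of SIL-IS spiked, response factor 0.97 (all hypothetical).
print(absolute_amount(4.2e5, 3.5e5, 50.0, response_factor=0.97))
```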

  10. Sensitivity to relative reinforcer rate in concurrent schedules: independence from relative and absolute reinforcer duration.

    PubMed Central

    McLean, A P; Blampied, N M

    2001-01-01

    Twelve pigeons responded on two keys under concurrent variable-interval (VI) schedules. Over several series of conditions, relative and absolute magnitudes of reinforcement were varied. Within each series, relative rate of reinforcement was varied and sensitivity of behavior ratios to reinforcer-rate ratios was assessed. When responding at both alternatives was maintained by equal-sized small reinforcers, sensitivity to variation in reinforcer-rate ratios was the same as when large reinforcers were used. This result was observed when the overall rate of reinforcement was constant over conditions, and also in another series of concurrent schedules in which one schedule was kept constant at VI 120 s. Similarly, reinforcer magnitude did not affect the rate at which response allocation approached asymptote within a condition. When reinforcer magnitudes differed between the two responses and reinforcer-rate ratios were varied, sensitivity of behavior allocation was unaffected, although response bias favored the schedule that arranged the larger reinforcers. Analysis of absolute response rates on the two keys showed that this invariance of ratio sensitivity to reinforcement occurred despite changes in reinforcement interaction that were observed in absolute response rates on the constant VI 120-s schedule. Response rate on the constant VI 120-s schedule was inversely related to reinforcer rate on the varied key, and the strength of this relation depended on the relative magnitude of reinforcers arranged on the varied key. Independence of sensitivity to reinforcer-rate ratios from relative and absolute reinforcer magnitude is consistent with the relativity and independence assumptions of the matching law. PMID:11256865
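    The sensitivity and bias quantities discussed here come from the generalized matching law, log(B1/B2) = a*log(R1/R2) + log(c), fitted across conditions. A minimal fitting sketch with hypothetical response and reinforcer rates follows.

```python
import numpy as np

# Generalized matching law: log(B1/B2) = a*log(R1/R2) + log(c), where a is
# sensitivity and c is bias. Response and reinforcer rates are hypothetical.
B1 = np.array([62.0, 45.0, 30.0, 18.0, 10.0])   # responses/min, key 1
B2 = np.array([12.0, 20.0, 31.0, 44.0, 58.0])   # responses/min, key 2
R1 = np.array([ 3.0,  2.0,  1.0,  0.5, 0.25])   # reinforcers/min, key 1
R2 = np.array([0.25,  0.5,  1.0,  2.0,  3.0])   # reinforcers/min, key 2

x = np.log10(R1 / R2)
y = np.log10(B1 / B2)
a, log_c = np.polyfit(x, y, 1)                  # slope = sensitivity, intercept = log bias
print("sensitivity a = %.2f, bias log c = %.2f" % (a, log_c))
```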

  11. Remote detection of photoplethysmographic systolic and diastolic peaks using a digital camera.

    PubMed

    McDuff, Daniel; Gontarek, Sarah; Picard, Rosalind W

    2014-12-01

    We present a new method for measuring photoplethysmogram signals remotely using ambient light and a digital camera that allows for accurate recovery of the waveform morphology (from a distance of 3 m). In particular, we show that the peak-to-peak time between the systolic peak and diastolic peak/inflection can be automatically recovered using the second-order derivative of the remotely measured waveform. We compare measurements from the face with those captured using a contact fingertip sensor and show high agreement in peak and interval timings. Furthermore, we show that results can be significantly improved using orange, green, and cyan color channels compared to the traditional red, green, and blue channel combination. The absolute error in interbeat intervals was 26 ms and the absolute error in mean systolic-diastolic peak-to-peak times was 12 ms. The mean systolic-diastolic peak-to-peak times measured using the contact sensor and the camera were highly correlated, ρ = 0.94 (p < 0.001). The results were obtained with a camera frame-rate of only 30 Hz. This technology has significant potential for advancing healthcare. PMID:25073159
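
    As a rough illustration of the second-derivative idea mentioned in the abstract (not the authors' pipeline), the Python sketch below locates two pulse-wave features as local minima of the second derivative of a synthetic waveform sampled at 30 Hz and reports their time separation.

        # Hedged sketch: systolic/diastolic feature timing from the second
        # derivative of a synthetic pulse waveform (stand-in for camera PPG).
        import numpy as np
        from scipy.signal import find_peaks

        fs = 30.0                                  # frame rate (Hz), as in the paper
        t = np.arange(0, 1.0, 1 / fs)              # one cardiac cycle, for illustration
        ppg = (np.exp(-((t - 0.25) / 0.08) ** 2)
               + 0.5 * np.exp(-((t - 0.55) / 0.10) ** 2))

        d2 = np.gradient(np.gradient(ppg, t), t)   # second-order derivative
        # Local minima of the second derivative flag the systolic peak and the
        # diastolic peak/inflection, even when the latter is only a shoulder.
        minima, _ = find_peaks(-d2)
        if len(minima) >= 2:
            dt = t[minima[1]] - t[minima[0]]
            print(f"systolic-diastolic peak-to-peak time: {dt:.3f} s")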

  12. Absolute isotopic abundances of Ti in meteorites

    NASA Astrophysics Data System (ADS)

    Niederer, F. R.; Papanastassiou, D. A.; Wasserburg, G. J.

    1985-03-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using 46Ti/48Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass dependent isotope fractionation in nature. The authors provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components.

  13. Molecular iodine absolute frequencies. Final report

    SciTech Connect

    Sansonetti, C.J.

    1990-06-25

    Fifty specified lines of {sup 127}I{sub 2} were studied by Doppler-free frequency modulation spectroscopy. For each line the classification of the molecular transition was determined, hyperfine components were identified, and one well-resolved component was selected for precise determination of its absolute frequency. In 3 cases, a nearby alternate line was selected for measurement because no well-resolved component was found for the specified line. Absolute frequency determinations were made with an estimated uncertainty of 1.1 MHz by locking a dye laser to the selected hyperfine component and measuring its wave number with a high-precision Fabry-Perot wavemeter. For each line results of the absolute measurement, the line classification, and a Doppler-free spectrum are given.

  14. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. PMID:26478959

  15. Absolute calibration in vivo measurement systems

    SciTech Connect

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs.

  16. Subitizing, Magnitude Representation, and Magnitude Retrieval in Deaf and Hearing Adults

    ERIC Educational Resources Information Center

    Bull, Rebecca; Blatto-Vallee, Gary; Fabich, Megan

    2006-01-01

    This study examines basic number processing (subitizing, automaticity, and magnitude representation) as the possible underpinning of mathematical difficulties often evidenced in deaf adults. Hearing and deaf participants completed tasks to assess the automaticity with which magnitude information was activated and retrieved from long-term memory…

  17. Evaluation of the magnitude and frequency of floods in urban watersheds in Phoenix and Tucson, Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Paretti, Nicholas V.

    2014-01-01

    Flooding in urban areas routinely causes severe damage to property and often results in loss of life. To investigate the effect of urbanization on the magnitude and frequency of flood peaks, a flood frequency analysis was carried out using data from urbanized streamgaging stations in Phoenix and Tucson, Arizona. Flood peaks at each station were predicted using the log-Pearson Type III distribution, fitted using the expected moments algorithm and the multiple Grubbs-Beck low outlier test. The station estimates were then compared to flood peaks estimated by rural-regression equations for Arizona, and to flood peaks adjusted for urbanization using a previously developed procedure for adjusting U.S. Geological Survey rural regression peak discharges in an urban setting. Only smaller, more common flood peaks at the 50-, 20-, 10-, and 4-percent annual exceedance probabilities (AEPs) demonstrate any increase in magnitude as a result of urbanization; the 1-, 0.5-, and 0.2-percent AEP flood estimates are predicted without bias by the rural-regression equations. Percent imperviousness was determined not to account for the difference in estimated flood peaks between stations, either when adjusting the rural-regression equations or when deriving urban-regression equations to predict flood peaks directly from basin characteristics. Comparison with urban adjustment equations indicates that flood peaks are systematically overestimated if the rural-regression-estimated flood peaks are adjusted upward to account for urbanization. At nearly every streamgaging station in the analysis, adjusted rural-regression estimates were greater than the estimates derived using station data. One likely reason for the lack of increase in flood peaks with urbanization is the presence of significant stormwater retention and detention structures within the watershed used in the study.
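
    For readers unfamiliar with the distribution used above, the Python sketch below fits a log-Pearson Type III distribution to a series of annual peak discharges by plain method of moments and prints quantiles for several annual exceedance probabilities. It is a simplification: the study itself used the expected moments algorithm and the multiple Grubbs-Beck low-outlier test, which are not reproduced here, and the peak-flow values are invented.

        # Simplified log-Pearson Type III flood-frequency sketch (method of moments).
        import numpy as np
        from scipy.stats import pearson3, skew

        annual_peaks_cfs = np.array([820, 1500, 430, 2600, 980, 1200, 3100,
                                     560, 1750, 900, 2100, 1400, 760, 1950])

        x = np.log10(annual_peaks_cfs)
        dist = pearson3(skew(x, bias=False), loc=x.mean(), scale=x.std(ddof=1))

        for aep in (0.50, 0.20, 0.10, 0.04, 0.01):   # annual exceedance probabilities
            q = 10 ** dist.ppf(1.0 - aep)            # quantile of the fitted distribution
            print(f"{aep:.0%} AEP flood: {q:8.0f} cfs")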

  18. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, whole country, East, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate the new scale to suffer from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
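
    To make the form of such a scale concrete, the Python sketch below evaluates a standard Richter-type local magnitude with distance correction and a station term. The attenuation coefficients are the widely used Hutton and Boore (1987) values for southern California, shown only as an illustration; the calibrated Turkish coefficients and station corrections from this study are not reproduced here.

        # Illustrative local-magnitude calculation (Hutton & Boore 1987 coefficients).
        import math

        def local_magnitude(amp_mm, dist_km, station_corr=0.0):
            """Ml = log10(A) + 1.110*log10(r/100) + 0.00189*(r - 100) + 3.0 + S,
            with A a Wood-Anderson amplitude in mm and r the hypocentral distance."""
            return (math.log10(amp_mm)
                    + 1.110 * math.log10(dist_km / 100.0)
                    + 0.00189 * (dist_km - 100.0)
                    + 3.0
                    + station_corr)

        # A 1 mm amplitude at 100 km gives Ml = 3.0 by construction. Following the
        # recommendation above, an amplitude read on the vertical channel would be
        # scaled by adding log10(1.8) ~ 0.26 to approximate the horizontal maximum.
        print(local_magnitude(1.0, 100.0))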

  19. Precise Measurement of the Absolute Fluorescence Yield

    NASA Astrophysics Data System (ADS)

    Ave, M.; Bohacova, M.; Daumiller, K.; Di Carlo, P.; di Giulio, C.; San Luis, P. Facal; Gonzales, D.; Hojvat, C.; Hörandel, J. R.; Hrabovsky, M.; Iarlori, M.; Keilhauer, B.; Klages, H.; Kleifges, M.; Kuehn, F.; Monasor, M.; Nozka, L.; Palatka, M.; Petrera, S.; Privitera, P.; Ridky, J.; Rizi, V.; D'Orfeuil, B. Rouille; Salamida, F.; Schovanek, P.; Smida, R.; Spinka, H.; Ulrich, A.; Verzi, V.; Williams, C.

    2011-09-01

    We present preliminary results of the absolute yield of fluorescence emission in atmospheric gases. Measurements were performed at the Fermilab Test Beam Facility with a variety of beam particles and gases. Absolute calibration of the fluorescence yield to 5% level was achieved by comparison with two known light sources--the Cherenkov light emitted by the beam particles, and a calibrated nitrogen laser. The uncertainty of the energy scale of current Ultra-High Energy Cosmic Rays experiments will be significantly improved by the AIRFLY measurement.

  20. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed. PMID:26022836

  1. ABSOLUTE PROPERTIES OF THE ECLIPSING BINARY STAR HY VIRGINIS

    SciTech Connect

    Sandberg Lacy, Claud H.; Fekel, Francis C. E-mail: fekel@evans.tsuniv.edu

    2011-12-15

    HY Vir is found to be a double-lined F0m+F5 binary star with relatively shallow (0.3 mag) partial eclipses. Previous studies of the system are improved with 7509 differential photometric observations from the URSA WebScope and 8862 from the NFO WebScope, and 68 high-resolution spectroscopic observations from the Tennessee State University 2 m automatic spectroscopic telescope, and the 1 m coude-feed spectrometer at Kitt Peak National Observatory. Very accurate (better than 0.5%) masses and radii are determined from analysis of the new light curves and radial velocity curves. Theoretical models match the absolute properties of the stars at an age of about 1.35 Gy.

  2. Determination of earthquake magnitude using GPS displacement waveforms from real-time precise point positioning

    NASA Astrophysics Data System (ADS)

    Fang, Rongxin; Shi, Chuang; Song, Weiwei; Wang, Guangxing; Liu, Jingnan

    2014-01-01

    For earthquake and tsunami early warning and emergency response, earthquake magnitude is the crucial parameter to be determined rapidly and correctly. However, a reliable and rapid measurement of the magnitude of an earthquake is a challenging problem, especially for large earthquakes (M > 8). Here, the magnitude is determined based on the GPS displacement waveform derived from real-time precise point positioning (RTPPP). RTPPP results are evaluated with an accuracy of 1 cm in the horizontal components and 2-3 cm in the vertical components, indicating that the RTPPP is capable of detecting seismic waves with amplitude of 1 cm horizontally and 2-3 cm vertically with a confidence level of 95 per cent. In order to estimate the magnitude, the unique information provided by the GPS displacement waveform is the horizontal peak displacement amplitude. We show that the empirical relation of Gutenberg (1945) between peak displacement and magnitude holds up to nearly magnitude 9.0 when displacements are measured with GPS. We tested the proposed method for three large earthquakes. For the 2010 Mw 7.2 El Mayor-Cucapah earthquake, our method provides a magnitude of M7.18 ± 0.18. For the 2011 Mw 9.0 Tohoku-oki earthquake the estimated magnitude is M8.74 ± 0.06, and for the 2010 Mw 8.8 Maule earthquake the value is M8.7 ± 0.1 after excluding some near-field stations. We, therefore, conclude that depending on the availability of high-rate GPS observations, a robust value of magnitude up to 9.0 for a point source earthquake can be estimated within tens of seconds or a few minutes after an event using a few GPS stations close to the epicentre. Such a rapid magnitude estimate could serve as a prerequisite for tsunami early warning, fast source inversion, and emergency response.
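
    Conceptually, such peak-ground-displacement (PGD) magnitude estimates follow a scaling of the form M = (log10(PGD) - A - C*log10(R)) / B, with R the source distance. The Python sketch below evaluates this form with arbitrary placeholder coefficients; it is not the calibrated relation used in the study.

        # Conceptual PGD-magnitude scaling with placeholder coefficients.
        import math

        A, B, C = -5.0, 1.0, -1.0        # hypothetical regression coefficients

        def magnitude_from_pgd(pgd_cm, dist_km):
            return (math.log10(pgd_cm) - A - C * math.log10(dist_km)) / B

        # e.g. 100 cm of peak horizontal displacement observed 150 km from the source
        print(f"M ~ {magnitude_from_pgd(100.0, 150.0):.1f}")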

  3. Absolute cross section for recoil detection of deuterium

    NASA Astrophysics Data System (ADS)

    Besenbacher, F.; Stensgaard, I.; Vase, P.

    1986-04-01

    The D( 4He, D) 4He cross section used for recoil detection of deuterium (D) has been calibrated on an absolute scale against the cross section of the D( 3He, α)p nuclear reaction which is often used for D profiling. For 4He energies ranging from 0.8 to ~1.8 MeV, the D( 4He, D) 4He cross section varies only slightly with incident energy and recoil angle θ (for 0° ⩽ θ ⩽ 35°) and has a value of ~ 500 mb/sr which is significantly higher than the ~ 65 mb/sr c.m.s. cross section of the D( 3He, α)p nuclear reaction. For 4He energies ranging from ~ 1.9 to ~ 2.3 MeV, the D( 4He,D) 4He cross section exhibits a fairly narrow resonance peak (fwhm ~ 70 keV), with a maximum value (for θ = 0°) of ~ 8.5 b/sr, corresponding to a 4He energy of ~ 2130 keV. The large values of the cross section in connection with the described energy dependence make the use of forward-recoil detection of D attractive for many purposes, e.g., D depth profiling (with an extreme gain in sensitivity), absolute concentration or coverage measurements, and lattice-location experiments by transmission channeling.

  4. The Sacramento Peak fast microphotometer

    NASA Technical Reports Server (NTRS)

    Arrambide, M. R.; Dunn, R. B.; Healy, A. W.; Porter, R.; Widener, A. L.; November, L. J.; Spence, G. E.

    1984-01-01

    The Sacramento Peak Observatory Fast Microphotometer translates an optical system that includes a laser and photodiode detector across the film to scan the Y direction. A stepping motor moves the film gate in the X direction. This arrangement affords high positional accuracy, low noise (0.002 RMS density units), modest speed (5000 points/second), large dynamic range (4.5 density units), high stability (0.005 density units), and low scattered light. The Fast Microphotometer is interfaced to the host computer by a 6502 microprocessor.

  5. GRANITE PEAK ROADLESS AREA, CALIFORNIA.

    USGS Publications Warehouse

    Huber, Donald F.; Thurber, Horace K.

    1984-01-01

    The Granite Peak Roadless Area occupies an area of about 5 sq mi in the southern part of the Trinity Alps of the Klamath Mountains, about 12 mi north-northeast of Weaverville, California. Rock and stream-sediment samples were analyzed. All streams draining the roadless area were sampled and representative samples of the rock types in the area were collected. Background values were established for each element and anomalous values were examined within their geologic settings and evaluated for their significance. On the basis of mineral surveys there seems little likelihood for the occurrence of mineral or energy resources.

  6. Maxometers (peak wind speed anemometers)

    NASA Technical Reports Server (NTRS)

    Kaufman, J. W.; Camp, D. W.; Turner, R. E. (Inventor)

    1973-01-01

    An instrument for measuring peak wind speeds under severe environmental conditions is described, comprising an elongated cylinder housed in an outer casing. The cylinder contains a piston attached to a longitudinally movable guided rod having a pressure disk mounted on one projecting end. Wind pressure against the pressure disk depresses the movable rod. When the wind reaches its maximum speed, the rod is locked by a ball clutch mechanism in the position of maximum inward movement. Thereafter maximum wind speed or pressure readings may be taken from calibrated indexing means.

  7. Evolution and magnitudes of candidate Planet Nine

    NASA Astrophysics Data System (ADS)

    Linder, Esther F.; Mordasini, Christoph

    2016-04-01

    Context. The recently renewed interest in a possible additional major body in the outer solar system prompted us to study the thermodynamic evolution of such an object. We assumed that it is a smaller version of Uranus and Neptune. Aims: We modeled the temporal evolution of the radius, temperature, intrinsic luminosity, and the blackbody spectrum of distant ice giant planets. The aim is also to provide estimates of the magnitudes in different bands to assess whether the object might be detectable. Methods: Simulations of the cooling and contraction were conducted for ice giants with masses of 5, 10, 20, and 50 M⊕ that are located at 280, 700, and 1120 AU from the Sun. The core composition, the fraction of H/He, the efficiency of energy transport, and the initial luminosity were varied. The atmospheric opacity was set to 1, 50, and 100 times solar metallicity. Results: We find for a nominal 10 M⊕ planet at 700 AU at the current age of the solar system an effective temperature of 47 K, much higher than the equilibrium temperature of about 10 K, a radius of 3.7 R⊕, and an intrinsic luminosity of 0.006 L♃. It has estimated apparent magnitudes of Johnson V, R, I, L, N, Q of 21.7, 21.4, 21.0, 20.1, 19.9, and 10.7, and WISE W1-W4 magnitudes of 20.1, 20.1, 18.6, and 10.2. The Q and W4 band and other observations longward of about 13 μm pick up the intrinsic flux. Conclusions: If candidate Planet 9 has a significant H/He layer and an efficient energy transport in the interior, then its luminosity is dominated by the intrinsic contribution, making it a self-luminous planet. At a likely position on its orbit near aphelion, we estimate for a mass of 5, 10, 20, and 50 M⊕ a V magnitude from the reflected light of 24.3, 23.7, 23.3, and 22.6 and a Q magnitude from the intrinsic radiation of 14.6, 11.7, 9.2, and 5.8. The latter would probably have been detected by past surveys.

  9. The effects of wildfire on the peak streamflow magnitude and frequency, Frijoles and Capulin Canyons, Bandelier National Monument, New Mexico

    USGS Publications Warehouse

    Veenhuis, J.E.

    2004-01-01

    In June of 1977, the La Mesa fire burned 15,270 acres in and around Frijoles Canyon, Bandelier National Monument and the adjacent Santa Fe National Forest, New Mexico. The Dome fire, in April of 1996, burned 16,516 acres in Capulin Canyon in Bandelier National Monument and the surrounding Dome Wilderness area. Both canyons are characterized by extensive archeological artifacts, which could be threatened by increased runoff and accelerated rates of erosion after a wildfire. The U.S. Geological Survey (USGS), in cooperation with the National Park Service, monitored the fires' effects on streamflow in both canyons.

  10. Observations on the magnitude-frequency distribution of Earth-crossing asteroids

    NASA Technical Reports Server (NTRS)

    Shoemaker, Eugene M.; Shoemaker, Carolyn S.

    1987-01-01

    During the past decade, discovery of Earth-crossing asteroids has continued at the pace of several per year; the total number of known Earth crossers reached 70 as of September, 1986. The sample of discovered Earth crossers has become large enough to provide a fairly strong statistical basis for calculations of mean probabilities of asteroid collision with the Earth, the Moon, and Venus. It is also now large enough to begin to address the more difficult question of the magnitude-frequency distribution and size distribution of the Earth-crossing asteroids. Absolute V magnitude, H, was derived from reported magnitudes for each Earth crosser on the basis of a standard algorithm that utilizes a physically realistic phase function. The derived values of H range from 12.88 for (1627) Ivar to 21.6 for the Palomar-Leiden object 6344, which is the faintest and smallest asteroid discovered.
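
    The standard reduction of an observed V magnitude to an absolute magnitude H uses the IAU H-G phase relation (Bowell et al., 1989), which is the kind of physically realistic phase function referred to above. The Python sketch below implements that relation as an illustration; it is not necessarily the authors' specific algorithm.

        # IAU H-G phase relation: H from apparent V magnitude, heliocentric
        # distance r (AU), geocentric distance delta (AU), phase angle alpha,
        # and slope parameter G.
        import math

        def absolute_magnitude_H(V, r_au, delta_au, phase_deg, G=0.15):
            alpha = math.radians(phase_deg)
            phi1 = math.exp(-3.33 * math.tan(alpha / 2) ** 0.63)
            phi2 = math.exp(-1.87 * math.tan(alpha / 2) ** 1.22)
            return (V - 5 * math.log10(r_au * delta_au)
                    + 2.5 * math.log10((1 - G) * phi1 + G * phi2))

        # Example: V = 18.0 observed at r = 1.6 AU, delta = 0.7 AU, 30 deg phase angle.
        print(f"H = {absolute_magnitude_H(18.0, 1.6, 0.7, 30.0):.2f}")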

  11. Rapid determination of the energy magnitude Me

    NASA Astrophysics Data System (ADS)

    di Giacomo, D.; Parolai, S.; Bormann, P.; Saul, J.; Grosser, H.; Wang, R.; Zschau, J.

    2009-04-01

    The magnitude of an earthquake is one of the most used parameters to evaluate the earthquake's damage potential. However, many magnitude scales developed over the past years have different meanings. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well defined physical parameter of the seismic source, that is the radiated seismic energy Es (e.g. Bormann et al., 2002): Me = 2/3(log10 Es - 4.4). Me is more suitable than the moment magnitude Mw in describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. The calculation of Es requires the integration over frequency of the squared P-wave velocity spectrum corrected for the energy loss experienced by the seismic waves along the path from the source to the receivers. To account for the frequency-dependent energy loss, we computed spectral amplitude decay functions for different frequencies by using synthetic Green's functions (Wang, 1999) based on the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions the correction for the various propagation effects of the recorded P-wave velocity spectra is performed in a rapid and robust way, and the calculation of Es, and hence of Me, can be computed at the single station. We analyse teleseismic broadband P-waves signals in the distance range 20°-98°. We show that our procedure is suitable for implementation in rapid response systems since it could provide stable Me determinations within 10-15 minutes after the earthquake's origin time. Indeed, we use time variable cumulative energy windows starting 4 s after the first P-wave arrival in order to include the earthquake rupture
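
    The definition quoted above translates directly into code; the Python snippet below simply evaluates Me = 2/3*(log10(Es) - 4.4) for a radiated seismic energy given in joules (the example value of Es is arbitrary).

        # Energy magnitude from radiated seismic energy Es (in joules).
        import math

        def energy_magnitude(Es_joules):
            return (2.0 / 3.0) * (math.log10(Es_joules) - 4.4)

        print(f"Me = {energy_magnitude(1.0e15):.2f}")   # Es = 1e15 J -> Me ~ 7.07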

  12. The color-magnitude distribution of small Kuiper Belt objects

    NASA Astrophysics Data System (ADS)

    Wong, Ian; Brown, Michael E.

    2015-11-01

    Occupying a vast region beyond the ice giants is an extensive swarm of minor bodies known as the Kuiper Belt. Enigmatic in their formation, composition, and evolution, these Kuiper Belt objects (KBOs) lie at the intersection of many of the most important topics in planetary science. Improved instruments and large-scale surveys have revealed a complex dynamical picture of the Kuiper Belt. Meanwhile, photometric studies have indicated that small KBOs display a wide range of colors, which may reflect a chemically diverse initial accretion environment and provide important clues to constraining the surface compositions of these objects. Notably, some recent work has shown evidence for bimodality in the colors of non-cold classical KBOs, which would have major implications for the formation and subsequent evolution of the entire KBO population. However, these previous color measurements are few and mostly come from targeted observations of known objects. As a consequence, the effect of observational biases cannot be readily removed, preventing one from obtaining an accurate picture of the true color distribution of the KBOs as a whole.We carried out a survey of KBOs using the Hyper Suprime-Cam instrument on the 8.2-meter Subaru telescope. Our observing fields targeted regions away from the ecliptic plane so as to avoid contamination from cold classical KBOs. Each field was imaged in both the g’ and i’ filters, which allowed us to calculate the g’-i’ color of each detected object. We detected more than 500 KBOs over two nights of observation, with absolute magnitudes from H=6 to H=11. Our survey increases the number of KBOs fainter than H=8 with known colors by more than an order of magnitude. We find that the distribution of colors demonstrates a robust bimodality across the entire observed range of KBO sizes, from which we can categorize individual objects into two color sub-populations -- the red and very-red KBOs. We present the very first analysis of the

  13. Evaluation of containment peak pressure and structural response for a large-break loss-of-coolant accident in a VVER-440/213 NPP

    SciTech Connect

    Spencer, B.W.; Sienicki, J.J.; Kulak, R.F.; Pfeiffer, P.A.; Voeroess, L.; Techy, Z.; Katona, T.

    1998-07-01

    A collaborative effort between US and Hungarian specialists was undertaken to investigate the response of a VVER-440/213-type NPP to a maximum design-basis accident, defined as a guillotine rupture with double-ended flow from the largest pipe (500 mm) in the reactor coolant system. Analyses were performed to evaluate the magnitude of the peak containment pressure and temperature for this event; additional analyses were performed to evaluate the ultimate strength capability of the containment. Separate cases were evaluated assuming 100% effectiveness of the bubbler-condenser pressure suppression system as well as zero effectiveness. The pipe break energy release conditions were evaluated from three sources: (1) FSAR release rate based on Soviet safety calculations, (2) RETRAN-03 analysis and (3) ATHLET analysis. The findings indicated that for 100% bubbler-condenser effectiveness the peak containment pressures were less than the containment design pressure of 0.25 MPa. For the BDBA case of zero effectiveness of the bubbler-condenser system, the peak pressures were less than the calculated containment failure pressure of 0.40 MPa absolute.

  14. Absolute partial photoionization cross sections of ozone.

    SciTech Connect

    Berkowitz, J.; Chemistry

    2008-04-01

    Despite the current concerns about ozone, absolute partial photoionization cross sections for this molecule in the vacuum ultraviolet (valence) region have been unavailable. By eclectic re-evaluation of old/new data and plausible assumptions, such cross sections have been assembled to fill this void.

  15. Solving Absolute Value Equations Algebraically and Geometrically

    ERIC Educational Resources Information Center

    Shiyuan, Wei

    2005-01-01

    The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations or by solving algebraic equations geometrically is described. Students can experiment with the conditions of the absolute value equation presented, for an interesting way to form an overall understanding of the concept.

  16. Teaching Absolute Value Inequalities to Mature Students

    ERIC Educational Resources Information Center

    Sierpinska, Anna; Bobos, Georgeana; Pruncut, Andreea

    2011-01-01

    This paper gives an account of a teaching experiment on absolute value inequalities, whose aim was to identify characteristics of an approach that would realize the potential of the topic to develop theoretical thinking in students enrolled in prerequisite mathematics courses at a large, urban North American university. The potential is…

  17. Increasing Capacity: Practice Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Dodds, Pennie; Donkin, Christopher; Brown, Scott D.; Heathcote, Andrew

    2011-01-01

    In most of the long history of the study of absolute identification--since Miller's (1956) seminal article--a severe limit on performance has been observed, and this limit has resisted improvement even by extensive practice. In a startling result, Rouder, Morey, Cowan, and Pfaltz (2004) found substantially improved performance with practice in the…

  18. On Relative and Absolute Conviction in Mathematics

    ERIC Educational Resources Information Center

    Weber, Keith; Mejia-Ramos, Juan Pablo

    2015-01-01

    Conviction is a central construct in mathematics education research on justification and proof. In this paper, we claim that it is important to distinguish between absolute conviction and relative conviction. We argue that researchers in mathematics education frequently have not done so and this has led to researchers making unwarranted claims…

  19. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  20. Nonequilibrium equalities in absolutely irreversible processes

    NASA Astrophysics Data System (ADS)

    Murashita, Yuto; Funo, Ken; Ueda, Masahito

    2015-03-01

    Nonequilibrium equalities have attracted considerable attention in the context of statistical mechanics and information thermodynamics. Integral nonequilibrium equalities reveal an ensemble property of the entropy production σ as ⟨e^(-σ)⟩ = 1. Although nonequilibrium equalities apply to rather general nonequilibrium situations, they break down in absolutely irreversible processes, where the forward-path probability vanishes and the entropy production diverges. We identify the mathematical origin of this inapplicability as a singularity of the probability measure. As a result, we generalize conventional integral nonequilibrium equalities to absolutely irreversible processes as ⟨e^(-σ)⟩ = 1 - λS, where λS is the probability of the singular part defined based on Lebesgue's decomposition theorem. The acquired equality contains two physical quantities related to irreversibility: σ characterizing ordinary irreversibility and λS describing absolute irreversibility. An inequality derived from the obtained equality demonstrates that absolute irreversibility leads to a fundamental lower bound on the entropy production. We demonstrate the validity of the obtained equality for a simple model.

  1. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  2. Precision absolute positional measurement of laser beams.

    PubMed

    Fitzsimons, Ewan D; Bogenstahl, Johanna; Hough, James; Killow, Christian J; Perreur-Lloyd, Michael; Robertson, David I; Ward, Henry

    2013-04-20

    We describe an instrument which, coupled with a suitable coordinate measuring machine, facilitates the absolute measurement within the machine frame of the propagation direction of a millimeter-scale laser beam to an accuracy of around ±4 μm in position and ±20 μrad in angle. PMID:23669658

  3. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    NASA Technical Reports Server (NTRS)

    Ogawa, H. S.; Mcmullin, D.; Judge, D. L.; Canfield, L. R.

    1990-01-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme UV photon flux in the spectral region between 50 and 800 A. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 x 10^10 photons cm^-2 s^-1. Based on a nominal probable error of 7 percent for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-A region (5 percent on longer wavelength measurements between 500 and 1216 A), and based on experimental errors associated with the present rocket instrumentation and analysis, a conservative total error estimate of about 14 percent is assigned to the absolute integral solar flux obtained.

  4. Apparent magnitude of earthshine: a simple calculation

    NASA Astrophysics Data System (ADS)

    Agrawal, Dulli Chandra

    2016-05-01

    The Sun illuminates both the Moon and the Earth with practically the same luminous fluxes which are in turn reflected by them. The Moon provides a dim light to the Earth whereas the Earth illuminates the Moon with somewhat brighter light which can be seen from the Earth and is called earthshine. As the amount of light reflected from the Earth depends on part of the Earth and the cloud cover, the strength of earthshine varies throughout the year. The measure of the earthshine light is luminance, which is defined in photometry as the total luminous flux of light hitting or passing through a surface. The expression for the earthshine light in terms of the apparent magnitude has been derived for the first time and evaluated for two extreme cases; firstly, when the Sun’s rays are reflected by the water of the oceans and secondly when the reflector is either thick clouds or snow. The corresponding values are -1.30 and -3.69, respectively. The earthshine value -3.22 reported by Jackson lies within these apparent magnitudes. This paper will motivate the students and teachers of physics to look for the illuminated Moon by earthlight during the waning or waxing crescent phase of the Moon and to reproduce the expressions derived here by making use of the inverse-square law of radiation, Planck’s expression for the power in electromagnetic radiation, photopic spectral luminous efficiency function and expression for the apparent magnitude of a body in terms of luminous fluxes.
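
    The photometric step behind such estimates is the relation between a flux ratio and a magnitude difference, m1 - m2 = -2.5*log10(F1/F2). The Python sketch below evaluates it for an arbitrary flux ratio; the ratio is illustrative and not a value from the paper.

        # Magnitude difference from a luminous-flux ratio.
        import math

        def magnitude_difference(flux_ratio):
            return -2.5 * math.log10(flux_ratio)

        # A reflector returning ~9x the flux of another would appear about
        # 2.4 magnitudes brighter -- the same spread as between the paper's
        # two extreme earthshine values (-1.30 vs -3.69).
        print(f"delta m = {magnitude_difference(9.0):.2f}")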

  5. Orientation and Magnitude of Mars' Magnetic Field

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image shows the orientation and magnitude of the magnetic field measured by the MGS magnetometer as it sped over the surface of Mars during an early aerobraking pass (Day of the year, 264; 'P6' periapsis pass). At each point along the spacecraft trajectory we've drawn vectors in the direction of the magnetic field measured at that instant; the length of the line is scaled to show the relative magnitude of the field. Imagine traveling along with the MGS spacecraft, holding a string with a magnetized needle on one end: this is essentially a compass with a needle that is free to spin in all directions. As you pass over the surface the needle would swing rapidly, first pointing towards the planet and then rotating quickly towards 'up' and back down again. All in a relatively short span of time, say a minute or two, during which time the spacecraft has traveled a couple of hundred miles. You've just passed over one of many 'magnetic anomalies' thus far detected near the surface of Mars. A second major anomaly appears a little later along the spacecraft track, about 1/4 the magnitude of the first - can you find it? The short scale length of the magnetic field signature locates the source near the surface of Mars, perhaps in the crust, a 10 to 75 kilometer thick outer shell of the planet (radius 3397 km).

    The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO. JPL is an operating division of California Institute of Technology (Caltech).

  6. Characterizing Earthquake Rupture Properties Using Peak High-Frequency Offset

    NASA Astrophysics Data System (ADS)

    Wen, L.; Meng, L.

    2014-12-01

    Teleseismic array back-projection (BP) of high frequency (~1Hz) seismic waves has been recently applied to image the aftershock sequence of the Tohoku-Oki earthquake. The BP method proves to be effective in capturing early aftershocks that are difficult to detect due to contamination by the mainshock coda wave. Furthermore, since the event detection is based on the identification of the local peaks in time series of the BP power, the resulting event location corresponds to the peak high-frequency energy rather than the hypocenter. In this work, we show that the comparison between the BP-determined catalog and conventional phase-picking catalog provides estimates of the spatial and temporal offset between the hypocenter and the peak high-frequency radiation. We propose to measure this peak high-frequency shift for global earthquakes between M4.0 and M7.0. We average the BP locations calibrated by multiple reference events to minimize the uncertainty due to the variation of 3D path effects. In our initial effort focusing on the foreshock and aftershock sequence of the 2014 Iquique earthquake, we find systematic shifts of the peak high-frequency energy towards the down-dip direction. We find that the amount of the shift is a good indication of rupture length, which scales with the earthquake magnitude. Further investigations of the peak high-frequency offset may provide constraints on earthquake source properties such as rupture directivity, rupture duration, rupture speed, and stress drop.

  7. The intensities and magnitudes of volcanic eruptions

    USGS Publications Warehouse

    Sigurdsson, H.

    1991-01-01

    Ever since 1935, when C. F. Richter devised the earthquake magnitude scale that bears his name, seismologists have been able to view energy release from earthquakes in a systematic and quantitative manner. The benefits have been obvious in terms of assessing seismic gaps and the spatial and temporal trends of earthquake energy release. A similar quantitative treatment of volcanic activity is of course equally desirable, both for gaining a further understanding of the physical principles of volcanic eruptions and for volcanic-hazard assessment. A systematic volcanologic data base would be of great value in evaluating such features as volcanic gaps, and regional and temporal trends in energy release.

  8. Precise Relative Earthquake Magnitudes from Cross Correlation

    DOE PAGES

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
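
    The following Python sketch shows the general idea of such an inter-event inversion (it is not the authors' implementation): pairwise log amplitude ratios from cross correlation are assembled into an overdetermined linear system and solved by least squares for relative magnitudes, with the mean pinned to zero since only differences are constrained. The event pairs and ratios are invented.

        # Least-squares relative magnitudes from pairwise amplitude ratios.
        import numpy as np

        # (i, j, log10 of the amplitude ratio A_i/A_j) for each correlated pair
        pairs = [(0, 1, 0.31), (1, 2, -0.12), (0, 2, 0.20), (2, 3, 0.45), (1, 3, 0.33)]
        n_events = 4

        rows, rhs = [], []
        for i, j, dlog in pairs:
            row = np.zeros(n_events)
            row[i], row[j] = 1.0, -1.0       # m_i - m_j = log10(A_i / A_j)
            rows.append(row)
            rhs.append(dlog)
        rows.append(np.ones(n_events))       # constraint: mean relative magnitude = 0
        rhs.append(0.0)

        rel_mag, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        print(np.round(rel_mag, 3))          # relative magnitudes of the 4 events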

  9. An Energy Rate Magnitude for Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Newman, A. V.; Convers, J. A.

    2008-12-01

    The ability to rapidly assess the approximate size of very large and destructive earthquakes is important for early hazard mitigation from both strong shaking and potential tsunami generation. Using a methodology to rapidly determine earthquake energy and duration using teleseismic high-frequency energy, we develop an adaptation to approximate the magnitude of a very large earthquake before the full duration of rupture can be measured at available teleseismic stations. We utilize available vertical component data to analyze the high-frequency energy growth between 0.5 and 2 Hz, minimizing the effect of later arrivals that are mostly attenuated in this range. Because events smaller than M~6.5 occur rapidly, this method is most adequate for larger events, whose rupture duration exceeds ~20 seconds. Using a catalog of about 200 large and great earthquakes we compare the high-frequency energy rate (Ėhf) to the total broad-band energy (Ebb) and find the relationship log(Ėhf)/log(Ebb) ≈ 0.85. Hence, combining this relation with the broad-band energy magnitude (Me) [Choy and Boatwright, 1995] yields a new high-frequency energy rate magnitude: MĖ = (2/3) log10(Ėhf)/0.85 - 2.9. Such an empirical approach can thus be used to obtain a reasonable assessment of an event magnitude from the initial estimate of energy growth, even before the arrival of the full direct-P rupture signal. For large shallow events thus far examined, MĖ predicts the ultimate Me to within ±0.2 units of M. For fast-rupturing deep earthquakes MĖ overpredicts, while for slow-rupturing tsunami earthquakes MĖ underpredicts Me, likely due to material strength changes at the source rupture. We will report on the utility of this method in both research mode, and in real-time scenarios when data availability is limited. Because the high-frequency energy is clearly discernable in real-time, this result suggests that the growth of energy can be used as a good initial indicator of the

  10. The peak electromagnetic power radiated by lightning return strokes

    NASA Technical Reports Server (NTRS)

    Krider, E. P.; Guo, C.

    1983-01-01

    Estimates of the peak electromagnetic (EM) power radiated by return strokes have been made by integrating the Poynting vector of measured fields over an imaginary hemispherical surface that is centered on the lightning source, assuming that ground losses are negligible. Values of the peak EM power from first and subsequent strokes have means and standard deviations of 2 + or - 2 x 10 to the 10th and 3 + or - 4 x 10 to the 9th W, respectively. The average EM power that is radiated by subsequent strokes, at the time of the field peak, is about 2 orders of magnitude larger than the optical power that is radiated by these strokes in the wavelength interval from 0.4 to 1.1 micron; hence an upper limit to the radiative efficiency of a subsequent stroke is of the order of 1 percent or less at this time.

  11. Understanding the magnitude dependence of PGA and PGV in NGA-West 2 data

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.

    2014-01-01

    The Next Generation Attenuation‐West 2 (NGA‐West 2) 2014 ground‐motion prediction equations (GMPEs) model ground motions as a function of magnitude and distance, using empirically derived coefficients (e.g., Bozorgnia et al., 2014); as such, these GMPEs do not clearly employ earthquake source parameters beyond moment magnitude (M) and focal mechanism. To better understand the magnitude‐dependent trends in the GMPEs, we build a comprehensive earthquake source‐based model to explain the magnitude dependence of peak ground acceleration and peak ground velocity in the NGA‐West 2 ground‐motion databases and GMPEs. Our model employs existing models (Hanks and McGuire, 1981; Boore, 1983, 1986; Anderson and Hough, 1984) that incorporate a point‐source Brune model, including a constant stress drop and the high‐frequency attenuation parameter κ0, random vibration theory, and a finite‐fault assumption at the large magnitudes to describe the data from magnitudes 3 to 8. We partition this range into four different magnitude regions, each of which has different functional dependences on M. Use of the four magnitude partitions separately allows greater understanding of what happens in any one subrange, as well as the limiting conditions between the subranges. This model provides a remarkably good fit to the NGA data for magnitudes from 3 to 8, except for the smallest-magnitude data, for which the corner frequency is masked by the attenuation of high frequencies. That this simple, source‐based model matches the NGA‐West 2 GMPEs and data so well suggests that considerable simplicity underlies the parametrically complex NGA GMPEs.
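
    A hedged sketch of the Brune point-source ingredients mentioned above is given below in Python: a corner frequency set by a constant stress drop, and an omega-squared acceleration spectral shape truncated by kappa_0. The stress drop, shear-wave speed, kappa_0 and magnitude are illustrative choices, not the values calibrated in the study.

        # Brune corner frequency and omega-squared acceleration spectral shape.
        import numpy as np

        def brune_corner_frequency(M0_dyne_cm, stress_drop_bars=30.0, beta_km_s=3.5):
            """fc = 4.9e6 * beta * (stress_drop / M0)**(1/3)  (Brune, 1970)."""
            return 4.9e6 * beta_km_s * (stress_drop_bars / M0_dyne_cm) ** (1.0 / 3.0)

        def moment_from_magnitude(Mw):
            """Seismic moment in dyne-cm (Hanks & Kanamori, 1979)."""
            return 10 ** (1.5 * Mw + 16.05)

        Mw = 5.0
        fc = brune_corner_frequency(moment_from_magnitude(Mw))
        f = np.logspace(-1, 2, 200)                    # 0.1-100 Hz
        kappa0 = 0.04                                  # site attenuation parameter (s)
        shape = (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2) * np.exp(-np.pi * kappa0 * f)

        print(f"Mw {Mw}: corner frequency ~ {fc:.2f} Hz, "
              f"spectral shape peaks near {f[np.argmax(shape)]:.1f} Hz")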

  12. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the considered seismotectonic region of the world. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions, can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Herewith, a methodology is proposed in order to estimate the probability of exceedance of: the intensities of extreme magnitude earthquakes, PEI and of their direct economical consequences PEDEC. The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of extreme earthquake plausible scenarios, and enlarge those samples by Monte Carlo simulation. The PEDEC are computed by using appropriate vulnerability functions combined with the scenario intensity samples, and Monte Carlo simulation. An example of the application of the methodology due to the potential occurrence of extreme Mw 8.5 subduction earthquakes on Mexico City is presented.

  13. Strong motion duration and earthquake magnitude relationships

    SciTech Connect

    Salmon, M.W.; Short, S.A.; Kennedy, R.P.

    1992-06-01

    Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy based strong motion duration definitions.

  14. Rapid determination of the energy magnitude Me

    NASA Astrophysics Data System (ADS)

    di Giacomo, D.; Parolai, S.; Bormann, P.; Grosser, H.; Saul, J.; Wang, R.; Zschau, J.

    2009-12-01

    The magnitude of an earthquake is one of the most used parameters to evaluate the earthquake’s damage potential. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well defined physical parameter of the seismic source, that is the radiated seismic energy Es (e.g. Bormann et al., 2002): Me = 2/3(log10 Es - 4.4). Me is more suitable than the moment magnitude Mw in describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. We analyse teleseismic broadband P-waves signals in the distance range 20°-98° to calculate Es. To correct the frequency-dependent energy loss experienced by the P-waves during the propagation path, we use pre-calculated spectral amplitude decay functions for different frequencies obtained from numerical simulations of Green’s functions (Wang, 1999) given the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions the correction for the various propagation effects of the recorded P-wave velocity spectra is performed in a rapid and robust way, and the calculation of Es, and hence of Me, can be computed at the single station. We show that our procedure is suitable for implementation in rapid response systems since it could provide stable Me determinations within 10-15 minutes after the earthquake’s origin time, even in case of great earthquakes. We tested our procedure for a large dataset composed of about 770 earthquakes globally distributed in the Mw range 5.5-9.3 recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks. Me and Mw express two different aspects of

  15. VizieR Online Data Catalog: Absolute Proper motions Outside the Plane (APOP) (Qi+, 2015)

    NASA Astrophysics Data System (ADS)

    Qi, Z. X.; Yu, Y.; Bucciasrelli, B.; Lattanzi, M. G.; Smart, R. L.; Spagna, A.; McLean, B. J.; Tang, Z. H.; Jones, H. R. A.; Morbidelli, R.; Nicastro, L.; Vacchiato, A.

    2015-09-01

    APOP is an absolute proper motion catalog derived from the Digitized Sky Survey Schmidt plate data of the GSC-II project, covering the sky outside the galactic plane (|b|>27°). The sky coverage of the catalog is 22,525 square degrees, the mean density is 4473 objects/sq.deg., and the magnitude limit is around R=20.8mag. The systematic errors of the absolute proper motions related to position, magnitude and color are practically all removed by using the extragalactic objects. The zero point error of the absolute proper motions is less than 0.6mas/yr, and the accuracy is better than 4.0mas/yr for objects brighter than R=18.5, rising to 9.0mas/yr for fainter objects near the magnitude limit. The quality is lower south of Declination = -30°, because the epoch difference there is only around 12 years, compared with about 45 years for Declination > -30°. The catalog nevertheless remains usable for statistical studies of southern objects, for which obviously incorrect entries can be identified and removed. (1 data file).

  16. Absolute instabilities in a high-order-mode gyrotron traveling-wave amplifier.

    PubMed

    Tsai, W C; Chang, T H; Chen, N C; Chu, K R; Song, H H; Luhmann, N C

    2004-11-01

    The absolute instability is a subject of considerable physics interest as well as a major source of self-oscillations in the gyrotron traveling-wave amplifier (gyro-TWT). We present a theoretical study of the absolute instabilities in a TE01 mode, fundamental cyclotron harmonic gyro-TWT with distributed wall losses. In this high-order-mode circuit, absolute instabilities arise in a variety of ways, including overdrive of the operating mode, fundamental cyclotron harmonic interactions with lower-order modes, and second cyclotron harmonic interaction with a higher-order mode. The distributed losses, on the other hand, provide an effective means for their stabilization. The combined configuration thus allows a rich display of absolute instability behavior together with the demonstration of its control. We begin with a study of the field profiles of absolute instabilities, which exhibit a range of characteristics depending in large measure upon the sign and magnitude of the synchronous value of the propagation constant. These profiles in turn explain the sensitivity of oscillation thresholds to the beam and circuit parameters. A general recipe for oscillation stabilization has resulted from these studies and its significance to the current TE01 -mode, 94-GHz gyro-TWT experiment at UC Davis is discussed. PMID:15600760

  17. Extracting parameters from colour-magnitude diagrams

    NASA Astrophysics Data System (ADS)

    Bonatto, C.; Campos, F.; Kepler, S. O.; Bica, E.

    2015-07-01

    We present a simple approach for obtaining robust values of astrophysical parameters from the observed colour-magnitude diagrams (CMDs) of star clusters. The basic inputs are the Hess diagram built with the photometric measurements of a star cluster and a set of isochrones covering wide ranges of age and metallicity. In short, each isochrone is shifted in apparent distance modulus and colour excess until it crosses over the maximum possible Hess density. Repeating this step for all available isochrones leads to the construction of the solution map, in which the optimum values of age and metallicity - as well as foreground/background reddening and distance from the Sun - can be searched for. Controlled tests with simulated CMDs show that the approach is efficient in recovering the input values. We apply the approach to the open clusters M 67, NGC 6791 and NGC 2635, which are characterized by different ages, metallicities and distances from the Sun.
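
    The shift step described above can be pictured as a brute-force grid search; the helper below is an illustrative reading of the abstract (1-D arrays for the isochrone, a 2-D Hess histogram, and the apparent distance modulus taken to include extinction), not the authors' code.

```python
import numpy as np

def best_shift(hess, mag_edges, col_edges, iso_mag, iso_col, dm_grid, ebv_grid):
    """Slide one isochrone in apparent distance modulus (dm) and colour excess
    (ebv) and keep the shift that maximizes the summed Hess density it crosses.
    `hess` is a 2-D histogram (magnitude x colour bins); `iso_mag`, `iso_col`
    are 1-D arrays of isochrone points."""
    best_score, best_dm, best_ebv = -np.inf, None, None
    for dm in dm_grid:
        for ebv in ebv_grid:
            m = iso_mag + dm            # shifted apparent magnitude
            c = iso_col + ebv           # reddened colour
            i = np.digitize(m, mag_edges) - 1
            j = np.digitize(c, col_edges) - 1
            ok = (i >= 0) & (i < hess.shape[0]) & (j >= 0) & (j < hess.shape[1])
            score = hess[i[ok], j[ok]].sum()
            if score > best_score:
                best_score, best_dm, best_ebv = score, dm, ebv
    return best_score, best_dm, best_ebv
```

    Repeating this for every isochrone in the age-metallicity grid and recording the best score per isochrone gives the kind of solution map mentioned above.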

  18. Violence against women: global scope and magnitude.

    PubMed

    Watts, Charlotte; Zimmerman, Cathy

    2002-04-01

    An increasing amount of research is beginning to offer a global overview of the extent of violence against women. In this paper we discuss the magnitude of some of the most common and most severe forms of violence against women: intimate partner violence; sexual abuse by non-intimate partners; trafficking, forced prostitution, exploitation of labour, and debt bondage of women and girls; physical and sexual violence against prostitutes; sex selective abortion, female infanticide, and the deliberate neglect of girls; and rape in war. There are many potential perpetrators, including spouses and partners, parents, other family members, neighbours, and men in positions of power or influence. Most forms of violence are not unique incidents but are ongoing, and can even continue for decades. Because of the sensitivity of the subject, violence is almost universally under-reported. Nevertheless, the prevalence of such violence suggests that globally, millions of women are experiencing violence or living with its consequences. PMID:11955557

  19. Absolute and relative dosimetry for ELIMED

    SciTech Connect

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F.; Carpinelli, M.; Presti, D. Lo; Raffaele, L.; Tramontana, A.; Cirio, R.; Sacchi, R.; Monaco, V.; Marchetto, F.; Giordanengo, S.

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aimed at obtaining an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  20. Probing absolute spin polarization at the nanoscale.

    PubMed

    Eltschka, Matthias; Jäck, Berthold; Assig, Maximilian; Kondrashov, Oleg V; Skvortsov, Mikhail A; Etzkorn, Markus; Ast, Christian R; Kern, Klaus

    2014-12-10

    Probing absolute values of spin polarization at the nanoscale offers insight into the fundamental mechanisms of spin-dependent transport. Employing the Zeeman splitting in superconducting tips (Meservey-Tedrow-Fulde effect), we introduce a novel spin-polarized scanning tunneling microscopy that combines the probing capability of the absolute values of spin polarization with precise control at the atomic scale. We utilize our novel approach to measure the locally resolved spin polarization of magnetic Co nanoislands on Cu(111). We find that the spin polarization is enhanced by 65% when increasing the width of the tunnel barrier by only 2.3 Å due to the different decay of the electron orbitals into vacuum. PMID:25423049

  1. Absolute radiometry and the solar constant

    NASA Technical Reports Server (NTRS)

    Willson, R. C.

    1974-01-01

    A series of active cavity radiometers (ACRs) are described which have been developed as standard detectors for the accurate measurement of irradiance in absolute units. It is noted that the ACR is an electrical substitution calorimeter, is designed for automatic remote operation in any environment, and can make irradiance measurements in the range from low-level IR fluxes up to 30 solar constants with small absolute uncertainty. The instrument operates in a differential mode by chopping the radiant flux to be measured at a slow rate, and irradiance is determined from two electrical power measurements together with the instrumental constant. Results are reported for measurements of the solar constant with two types of ACRs. The more accurate measurement yielded a value of 136.6 plus or minus 0.7 mW/sq cm (1.958 plus or minus 0.010 cal/sq cm per min).
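
    The electrical-substitution principle mentioned above ("two electrical power measurements together with the instrumental constant") can be sketched as follows; the function, its argument names and all numbers are illustrative assumptions, not the ACR data-reduction algorithm.

```python
def acr_irradiance(p_closed_w: float, p_open_w: float, k_instrument: float) -> float:
    """Illustrative electrical-substitution relation: irradiance is taken
    proportional to the difference between the electrical heater power needed
    with the shutter closed and with it open, divided by an instrumental
    constant k (lumping aperture area, cavity absorptance, lead heating, ...)."""
    return (p_closed_w - p_open_w) / k_instrument

# Purely illustrative numbers; the result's units depend on how k is defined.
print(acr_irradiance(0.0500, 0.0227, 2.0e-5))  # ~1365
```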

  2. Absolute calibration of TFTR helium proportional counters

    SciTech Connect

    Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Barnes, C.W. |; Loughlin, M. |

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.

  3. Absolute enantioselective separation: optical activity ex machina.

    PubMed

    Bielski, Roman; Tencer, Michal

    2005-11-01

    The paper describes a methodology that uses three independent macroscopic factors affecting molecular orientation to accomplish separation of a racemic mixture without the presence of any other chiral compounds, i.e., absolute enantioselective separation (AES), an extension of the concept of applying these factors to absolute asymmetric synthesis. The three factors may be applied simultaneously or, if their effects can be retained, consecutively. The resulting three mutually orthogonal or near-orthogonal directors constitute a true chiral influence, and their scalar triple product is the measure of the chirality of the system. AES can be executed in a chromatography-like microfluidic process in the presence of an electric field. It may be carried out on a chemically modified flat surface or in a monolithic polymer column made of a mesoporous material, each having imparted directional properties. Separation parameters were estimated for these media and possible implications for natural homochirality are discussed. PMID:16342798

  4. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

    It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of this periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and disrupted the financial markets, and has even increased relative to that of the euro and the Japanese yen.

  5. From Hubble's NGSL to Absolute Fluxes

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don

    2012-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.

  6. Peak load management: Potential options

    SciTech Connect

    Englin, J.E.; De Steese, J.G.; Schultz, R.W.; Kellogg, M.A.

    1989-10-01

    This report reviews options that may be alternatives to transmission construction (ATT) applicable both generally and at specific locations in the service area of the Bonneville Power Administration (BPA). Some of these options have potential as specific alternatives to the Shelton-Fairmount 230-kV Reinforcement Project, which is the focus of this study. A listing of 31 peak load management (PLM) options is included. Estimated costs and normalized hourly load shapes, corresponding to the respective base load and controlled load cases, are considered for 15 of the above options. A summary page is presented for each of these options, grouped with respect to its applicability in the residential, commercial, industrial, and agricultural sectors. The report contains comments on PLM measures for which load shape management characteristics are not yet available. These comments address the potential relevance of the options and the possible difficulty that may be encountered in characterizing their value, should they be of interest in this investigation. The report also identifies options that could improve the efficiency of the three customer utility distribution systems supplied by the Shelton-Fairmount Reinforcement Project. Potential cogeneration options in the Olympic Peninsula are also discussed. These discussions focus on the options that appear to be most promising on the Olympic Peninsula. Finally, a short list of options is recommended for investigation in the next phase of this study. 9 refs., 24 tabs.

  7. Metallic Magnetic Calorimeters for Absolute Activity Measurement

    NASA Astrophysics Data System (ADS)

    Loidl, M.; Leblanc, E.; Rodrigues, M.; Bouchard, J.; Censier, B.; Branger, T.; Lacour, D.

    2008-05-01

    We present a prototype of the metallic magnetic calorimeters that we are developing for absolute activity measurements of radionuclides emitting low-energy radiation. We give a detailed description of the realization of the prototype, which contains an 55Fe source inside the detector absorber. We present the analysis of the first data taken with this detector and compare the result of the activity measurement with liquid scintillation counting. We also propose some ways of reducing the uncertainty of the activity determination with this new technique.

  8. Absolute photoionization cross sections of atomic oxygen

    NASA Technical Reports Server (NTRS)

    Samson, J. A. R.; Pareek, P. N.

    1985-01-01

    The absolute values of photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 A. An auto-ionizing resonance belonging to the 2S2P4(4P)3P(3Do, 3So) transition was observed at 479.43 A and another line at 389.97 A. The experimental data is in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.

  10. Silicon Absolute X-Ray Detectors

    SciTech Connect

    Seely, John F.; Korde, Raj; Sprunck, Jacob; Medjoubi, Kadda; Hustache, Stephanie

    2010-06-23

    The responsivity of silicon photodiodes having no loss in the entrance window, measured using synchrotron radiation in the 1.75 to 60 keV range, was compared to the responsivity calculated using the silicon thickness measured using near-infrared light. The measured and calculated responsivities agree with an average difference of 1.3%. This enables their use as absolute x-ray detectors.

  11. Blood pressure targets and absolute cardiovascular risk.

    PubMed

    Odutayo, Ayodele; Rahimi, Kazem; Hsiao, Allan J; Emdin, Connor A

    2015-08-01

    In the Eighth Joint National Committee guideline on hypertension, the threshold for the initiation of blood pressure-lowering treatment for elderly adults (≥60 years) without chronic kidney disease or diabetes mellitus was raised from 140/90 mm Hg to 150/90 mm Hg. However, the committee was not unanimous in this decision, particularly because a large proportion of adults ≥60 years may be at high cardiovascular risk. On the basis of the Eighth Joint National Committee guideline, we sought to determine the absolute 10-year risk of cardiovascular disease among these adults by analyzing the National Health and Nutrition Examination Survey (2005-2012). The primary outcome measure was the proportion of adults who were at ≥20% predicted absolute cardiovascular risk and above goals for the Seventh Joint National Committee guideline but reclassified as at target under the Eighth Joint National Committee guideline (reclassified). The Framingham General Cardiovascular Disease Risk Score was used. From 2005 to 2012, the surveys included 12 963 adults aged 30 to 74 years with blood pressure measurements, of whom 914 were reclassified based on the guideline. Among individuals reclassified as not in need of additional treatment, the proportion of adults 60 to 74 years without chronic kidney disease or diabetes mellitus at ≥20% absolute risk was 44.8%. This corresponds to 0.8 million adults. The proportion at high cardiovascular risk remained sizable among adults who were not receiving blood pressure-lowering treatment. Taken together, a sizable proportion of reclassified adults 60 to 74 years without chronic kidney disease or diabetes mellitus was at ≥20% absolute cardiovascular risk. PMID:26056340

  12. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  13. Absolute distance measurements by variable wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Bien, F.; Camac, M.; Caulfield, H. J.; Ezekiel, S.

    1981-02-01

    This paper describes a laser interferometer which provides absolute distance measurements using tunable lasers. An active feedback loop system, in which the laser frequency is locked to the optical path length difference of the interferometer, is used to tune the laser wavelengths. If the two wavelengths are very close, electronic frequency counters can be used to measure the beat frequency between the two laser frequencies and thus to determine the optical path difference between the two legs of the interferometer.
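
    One way to picture the beat-frequency relation described above is the synthetic-wavelength sketch below; it assumes the simplest locking scheme, in which the optical path difference always holds an integer number of wavelengths, and the fringe-order change delta_order is known from the measurement procedure (both are assumptions, not details taken from the paper).

```python
C = 299_792_458.0  # speed of light, m/s

def opd_from_beat(beat_hz: float, delta_order: int = 1) -> float:
    """If the laser is locked so that the optical path difference (OPD) always
    contains an integer number of wavelengths, tuning across `delta_order`
    fringe orders changes the optical frequency by beat = delta_order * c / OPD,
    so OPD = delta_order * c / beat."""
    return delta_order * C / beat_hz

# a 100 MHz beat for a one-order change corresponds to ~3 m of OPD
print(opd_from_beat(100e6))  # ~2.998 m
```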

  14. Absolute dosimetry for extreme-ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Berger, Kurt W.; Campiotti, Richard H.

    2000-06-01

    The accurate measurement of an exposure dose reaching the wafer on an extreme ultraviolet (EUV) lithographic system has been a technical challenge directly applicable to the evaluation of candidate EUV resist materials and calculating lithography system throughputs. We have developed a dose monitoring sensor system that can directly measure EUV intensities at the wafer plane of a prototype EUV lithographic system. This sensor system, located on the wafer stage adjacent to the electrostatic chuck used to grip wafers, operates by translating the sensor into the aerial image, typically illuminating an 'open' (unpatterned) area on the reticle. The absolute signal strength can be related to energy density at the wafer, and thus used to determine resist sensitivity, and the signal as a function of position can be used to determine illumination uniformity at the wafer plane. Spectral filtering to enhance the detection of 13.4 nm radiation was incorporated into the sensor. Other critical design parameters include the packaging and amplification technologies required to place this device into the space and vacuum constraints of a EUV lithography environment. We describe two approaches used to determine the absolute calibration of this sensor. The first conventional approach requires separate characterization of each element of the sensor. A second novel approach uses x-ray emission from a mildly radioactive iron source to calibrate the absolute response of the entire sensor system (detector and electronics) in a single measurement.

  15. Absolute standardization of the impurity (121)Te associated to the production of the radiopharmaceutical (123)I.

    PubMed

    Araújo, M T F; Poledna, R; Delgado, J U; Silva, R L; Iwahara, A; da Silva, C J; Tauhata, L; Oliveira, A E; de Almeida, M C M; Lopes, R T

    2016-03-01

    (123)I is widely used for radiodiagnostic procedures. It is produced in cyclotrons through the reaction (124)Xe(p,2n)(123)Cs → (123)Xe → (123)I. (121)Te and (125)I appear in the photon energy spectrum as impurities. The activity of (121)Te was calibrated absolutely by the sum-peak method and its photon emission probability was estimated; the results were consistent with published values. PMID:26805708
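
    For reference, a common textbook form of the sum-peak relation for two photons emitted in coincidence is sketched below; whether this simple two-photon form applies directly to the (121)Te decay scheme is an assumption, and the paper's own decay-scheme corrections are not reproduced. All numbers are illustrative.

```python
def sum_peak_activity(n1: float, n2: float, n12: float, total: float) -> float:
    """Brinkman-type sum-peak relation: A ~ T + N1*N2/N12, where N1 and N2 are
    the net full-energy peak count rates of the two coincident photons, N12 the
    sum-peak count rate and T the total spectrum count rate."""
    return total + n1 * n2 / n12

print(sum_peak_activity(n1=850.0, n2=420.0, n12=12.0, total=1600.0))  # counts/s
```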

  16. Determination of peak expiratory flow.

    PubMed

    Kano, S; Burton, D L; Lanteri, C J; Sly, P D

    1993-10-01

    It is still unknown whether peak expiratory flow (PEF) is determined by "wave speed" flow limitation in the airways. To investigate the influences of airway mechanical properties on PEF, five healthy adults performed maximal forced expiratory effort (MFEE) manoeuvres, in the standard manner and following breathholds at total lung capacity (TLC) of 2 s and 10 s. Oesophageal pressure (Poes) was measured as an index of respiratory effort. Subjects also performed a MFEE following a 10 s breathhold during which intrathoracic pressure was voluntarily raised by a Valsalva manoeuvre, which would increase transmural pressure and cross-sectional area of the extrathoracic airway. Additional MFEEs were performed with the neck fully flexed and extended, to change longitudinal tracheal tension. In separate studies, PEF was measured with a spirometer and with a pneumotachograph. Breathholds at TLC (2 s and 10 s), and neck flexion reduced PEF by a mean of 9.8% (SD 2.9%), 9.6% (SD 1.6%), and 8.7% (SD 2.8%), respectively, when measured with the spirometer. The same pattern of results was seen when measured with the pneumotachograph. These reductions occurred despite similar respiratory effort. Voluntarily raising intrathoracic pressure during a 10 s breathhold did not reverse a fall in PEF. MFEE manoeuvre with neck extension did not result in an increase in PEF, the group mean % changes being -3.0% (SD 5.0%). We conclude that these results do not allow the hypothesis that "wave-speed" (Vws) is reached at PEF to be rejected. A breathhold at TLC could increase airway wall compliance by allowing stress-relaxation of the airway, thus reducing the "Vws" achievable. PMID:8287953

  17. The accuracy of portable peak flow meters.

    PubMed Central

    Miller, M R; Dickinson, S A; Hitchings, D J

    1992-01-01

    BACKGROUND: The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. METHODS: The response of several portable PEF meters was tested with absolute standards of flow generated by a computer driven, servo controlled pump and their response was compared with that of a pneumotachograph. RESULTS: For each device tested the readings were highly repeatable to within the limits of accuracy with which the pointer position can be assessed by eye. The between instrument variation in reading for six identical devices expressed as a 95% confidence limit was, on average across the range of flows, +/- 8.5 l/min for the Mini-Wright, +/- 7.9 l/min for the Vitalograph, and +/- 6.4 l/min for the Ferraris. PEF meters based on the Wright meter all had similar error profiles with overreading of up to 80 l/min in the mid flow range from 300 to 500 l/min. This overreading was greatest for the Mini-Wright and Ferraris devices, and less so for the original Wright and Vitalograph meters. A Micro-Medical Turbine meter was accurate up to 400 l/min and then began to underread by up to 60 l/min at 720 l/min. For the low range devices the Vitalograph device was accurate to within 10 l/min up to 200 l/min, with the Mini-Wright overreading by up to 30 l/min above 150 l/min. CONCLUSION: Although the Mini-Wright, Ferraris, and Vitalograph meters gave remarkably repeatable results their error profiles for the full range meters will lead to important errors in recording PEF variability. This may lead to incorrect diagnosis and bias in implementing strategies of asthma treatment based on PEF measurement. PMID:1465746

  18. On the Error Sources in Absolute Individual Antenna Calibrations

    NASA Astrophysics Data System (ADS)

    Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette

    2013-04-01

    The two main methods for antenna calibration currently in use are anechoic chamber measurements on the one hand and outdoor robot calibration on the other. The two techniques differ completely in approach, setup and data processing. Consequently, the error sources for the two techniques are totally different as well, with the exception of the (near-field) multipath error caused by the antenna positioning device, which alters the results of both calibration methods, though not necessarily with the same order of magnitude. The literature states a (maximum deviation) repeatability for robot calibration of choke ring antennas of 0.5 mm on L1 and 1 mm on L2 [1]. For anechoic chamber calibration, a value of 1.5 mm on L2 for a resistive ground plane antenna can be found in [2]. Repeatability, however, masks systematic errors linked with the calibration technique. Hence, comparing an individual calibration obtained with a robot to a calibration of the same antenna in an anechoic chamber may result in differences that surpass these repeatability thresholds. This was the case at least for all six choke ring antennas studied. The order of magnitude of the differences moreover corresponded well to the values given for a LEIAT504GG in [3]. For some error sources, such as the GNSS receiver measurement noise or the VNA measurement noise, estimates can be obtained from manufacturer specifications in data sheets. For other error sources, such as the finite distance between transmit and receive antenna, or the limited attenuation of reflections on the wall absorbers, back-of-the-envelope calculations can be made to estimate their order of magnitude. For the error due to (near-field) multipath this is harder to do, if not impossible, all the more because it strongly depends on the antenna type and its mount. Unfortunately it is, again, this (near-field) multipath influence that might void the calibration once the antenna is installed at the station. Hence it can be concluded that at present, due to (near

  20. The Question of Absolute Space and Time Directions in Relation to Molecular Chirality, Parity Violation, and Biomolecular Homochirality

    SciTech Connect

    Quack, Martin

    2001-03-21

    The questions of the absolute directions of space and time or the 'observability' of absolute time direction as well as absolute handedness - left or right - are related to the fundamental symmetries of physics C, P, T as well as their combinations, in particular CPT, and their violations, such as parity violation. At the same time there is a relation to certain still open questions in chemistry concerning the fundamental physical-chemical principles of molecular chirality and in biochemistry concerning the selection of homochirality in evolution. In the lecture we shall introduce the concepts and then report new theoretical results from our work on parity violation in chiral molecules, showing order of magnitude increases with respect to previously accepted values. We discuss as well our current experimental efforts. We shall briefly mention the construction of an absolute molecular clock.

  1. Magnitude 8.1 Earthquake off the Solomon Islands

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.

  2. Magnitude of Treatment Abandonment in Childhood Cancer

    PubMed Central

    Friedrich, Paola; Lam, Catherine G.; Itriago, Elena; Perez, Rafael; Ribeiro, Raul C.; Arora, Ramandeep S.

    2015-01-01

    high burden of TxA in LMC, and illustrate the negative impact of poverty on its occurrence. The present estimates may appear small compared to the global burden of child death from malnutrition and infection (measured in millions). However, absolute numbers suggest the burden of TxA in LMC is nearly equivalent to annually losing all kids diagnosed with cancer in HIC just to TxA, without even considering deaths from disease progression, relapse or toxicity–the main causes of childhood cancer mortality in HIC. Results document the importance of monitoring and addressing TxA as part of childhood cancer outcomes in at-risk settings. PMID:26422208

  3. Estimating magnitude and duration of incident delays

    SciTech Connect

    Garib, A.; Radwan, A.E.; Al-Deek, H.

    1997-11-01

    Traffic congestion is a major operational problem on urban freeways. In the case of recurring congestion, travelers can plan their trips according to the expected occurrence and severity of recurring congestion. However, nonrecurring congestion cannot be managed without real-time prediction. Evaluating the efficiency of intelligent transportation systems (ITS) technologies in reducing incident effects requires developing models that can accurately predict incident duration along with the magnitude of nonrecurring congestion. This paper provides two statistical models for estimating incident delay and a model for predicting incident duration. The incident delay models showed that up to 85% of variation in incident delay can be explained by incident duration, number of lanes affected, number of vehicles involved, and traffic demand before the incident. The incident duration prediction model showed that 81% of variation in incident duration can be predicted by number of lanes affected, number of vehicles involved, truck involvement, time of day, police response time, and weather condition. These findings have implications for on-line applications within the context of advanced traveler information systems (ATIS).
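
    The kind of regression model described above can be sketched in a few lines of Python; the predictors follow the abstract, but the data, design matrix and coefficients below are synthetic placeholders, not the published models.

```python
import numpy as np

# Hypothetical design: predict incident duration (minutes) from the predictors
# named in the abstract (lanes affected, vehicles involved, truck involvement,
# time of day, police response time, weather).  Everything is made up.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.integers(1, 4, n),           # lanes affected
    rng.integers(1, 5, n),           # vehicles involved
    rng.integers(0, 2, n),           # truck involved (0/1)
    rng.integers(0, 24, n),          # hour of day
    rng.uniform(2, 20, n),           # police response time (min)
    rng.integers(0, 2, n),           # adverse weather (0/1)
])
true_beta = np.array([12.0, 6.0, 4.0, 10.0, 0.2, 1.5, 8.0])
y = X @ true_beta + rng.normal(0, 5, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares fit
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(np.round(beta, 2), round(r2, 2))         # coefficients and R^2
```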

  4. The magnitude distribution of dynamically triggered earthquakes

    NASA Astrophysics Data System (ADS)

    Hernandez, Stephen

    Large dynamic strains carried by seismic waves are known to trigger seismicity far from their source region. It is unknown, however, whether surface waves trigger only small earthquakes, or whether they can also trigger large, societally significant earthquakes. To address this question, we use a mixing model approach in which total seismicity is decomposed into 2 broad subclasses: "triggered" events initiated or advanced by far-field dynamic strains, and "untriggered" spontaneous events consisting of everything else. The b-value of a mixed data set, bMIX, is decomposed into a weighted sum of the b-values of its constituent components, bT and bU. For populations of earthquakes subjected to dynamic strain, the fraction of earthquakes that are likely triggered, fT, is estimated via inter-event time ratios and used to invert for bT. The confidence bounds on bT are estimated by multiple inversions of bootstrap resamplings of bMIX and fT. For Californian seismicity, data are consistent with a single-parameter Gutenberg-Richter hypothesis governing the magnitudes of both triggered and untriggered earthquakes. Triggered earthquakes therefore seem just as likely to be societally significant as any other population of earthquakes.
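
    Taking the stated decomposition literally (bMIX = fT*bT + (1-fT)*bU), the inversion for the triggered b-value is a one-liner; the bootstrap confidence bounds described above are not reproduced here and the numbers are illustrative.

```python
def triggered_b_value(b_mix: float, b_untriggered: float, f_triggered: float) -> float:
    """Invert bMIX = fT*bT + (1-fT)*bU for the b-value of the triggered
    subpopulation, given the mixed b-value, the untriggered b-value and the
    triggered fraction fT."""
    return (b_mix - (1.0 - f_triggered) * b_untriggered) / f_triggered

# illustrative numbers only
print(round(triggered_b_value(b_mix=1.00, b_untriggered=1.02, f_triggered=0.15), 2))
```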

  5. Extended arrays for nonlinear susceptibility magnitude imaging.

    PubMed

    Ficko, Bradley W; Giacometti, Paolo; Diamond, Solomon G

    2015-10-01

    This study implements nonlinear susceptibility magnitude imaging (SMI) with multifrequency intermodulation and phase encoding. An imaging grid was constructed of cylindrical wells of 3.5-mm diameter and 4.2-mm height on a hexagonal two-dimensional 61-voxel pattern with 5-mm spacing. Patterns of sample wells were filled with 40-μl volumes of Fe3O4 starch-coated magnetic nanoparticles (mNPs) with a hydrodynamic diameter of 100 nm and a concentration of 25 mg/ml. The imaging hardware was configured with three excitation coils and three detection coils in anticipation that a larger imaging system will have arrays of excitation and detection coils. Hexagonal and bar patterns of mNP were successfully imaged (R2>0.9) at several orientations. This SMI demonstration extends our prior work to feature a larger coil array, enlarged field-of-view, effective phase encoding scheme, reduced mNP sample size, and more complex imaging patterns to test the feasibility of extending the method beyond the pilot scale. The results presented in this study show that nonlinear SMI holds promise for further development into a practical imaging system for medical applications. PMID:26124044

  6. Demographic factors predict magnitude of conditioned fear.

    PubMed

    Rosenbaum, Blake L; Bui, Eric; Marin, Marie-France; Holt, Daphne J; Lasko, Natasha B; Pitman, Roger K; Orr, Scott P; Milad, Mohammed R

    2015-10-01

    There is substantial variability across individuals in the magnitudes of their skin conductance (SC) responses during the acquisition and extinction of conditioned fear. To manage this variability, subjects may be matched for demographic variables, such as age, gender and education. However, limited data exist addressing how much variability in conditioned SC responses is actually explained by these variables. The present study assessed the influence of age, gender and education on the SC responses of 222 subjects who underwent the same differential conditioning paradigm. The demographic variables were found to predict a small but significant amount of variability in conditioned responding during fear acquisition, but not fear extinction learning or extinction recall. A larger differential change in SC during acquisition was associated with more education. Older participants and women showed smaller differential SC during acquisition. Our findings support the need to consider age, gender and education when studying fear acquisition but not necessarily when examining fear extinction learning and recall. Variability in demographic factors across studies may partially explain the difficulty in reproducing some SC findings. PMID:26151498

  7. Peak Ring Craters and Multiring Basins

    NASA Astrophysics Data System (ADS)

    Melosh, H. J.

    2015-09-01

    Understanding of the mechanics of peak-ring crater and basin formation has expanded greatly due to the high precision data on lunar gravity from GRAIL. Peak rings coincide with the edges of underlying mantle uplifts on the Moon.

  8. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation of physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time, but can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz Transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz Transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our Earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the Earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.

  9. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  10. Nonlinear Susceptibility Magnitude Imaging of Magnetic Nanoparticles

    PubMed Central

    Ficko, Bradley W.; Giacometti, Paolo; Diamond, Solomon G.

    2014-01-01

    This study demonstrates a method for improving the resolution of susceptibility magnitude imaging (SMI) using spatial information that arises from the nonlinear magnetization characteristics of magnetic nanoparticles (mNPs). In this proof-of-concept study of nonlinear SMI, a pair of drive coils and several permanent magnets generate applied magnetic fields and a coil is used as a magnetic field sensor. Sinusoidal alternating current (AC) in the drive coils results in linear mNP magnetization responses at primary frequencies, and nonlinear responses at harmonic frequencies and intermodulation frequencies. The spatial information content of the nonlinear responses is evaluated by reconstructing tomographic images with sequentially increasing voxel counts using the combined linear and nonlinear data. Using the linear data alone it is not possible to accurately reconstruct more than 2 voxels with a pair of drive coils and a single sensor. However, nonlinear SMI is found to accurately reconstruct 12 voxels (R2 = 0.99, CNR = 84.9) using the same physical configuration. Several time-multiplexing methods are then explored to determine if additional spatial information can be obtained by varying the amplitude, phase and frequency of the applied magnetic fields from the two drive coils. Asynchronous phase modulation, amplitude modulation, intermodulation phase modulation, and frequency modulation all resulted in accurate reconstruction of 6 voxels (R2 > 0.9) indicating that time multiplexing is a valid approach to further increase the resolution of nonlinear SMI. The spatial information content of nonlinear mNP responses and the potential for resolution enhancement with time multiplexing demonstrate the concept and advantages of nonlinear SMI. PMID:25505816
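
    In general terms the reconstruction step described in these SMI studies is a linear inverse problem; the sketch below recovers voxel amounts from a stack of measurement channels (drive frequencies, harmonics, intermodulation products) by least squares, with a random sensitivity matrix standing in for the authors' actual forward model and coil geometry. Everything here is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_channels = 12, 40
A = rng.uniform(0.1, 1.0, (n_channels, n_voxels))   # stand-in sensitivity matrix
x_true = rng.choice([0.0, 1.0], n_voxels)            # which wells hold mNPs
b = A @ x_true + rng.normal(0, 0.01, n_channels)     # noisy measurements

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)        # least-squares reconstruction
r2 = 1 - np.sum((x_true - x_hat) ** 2) / np.sum((x_true - x_true.mean()) ** 2)
print(np.round(x_hat, 2), round(r2, 2))
```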

  11. The National Geodetic Survey absolute gravity program

    NASA Astrophysics Data System (ADS)

    Peter, George; Moose, Robert E.; Wessells, Claude W.

    1989-03-01

    The National Geodetic Survey absolute gravity program will utilize the high precision afforded by the JILAG-4 instrument to support geodetic and geophysical research, which involves studies of vertical motions, identification and modeling of other temporal variations, and establishment of reference values. The scientific rationale of these objectives is given, the procedures used to collect gravity and environmental data in the field are defined, and the steps necessary to correct and remove unwanted environmental effects are stated. In addition, site selection criteria, methods of concomitant environmental data collection and relative gravity observations, and schedule and logistics are discussed.

  12. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring supports the conclusion that the revised absolute radii are in error by no more than 2 km.

  13. Characterization of the DARA solar absolute radiometer

    NASA Astrophysics Data System (ADS)

    Finsterle, W.; Suter, M.; Fehlmann, A.; Kopp, G.

    2011-12-01

    The Davos Absolute Radiometer (DARA) prototype is an Electrical Substitution Radiometer (ESR) which has been developed as a successor of the PMO6 type on future space missions and ground based TSI measurements. The DARA implements an improved thermal design of the cavity detector and heat sink assembly to minimize air-vacuum differences and to maximize thermal symmetry of measuring and compensating cavity. The DARA also employs an inverted viewing geometry to reduce internal stray light. We will report on the characterization and calibration experiments which were carried out at PMOD/WRC and LASP (TRF).

  14. Absolute calibration of the Auger fluorescence detectors

    SciTech Connect

    Bauleo, P.; Brack, J.; Garrard, L.; Harton, J.; Knapik, R.; Meyhandan, R.; Rovero, A.C.; Tamashiro, A.; Warner, D.

    2005-07-01

    Absolute calibration of the Pierre Auger Observatory fluorescence detectors uses a light source at the telescope aperture. The technique accounts for the combined effects of all detector components in a single measurement. The calibrated 2.5 m diameter light source fills the aperture, providing uniform illumination to each pixel. The known flux from the light source and the response of the acquisition system give the required calibration for each pixel. In the lab, light source uniformity is studied using CCD images and the intensity is measured relative to NIST-calibrated photodiodes. Overall uncertainties are presently 12%, and are dominated by systematics.

  15. Absolute angular positioning in ultrahigh vacuum

    SciTech Connect

    Schief, H.; Marsico, V.; Kern, K.

    1996-05-01

    Commercially available angular resolvers, which are routinely used in machine tools and robotics, are modified and adapted to be used under ultrahigh-vacuum (UHV) conditions. They provide straightforward and reliable measurements of angular positions for any kind of UHV sample manipulators. The corresponding absolute reproducibility is on the order of 0.005{degree}, whereas the relative resolution is better than 0.001{degree}, as demonstrated by high-resolution helium-reflectivity measurements. The mechanical setup and possible applications are discussed. © 1996 American Institute of Physics.

  16. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
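
    The area method described above can be sketched as a numerical integration; the apparatus constant k_apparatus and the displacement curve below are placeholders, not values from the paper.

```python
import numpy as np

def susceptibility_from_area(distance_m, displacement_m, k_apparatus):
    """Integrate the sample displacement over magnet-to-sample distance
    (trapezoidal rule) and scale by an apparatus constant to obtain a
    susceptibility-like quantity, in the spirit of the area method above."""
    d = np.asarray(distance_m, float)
    y = np.asarray(displacement_m, float)
    area = np.sum((y[1:] + y[:-1]) * np.diff(d)) / 2.0
    return k_apparatus * area

# made-up displacement curve, purely illustrative
d = np.linspace(0.01, 0.10, 50)       # magnet distance, m
x = 1.0e-4 * np.exp(-d / 0.02)        # sample displacement, m
print(susceptibility_from_area(d, x, k_apparatus=3.0e-3))
```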

  17. Absolute Priority for a Vehicle in VANET

    NASA Astrophysics Data System (ADS)

    Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad

    In today's world, traffic jams waste hundreds of hours of our lives. This has led many researchers to try to resolve the problem using the idea of Intelligent Transportation Systems. For some applications, such as a travelling ambulance, it is important to reduce delay even by a second. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. We then simulate our proposed protocol and compare it with a centrally controlled traffic light system.

  18. Peak-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
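
    A bare-bones version of the Log Pearson Type III fit mentioned above is sketched below (method of moments on log10 of the annual peaks); the Bulletin 17 refinements actually used in such studies (regional skew weighting, low-outlier screening, weighting with regression estimates) are omitted, and the peak record is synthetic.

```python
import numpy as np
from scipy import stats

def lp3_peak_quantiles(annual_peaks_cfs,
                       aeps=(0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.002)):
    """Fit a Log Pearson Type III distribution to annual peak flows and return
    flows for the listed annual exceedance probabilities."""
    logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    return {aep: 10 ** stats.pearson3.ppf(1 - aep, skew, loc=mean, scale=std)
            for aep in aeps}

# illustrative 60-year record of annual peaks (cubic feet per second)
rng = np.random.default_rng(2)
peaks = 10 ** rng.normal(3.5, 0.25, 60)
for aep, q in lp3_peak_quantiles(peaks).items():
    print(f"AEP {aep:0.3f}: {q:,.0f} cfs")
```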

  19. Discourse Peak as Zone of Turbulence.

    ERIC Educational Resources Information Center

    Longacre, Robert E.

    Defining peak as the climax of discourse, this paper argues that it is important to identify peak in order to get at the overall grammar of a given discourse. The paper presents case studies in which four instances of peak in narrative discourses occur in languages from four different parts of the world. It also illustrates the occurrence of a…

  20. Determination of the absolute contours of optical flats

    NASA Technical Reports Server (NTRS)

    Primak, W.

    1969-01-01

    Emerson's procedure is used to determine the true absolute contours of optical flats. Absolute contours of standard flats are determined and a comparison is then made between standard and unknown flats. Contour differences are determined from the deviation of Fizeau fringes.

  1. Determining Distances for Active Galactic Nuclei using an Optical and Near-Infrared Color-Magnitude Diagram

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Gorjian, V.; Richter, K. L.; Pruett, L.

    2015-12-01

    Active Galactic Nuclei, or AGN, are extremely luminous bodies that emit large quantities of light via accretion onto supermassive black holes at the centers of galaxies. This project investigated the relationship between color (ratio of dust emission to accretion disk emission) and magnitude of AGN in order to establish a predictive correlation between the two, similar to the relationship between the color and magnitude of stars seen in the Hertzsprung-Russell diagram. Such a relationship could help provide a standard candle for determining distances to AGN. Photometry data for Type 1 Seyferts and quasars from the 2 Micron All Sky Survey (2MASS) and the Sloan Digital Sky Survey (SDSS) were studied. Using these data, color-magnitude diagrams comparing the ratio of two wavelengths to the absolute magnitude in another band were created. Overall, many of the diagrams indicated a clear correlation between color and luminosity of AGN. Several of the diagrams, focused on portions of the visible and near-infrared (NIR) wavelength bands, showed the strongest correlations. When the z-K color was plotted against the absolute magnitude of the K band, specifically for objects with redshifts between 0.1 and 0.15, a strong predictive relationship was seen, with a high slope (0.75) and an R2 close to 1 (0.69). Additionally, the diagram comparing the i-J color to the absolute magnitude of the J band, specifically for objects with redshifts between 0.05 and 0.1, also demonstrated a strong predictive relationship, with a high slope (0.64) and an R2 close to 1 (0.58). These correlations have several applications, as they help determine cosmic distances and, as a result, the ages of bodies in the universe.
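
    The slope and R2 values quoted above presumably come from straight-line fits of absolute magnitude against colour; a generic least-squares version is sketched below with synthetic data (the function name and inputs are illustrative, not the project's code).

```python
import numpy as np

def fit_color_magnitude(color, abs_mag):
    """Least-squares line through a colour-magnitude diagram; returns the
    slope, intercept and R^2 of the fit."""
    color = np.asarray(color, float)
    abs_mag = np.asarray(abs_mag, float)
    slope, intercept = np.polyfit(color, abs_mag, 1)
    pred = slope * color + intercept
    r2 = 1 - np.sum((abs_mag - pred) ** 2) / np.sum((abs_mag - abs_mag.mean()) ** 2)
    return slope, intercept, r2

# synthetic (z - K) colours and K absolute magnitudes, illustrative only
rng = np.random.default_rng(3)
c = rng.uniform(0.5, 2.5, 100)
m = 0.75 * c - 24.0 + rng.normal(0, 0.4, 100)
print([round(v, 2) for v in fit_color_magnitude(c, m)])
```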

  2. Standardization of the cumulative absolute velocity

    SciTech Connect

    O'Hara, T.F.; Jacobson, J.P. )

    1991-12-01

    EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. During the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value, which is based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method of standardizing the CAV calculation is to window it on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
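
    The windowed calculation described above can be written compactly; the sketch below follows that description (one-second windows, 0.025 g check value) with a synthetic record, and is not the EPRI implementation.

```python
import numpy as np

def standardized_cav(accel_g, dt, threshold_g=0.025):
    """Windowed CAV in g-sec: split the record into one-second windows and add
    a window's integral of |a(t)| only if |a| exceeds the check value at some
    point within that window."""
    a = np.abs(np.asarray(accel_g, dtype=float))
    samples_per_sec = int(round(1.0 / dt))
    cav = 0.0
    for start in range(0, len(a), samples_per_sec):
        window = a[start:start + samples_per_sec]
        if window.max() > threshold_g:
            cav += window.sum() * dt    # rectangle-rule integral of |a|
    return cav

# illustrative synthetic acceleration record sampled at 100 Hz
rng = np.random.default_rng(4)
t = np.arange(0, 20, 0.01)
acc = 0.05 * np.exp(-((t - 8) / 3) ** 2) * np.sin(2 * np.pi * 2 * t)
print(round(standardized_cav(acc, dt=0.01), 3), "g-sec")
```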

  3. Absolute rates of hole transfer in DNA.

    PubMed

    Senthilkumar, Kittusamy; Grozema, Ferdinand C; Guerra, Célia Fonseca; Bickelhaupt, F Matthias; Lewis, Frederick D; Berlin, Yuri A; Ratner, Mark A; Siebbeles, Laurens D A

    2005-10-26

    Absolute rates of hole transfer between guanine nucleobases separated by one or two A:T base pairs in stilbenedicarboxamide-linked DNA hairpins were obtained by improved kinetic analysis of experimental data. The charge-transfer rates in four different DNA sequences were calculated using a density-functional-based tight-binding model and a semiclassical superexchange model. Site energies and charge-transfer integrals were calculated directly as the diagonal and off-diagonal matrix elements of the Kohn-Sham Hamiltonian, respectively, for all possible combinations of nucleobases. Taking into account the Coulomb interaction between the negative charge on the stilbenedicarboxamide linker and the hole on the DNA strand as well as effects of base pair twisting, the relative order of the experimental rates for hole transfer in different hairpins could be reproduced by tight-binding calculations. To reproduce quantitatively the absolute values of the measured rate constants, the effect of the reorganization energy was taken into account within the semiclassical superexchange model for charge transfer. The experimental rates could be reproduced with reorganization energies near 1 eV. The quantum chemical data obtained were used to discuss charge carrier mobility and hole-transport equilibria in DNA. PMID:16231945
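
    For orientation, the way a roughly 1 eV reorganization energy enters an absolute transfer rate can be illustrated with a generic semiclassical (Marcus-type) nonadiabatic expression; whether this matches the paper's superexchange model in detail is an assumption, and the coupling value used below is a placeholder, not a value from the paper.

```python
import math

HBAR = 1.054_571_8e-34    # J*s
KB   = 1.380_649e-23      # J/K
EV   = 1.602_176_634e-19  # J

def marcus_rate(h_da_ev: float, lambda_ev: float, dg_ev: float, temp_k: float = 298.0) -> float:
    """Generic semiclassical nonadiabatic rate:
    k = (2*pi/hbar) * |H_DA|^2 * (4*pi*lambda*kB*T)^(-1/2)
        * exp(-(dG + lambda)^2 / (4*lambda*kB*T))."""
    h, lam, dg = h_da_ev * EV, lambda_ev * EV, dg_ev * EV
    kt = KB * temp_k
    prefac = (2.0 * math.pi / HBAR) * h * h / math.sqrt(4.0 * math.pi * lam * kt)
    return prefac * math.exp(-(dg + lam) ** 2 / (4.0 * lam * kt))

# e.g. 10 meV effective coupling, 1 eV reorganization energy, isoenergetic hop
print(f"{marcus_rate(0.010, 1.0, 0.0):.2e} s^-1")
```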

  4. Transient absolute robustness in stochastic biochemical networks.

    PubMed

    Enciso, German A

    2016-08-01

    Absolute robustness allows biochemical networks to sustain a consistent steady-state output in the face of protein concentration variability from cell to cell. This property is structural and can be determined from the topology of the network alone regardless of rate parameters. An important question regarding these systems is the effect of discrete biochemical noise in the dynamical behaviour. In this paper, a variable freezing technique is developed to show that under mild hypotheses the corresponding stochastic system has a transiently robust behaviour. Specifically, after finite time the distribution of the output approximates a Poisson distribution, centred around the deterministic mean. The approximation becomes increasingly accurate, and it holds for increasingly long finite times, as the total protein concentrations grow to infinity. In particular, the stochastic system retains a transient, absolutely robust behaviour corresponding to the deterministic case. This result contrasts with the long-term dynamics of the stochastic system, which eventually must undergo an extinction event that eliminates robustness and is completely different from the deterministic dynamics. The transiently robust behaviour may be sufficient to carry out many forms of robust signal transduction and cellular decision-making in cellular organisms. PMID:27581485

  5. Absolute Electron Extraction Efficiency of Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter

    2016-03-01

    Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.

  6. Sentinel-2/MSI absolute calibration: first results

    NASA Astrophysics Data System (ADS)

    Lonjou, V.; Lachérade, S.; Fougnie, B.; Gamet, P.; Marcq, S.; Raynaud, J.-L.; Tremas, T.

    2015-10-01

    Sentinel-2 is an optical imaging mission devoted to the operational monitoring of land and coastal areas. It is developed in partnership between the European Commission and the European Space Agency. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. It will offer a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m, and 60 m), and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). CNES is involved in the instrument commissioning in collaboration with ESA. This paper reviews all the techniques that will be used to ensure an absolute calibration of the 13 spectral bands better than 5% (target 3%), and will present the first results if available. First, the nominal calibration technique, based on an on-board sun diffuser, is detailed. Then, we show how vicarious calibration methods based on acquisitions over natural targets (oceans, deserts, and Antarctica during winter) will be used to check and improve the accuracy of the absolute calibration coefficients. Finally, the verification scheme, exploiting in-situ photometer measurements over the La Crau plain, is described. A synthesis, including spectral coherence, inter-method agreement, and temporal evolution, will conclude the paper.

  7. Absolute Spectrophotometry of 237 Open Cluster Stars

    NASA Astrophysics Data System (ADS)

    Clampitt, L.; Burstein, D.

    1994-12-01

    We present absolute spectrophotometry of 237 stars in 7 nearby open clusters: Hyades, Pleiades, Alpha Persei, Praesepe, Coma Berenices, IC 4665, and M 39. The observations were taken using the Wampler single-channel scanner (Wampler 1966) on the Crossley 0.9m telescope at Lick Observatory from July 1973 through December 1974. 21 bandpasses spanning the spectral range 3500 Angstroms to 7780 Angstroms were observed for each star, with bandwidths ranging from 32 Angstroms to 64 Angstroms. Data are standardized to the Hayes-Latham (1975) system. Our measurements are compared to filter colors on the Johnson BV, Stromgren ubvy, and Geneva U V B_1 B_2 V_1 G systems, as well as to spectrophotometry of a few stars published by Gunn, Stryker & Tinsley and in the Spectrophotometric Standards Catalog (Adelman; as distributed by the NSSDC). Both internal and external comparisons to the filter systems indicate a formal statistical accuracy per bandpass of 0.01 to 0.02 mag, with apparently larger ( ~ 0.03 mag) differences in absolute calibration between this data set and existing spectrophotometry. These data will comprise part of the spectrophotometry that will be used to calibrate the Beijing-Arizona-Taipei-Connecticut Color Survey of the Sky (see separate paper by Burstein et al. at this meeting).

  8. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
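
    A schematic of how gradient and Hessian estimates can be turned into a command toward a local extremum, in the spirit of the method above. In this Python sketch the estimates come from finite differences of a made-up performance function rather than from a linear time-varying Kalman filter, so only the command-generation step is illustrated.

        import numpy as np

        def performance(x):
            """Hypothetical two-input, one-output performance function peaking at (1, -2)."""
            return -((x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2)

        def grad_hess_estimates(f, x, h=1e-3):
            """Stand-in for the filter's estimates: central-difference gradient and Hessian."""
            n = len(x)
            grad = np.zeros(n)
            hess = np.zeros((n, n))
            for i in range(n):
                ei = np.zeros(n)
                ei[i] = h
                grad[i] = (f(x + ei) - f(x - ei)) / (2.0 * h)
                for j in range(n):
                    ej = np.zeros(n)
                    ej[j] = h
                    hess[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                                  - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
            return grad, hess

        x = np.array([4.0, 3.0])                 # initial operating point
        for _ in range(10):
            g, H = grad_hess_estimates(performance, x)
            x = x - np.linalg.solve(H, g)        # Newton-type step toward the extremum
        print("estimated peak location:", np.round(x, 3))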

  9. A Conceptual Approach to Absolute Value Equations and Inequalities

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Bryson, Janet L.

    2011-01-01

    The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…

  10. Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding

    ERIC Educational Resources Information Center

    Ponce, Gregorio A.

    2008-01-01

    Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…
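
    As a concrete illustration (ours, not taken from either article) of the kind of task these papers address, reading |x - 2| as "the distance from 2 on the number line" gives, in LaTeX notation:

        |x-2| = 5 \;\Longleftrightarrow\; x-2 = \pm 5 \;\Longleftrightarrow\; x \in \{-3,\ 7\}

        |x-2| \le 5 \;\Longleftrightarrow\; -5 \le x-2 \le 5 \;\Longleftrightarrow\; -3 \le x \le 7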

  11. 20 CFR 404.1205 - Absolute coverage groups.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Absolute coverage groups. 404.1205 Section... INSURANCE (1950- ) Coverage of Employees of State and Local Governments What Groups of Employees May Be Covered § 404.1205 Absolute coverage groups. (a) General. An absolute coverage group is a...

  12. Effects of the 1993 flood on the determination of flood magnitude and frequency in Iowa

    USGS Publications Warehouse

    Eash, David A.

    1997-01-01

    Several factors, which included recurrence intervals for the 1993 peak discharges and the effective record lengths for 1993, were investigated for the 62 selected streamflow-gaging stations to evaluate their possible effect on the computed flood-frequency discharges. The combined effect of these two factors on the computed 100-year recurrence-interval discharges was significant. Gaging stations were grouped into four discrete categories on the basis of recurrence intervals for the 1993 peak discharges and the effective record lengths for 1993. Of the 28 gaging stations that had small flood magnitudes in 1993 and long record lengths, the difference between the 1992 and the 1993 flood-frequency analyses for 100-year recurrence-interval discharges at 22 gaging stations was less than 5 percent. Of the 10 gaging stations that had large flood magnitudes in 1993 and short record lengths, the increase in 100-year recurrence-interval discharges at 9 gaging stations was greater than 15 percent.

  13. An empirical reevaluation of absolute pitch: behavioral and electrophysiological measurements.

    PubMed

    Elmer, Stefan; Sollberger, Silja; Meyer, Martin; Jäncke, Lutz

    2013-10-01

    Here, we reevaluated the "two-component" model of absolute pitch (AP) by combining behavioral and electrophysiological measurements. This specific model postulates that AP is driven by a perceptual encoding ability (i.e., pitch memory) plus an associative memory component (i.e., pitch labeling). To test these predictions, during EEG measurements AP and non-AP (NAP) musicians were passively exposed to piano tones (first component of the model) and additionally instructed to judge whether combinations of tones and labels were conceptually associated or not (second component of the model). Auditory-evoked N1/P2 potentials did not reveal differences between the two groups, thus indicating that AP is not necessarily driven by a differential pitch encoding ability at the processing level of the auditory cortex. Otherwise, AP musicians performed the conceptual association task with an order of magnitude better accuracy and shorter RTs than NAP musicians did, this result clearly pointing to distinctive conceptual associations in AP possessors. Most notably, this behavioral superiority was reflected by an increased N400 effect and accompanied by a subsequent late positive component, the latter not being distinguishable in NAP musicians. PMID:23647515

  14. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0, and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of the system response function in the same solution. The resulting non-linear equations must be solved iteratively, and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
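
    A minimal sketch of the kind of joint fit described above, using the commonly quoted free-fall model that includes a linear vertical gradient gamma, z(t) = z0 + v0*t + g0*t^2/2 + gamma*(z0*t^2/2 + v0*t^3/6 + g0*t^4/24). The drop data below are synthetic, and the system-response terms and the combination of many drops into one solution (the points emphasized in the paper) are omitted.

        import numpy as np
        from scipy.optimize import least_squares

        def fall_model(t, z0, v0, g0, gamma):
            """Free-fall trajectory with a linear vertical gravity gradient (first-order expansion)."""
            return (z0 + v0 * t + 0.5 * g0 * t**2
                    + gamma * (0.5 * z0 * t**2 + v0 * t**3 / 6.0 + g0 * t**4 / 24.0))

        # Noise-free synthetic drop with hypothetical parameters (SI units).  A single short
        # drop barely constrains gamma, which is why real solutions combine all drops.
        t = np.linspace(0.0, 0.2, 200)
        true_params = (0.0, 0.1, 9.81234, 3.086e-6)   # z0 [m], v0 [m/s], g0 [m/s^2], gamma [1/s^2]
        z_obs = fall_model(t, *true_params)

        def residuals(p):
            return fall_model(t, *p) - z_obs

        fit = least_squares(residuals, x0=[0.0, 0.0, 9.8, 0.0])
        print("estimated g0 =", fit.x[2], "m/s^2; estimated gamma =", fit.x[3], "1/s^2")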

  15. Absolute properties of the eclipsing binary star AP Andromedae

    SciTech Connect

    Sandberg Lacy, Claud H.; Torres, Guillermo; Fekel, Francis C.; Muterspaugh, Matthew W. E-mail: gtorres@cfa.harvard.edu E-mail: matthew1@coe.tsuniv.edu

    2014-06-01

    AP And is a well-detached F5 eclipsing binary star for which only a very limited amount of information was available before this publication. We have obtained very extensive measurements of the light curve (19,097 differential V magnitude observations) and a radial velocity curve (83 spectroscopic observations) which allow us to fit orbits and determine the absolute properties of the components very accurately: masses of 1.277 ± 0.004 and 1.251 ± 0.004 M {sub ☉}, radii of 1.233 ± 0.006 and 1.1953 ± 0.005 R {sub ☉}, and temperatures of 6565 ± 150 K and 6495 ± 150 K. The distance to the system is about 400 ± 30 pc. Comparison with the theoretical properties of the stellar evolutionary models of the Yonsei-Yale series of Yi et al. shows good agreement between the observations and the theory at an age of about 500 Myr and a slightly sub-solar metallicity.

  16. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    NASA Astrophysics Data System (ADS)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values of between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress-drop and the Wood-Anderson filter. For instance, it is shown that the stress-drop controls the saturation of the ML scale, with low-stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
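
    A sketch of fitting a polynomial ML-to-MW scaling relation by least-absolute-deviation (L1) minimization, in the spirit of the relation described above. The magnitude pairs are made up, and the paper's combined bootstrap and orthogonal L1 procedure is not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical (ML, MW) pairs (placeholder values, not the Swiss catalog)
        ml = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
        mw = np.array([1.9, 2.2, 2.6, 3.0, 3.4, 3.9, 4.4, 4.9])

        def l1_cost(coeffs):
            """Sum of absolute residuals for a quadratic MW(ML) model."""
            a, b, c = coeffs
            return np.sum(np.abs(mw - (a * ml**2 + b * ml + c)))

        result = minimize(l1_cost, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
        a, b, c = result.x
        print(f"MW ~ {a:.3f}*ML^2 + {b:.3f}*ML + {c:.3f}")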

  17. Design method of planar vibration system for specified ratio of energy peaks

    NASA Astrophysics Data System (ADS)

    Kim, Jun Woo; Lee, Sungon; Choi, Yong Je

    2015-05-01

    The magnitudes of the resonant peaks should be considered in the design stage of any bandwidth-relevant applications to widen the working bandwidth. This paper presents a new design method for a planar vibration system that satisfies any desired ratio of peak magnitudes at target resonant frequencies. An important geometric property of a modal triangle formed from three vibration centers representing vibration modes is found. Utilizing the property, the analytical expressions for the vibration energy generated by external forces are derived in terms of the geometrical data of vibration centers. When any desired ratio of peak magnitudes is specified, the locations of the vibration centers are found from their analytical relations. The corresponding stiffness matrix can be determined and realized accordingly. The systematic design methods for direct- and base-excitation systems are developed, and one numerical example is presented to illustrate the proposed design method.

  18. On the trail of double peak hydrographs

    NASA Astrophysics Data System (ADS)

    Martínez-Carreras, Núria; Hissler, Christophe; Gourdol, Laurent; Klaus, Julian; Juilleret, Jérôme; François Iffly, Jean; McDonnell, Jeffrey J.; Pfister, Laurent

    2016-04-01

    A double peak hydrograph features two peaks as a response to a unique rainfall pulse. The first peak occurs at the same time or shortly after the precipitation has started and it corresponds to a fast catchment response to precipitation. The delayed peak normally starts during the recession of the first peak, when the precipitation has already ceased. Double peak hydrographs may occur for various reasons. They can occur (i) in large catchments when lag times in tributary responses are large, (ii) in urban catchments where the first peak is often caused by direct surface runoff on impervious land cover, and the delayed peak to slower subsurface flow, and (iii) in non-urban catchments, where the first and the delayed discharge peaks are explained by different runoff mechanisms (e.g. overland flow, subsurface flow and/or deep groundwater flow) that have different response times. Here we focus on the third case, as a formal description of the different hydrological mechanisms explaining these complex hydrological dynamics across catchments with diverse physiographic characteristics is still needed. Based on a review of studies documenting double peak events we have established a formal classification of catchments presenting double peak events based on their regolith structure (geological substratum and/or its weathered products). We describe the different hydrological mechanisms that trigger these complex hydrological dynamics across each catchment type. We then use hydrometric time series of precipitation, runoff, soil moisture and groundwater levels collected in the Weierbach (0.46 km2) headwater catchment (Luxembourg) to better understand double peak hydrograph generation. Specifically, we aim to find out (1) if the generation of a double peak hydrograph is a threshold process, (2) if the hysteretic relationships between storage and discharge are consistent during single and double peak hydrographs, and (3) if different functional landscape units (the hillslopes

  19. Use of Absolute and Comparative Performance Feedback in Absolute and Comparative Judgments and Decisions

    ERIC Educational Resources Information Center

    Moore, Don A.; Klein, William M. P.

    2008-01-01

    Which matters more--beliefs about absolute ability or ability relative to others? This study set out to compare the effects of such beliefs on satisfaction with performance, self-evaluations, and bets on future performance. In Experiment 1, undergraduate participants were told they had answered 20% correct, 80% correct, or were not given their…

  20. ABSOLUTE PROPERTIES OF THE ECLIPSING BINARY STAR BF DRACONIS

    SciTech Connect

    Sandberg Lacy, Claud H.; Torres, Guillermo; Fekel, Francis C.; Sabby, Jeffrey A.; Claret, Antonio E-mail: gtorres@cfa.harvard.edu E-mail: jsabby@siue.edu

    2012-06-15

    BF Dra is now known to be an eccentric double-lined F6+F6 binary star with relatively deep (0.7 mag) partial eclipses. Previous studies of the system are improved with 7494 differential photometric observations from the URSA WebScope and 9700 from the NFO WebScope, 106 high-resolution spectroscopic observations from the Tennessee State University 2 m automatic spectroscopic telescope and the 1 m coude-feed spectrometer at Kitt Peak National Observatory, and 31 accurate radial velocities from the CfA. Very accurate (better than 0.6%) masses and radii are determined from analysis of the two new light curves and four radial velocity curves. Theoretical models match the absolute properties of the stars at an age of about 2.72 Gyr and [Fe/H] = -0.17, and tidal theory correctly confirms that the orbit should still be eccentric. Our observations of BF Dra constrain the convective core overshooting parameter to be larger than about 0.13 H{sub p}. We find, however, that standard tidal theory is unable to match the observed slow rotation rates of the components' surface layers.

  1. The Implications for Higher-Accuracy Absolute Measurements for NGS and its GRAV-D Project

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Winester, D.; Roman, D. R.; Eckl, M. C.; Smith, D. A.

    2013-12-01

    absolute gravimetry, we expect that GRAV-D may be affected in a number of ways. 1) Areas requiring re-measurement as a result of poor quality data or temporal change could be measured with such a new meter. With a meter capable of field measurement with observation times that are very short, surveys previously conducted only with the relative meters could be performed with the absolute meter with no loss of time and a significant increase in accuracy. 2) Regions of rapid change due to hydrological change associated with aquifers could be measured and re-measured rather quickly. Such accuracy may provide more accurate snapshots of the aquifers over time. 3) NGS conducts absolute gravity comparisons at its Table Mountain facility for validating the performance of absolute meters through their co-located operation at gravity piers. An increase in accuracy of an order of magnitude may change the entire nature of absolute meter performance evaluation.

  2. Sources and magnitude of bias associated with determination of polychlorinated biphenyls in environmental samples

    USGS Publications Warehouse

    Eganhouse, R.P.; Gossett, R.W.

    1991-01-01

    Recently compiled data on the composition of commercial Aroclor mixtures and ECD (electron capture detector) response factors for all 209 PCB congeners are used to develop estimates of the bias associated with determination of polychlorinated biphenyls. During quantitation of multicomponent peaks by congener-specific procedures, error is introduced because of variable ECD response to isomeric PCBs. Under worst-case conditions, the magnitude of this bias can range from less than 2% to as much as 600%. Multicomponent peaks containing the more highly and the lower chlorinated congeners experience the most bias. For this reason, quantitation of ΣPCB in Aroclor mixtures dominated by these species (e.g., 1016) is potentially subject to the greatest error. Comparison of response-factor data for ECDs from two laboratories shows that the sign and magnitude of calibration bias for a given multicomponent peak is variable and depends, in part, on the response characteristics of individual detectors. By using the most abundant congener (of each multicomponent peak) for purposes of calibration, one can reduce the maximum bias to less than 55%. Moreover, due to cancellation of errors, the bias resulting from summation of all peak concentrations (i.e., ΣPCB) becomes vanishingly small. By contrast, bias incurred by the traditional Aroclor quantitation procedure can be large (greater than 200%) and highly variable in sign and magnitude. In this case, bias originates not only from the incomplete chromatographic resolution of PCB congeners but also from the overlapping patterns of the Aroclor mixtures. Together these results illustrate the advantages of the congener-specific method of PCB quantitation over the traditional Aroclor method and the extreme difficulty of estimating bias incurred by the latter procedure on a post hoc basis.

  3. Quantifying Surface Processes and Stratigraphic Characteristics Resulting from Large Magnitude High Frequency and Small Magnitude Low Frequency Relative Sea Level Cycles: An Experimental Study

    NASA Astrophysics Data System (ADS)

    Yu, L.; Li, Q.; Esposito, C. R.; Straub, K. M.

    2015-12-01

    Relative Sea-Level (RSL) change, which is a primary control on sequence stratigraphic architecture, has a close relationship with climate change. In order to explore the influence of RSL change on the stratigraphic record, we conducted three physical experiments that shared identical boundary conditions but differed in their RSL characteristics. Specifically, the three experiments differed with respect to two non-dimensional numbers that compare the magnitude and periodicity of RSL cycles to the spatial and temporal scales of autogenic processes, respectively. The magnitude of RSL change is quantified with H*, defined as the peak-to-trough difference in RSL during a cycle divided by a system's maximum autogenic channel depth. The periodicity of RSL change is quantified with T*, defined as the period of RSL cycles divided by the time required to deposit one channel depth of sediment, on average, everywhere in the basin. Experiments performed included: 1) a control experiment lacking RSL cycles, used to define a system's autogenics, 2) a high magnitude, high frequency RSL cycles experiment, and 3) a low magnitude, low frequency cycles experiment. We observe that the high magnitude, high frequency experiment resulted in the thickest channel bodies with the lowest width-to-depth ratios, while the low magnitude, low frequency experiment preserved a record of gradual shoreline transgression and regression, producing facies that are the most continuous in space. We plan to integrate our experimental results with Delft3D numerical models that sample similar non-dimensional characteristics of RSL cycles. Quantifying the influence of RSL change, normalized as a function of the spatial and temporal scales of autogenic processes, will strengthen our ability to predict stratigraphic architecture and invert stratigraphy for paleo-environmental conditions.
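
    In symbols (our notation, following the verbal definitions above):

        H^{*} = \frac{\Delta \mathrm{RSL}_{\text{peak-to-trough}}}{h_{c}}, \qquad
        T^{*} = \frac{T_{\mathrm{RSL}}}{T_{c}},

    where h_c is the maximum autogenic channel depth, T_RSL is the period of the RSL cycle, and T_c is the time required to deposit one channel depth of sediment, on average, everywhere in the basin.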

  4. Absolute calibration of ultraviolet filter photometry

    NASA Technical Reports Server (NTRS)

    Bless, R. C.; Fairchild, T.; Code, A. D.

    1972-01-01

    The essential features of the calibration procedure can be divided into three parts. First, the shape of the bandpass of each photometer was determined by measuring the transmissions of the individual optical components and also by measuring the response of the photometer as a whole. Secondly, each photometer was placed in the essentially-collimated synchrotron radiation bundle maintained at a constant intensity level, and the output signal was determined from about 100 points on the objective. Finally, two or three points on the objective were illuminated by synchrotron radiation at several different intensity levels covering the dynamic range of the photometers. The output signals were placed on an absolute basis by the electron counting technique described earlier.

  5. MAGSAT: Vector magnetometer absolute sensor alignment determination

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1981-01-01

    A procedure is described for accurately determining the absolute alignment of the magnetic axes of a triaxial magnetometer sensor with respect to an external, fixed, reference coordinate system. The method does not require that the magnetic field vector orientation, as generated by a triaxial calibration coil system, be known to better than a few degrees from its true position, and minimizes the number of positions through which a sensor assembly must be rotated to obtain a solution. Computer simulations show that accuracies of better than 0.4 seconds of arc can be achieved under typical test conditions associated with existing magnetic test facilities. The basic approach is similar in nature to that presented by McPherron and Snare (1978) except that only three sensor positions are required and the system of equations to be solved is considerably simplified. Applications of the method to the case of the MAGSAT Vector Magnetometer are presented and the problems encountered discussed.

  6. Absolute geostrophic currents in global tropical oceans

    NASA Astrophysics Data System (ADS)

    Yang, Lina; Yuan, Dongliang

    2016-03-01

    A set of absolute geostrophic current (AGC) data for the period January 2004 to December 2012 are calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real time currents, and moored current-meter measurements at 10-m depth, based on which the classical Sverdrup circulation theory is evaluated. Calculations have shown that errors of wind stress calculation, AGC transport, and depth ranges of vertical integration cannot explain non-Sverdrup transport, which is mainly in the subtropical western ocean basins and equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.

  7. Absolute Measurement of Electron Cloud Density

    SciTech Connect

    Covo, M K; Molvik, A W; Cohen, R H; Friedman, A; Seidl, P A; Logan, G; Bieniosek, F; Baca, D; Vay, J; Orlando, E; Vujic, J L

    2007-06-21

    Beam interaction with background gas and walls produces ubiquitous clouds of stray electrons that frequently limit the performance of particle accelerators and storage rings. Counterintuitively, we obtained the electron cloud accumulation by measuring the expelled ions that originate from the beam-background gas interaction, rather than by measuring electrons that reach the walls. The kinetic ion energy measured with a retarding field analyzer (RFA) maps the depressed beam space-charge potential and provides the dynamic electron cloud density. Clearing electrode current measurements give the static electron cloud background that complements and corroborates the RFA measurements, providing an absolute measurement of electron cloud density during a 5 µs beam pulse in a drift region of the magnetic transport section of the High-Current Experiment (HCX) at LBNL.

  8. Absolute instability of a viscous hollow jet

    NASA Astrophysics Data System (ADS)

    Gañán-Calvo, Alfonso M.

    2007-02-01

    An investigation of the spatiotemporal stability of hollow jets in unbounded coflowing liquids, using a general dispersion relation previously derived, shows them to be absolutely unstable for all physical values of the Reynolds and Weber numbers. The roots of the symmetry breakdown with respect to the liquid jet case, and the validity of asymptotic models are here studied in detail. Asymptotic analyses for low and high Reynolds numbers are provided, showing that old and well-established limiting dispersion relations [J. W. S. Rayleigh, The Theory of Sound (Dover, New York, 1945); S. Chandrasekhar, Hydrodynamic and Hydromagnetic Stability (Dover, New York, 1961)] should be used with caution. In the creeping flow limit, the analysis shows that, if the hollow jet is filled with any finite density and viscosity fluid, a steady jet could be made arbitrarily small (compatible with the continuum hypothesis) if the coflowing liquid moves faster than a critical velocity.

  9. Stitching interferometry: recent results and absolute calibration

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2004-02-01

    Stitching Interferometry is a method of analysing large optical components using a standard "small" interferometer. This result is obtained by taking multiple overlapping images of the large component, and numerically "stitching" these sub-apertures together. We have already reported the industrial use of our Stitching Interferometry systems (previous SPIE symposia), but experimental results had been lacking because this technique is still new, and users needed to get accustomed to it before producing reliable measurements. We now have more results. We will report user comments and show new, unpublished results. We will discuss sources of error, and show how some of these can be reduced to arbitrarily small values. These will be discussed in some detail. We conclude with a few graphical examples of absolute measurements performed by us.

  10. Swarm's Absolute Scalar Magnetometer metrological performances

    NASA Astrophysics Data System (ADS)

    Leger, J.; Fratter, I.; Bertrand, F.; Jager, T.; Morales, S.

    2012-12-01

    The Absolute Scalar Magnetometer (ASM) has been developed for the ESA Earth Observation Swarm mission, planned for launch in November 2012. Like its Overhauser magnetometer forerunners flown on the Oersted and CHAMP satellites, it will deliver high-resolution scalar measurements for the in-flight calibration of the Vector Field Magnetometer manufactured by the Danish Technical University. The latest results of the ground tests carried out to fully characterize all parameters that may affect its accuracy, both at instrument and satellite level, will be presented. In addition to its baseline function, the ASM can be operated either at a much higher sampling rate (burst mode at 250 Hz) or in a dual mode where it also delivers vector field measurements as a by-product. The calibration procedure and the relevant vector performances will be discussed.

  11. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover we show that, for a majority of functions, access to general nonsignaling resources boosts success probability two times in comparison to classical ones for a number of large enough outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  12. Floods in Georgia, magnitude and frequency : techniques for estimating the magnitude and frequency of floods in Georgia with compilation of flood data through 1974

    USGS Publications Warehouse

    Price, McGlone

    1979-01-01

    Regional relations are defined for estimating the magnitude and frequency of floods having recurrence intervals of 2, 5, 10, 25, 50, and 100 years on streams with natural flow in Georgia. Multiple-regression analyses were used to define the relationship between the flood-discharge frequency of annual peak discharges for streams draining 0.1 to 1,000 square miles and 10 climatological and physical basin characteristics. The analyses indicate that the drainage area of the basin is the most significant characteristic. Five regions having distinct flood-discharge frequency characteristics are delineated. Individual relations of flood magnitude and frequency to drainage area are given for parts of the main stems of the major rivers without significant regulation draining more than 1,000 square miles. (Kosco-USGS)

  13. Annual peak discharges from small drainage areas in Montana through September 1976

    USGS Publications Warehouse

    Johnson, M.V.; Omang, R.J.; Hull, J.A.

    1977-01-01

    Annual peak discharge from small drainage areas is tabulated for 336 sites in Montana. The 1976 additions included data collected at 206 sites. The program, which investigates the magnitude and frequency of floods from small drainage areas in Montana, was begun on July 1, 1955. Originally, 45 crest-stage gaging stations were established. The purpose of the program is to collect sufficient peak-flow data that, through analysis, could provide methods for estimating the magnitude and frequency of floods at any point in Montana. The ultimate objective is to provide methods for estimating the 100-year flood with the reliability needed for road design. (Woodard-USGS)

  14. Climatological diurnal variation of negative CG lightning peak current over the continental United States

    NASA Astrophysics Data System (ADS)

    Chronis, T.; Cummins, K.; Said, R.; Koshak, W.; McCaul, E.; Williams, E. R.; Stano, G. T.; Grant, M.

    2015-01-01

    This study provides an 11-year climatology of the diurnal variability of the cloud-to-ground (CG) lightning peak current. The local diurnal variation of peak current for negative-polarity CG (-CG) flashes exhibits a highly consistent behavior, with increasing magnitudes from the late night to early morning hours and decreasing magnitudes during the afternoon. Over most regions, an inverse relationship exists between the -CG peak current and the corresponding -CG activity, although specific details can depend on region and time of day. Overall, the diurnal variation of the -CG peak current appears to reflect fundamental differences between morning and afternoon storms, but additional studies are required to clearly identify the primary cause(s).

  15. Influence of Time and Space Correlations on Earthquake Magnitude

    SciTech Connect

    Lippiello, E.; Arcangelis, L. de; Godano, C.

    2008-01-25

    A crucial point in the debate on the feasibility of earthquake predictions is the dependence of an earthquake's magnitude on past seismicity. Indeed, while clustering in time and space is widely accepted, much more questionable is the existence of magnitude correlations. The standard approach generally assumes that magnitudes are independent and therefore in principle unpredictable. Here we show the existence of clustering in magnitude: earthquakes occur with higher probability close in time, space, and magnitude to previous events. More precisely, the next earthquake tends to have a magnitude similar to, but smaller than, that of the previous one. A dynamical scaling relation between magnitude, time, and space distances reproduces the complex pattern of magnitude, spatial, and temporal correlations observed in experimental seismic catalogs.

  16. Exploring the relationship between the magnitudes of seismic events

    NASA Astrophysics Data System (ADS)

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-02-01

    The distribution of the magnitudes of seismic events is generally assumed to be independent of past seismicity. However, by considering events in causal relation, for example, mother-daughter, it seems natural to assume that the magnitude of a daughter event is conditionally dependent on that of the corresponding mother event. In order to find experimental evidence supporting this hypothesis, we analyze different catalogs, both real and simulated, in two different ways. From each catalog, we obtain the distribution of the magnitude of the triggered events by kernel density estimation. The results obtained show that the distribution density of the magnitude of the triggered events varies with the magnitude of their corresponding mother events. As intuition suggests, an increase in the magnitude of the mother events induces an increase in the probability of having "high" values of the magnitude of the triggered events. In addition, we see a statistically significant increasing linear dependence of the magnitude means.

  17. Functional shape of the earthquake frequency-magnitude distribution and completeness magnitude

    NASA Astrophysics Data System (ADS)

    Mignan, A.

    2012-08-01

    We investigated the functional shape of the earthquake frequency-magnitude distribution (FMD) to identify its dependence on the completeness magnitude Mc. The FMD takes the form N(m) ∝ exp(-βm)q(m) where N(m) is the event number, m the magnitude, exp(-βm) the Gutenberg-Richter law and q(m) a detection function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature of bulk FMDs. Recent results however suggest that this gradual curvature is due to Mc heterogeneities, meaning that the functional shape of the elemental FMD has yet to be described. We propose a detection function of the form q(m) = exp(κ(m - Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models are compared in earthquake catalogs from Southern California and Nevada and in synthetic catalogs. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental angular FMDs leads to the gradually curved bulk FMD. We propose an FMD shape ontology consisting of 5 categories depending on the Mc spatial distribution, from Mc constant to Mc highly heterogeneous: (I) Angular FMD, (II) Intermediary FMD, (III) Intermediary FMD with multiple maxima, (IV) Gradually curved FMD and (V) Gradually curved FMD with multiple maxima. We also demonstrate that the gradually curved FMD model overestimates Mc. This study provides new insights into earthquake detectability properties by using seismicity as a proxy and the means to accurately estimate Mc in any given volume.
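
    A short numerical sketch of the angular FMD model defined above, with illustrative parameter values only (Python):

        import numpy as np

        def angular_fmd(m, beta, kappa, mc, n0=1.0):
            """Elemental FMD N(m) = n0 * exp(-beta*m) * q(m), with the angular detection
            function q(m) = exp(kappa*(m - Mc)) for m < Mc and q(m) = 1 for m >= Mc."""
            q = np.where(m < mc, np.exp(kappa * (m - mc)), 1.0)
            return n0 * np.exp(-beta * m) * q

        # Illustrative parameters: a b-value of 1 corresponds to beta = ln(10)
        m = np.linspace(0.0, 5.0, 101)
        n = angular_fmd(m, beta=np.log(10.0), kappa=3.0, mc=1.8)
        print("magnitude at which N(m) peaks:", m[np.argmax(n)])   # the angular model peaks at Mc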

  18. New identification method for Hammerstein models based on approximate least absolute deviation

    NASA Astrophysics Data System (ADS)

    Xu, Bao-Chang; Zhang, Ying-Dan

    2016-07-01

    Disorder and peak noises or large disturbances can degrade the identification of Hammerstein non-linear models when the least-squares (LS) method is used. The least absolute deviation technique can be used to resolve this problem; however, the absolute value function does not provide the differentiability required by most algorithms. To improve robustness and resolve the non-differentiability problem, an approximate least absolute deviation (ALAD) objective function is established by introducing a deterministic function that exhibits the characteristics of the absolute value under certain conditions. A new identification method for Hammerstein models based on ALAD is thus developed in this paper. The basic idea of this method is to apply stochastic approximation theory in the process of deriving the recursive equations. After identifying the parameter matrix of the Hammerstein model via the new algorithm, the product terms in the matrix are separated by calculating the average values. Finally, algorithm convergence is proven by applying the ordinary differential equation method. The proposed algorithm is more robust than LS methods, particularly when abnormal points exist in the measured data. Furthermore, the proposed algorithm is easier to apply and converges faster. The simulation results demonstrate the efficacy of the proposed algorithm.
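
    The key step (replacing the non-differentiable |e| with a smooth deterministic surrogate) can be illustrated with one common choice, sqrt(e^2 + delta^2), which behaves like |e| away from zero but is differentiable everywhere; the specific function used in the paper may differ. The model, data, and outlier in this Python sketch are hypothetical.

        import numpy as np

        def approx_abs(e, delta=1e-3):
            """Smooth surrogate for |e|: differentiable everywhere, ~|e| when |e| >> delta."""
            return np.sqrt(e**2 + delta**2)

        def alad_cost(theta, phi, y, delta=1e-3):
            """Approximate least-absolute-deviation cost for a linear-in-parameters model y ~ phi @ theta."""
            return np.sum(approx_abs(y - phi @ theta, delta))

        # Tiny illustration: one large "peak noise" sample barely moves the ALAD cost,
        # whereas a squared-error cost is dominated by it.
        rng = np.random.default_rng(2)
        phi = rng.standard_normal((50, 2))
        theta_true = np.array([1.5, -0.7])
        y = phi @ theta_true + 0.01 * rng.standard_normal(50)
        y[10] += 20.0                                   # outlier
        print("ALAD cost at true parameters:", alad_cost(theta_true, phi, y))
        print("LS cost at true parameters:  ", np.sum((y - phi @ theta_true) ** 2))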

  19. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  20. Passive radio frequency peak power multiplier

    DOEpatents

    Farkas, Zoltan D.; Wilson, Perry B.

    1977-01-01

    Peak power multiplication of a radio frequency source by simultaneous charging of two high-Q resonant microwave cavities by applying the source output through a directional coupler to the cavities and then reversing the phase of the source power to the coupler, thereby permitting the power in the cavities to simultaneously discharge through the coupler to the load in combination with power from the source to apply a peak power to the load that is a multiplication of the source peak power.

  1. Estimation of magnitude and frequency of floods for streams in Puerto Rico : new empirical models

    USGS Publications Warehouse

    Ramos-Gines, Orlando

    1999-01-01

    Flood-peak discharges and frequencies are presented for 57 gaged sites in Puerto Rico for recurrence intervals ranging from 2 to 500 years. The log-Pearson Type III distribution, the methodology recommended by the United States Interagency Committee on Water Data, was used to determine the magnitude and frequency of floods at the gaged sites having 10 to 43 years of record. A technique is presented for estimating flood-peak discharges at recurrence intervals ranging from 2 to 500 years for unregulated streams in Puerto Rico with contributing drainage areas ranging from 0.83 to 208 square miles. Loglinear multiple regression analyses, using climatic and basin characteristics and peak-discharge data from the 57 gaged sites, were used to construct regression equations to transfer the magnitude and frequency information from gaged to ungaged sites. The equations have contributing drainage area, depth-to-rock, and mean annual rainfall as the basin and climatic characteristics in estimating flood peak discharges. Examples are given to show a step-by-step procedure in calculating a 100-year flood at a gaged site, an ungaged site, a site near a gaged location, and a site between two gaged sites.
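
    A schematic of the log-linear regression used to transfer flood-frequency information to ungaged sites. The basin characteristics, discharges, and resulting coefficients in this Python sketch are hypothetical, not the published Puerto Rico equations.

        import numpy as np

        # Hypothetical basin characteristics and 100-year peak discharges at gaged sites
        drainage_area = np.array([1.2, 5.5, 12.0, 48.0, 95.0, 208.0])       # square miles
        mean_rainfall = np.array([60.0, 70.0, 75.0, 80.0, 85.0, 90.0])      # inches per year
        q100 = np.array([900.0, 3200.0, 6100.0, 18000.0, 30000.0, 52000.0]) # cubic feet per second

        # Fit log10(Q100) = b0 + b1*log10(DA) + b2*log10(rainfall) by ordinary least squares
        X = np.column_stack([np.ones_like(q100),
                             np.log10(drainage_area),
                             np.log10(mean_rainfall)])
        coef, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)

        # Apply the equation at an ungaged site (DA = 20 mi^2, mean annual rainfall = 78 in)
        x_new = np.array([1.0, np.log10(20.0), np.log10(78.0)])
        print("estimated Q100 =", 10 ** (x_new @ coef), "ft^3/s")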

  2. Human intravenous pharmacokinetics and absolute oral bioavailability of cefatrizine.

    PubMed Central

    Pfeffer, M; Gaver, R C; Ximenez, J

    1983-01-01

    Cefatrizine was administered intravenously and orally at dose levels of 250, 500, and 1,000 mg to normal male volunteers in a crossover study. Intravenous pharmacokinetics were dose linear over this range; mean peak plasma concentrations at the end of 30-min infusions were, respectively, 18, 37, and 75 micrograms/ml, total body clearance was 218 ml/min per 1.73 m2, renal clearance was 176 ml/min per 1.73 m2, and mean retention time in the body was 1.11 h. Cumulative urinary excretion of intact cefatrizine was 80% of the dose, and half-lives ranged from 1 to 1.4 h. Steady-state volume of distribution was 0.22 liters/kg. On oral administration, the absolute bioavailabilities of cefatrizine were 75% at 250 and 500 mg and 50% at 1,000 mg. The mean peak plasma concentrations and peak times were, respectively, 4.9, 8.6, and 10.2 micrograms/ml at 1.4, 1.6, and 2.0 h, mean residence times were 2.4, 2.6, and 3.1 h, and mean absorption times were 1.3, 1.6, and 1.9 h. Oral renal clearance and half-life values corresponded well to the intravenous values. Cumulative urinary excretion of intact cefatrizine (as percentage of dose) was 60 at 250 mg, 56 at 500 mg, and 42 at 1,000 mg. It is hypothesized that the lack of oral dose linearity between the 500- and 1,000-mg doses is due to a component of cefatrizine absorption by a saturable transport process. Relative absorption at the high dose would be sufficiently slow that an absorption "window" would be passed before maximum bioavailability could be attained. It is not expected that the observed bioavailability decrease at doses exceeding 500 mg will have any therapeutic significance, since clinical studies are establishing efficacy for a recommended unit dosage regimen of 500 mg. PMID:6660858

  4. Magnitude-range brightness variations of overactive K giants

    NASA Astrophysics Data System (ADS)

    Oláh, K.; Moór, A.; Kővári, Zs.; Granzer, T.; Strassmeier, K. G.; Kriskovics, L.; Vida, K.

    2014-12-01

    Context. Decades-long, phase-resolved photometry of overactive spotted cool stars has revealed that their long-term peak-to-peak light variations can be as large as one magnitude. Such brightness variations are too large to be solely explained by rotational modulation and/or a cyclic, or pseudo-cyclic, waxing and waning of surface spots and faculae as we see in the Sun. Aims: We study three representative, overactive spotted K giants (IL Hya, XX Tri, and DM UMa) known to exhibit V-band light variations between 0.65 and 1.05 mag. Our aim is to find the origin of their large brightness variation. Methods: We employ long-term phase-resolved multicolor photometry, mostly from automatic telescopes, covering 42 yr for IL Hya, 28 yr for XX Tri, and 34 yr for DM UMa. For one target, IL Hya, we present a new Doppler image from NSO data taken in late 1996. Effective temperatures for our targets are determined from all well-sampled observing epochs and are based on a V - I_C color-index calibration. Results: The effective temperature change between the extrema of the rotational modulation for IL Hya and XX Tri is in the range 50-200 K. The bolometric flux during maximum of the rotational modulation, i.e., the least spotted states, varied by up to 39% in IL Hya and up to 54% in XX Tri over the course of our observations. We emphasize that for IL Hya it is just about half of the total luminosity variation that can be explained by the photospheric temperature (spots/faculae) changes, while for XX Tri it is even about one third. The long-term, 0.6 mag V-band variation of DM UMa is more difficult to explain because little or no B - V color index change is observed on the same timescale. Placing the three stars with their light and color variations into H-R diagrams, we find that their overall luminosities are generally too low compared to predictions from current evolutionary tracks. Conclusions: A change in the stellar radius due to strong and variable magnetic fields during activity

  5. Predictors of VO2Peak in children age 6- to 7-years-old.

    PubMed

    Dencker, Magnus; Hermansen, Bianca; Bugge, Anna; Froberg, Karsten; Andersen, Lars B

    2011-02-01

    This study investigated the predictors of aerobic fitness (VO2PEAK) in young children on a population basis. Participants were 436 children (229 boys and 207 girls) aged 6.7 ± 0.4 yrs. VO2PEAK was measured during a maximal treadmill exercise test. Physical activity was assessed by accelerometers. Total body fat and total fat free mass were estimated from skinfold measurements. Regression analyses indicated that significant predictors for VO2PEAK per kilogram body mass were total body fat, maximal heart rate, sex, and age. Physical activity explained an additional 4-7%. Further analyses showed the main contributing factors for absolute values of VO2PEAK were fat free mass, maximal heart rate, sex, and age. Physical activity explained an additional 3-6%. PMID:21467593

  6. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  7. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  8. Numerical Magnitude Processing in Children with Mild Intellectual Disabilities

    ERIC Educational Resources Information Center

    Brankaer, Carmen; Ghesquiere, Pol; De Smedt, Bert

    2011-01-01

    The present study investigated numerical magnitude processing in children with mild intellectual disabilities (MID) and examined whether these children have difficulties in the ability to represent numerical magnitudes and/or difficulties in the ability to access numerical magnitudes from formal symbols. We compared the performance of 26 children…

  9. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  10. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  11. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  12. Symbolic Magnitude Modulates Perceptual Strength in Binocular Rivalry

    ERIC Educational Resources Information Center

    Paffen, Chris L. E.; Plukaard, Sarah; Kanai, Ryota

    2011-01-01

    Basic aspects of magnitude (such as luminance contrast) are directly represented by sensory representations in early visual areas. However, it is unclear how symbolic magnitudes (such as Arabic numerals) are represented in the brain. Here we show that symbolic magnitude affects binocular rivalry: perceptual dominance of numbers and objects of…

  13. Sign-And-Magnitude Up/Down Counter

    NASA Technical Reports Server (NTRS)

    Cole, Steven W.

    1991-01-01

    Magnitude-and-sign counter includes conventional up/down counter for magnitude part and special additional circuitry for sign part. Negative numbers indicated more directly. Counter implemented by programming erasable programmable logic device (EPLD) or programmable logic array (PLA). Used in place of conventional up/down counter to provide sign and magnitude values directly to other circuits.

  14. Binocular disparity magnitude affects perceived depth magnitude despite inversion of depth order.

    PubMed

    Matthews, Harold; Hill, Harold; Palmisano, Stephen

    2011-01-01

    The hollow-face illusion involves a misperception of depth order: our perception follows our top-down knowledge that faces are convex, even though bottom-up depth information reflects the actual concave surface structure. While pictorial cues can be ambiguous, stereopsis should unambiguously indicate the actual depth order. We used computer-generated stereo images to investigate how, if at all, the sign and magnitude of binocular disparities affect the perceived depth of the illusory convex face. In experiment 1 participants adjusted the disparity of a convex comparison face until it matched a reference face. The reference face was either convex or hollow and had binocular disparities consistent with an average face or had disparities exaggerated, consistent with a face stretched in depth. We observed that apparent depth increased with disparity magnitude, even when the hollow faces were seen as convex (ie when perceived depth order was inconsistent with disparity sign). As expected, concave faces appeared flatter than convex faces, suggesting that disparity sign also affects perceived depth. In experiment 2, participants were presented with pairs of real and illusory convex faces. In each case, their task was to judge which of the two stimuli appeared to have the greater depth. Hollow faces with exaggerated disparities were again perceived as deeper. PMID:22132512

  15. Development of an Empirical Local Magnitude Formula for Northern Oklahoma

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Karimi, S.; Moores, A. O.

    2015-12-01

    In this paper we focus on determining a local magnitude formula for northern Oklahoma that is unbiased with distance by empirically constraining the attenuation properties within the region of interest based on the amplitude of observed seismograms. For regional networks detecting events over several hundred kilometres, distance correction terms play an important role in determining the magnitude of an event. Standard distance correction terms such as Hutton and Boore (1987) may have a significant bias with distance if applied in a region with different attenuation properties, resulting in an incorrect magnitude. We present data from a regional network of broadband seismometers installed in bedrock in northern Oklahoma. Events with magnitudes in the range 2.0 to 4.5, distributed evenly across the network, are considered. We find that existing models show a bias with respect to hypocentral distance. Observed amplitude measurements demonstrate that there is a significant Moho bounce effect that mandates the use of a trilinear attenuation model in order to avoid bias in the distance correction terms. We present two different approaches to local magnitude calibration. The first maintains the classic definition of local magnitude as proposed by Richter. The second method calibrates local magnitude so that it agrees with moment magnitude where a regional moment tensor can be computed. To this end, regional moment tensor solutions and moment magnitudes are computed for events with magnitude larger than 3.5 to allow calibration of local magnitude to moment magnitude. For both methods the new formula results in magnitudes systematically lower than previous values computed with Eaton's (1992) model. We compare the resulting magnitudes and discuss the benefits and drawbacks of each method. Our results highlight the importance of correct calibration of the distance correction terms for accurate local magnitude assessment in regional networks.
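
    As an illustration of the distance-corrected local magnitude discussed above, the sketch below evaluates a Richter-style ML with the Hutton and Boore (1987) -log A0 term mentioned in the abstract. The coefficients are the published southern California values and serve only as placeholders, not the calibrated Oklahoma formula.

    import numpy as np

    def local_magnitude(amp_mm, r_km):
        """Richter-style ML from peak Wood-Anderson amplitude (mm) and
        hypocentral distance (km) using the Hutton-Boore (1987) -log A0 term.
        Placeholder coefficients; a regional calibration would replace them."""
        log_a0 = 1.110 * np.log10(r_km / 100.0) + 0.00189 * (r_km - 100.0) + 3.0
        return np.log10(amp_mm) + log_a0

    # By construction, a 1 mm amplitude at 100 km hypocentral distance gives ML = 3.0
    print(local_magnitude(1.0, 100.0))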

  16. Increasing urban flood magnitudes: Is it the drainage network?

    NASA Astrophysics Data System (ADS)

    Zahner, J. A.; Ogden, F. L.

    2004-05-01

    It has been long thought that increases in impervious area had the greatest impact on urban runoff volume and increased flood peaks. This theory was recently challenged by a study in Charlotte, North Carolina that concluded that the increase in storm drainage connectivity and hence hydraulic efficiency played the greatest role in increasing flood magnitudes. Prediction of hydrologic conditions in urbanized watersheds is increasingly turning to distributed-parameter models, as these methods are capable of describing land-surface modifications and heterogeneity. One major deficiency of many of these models, however, is their inability to explicitly handle storm drainage networks. The purpose of this research is to examine the effect of subsurface storm drainage networks on the formation of floods. Factors considered include changes in network topology as described by the drainage width function and the relative importance of improved drainage efficiency relative to imperviousness. The Gridded Surface/Subsurface Hydrologic Analysis (GSSHA), a square-grid (raster) hydrologic model that solves the equations of transport of mass, energy, and momentum, has been modified to include storm drainage capability. This has made it possible to more accurately model the complexity of an urban watershed. The SUPERLINK scheme was chosen to model flow in closed conduits. This method solves the St. Venant equations in one dimension and employs the widely used "Preissmann slot" to extend their applicability to storm sewer flow. The SUPERLINK scheme is significantly different from the Preissmann scheme in that it is able to robustly simulate traditional flows as well as moving shocks. The coupled GSSHA SUPERLINK model will be used to simulate the effect of a subsurface drainage network on an urbanizing catchment.

  17. The Boson peak in supercooled water.

    PubMed

    Kumar, Pradeep; Wikfeldt, K Thor; Schlesinger, Daniel; Pettersson, Lars G M; Stanley, H Eugene

    2013-01-01

    We perform extensive molecular dynamics simulations of the TIP4P/2005 model of water to investigate the origin of the Boson peak reported in experiments on supercooled water in nanoconfined pores, and in hydration water around proteins. We find that the onset of the Boson peak in supercooled bulk water coincides with the crossover to a predominantly low-density-like liquid below the Widom line TW. The frequency and onset temperature of the Boson peak in our simulations of bulk water agree well with the results from experiments on nanoconfined water. Our results suggest that the Boson peak in water is not an exclusive effect of confinement. We further find that, similar to other glass-forming liquids, the vibrational modes corresponding to the Boson peak are spatially extended and are related to transverse phonons found in the parent crystal, here ice Ih. PMID:23771033
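
    In simulations like these, the boson peak is usually read off the vibrational density of states, which can be estimated from the velocity autocorrelation function of the trajectory. The generic NumPy sketch below (hypothetical velocity array, not the analysis code used in the study) shows that estimate.

    import numpy as np

    def vibrational_dos(velocities, dt):
        """Vibrational density of states g(omega) from the Fourier transform of
        the velocity autocorrelation function. velocities has shape
        (n_frames, n_atoms, 3); dt is the frame spacing. Illustrative only."""
        n = velocities.shape[0]
        vacf = np.array([np.mean(np.sum(velocities[:n - lag] * velocities[lag:], axis=-1))
                         for lag in range(n // 2)])
        vacf /= vacf[0]
        g = np.abs(np.fft.rfft(vacf))
        omega = 2.0 * np.pi * np.fft.rfftfreq(vacf.size, d=dt)
        return omega, g

    # A boson peak shows up as a low-frequency excess in g(omega) / omega**2.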

  18. Weld peaking on heavy aluminum structures

    NASA Technical Reports Server (NTRS)

    Bayless, E.; Poorman, R.; Sexton, J.

    1978-01-01

    Weld peaking is usually undesirable in any welded structure. In heavy structures, the forces involved in the welding process become very large and difficult to handle. With the shuttle's solid rocket booster, the weld peaking resulted in two major problems: (1) reduced mechanical properties across the weld joint, and (2) fit-up difficulties in subsequent assembly operation. Peaking from the weld shrinkage forces can be fairly well predicted in simple structures; however, in welding complicated assemblies, the amount of peaking is unpredictable because of unknown stresses from machining and forming, stresses induced by the fixturing, and stresses from welds in other parts of the assembly. When excessive peaking is encountered, it can be corrected using the shrinkage forces resulting from the welding process. Application of these forces is discussed in this report.

  19. Multiscale peak alignment for chromatographic datasets.

    PubMed

    Zhang, Zhi-Min; Liang, Yi-Zeng; Lu, Hong-Mei; Tan, Bin-Bin; Xu, Xiao-Na; Ferro, Miguel

    2012-02-01

    Chromatography has been extensively applied in many fields, such as metabolomics and quality control of herbal medicines. Preprocessing, especially peak alignment, is a time-consuming task prior to the extraction of useful information from the datasets by chemometrics and statistics. To accurately and rapidly align shifted peaks among one-dimensional chromatograms, multiscale peak alignment (MSPA) is presented in this research. Peaks of each chromatogram were detected based on continuous wavelet transform (CWT) and aligned against a reference chromatogram from large to small scale gradually, and the aligning procedure is accelerated by fast Fourier transform cross correlation. The presented method was compared with two widely used alignment methods on chromatographic datasets, which demonstrates that MSPA can preserve the shapes of peaks and has an excellent speed during alignment. Furthermore, the MSPA method is robust and not sensitive to noise and baseline. MSPA was implemented and is available at http://code.google.com/p/mspa. PMID:22222564
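
    The acceleration step described above rests on FFT-based cross correlation between a chromatogram segment and its reference. The short NumPy sketch below (not the MSPA code itself) shows the idea for an integer shift.

    import numpy as np

    def fft_shift_estimate(reference, segment):
        """Estimate the integer shift that best aligns `segment` to `reference`
        via circular cross correlation computed with FFTs. Both inputs are
        1-D intensity arrays of equal length."""
        ref = reference - reference.mean()
        seg = segment - segment.mean()
        corr = np.fft.irfft(np.fft.rfft(ref) * np.conj(np.fft.rfft(seg)), n=ref.size)
        lag = int(np.argmax(corr))
        if lag > ref.size // 2:        # wrap large indices to negative lags
            lag -= ref.size
        return lag

    # np.roll(segment, lag) then moves the segment's peaks onto the reference.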

  20. A framework for accurate determination of the T2 distribution from multiple echo magnitude MRI images

    NASA Astrophysics Data System (ADS)

    Bai, Ruiliang; Koay, Cheng Guan; Hutchinson, Elizabeth; Basser, Peter J.

    2014-07-01

    Measurement of the T2 distribution in tissues provides biologically relevant information about normal and abnormal microstructure and organization. Typically, the T2 distribution is obtained by fitting the magnitude MR images acquired by a multi-echo MRI pulse sequence using an inverse Laplace transform (ILT) algorithm. It is well known that the ideal magnitude MR signal follows a Rician distribution. Unfortunately, studies attempting to establish the validity and efficacy of the ILT algorithm assume that these input signals are Gaussian distributed. Violation of the normality (or Gaussian) assumption introduces unexpected artifacts, including spurious cerebrospinal fluid (CSF)-like long T2 components; bias of the true geometric mean T2 values and in the relative fractions of various components; and blurring of nearby T2 peaks in the T2 distribution. Here we apply and extend our previously proposed magnitude signal transformation framework to map noisy Rician-distributed magnitude multi-echo MRI signals into Gaussian-distributed signals with high accuracy and precision. We then perform an ILT on the transformed data to obtain an accurate T2 distribution. Additionally, we demonstrate, by simulations and experiments, that this approach corrects the aforementioned artifacts in magnitude multi-echo MR images over a large range of signal-to-noise ratios.
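
    Once the magnitude signals have been mapped to approximately Gaussian noise, the ILT step amounts to inverting an exponential kernel under a non-negativity constraint. The sketch below is a generic Tikhonov-regularized NNLS version of that step, not the authors' implementation; the grid and regularization weight are illustrative.

    import numpy as np
    from scipy.optimize import nnls

    def t2_distribution(echo_times, signal, t2_grid, alpha=0.1):
        """Fit S(TE) ~ sum_j f_j * exp(-TE / T2_j) with f_j >= 0, using
        Tikhonov-regularized non-negative least squares."""
        K = np.exp(-np.outer(echo_times, 1.0 / t2_grid))        # decay kernel
        A = np.vstack([K, np.sqrt(alpha) * np.eye(t2_grid.size)])
        b = np.concatenate([signal, np.zeros(t2_grid.size)])
        f, _ = nnls(A, b)
        return f

    # Example grid: 100 logarithmically spaced T2 values from 10 ms to 2 s
    t2_grid = np.logspace(np.log10(0.01), np.log10(2.0), 100)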

  1. Multiscale peak detection in wavelet space.

    PubMed

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-01

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, which can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD) by taking full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra, which is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than MassSpecWavelet and MALDIquant methods. Superior results in Raman spectra suggest that MSPD seems to be a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at . PMID:26514234
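
    For comparison, SciPy ships a basic CWT ridge-line peak detector in the same spirit (without the ridge/valley/zero-crossing refinements of MSPD). A minimal usage example on a synthetic noisy signal:

    import numpy as np
    from scipy.signal import find_peaks_cwt

    # Synthetic spectrum: two peaks on a sloping, noisy baseline
    x = np.linspace(0.0, 100.0, 2000)
    signal = (np.exp(-(x - 30.0)**2 / 2.0) + 0.6 * np.exp(-(x - 60.0)**2 / 8.0)
              + 0.02 * x + 0.05 * np.random.randn(x.size))

    # Search ridge lines across a range of wavelet scales (in samples)
    peak_indices = find_peaks_cwt(signal, widths=np.arange(5, 80))
    print(x[peak_indices])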

  2. Predictors of the peak width for networks with exponential links

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    1989-01-01

    We investigate optimal predictors of the peak (S) and distance to peak (T) of the width function of drainage networks under the assumption that the networks are topologically random with independent and exponentially distributed link lengths. Analytical results are derived using the fact that, under these assumptions, the width function is a homogeneous Markov birth-death process. In particular, exact expressions are derived for the asymptotic conditional expectations of S and T given network magnitude N and given mainstream length H. In addition, a simulation study is performed to examine various predictors of S and T, including N, H, and basin morphometric properties; non-asymptotic conditional expectations and variances are estimated. The best single predictor of S is N, of T is H, and of the scaled peak (S divided by the area under the width function) is H. Finally, expressions tested on a set of drainage basins from the state of Wyoming perform reasonably well in predicting S and T despite probable violations of the original assumptions. © 1989 Springer-Verlag.

  3. Deconvolution of mixed gamma emitters using peak parameters

    SciTech Connect

    Gadd, Milan S; Garcia, Francisco; Magadalena, Vigil M

    2011-01-14

    When evaluating samples containing mixtures of nuclides using gamma spectroscopy, the situation sometimes arises where the nuclides present have photon emissions that cannot be resolved by the detector. An example of this is mixtures of {sup 241}Am and plutonium that have L x-ray emissions with slightly different energies which cannot be resolved using a high-purity germanium detector. It is possible to deconvolute the americium L x-rays from those of plutonium based on the {sup 241}Am 59.54 keV photon. However, this requires accurate knowledge of the relative emission yields. Also, it often results in high uncertainties in the plutonium activity estimate due to the americium yields being approximately an order of magnitude greater than those for plutonium. In this work, an alternative method of determining the relative fraction of plutonium in mixtures of {sup 241}Am and {sup 239}Pu based on L x-ray peak location and shape parameters is investigated. The sensitivity and accuracy of the peak parameter method is compared to that for conventional peak deconvolution.
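
    One way to realize the peak-parameter idea is to fit two overlapping Gaussian photopeaks plus a flat background to the L x-ray region and compare the fitted areas. The SciPy sketch below uses purely illustrative energies and starting values, not evaluated nuclear data.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_peaks(e, a1, mu1, s1, a2, mu2, s2, b):
        """Two overlapping Gaussian photopeaks on a flat background."""
        return (a1 * np.exp(-(e - mu1)**2 / (2.0 * s1**2))
                + a2 * np.exp(-(e - mu2)**2 / (2.0 * s2**2)) + b)

    # energies (keV) and counts would come from the spectral region of interest;
    # the starting values are rough illustrative guesses.
    # popt, _ = curve_fit(two_peaks, energies, counts,
    #                     p0=[1e4, 13.9, 0.4, 1e3, 14.3, 0.4, 50.0])
    # pu_fraction = (popt[3] * popt[5]) / (popt[0] * popt[2] + popt[3] * popt[5])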

  4. RSAT peak-motifs: motif analysis in full-size ChIP-seq datasets.

    PubMed

    Thomas-Chollier, Morgane; Herrmann, Carl; Defrance, Matthieu; Sand, Olivier; Thieffry, Denis; van Helden, Jacques

    2012-02-01

    ChIP-seq is increasingly used to characterize transcription factor binding and chromatin marks at a genomic scale. Various tools are now available to extract binding motifs from peak data sets. However, most approaches are only available as command-line programs, or via a website but with size restrictions. We present peak-motifs, a computational pipeline that discovers motifs in peak sequences, compares them with databases, exports putative binding sites for visualization in the UCSC genome browser and generates an extensive report suited for both naive and expert users. It relies on time- and memory-efficient algorithms enabling the treatment of several thousand peaks within minutes. Regarding time efficiency, peak-motifs outperforms all comparable tools by several orders of magnitude. We demonstrate its accuracy by analyzing data sets ranging from 4000 to 128,000 peaks for 12 embryonic stem cell-specific transcription factors. In all cases, the program finds the expected motifs and returns additional motifs potentially bound by cofactors. We further apply peak-motifs to discover tissue-specific motifs in peak collections for the p300 transcriptional co-activator. To our knowledge, peak-motifs is the only tool that performs a complete motif analysis and offers a user-friendly web interface without any restriction on sequence size or number of peaks. PMID:22156162

  5. Rapid earthquake magnitude from real-time GPS precise point positioning for earthquake early warning and emergency response

    NASA Astrophysics Data System (ADS)

    Fang, Rongxin; Shi, Chuang; Song, Weiwei; Wang, Guangxing; Liu, Jingnan

    2014-05-01

    For earthquake early warning (EEW) and emergency response, earthquake magnitude is the crucial parameter to be determined rapidly and correctly. However, a reliable and rapid measurement of the magnitude of an earthquake is a challenging problem, especially for large earthquakes (M>8). Here, the magnitude is determined based on the GPS displacement waveform derived from real-time precise point positioning (PPP). The real-time PPP results are evaluated with an accuracy of 1 cm in the horizontal components and 2-3 cm in the vertical components, indicating that the real-time PPP is capable of detecting seismic waves with amplitudes of 1 cm horizontally and 2-3 cm vertically with a confidence level of 95%. In order to estimate the magnitude, the unique information provided by the GPS displacement waveform is the horizontal peak displacement amplitude. We show that the empirical relation of Gutenberg (1945) between peak displacement and magnitude holds up to nearly magnitude 9.0 when displacements are measured with GPS. We tested the proposed method for three large earthquakes. For the 2010 Mw 7.2 El Mayor-Cucapah earthquake, our method provides a magnitude of M7.18±0.18. For the 2011 Mw 9.0 Tohoku-oki earthquake the estimated magnitude is M8.74±0.06, and for the 2010 Mw 8.8 Maule earthquake the value is M8.7±0.1 after excluding some near-field stations. We therefore conclude that, depending on the availability of high-rate GPS observations, a robust value of magnitude up to 9.0 for a point source earthquake can be estimated within tens of seconds or a few minutes after an event using a few GPS stations close to the epicenter. Such a rapid magnitude estimate could serve as a prerequisite for tsunami early warning, fast source inversion, and emergency response.
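
    The single-station estimate behind this approach is a Gutenberg-style relation between peak ground displacement and magnitude. The sketch below uses placeholder coefficients and hypothetical station values purely for illustration; a calibrated regression such as the one in the study is required for real use.

    import numpy as np

    def magnitude_from_pgd(pgd_cm, hypo_dist_km, a=1.66, b=2.0):
        """Gutenberg-style M = log10(PGD) + a*log10(R) + b from horizontal peak
        ground displacement (cm) and hypocentral distance (km).
        Coefficients are illustrative placeholders."""
        return np.log10(pgd_cm) + a * np.log10(hypo_dist_km) + b

    # Averaging single-station estimates reduces the scatter of the magnitude.
    station_pgd = np.array([45.0, 60.0, 38.0])     # cm, hypothetical
    station_r = np.array([120.0, 95.0, 150.0])     # km, hypothetical
    print(magnitude_from_pgd(station_pgd, station_r).mean())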

  6. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    SciTech Connect

    Jian-Zhou Zhu and Gregory W. Hammett

    2011-01-10

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  7. Absolute surface energy for zincblende semiconductors

    NASA Astrophysics Data System (ADS)

    Zhang, S. B.; Wei, Su-Huai

    2003-03-01

    Recent advances in nanoscience require the determination of the surface (or facet) energy of semiconductors, which is often difficult due to the polar nature of some of the most important surfaces such as the (111)A/(111)B surfaces. Several approaches have been developed in the past [1-3] to deal with the problem, but an unambiguous division of the polar surface energies is yet to come [2]. Here we show that an accurate division is indeed possible for zincblende semiconductors and present results for GaAs, ZnSe, and CuInSe2 [4]. A general trend emerges, relating the absolute surface energy to the ionicity of the bulk materials. [1] N. Chetty and R. M. Martin, Phys. Rev. B 45, 6074 (1992). [2] N. Moll, et al., Phys. Rev. B 54, 8844 (1996). [3] S. Mankefors, Phys. Rev. B 59, 13151 (1999). [4] S. B. Zhang and S.-H. Wei, Phys. Rev. B 65, 081402 (2002).

  8. Absolute decay width measurements in 16O

    NASA Astrophysics Data System (ADS)

    Wheldon, C.; Ashwood, N. I.; Barr, M.; Curtis, N.; Freer, M.; Kokalova, Tz; Malcolm, J. D.; Spencer, S. J.; Ziman, V. A.; Faestermann, Th; Krücken, R.; Wirth, H.-F.; Hertenberger, R.; Lutter, R.; Bergmaier, A.

    2012-09-01

    The reaction 12C(6Li, d)16O* at a 6Li bombarding energy of 42 MeV has been used to populate excited states in 16O. The deuteron ejectiles were measured using the high-resolution Munich Q3D spectrograph. A large-acceptance silicon-strip detector array was used to register the recoil and break-up products. This complete kinematic set-up has enabled absolute α-decay widths to be measured with high resolution in the 13.9 to 15.9 MeV excitation energy regime in 16O; many for the first time. This energy region spans the 14.4 MeV four-α breakup threshold. Monte-Carlo simulations of the detector geometry and break-up processes yield detection efficiencies for the two dominant decay modes of 40% and 37% for the α+12C(g.s.) and α+12C(2+1) break-up channels respectively.

  9. Absolute spectrophotometry of northern compact planetary nebulae

    NASA Astrophysics Data System (ADS)

    Wright, S. A.; Corradi, R. L. M.; Perinotto, M.

    2005-06-01

    We present medium-dispersion spectra and narrowband images of six northern compact planetary nebulae (PNe): BoBn 1, DdDm 1, IC 5117, M 1-5, M 1-71, and NGC 6833. From broad-slit spectra, total absolute fluxes and equivalent widths were measured for all observable emission lines. High signal-to-noise emission line fluxes of Hα, Hβ, [O III], [N II], and He I may serve as emission line flux standards for northern hemisphere observers. From narrow-slit spectra, we derive systemic radial velocities. For four PNe, the available emission line fluxes were measured with sufficient signal-to-noise to probe their electron densities, temperatures, and chemical abundances. BoBn 1 and DdDm 1, both type IV PNe, have an Hβ flux more than three sigma away from previous measurements. We report the first abundance measurements of M 1-71. The measured radial velocity and galactic coordinates of NGC 6833 suggest that it is associated with the outer arm or possibly the galactic halo, and its low abundance ([O/H] = 1.3 × 10^-4) may be indicative of low metallicity within that region.

  10. PeakRanger: A cloud-enabled peak caller for ChIP-seq data

    PubMed Central

    2011-01-01

    Background Chromatin immunoprecipitation (ChIP), coupled with massively parallel short-read sequencing (seq) is used to probe chromatin dynamics. Although there are many algorithms to call peaks from ChIP-seq datasets, most are tuned either to handle punctate sites, such as transcriptional factor binding sites, or broad regions, such as histone modification marks; few can do both. Other algorithms are limited in their configurability, performance on large data sets, and ability to distinguish closely-spaced peaks. Results In this paper, we introduce PeakRanger, a peak caller software package that works equally well on punctate and broad sites, can resolve closely-spaced peaks, has excellent performance, and is easily customized. In addition, PeakRanger can be run in a parallel cloud computing environment to obtain extremely high performance on very large data sets. We present a series of benchmarks to evaluate PeakRanger against 10 other peak callers, and demonstrate the performance of PeakRanger on both real and synthetic data sets. We also present real world usages of PeakRanger, including peak-calling in the modENCODE project. Conclusions Compared to other peak callers tested, PeakRanger offers improved resolution in distinguishing extremely closely-spaced peaks. PeakRanger has above-average spatial accuracy in terms of identifying the precise location of binding events. PeakRanger also has excellent sensitivity and specificity in all benchmarks evaluated. In addition, PeakRanger offers significant improvements in run time when running on a single processor system, and very marked improvements when allowed to take advantage of the MapReduce parallel environment offered by a cloud computing resource. PeakRanger can be downloaded at the official site of modENCODE project: http://www.modencode.org/software/ranger/ PMID:21554709

  11. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
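
    A toy version of the locking logic described above treats the numerical derivative of the absorption profile as the error signal and drives it to zero with a proportional-integral loop. The line shape, gains, and step sizes below are invented for illustration and are not instrument parameters.

    import numpy as np

    def absorption(f, f0=0.0, width=1.0):
        return np.exp(-((f - f0) / width) ** 2)      # stand-in for the gas line

    def error_signal(f, df=0.01):
        """Derivative of absorption w.r.t. frequency, zero at the peak."""
        return (absorption(f + df) - absorption(f - df)) / (2.0 * df)

    f, integ, kp, ki = 0.7, 0.0, 0.3, 0.05           # start inside the locking range
    for _ in range(200):
        err = error_signal(f)
        integ += err
        f += kp * err + ki * integ                   # push the derivative toward zero

    print(f)                                         # settles near the peak at f0 = 0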

  12. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  13. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  14. The differing magnitude distributions of the two Jupiter Trojan color populations

    SciTech Connect

    Wong, Ian; Brown, Michael E.; Emery, Joshua P.

    2014-12-01

    The Jupiter Trojans are a significant population of minor bodies in the middle solar system that have garnered substantial interest in recent years. Several spectroscopic studies of these objects have revealed notable bimodalities with respect to near-infrared spectra, infrared albedo, and color, which suggest the existence of two distinct groups among the Trojan population. In this paper, we analyze the magnitude distributions of these two groups, which we refer to as the red and less red color populations. By compiling spectral and photometric data from several previous works, we show that the observed bimodalities are self-consistent and categorize 221 of the 842 Trojans with absolute magnitudes in the range H<12.3 into the two color populations. We demonstrate that the magnitude distributions of the two color populations are distinct to a high confidence level (>95%) and fit them individually to a broken power law, with special attention given to evaluating and correcting for incompleteness in the Trojan catalog as well as incompleteness in our categorization of objects. A comparison of the best-fit curves shows that the faint-end power-law slopes are markedly different for the two color populations, which indicates that the red and less red Trojans likely formed in different locations. We propose a few hypotheses for the origin and evolution of the Trojan population based on the analyzed data.
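
    A hedged sketch of fitting a broken power law to a cumulative absolute-magnitude distribution is given below. The functional form, starting values, and the hypothetical array H_sorted (sorted H values for one color population) are generic illustrations, not the parameterization used in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def broken_power_law(H, logN0, a1, a2, Hb):
        """log10 of the cumulative count N(<H): slope a1 brightward of the break
        magnitude Hb, slope a2 faintward of it, continuous at the break."""
        return np.where(H < Hb, logN0 + a1 * (H - Hb), logN0 + a2 * (H - Hb))

    # logN = np.log10(np.arange(1, H_sorted.size + 1))
    # popt, _ = curve_fit(broken_power_law, H_sorted, logN, p0=[2.0, 0.9, 0.4, 9.0])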

  15. Magnitude and significance of the higher-order reduced density matrix cumulants

    NASA Astrophysics Data System (ADS)

    Herbert, John M.

    Using full configuration interaction wave functions for Be and LiH, in both minimal and extended basis sets, we examine the absolute magnitude and energetic significance of various contributions to the three-electron reduced density matrix (3-RDM) and its connected (size-consistent) component, the 3-RDM cumulant (3-RDMC). Minimal basis sets are shown to suppress the magnitude of the 3-RDMC in an artificial manner, whereas in extended basis sets, 3-RDMC matrix elements are often comparable in magnitude to the corresponding 3-RDM elements, even in cases where this result is not required by spin angular momentum coupling. Formal considerations suggest that these observations should generalize to higher-order p-RDMs and p-RDMCs (p > 3). This result is discussed within the context of electronic structure methods based on the contracted Schrödinger equation (CSE), as solution of the CSE relies on 3- and 4-RDM "reconstruction functionals" that neglect the 3-RDMC, the 4-RDMC, or both. Although the 3-RDMC is responsible for at most 0.2% of the total electronic energy in Be and LiH, it accounts for up to 70% of the correlation energy, raising questions regarding whether (and how) the CSE can offer a useful computational methodology.

  16. Peaks theory and the excursion set approach

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Sheth, Ravi K.

    2012-11-01

    We describe a model of dark matter halo abundances and clustering which combines the two most widely used approaches to this problem: that based on peaks and the other based on excursion sets. Our approach can be thought of as addressing the cloud-in-cloud problem for peaks and/or modifying the excursion set approach so that it averages over a special subset, rather than all possible walks. In this respect, it seeks to account for correlations between steps in the walk as well as correlations between walks. We first show how the excursion set and peaks models can be written in the same formalism, and then use this correspondence to write our combined excursion set peaks model. We then give simple expressions for the mass function and bias, showing that even the linear halo bias factor is predicted to be k-dependent as a consequence of the non-locality associated with the peak constraint. At large masses, our model has little or no need to rescale the variable δc from the value associated with spherical collapse, and suggests a simple explanation for why the linear halo bias factor appears to lie above that based on the peak-background split at high masses when such a rescaling is assumed. Although we have concentrated on peaks, our analysis is more generally applicable to other traditionally single-scale analyses of large-scale structure.

  17. The absolute disparity anomaly and the mechanism of relative disparities.

    PubMed

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-06-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1). PMID:27248566

  18. The absolute disparity anomaly and the mechanism of relative disparities

    PubMed Central

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-01-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1). PMID:27248566

  19. Estimation of response-spectral values as functions of magnitude, distance, and site conditions

    USGS Publications Warehouse

    Joyner, W.B.; Boore, David M.

    1982-01-01

    We have developed empirical predictive equations for the horizontal pseudo-velocity response at 5-percent damping for 12 different periods from 0.1 to 4.0 s. Using a multiple linear-regression method similar to the one we used previously for peak horizontal acceleration and velocity, we analyzed response spectra period by period for 64 records of 12 shallow earthquakes in western North America, including the recent Coyote Lake and Imperial Valley, California, earthquakes. The resulting predictive equations show amplification of the response values at soil sites for periods greater than or equal to 0.5 s, with maximum amplification exceeding a factor of 2 at 1.5 s. For periods less than 0.5 s there is no statistically significant difference between rock sites and the soil sites represented in the data set. These results are consistent with those of several earlier studies. A particularly significant aspect of the predictive equations is that the response values at different periods are different functions of magnitude (confirming earlier results by McGuire and by Trifunac and Anderson). The slope of the least-squares straight line relating log response to moment magnitude ranges from 0.21 at a period of 0.1 s to greater than 0.5 at periods of 1 s and longer. This result indicates that the conventional practice of scaling a constant spectral shape by peak acceleration will not give accurate answers. The Newmark and Hall method of spectral scaling, using both peak acceleration and peak velocity, largely avoids this error. Comparison of our spectra with the Regulatory Guide 1.60 spectrum anchored at the same value at 0.1 s shows that the Regulatory Guide 1.60 spectrum is exceeded at soil sites for a magnitude of 7.5 at all distances for periods greater than about 0.5 s. Comparison of our spectra for soil sites with the corresponding ATC-3 curve of lateral design force coefficients for the highest seismic zone indicates that the ATC-3 curve is exceeded within about 5 km
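
    A hedged sketch of this kind of predictive equation — log10(Y) regressed on magnitude, a distance term with a fictitious depth, and a site indicator — fitted by ordinary least squares is shown below. All numbers are hypothetical; they are not the coefficients or data of the study.

    import numpy as np

    # log10(Y) = a + b*M + c*log10(sqrt(d**2 + h**2)) + g*S, with S = 0 (rock), 1 (soil)
    M = np.array([5.5, 6.0, 6.5, 7.0, 6.2, 5.8])
    dist = np.array([10.0, 25.0, 5.0, 40.0, 15.0, 30.0])    # km
    S = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0])
    logY = np.array([1.2, 1.0, 1.9, 0.8, 1.4, 0.6])         # log10 pseudo-velocity

    h = 8.0                                                  # assumed fictitious depth, km
    X = np.column_stack([np.ones_like(M), M, np.log10(np.hypot(dist, h)), S])
    coef, *_ = np.linalg.lstsq(X, logY, rcond=None)
    print(coef)                                              # a, b, c, g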

  20. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
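
    The "slowly varying Gauss-Markov processes" mentioned above are typically propagated with an exponential decay and a matching process-noise term. A minimal one-state sketch (illustrative numbers, not Orion filter values):

    import numpy as np

    def propagate_gauss_markov(x, P, dt, tau, q):
        """Discrete-time propagation of a first-order Gauss-Markov bias state:
        x_k+1 = phi * x_k + w, with phi = exp(-dt/tau) and process noise
        variance q * (1 - phi**2), so the steady-state variance is q."""
        phi = np.exp(-dt / tau)
        return phi * x, phi * P * phi + q * (1.0 - phi**2)

    # Example: an accelerometer bias with a 1 h correlation time, propagated at 10 Hz
    x, P = 1.0e-3, 1.0e-6
    for _ in range(600):
        x, P = propagate_gauss_markov(x, P, dt=0.1, tau=3600.0, q=1.0e-6)
    print(x, P)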

  1. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20% of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.

  2. Absolute optical surface measurement with deflectometry

    NASA Astrophysics Data System (ADS)

    Li, Wansong; Sandner, Marc; Gesierich, Achim; Burke, Jan

    Deflectometry utilises the deformation and displacement of a sample pattern after reflection from a test surface to infer the surface slopes. Differentiation of the measurement data leads to a curvature map, which is very useful for surface quality checks with sensitivity down to the nanometre range. Integration of the data allows reconstruction of the absolute surface shape, but the procedure is very error-prone because systematic errors may add up to large shape deviations. In addition, there are infinitely many combinations for slope and object distance that satisfy a given observation. One solution for this ambiguity is to include information on the object's distance. It must be known very accurately. Two laser pointers can be used for positioning the object, and we also show how a confocal chromatic distance sensor can be used to define a reference point on a smooth surface from which the integration can be started. The used integration algorithm works without symmetry constraints and is therefore suitable for free-form surfaces as well. Unlike null testing, deflectometry also determines radius of curvature (ROC) or focal lengths as a direct result of the 3D surface reconstruction. This is shown by the example of a 200 mm diameter telescope mirror, whose ROC measurements by coordinate measurement machine and deflectometry coincide to within 0.27 mm (or a sag error of 1.3μm). By the example of a diamond-turned off-axis parabolic mirror, we demonstrate that the figure measurement uncertainty comes close to a well-calibrated Fizeau interferometer.
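
    A minimal one-dimensional illustration of the integration step — reconstructing height from measured slopes once a reference point is fixed, e.g. by the distance sensor described above — is given below. Real free-form reconstruction is two-dimensional; this only shows the principle.

    import numpy as np

    def integrate_slopes_1d(x, slope, z_ref=0.0):
        """Height profile from surface slopes by cumulative trapezoidal
        integration, anchored at a known height z_ref at x[0]."""
        dz = np.concatenate([[0.0],
                             np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])
        return z_ref + dz

    # Example: slopes of a parabolic mirror profile z = x**2 / (2*R), R = 1000 mm
    x = np.linspace(-100.0, 100.0, 201)
    z = integrate_slopes_1d(x, x / 1000.0, z_ref=(-100.0)**2 / 2000.0)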

  3. Absolute determination of local tropospheric OH concentrations

    NASA Technical Reports Server (NTRS)

    Armerding, Wolfgang; Comes, Franz-Josef

    1994-01-01

    Long path absorption (LPA) according to Lambert Beer's law is a method to determine absolute concentrations of trace gases such as tropospheric OH. We have developed a LPA instrument which is based on a rapid tuning of the light source which is a frequency doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm(exp -1)/microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity the laser output power is additionally made constant and stabilized by an electro-optical modulator. The present sensitivity is of the order of a few times 10(exp 5) OH per cm(exp 3) for an acquisition time of a minute and an absorption path length of only 1200 meters so that a folding of the optical path in a multireflection cell was possible leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made. Tropospheric measurements have been carried out in 1991 resulting in the determination of OH diurnal variation at specific days in late summer. Comparison with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self generation in the multireflection cell is of minor extent. This could be shown by using different experimental methods. The minimum-maximum signal to noise ratio is about 8 x 10(exp -4) for a single scan. Due to the small size of the absorption cell the realization of an open air laboratory is possible in which by use of an additional UV light source or by additional fluxes of trace gases the chemistry can be changed under controlled conditions allowing kinetic studies of tropospheric photochemistry to be made in open air.
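
    The retrieval itself is a Beer-Lambert inversion, N = ln(I0/I) / (sigma * L). The cross section, path length, and fractional absorption in the example below are rough illustrative values, not the instrument's calibration.

    import numpy as np

    def oh_number_density(I0, I, sigma_cm2, path_cm):
        """OH number density (cm^-3) from a long-path absorption measurement,
        with effective cross section sigma_cm2 and path length path_cm."""
        return np.log(I0 / I) / (sigma_cm2 * path_cm)

    # A fractional absorption of 1.2e-5 over 1200 m with sigma ~ 1e-16 cm^2
    print(oh_number_density(1.0, 1.0 - 1.2e-5, 1.0e-16, 1.2e5))   # ~1e6 cm^-3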

  4. Absolute Radiometric Calibration of KOMPSAT-3A

    NASA Astrophysics Data System (ADS)

    Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.

    2016-06-01

    This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to determine radiometrically calibrated target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation with a comparison of KOMPSAT-3A and Landsat-8 TOA reflectance using one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for multispectral bands. The average difference in TOA reflectance between KOMPSAT-3A and Landsat-8 images over the Libya 4 site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band, the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that can be strongly absorbed by water vapor and therefore displays low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.

  5. An Absolute Proper motions and position catalog in the galaxy halos

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoxiang

    2015-08-01

    We present a new catalog of absolute proper motions and updated positions derived from the same Space Telescope Science Institute digitized Schmidt survey plates utilized for the construction of the Guide Star Catalog II. As special attention was devoted to the absolutization process and removal of position, magnitude and color dependent systematic errors through the use of both stars and galaxies, this release is solely based on plate data outside the galactic plane, i.e. |b| ≥ 27°. The resulting global zero point error is less than 0.6 mas/yr, and the precision better than 4.0 mas/yr for objects brighter than RF = 18.5, rising to 9.0 mas/yr for objects with magnitude in the range 18.5 < RF < 20.0. The catalog covers 22,525 square degrees and lists 100,777,385 objects to the limiting magnitude of RF ~ 20.8. Alignment with the International Celestial Reference System (ICRS) was made using 1288 objects common to the second realization of the International Celestial Reference Frame (ICRF2) at radio wavelengths. As a result, the coordinate axes realized by our astrometric data are believed to be aligned with the extragalactic radio frame to within ±0.2 mas at the reference epoch J2000.0. This makes our compilation one of the deepest and densest ICRF-registered astrometric catalogs outside the galactic plane. Although the Gaia mission is poised to set the new standard in catalog astronomy and will in many ways supersede this catalog, the methods and procedures reported here will prove useful to remove astrometric magnitude- and color-dependent systematic errors from the next generation of ground-based surveys reaching significantly deeper than the Gaia catalog.

  6. Spatial Characterization of Flood Magnitudes from Hurricane Irene (2011) over Delaware River Basin

    NASA Astrophysics Data System (ADS)

    Lu, P.; Smith, J. A.; Cunha, L.; Lin, N.

    2014-12-01

    Flooding from landfalling tropical cyclones can affect drainage networks over a large range of basin scales. We develop a method to characterize the spatial distribution of flood magnitudes continuously over a drainage network, with focus on flooding from landfalling tropical cyclones. We use hydrologic modeling to translate precipitation fields into a continuous representation of flood peaks over the drainage network. The CUENCAS model (Cunha 2012) is chosen because of its ability to predict flooding over various scales with minimal calibration. Taking advantage of scaling properties of flood magnitudes, a dimensionless flood index (Smith 1989, Villarini and Smith 2010) is obtained for a better representation of flood magnitudes for which the effects of basin scales are reduced. Case study analyses for Hurricane Irene are carried out for the Delaware River using Stage IV radar rainfall fields. Reservoir regulation is implemented in CUENCAS since the Delaware River, like many large rivers, is strongly regulated. With limited information on dam operation and initial water levels, reservoirs are represented as filters that directly reduce streamflow downstream, as a trade-off between efficiency and accuracy. Results show a correlation coefficient greater than 0.9 for all the available flood peak observations. Uncertainties are mostly from errors in rainfall fields for small watersheds, and reservoir regulation for large ones. The hydrological modeling can also be driven by simulated rainfall from historical or synthetic storms: this study fits into our long-term goal of developing a methodology to quantify the risk of inland flooding associated with landfalling tropical cyclones.
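
    A hedged sketch of the dimensionless flood-index idea — dividing each simulated peak by a power-law-in-drainage-area expectation so that basin-scale effects largely cancel — with hypothetical areas and peaks:

    import numpy as np

    area = np.array([12.0, 85.0, 430.0, 2100.0, 9800.0])      # km^2, hypothetical
    qpeak = np.array([35.0, 160.0, 520.0, 1700.0, 5200.0])    # m^3/s, hypothetical

    # Fit Q ~ a * A**theta in log space, then scale each peak by that expectation
    theta, log_a = np.polyfit(np.log10(area), np.log10(qpeak), 1)
    flood_index = qpeak / (10.0**log_a * area**theta)
    print(flood_index)        # near 1 for typical peaks, well above 1 for unusual ones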

  7. Kharkiv Asteroid Magnitude-Phase Relations V1.0

    NASA Astrophysics Data System (ADS)

    Shevchenko, V. G.; Belskaya, I. N.; Lupishko, D. F.; Krugly, Yu. N.; Chiorny, V. G.; Velichko, F. P.

    2010-08-01

    A database of asteroid magnitude-phase relations compiled at the Institute of Astronomy of Kharkiv Kharazin University by Shevchenko et al., including observations from 1978 through 2008. Mainly the observations were performed at the Institute of Astronomy (Kharkiv, Ukraine) and at the Astrophysics Institute (Dushanbe, Tadjikistan). For most asteroids the magnitude-phase relations were obtained down to phase angles less than 1 deg. For some asteroids the magnitudes are presented in three (UBV) or four (BVRI) standard spectral bands.

  8. In Brief: Timing of peak oil uncertain

    NASA Astrophysics Data System (ADS)

    Zielinski, Sarah

    2007-04-01

    According to the Hubbert peak theory, oil production in any geographic area will follow a bellshaped curve. The timing of the `peak' in global oil production is important because after that point, there will be less and less oil available for consumption. A new report from the U.S. Government Accountability Office found that most studies estimate that peak oil production will occur sometime between now and 2040. The uncertainty in these estimates could be reduced with better information about worldwide supply and demand, and alternative fuels and transportation technologies could mitigate the effects of a global decline in oil production. However, the report found no coordinated U.S. federal strategy to address these issues. The report, ``Uncertainty about Future Oil Supply Makes It Important to Develop a Strategy for Addressing a Peak and Decline in Oil Production,'' is available at http://www.gao.gov/cgi-bin/getrpt?GAO-07-283

  9. Reducing Peak Demand by Time Zone Divisions

    NASA Astrophysics Data System (ADS)

    Chakrabarti, A.

    2014-09-01

    For a large country like India, the electrical power demand is also large and the infrastructure cost for power is the largest among all the core sectors of economy. India has an emerging economy which requires high rate of growth of infrastructure in the power generation, transmission and distribution. The current peak demand in the country is approximately 150,000 MW which shall have a planned growth of at least 50% over the next five years (Seventeenth Electric Power Survey of India, Central Electricity Authority, Government of India, March 2007). By implementing the time zone divisions each comprising of an integral number of contiguous states based on their total peak demand and geographical location, the total peak demand of the nation can be significantly cut down by spreading the peak demand of various states over time. The projected reduction in capital expenditure over a plan period of 5 years is substantial. Also, the estimated reduction in operations expenditure cannot be ignored.

  10. Tectonics, Climate and Earth's highest peaks

    NASA Astrophysics Data System (ADS)

    Robl, Jörg; Prasicek, Günther; Hergarten, Stefan

    2016-04-01

    Prominent peaks characterized by high relief and steep slopes are among the most spectacular morphological features on Earth. In collisional orogens they result from the interplay of tectonically driven crustal thickening and climatically induced destruction of overthickened crust by erosional surface processes. The glacial buzz-saw hypothesis proposes a superior status of climate in limiting mountain relief and peak altitude due to glacial erosion. It implies that peak altitude declines with duration of glacial occupation, i.e., towards high latitudes. This is in strong contrast with high peaks existing in high latitude mountain ranges (e.g. Mt. St. Elias range) and the idea of peak uplift due to isostatic compensation of spatially variable erosional unloading of an over-thickened orogenic crust. In this study we investigate landscape dissection, crustal thickness and vertical strain rates in tectonically active mountain ranges to evaluate the influence of erosion on (latitudinal) variations in peak altitude. We analyze the spatial distribution of several thousand prominent peaks on Earth extracted from the global ETOPO1 digital elevation model with a novel numerical tool. We compare this dataset to crustal thickness, thickening rate (vertical strain rate) and mean elevation. We use the ratios of mean elevation to peak elevation (landscape dissection) and peak elevation to crustal thickness (long-term impact of erosion on crustal thickness) as indicators for the influence of erosional surface processes on peak uplift and the vertical strain rate as a proxy for the mechanical state of the orogen. Our analysis reveals that crustal thickness and peak elevation correlate well in orogens that have reached a mechanically limited state (vertical strain rate near zero) where plate convergence is already balanced by lateral extrusion and gravitational collapse and plateaus are formed. On the Tibetan Plateau crustal thickness serves to predict peak elevation up to an altitude

  11. Flu Season Hasn't Peaked Yet

    MedlinePlus

    ... nlm.nih.gov/medlineplus/news/fullstory_157852.html Flu Season Hasn't Peaked Yet This year's vaccine ... 2016 FRIDAY, March 18, 2016 (HealthDay News) -- This flu season continues to be the mildest in the ...

  12. Observing at Kitt Peak National Observatory.

    ERIC Educational Resources Information Center

    Cohen, Martin

    1981-01-01

    Presents an abridged version of a chapter from the author's book "In Quest of Telescopes." Includes personal experiences at Kitt Peak National Observatory, and comments on telescopes, photographs, and making observations. (SK)

  13. Helping System Engineers Bridge the Peaks

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen

    2014-01-01

    In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.

  14. Mid-infrared absolute spectral responsivity scale based on an absolute cryogenic radiometer and an optical parametric oscillator laser

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Shi, Xueshun; Chen, Haidong; Liu, Yulong; Liu, Changming; Chen, Kunfeng; Li, Ligong; Gan, Haiyong; Ma, Chong

    2016-06-01

    We report a laser-based absolute spectral responsivity scale in the mid-infrared spectral range. Using a mid-infrared tunable optical parametric oscillator as the laser source, the absolute responsivity scale has been established by calibrating thin-film thermopile detectors against an absolute cryogenic radiometer. The thin-film thermopile detectors can then be used as transfer standard detectors. The expanded uncertainty (k = 2) of the absolute spectral responsivity measurement has been analyzed to be 0.58%–0.68%.
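
    The substitution calibration described here reduces to a ratio: the cryogenic radiometer supplies the absolute optical power of the laser beam, and the thermopile's responsivity is its (dark-corrected) output divided by that power. A minimal sketch of the bookkeeping, with made-up readings and uncertainty components (the variable names and values are illustrative, not the authors'):

        # Substitution calibration of a transfer detector against an absolute
        # cryogenic radiometer (ACR) at one laser wavelength (all values made up).
        p_acr_watt = 1.02e-3        # absolute optical power measured by the ACR
        v_dut_volt = 7.85e-4        # thermopile output with the same beam applied
        v_dark_volt = 1.2e-6        # thermopile dark/background reading

        responsivity = (v_dut_volt - v_dark_volt) / p_acr_watt   # V/W at this wavelength
        print(f"spectral responsivity ~ {responsivity:.4f} V/W")

        # Repeating the measurement across the OPO tuning range yields the scale s(lambda).
        # For uncorrelated terms, relative uncertainties combine in quadrature and are
        # doubled for a coverage factor k = 2 (components below are hypothetical).
        u_rel_power, u_rel_voltage = 2.0e-3, 1.5e-3
        u_combined = (u_rel_power**2 + u_rel_voltage**2) ** 0.5
        print(f"expanded uncertainty (k=2) ~ {2 * 100 * u_combined:.2f} %")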

  15. Preejection period can be calculated using R peak instead of Q.

    PubMed

    Seery, Mark D; Kondrak, Cheryl L; Streamer, Lindsey; Saltsman, Thomas; Lamarche, Veronica M

    2016-08-01

    Preejection period (PEP) is a common measure of sympathetic nervous system activation in psychophysiological research, which makes it important to measure reliably for as many participants as possible. PEP is typically calculated as the interval between the onset or peak of the electrocardiogram Q wave and the impedance cardiography B point, but the Q wave can lack clear definition and even its peak is not visible for all participants. We thus investigated the feasibility of using the electrocardiogram R wave peak (Rpeak) instead of Q because it can be consistently identified with ease and precision. Across four samples (total N = 408), young adult participants completed a variety of minimally metabolically demanding laboratory tasks after a resting baseline. Results consistently supported a close relationship between absolute levels of the Rpeak-B interval and PEP (accounting for approximately 90% of the variance at baseline and 89% during task performance, on average), but for reactivity values, Rpeak-B was practically indistinguishable from PEP (accounting for over 98% of the variance, on average). Given that using Rpeak rather than the onset or peak of Q saves time, eliminates potential subjectivity, and can be applied to more participants (i.e., those without a visible Q wave), findings suggest that Rpeak-B likely provides an adequate estimate of PEP when absolute levels are of interest and clearly does so for within-person changes. PMID:27080937

  16. Estimating the Effects of Sensor Spacing on Peak Wind Measurements at Launch Complex 39

    NASA Technical Reports Server (NTRS)

    Merceret, Francis J.

    1999-01-01

    This paper presents results of an empirical study to estimate the measurement error in the peak wind speed at Shuttle Launch Complex 39 (LC-39) which results from the measurement being made by sensors 1,300 feet away. Quality controlled data taken at a height of 30 feet from an array of sensors at the Shuttle Landing Facility (SLF) were used to model differences of peak winds as a function of separation distance and time interval. The SLF data covered wind speeds from less than ten to more than 25 knots. Winds measured at the standard LC-39 site at the normal height of 60 feet were used to verify the applicability of the model to the LC-39 situation. The error in the peak wind speed resulting from separation of the sensor from the target site obeys a power law as a function of separation distance and varies linearly with mean wind speed. At large separation distances, the error becomes a constant fraction of the mean wind speed as the separation function reaches an asymptotic value. The asymptotic average of the mean of the absolute difference in the peak wind speed between the two locations is about twelve percent of the mean wind speed. The distribution of the normalized absolute differences is half-Gaussian.
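
    Stated as a formula, the reported behaviour is that the expected absolute peak-wind difference between two sites scales linearly with the mean wind speed and grows with separation as a power law that saturates near 12% of the mean speed. A hedged sketch of such a model follows; the distance scale and exponent are hypothetical placeholders, and only the 12% asymptote is taken from the study:

        import numpy as np

        def peak_wind_error(mean_speed_kt, separation_ft,
                            asymptote=0.12, d_scale_ft=1000.0, exponent=0.5):
            """Expected mean absolute peak-wind difference between two sites (knots).

            Power-law growth with separation, leveling off at `asymptote` times the
            mean wind speed; d_scale_ft and exponent are illustrative placeholders.
            """
            growth = (separation_ft / d_scale_ft) ** exponent
            fraction = asymptote * np.minimum(growth, 1.0)   # capped at the asymptote
            return fraction * mean_speed_kt

        # Example: 20-knot mean wind, sensor 1,300 ft from the point of interest
        print(f"~{peak_wind_error(20.0, 1300.0):.1f} kt expected peak-wind difference")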

  17. Forward-peaked scattering of polarized light.

    PubMed

    Clark, Julia P; Kim, Arnold D

    2014-11-15

    Polarized light propagation in a multiple scattering medium is governed by the vector radiative transfer equation. We analyze the vector radiative transfer equation in the asymptotic limit of forward-peaked scattering and derive an approximate system of equations for the Stokes parameters, which we call the vector Fokker-Planck approximation. The vector Fokker-Planck approximation provides valuable insight into several outstanding issues regarding the forward-peaked scattering of polarized light, such as the polarization memory phenomenon. PMID:25490484

  18. Quantification of Human Brain Metabolites from in Vivo 1H NMR Magnitude Spectra Using Automated Artificial Neural Network Analysis

    NASA Astrophysics Data System (ADS)

    Hiltunen, Yrjö; Kaartinen, Jouni; Pulkkinen, Juhani; Häkkinen, Anna-Maija; Lundbom, Nina; Kauppinen, Risto A.

    2002-01-01

    Long echo time (TE=270 ms) in vivo proton NMR spectra resembling human brain metabolite patterns were simulated for lineshape fitting (LF) and quantitative artificial neural network (ANN) analyses. A set of experimental in vivo 1H NMR spectra were first analyzed by the LF method to match the signal-to-noise ratios and linewidths of simulated spectra to those in the experimental data. The performance of constructed ANNs was compared for the peak area determinations of choline-containing compounds (Cho), total creatine (Cr), and N-acetyl aspartate (NAA) signals using both manually phase-corrected and magnitude spectra as inputs. The peak area data from ANN and LF analyses for simulated spectra yielded high correlation coefficients demonstrating that the peak areas quantified with ANN gave similar results as LF analysis. Thus, a fully automated ANN method based on magnitude spectra has demonstrated potential for quantification of in vivo metabolites from long echo time spectroscopic imaging.

  19. Cosmic microwave background acoustic peak locations

    NASA Astrophysics Data System (ADS)

    Pan, Z.; Knox, L.; Mulroe, B.; Narimani, A.

    2016-07-01

    The Planck collaboration has measured the temperature and polarization of the cosmic microwave background well enough to determine the locations of eight peaks in the temperature (TT) power spectrum, five peaks in the polarization (EE) power spectrum and 12 extrema in the cross (TE) power spectrum. The relative locations of these extrema give a striking and beautiful demonstration of what we expect from acoustic oscillations in the plasma; e.g. that EE peaks fall halfway between TT peaks. We expect this because the temperature map is predominantly sourced by temperature variations in the last scattering surface, while the polarization map is predominantly sourced by gradients in the velocity field, and the harmonic oscillations have temperature and velocity 90 deg out of phase. However, there are large differences in expectations for extrema locations from simple analytic models versus numerical calculations. Here, we quantitatively explore the origin of these differences in gravitational potential transients, neutrino free-streaming, the breakdown of tight coupling, the shape of the primordial power spectrum, details of the geometric projection from three to two dimensions, and the thickness of the last scattering surface. We also compare the peak locations determined from Planck measurements to expectations under the Λ cold dark matter model. Taking into account how the peak locations were determined, we find them to be in agreement.
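
    The simple analytic expectation quoted above can be written down in a few lines: temperature extrema occur where the acoustic phase is a multiple of pi, and polarization extrema, sourced by the velocity 90 deg out of phase, fall halfway between them, so naive TT peaks sit at multiples of the acoustic scale ell_A and naive EE peaks at half-integer multiples. The sketch below assumes ell_A ~ 301 (roughly the Planck value); the observed first TT peak near ell ~ 220 is shifted from this naive position by exactly the effects listed in the abstract:

        ELL_A = 301.0   # acoustic scale, roughly the Planck best-fit value

        # Naive analytic model: no gravitational-potential, neutrino or projection
        # phase shifts, and an infinitely thin last-scattering surface.
        tt_peaks = [n * ELL_A for n in range(1, 9)]           # temperature extrema
        ee_peaks = [(n - 0.5) * ELL_A for n in range(1, 6)]   # polarization, 90 deg out of phase

        print("naive TT peaks:", [round(l) for l in tt_peaks])
        print("naive EE peaks:", [round(l) for l in ee_peaks])
        # The observed first TT peak sits near ell ~ 220, well below ell_A, which is
        # the kind of shift that only the full numerical calculations capture.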

  20. Peak Effect in High-Tc Superconductors

    NASA Astrophysics Data System (ADS)

    Ling, Xinsheng

    1996-03-01

    Like many low-Tc superconductors, high-quality YBCO single crystals are found to exhibit a striking peak effect (X.S. Ling and J.I. Budnick, in Magnetic Susceptibility of Superconductors and Other Spin Systems, edited by R.A. Hein, T.L. Francavilla, and D.H. Liebenberg, Plenum Press, New York, 1991, p. 377). In a magnetic field, the temperature dependence of the critical current has a pronounced peak below T_c(H). Pippard (Phil. Mag. 19, 217 (1969)), and subsequently Larkin and Ovchinnikov (J. Low Temp. Phys. 34, 409 (1979)), attributed the onset of the peak effect to a softening of the vortex lattice. In this talk, the experimental discovery of the peak effect in high-Tc superconductors will be described, followed by a brief historical perspective of the understanding of this phenomenon and a discussion of a new model for the peak effect (X.S. Ling, C. Tang, S. Bhattacharya, and P.M. Chaikin, cond-mat/9504109, NEC Preprint 1995). In this model, the peak effect is an interesting manifestation of vortex-lattice melting in the presence of weak random pinning potentials. The rise of critical current with increasing temperature is a signature of the ``melting'' of the Larkin domains. This work is done in collaboration with Joe Budnick, Chao Tang, Shobo Bhattacharya, Paul Chaikin, and Boyd Veal.

  1. Integrated magnitudes and mean colors of DDO dwarf galaxies in the UBV system. II - Distances, luminosities, and H I properties

    NASA Astrophysics Data System (ADS)

    de Vaucouleurs, G.; de Vaucouleurs, A.; Buta, R.

    1983-06-01

    An analysis of the properties of the DDO Magellanic irregular "dwarf" galaxies for which U, B, V photometry was reported in Paper I leads to the following conclusions: (1) The mean effective surface brightness (m'e)0 varies with zenith distance as expected from the dependence on sec z of atmospheric extinction and sky brightness. (2) The mean surface brightness is independent of distance modulus (25 < μ0 < 33). (3) The mean absolute magnitude M0T is brighter than average (and the mean effective linear diameter larger) south of +10° declination (dwarfs fainter than -15 are missing) and fainter near the zenith at Palomar. There is a curious near absence of DDO objects in the zone +20° < δ < +30°. The range of absolute magnitudes (-19 < M0T < -12) is largest north of +60° declination. (4) The mean corrected effective color index (U - V)0e is independent of absolute magnitude at M0T < -15, but fainter systems tend to be bluer. (5) The mean absolute magnitude M0T is correlated with distance modulus at μ0 > 28; we confirm that the DDO sample is not restricted to dwarf systems and includes galaxies as bright as M0T ≲ -18 at μ0 > 30. (6) We confirm also that the luminosity class L and luminosity index Λ are not valid indicators of absolute magnitude for late-type systems beyond μ0 ≃ 28 (Δ = 4 Mpc). (7) Effective linear diameter and absolute magnitude are highly correlated (i.e., mean surface brightness is not a distance indicator). (8) Apparent magnitude and 21-cm line flux are loosely correlated, but there is a large range of HI index (hydrogen/luminosity ratio) among the DDO objects at a given Hubble type (Sd to Im). (9) The relative excess or deficiency in hydrogen/luminosity ratio is uncorrelated with mean effective surface brightness (except for selection effects). (10) The hydrogen/luminosity ratio residuals (at a given type) are loosely correlated with color residuals: DDO objects richer than average in neutral hydrogen tend to be bluer in (U - V)0e

  2. An implementation for the detection and analysis of negative peaks in an applied current signal across a silicon nanopore

    NASA Astrophysics Data System (ADS)

    Billo, Joseph A.; Asghar, Waseem; Iqbal, Samir M.

    2011-06-01

    Translocation of DNA through a silicon nanopore with an applied voltage bias causes the ionic current signal to spike sharply downward as molecules block the flow of ions through the pore. Proper processing of the sampled signal is paramount in obtaining accurate translocation kinetics from the negative peaks, but manual analysis is time-consuming. Here, an algorithm is reported that automates the process. It imports the signal from a tab-delimited text file, automatically zero-baselines, filters noise, detects negative peaks, and estimates each peak's start and end time. The imported signal is processed using a zero-overlap sampling window. Peaks are detected by comparison of the window's standard deviation to a threshold standard deviation in addition to a comparison against a peak magnitude threshold. Zero-baselining and noise removal are accomplished through calculation of the mean of non-peak window values. The start and end times of a peak are approximated by checking where the signal becomes positive on either side of the peak. The program then stores the magnitude, sample number, approximate start time, and approximate end time of each peak in a matrix. All these tasks are automatically done by the program, requiring only the following initial input from the user: window size, file path to the sampled signal data file, standard deviation threshold, peak magnitude threshold, and sampling frequency of the sampled signal. Trials with signals from an 11-micron pore sampled at 100 kHz for 30 seconds yielded a high rate of successful peak detection with a magnitude threshold of 600, a standard deviation threshold of 1.25, and a window size of 100.
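
    The processing pipeline described above maps onto a short array-based routine: zero-overlap windows, a window standard-deviation test combined with a magnitude test, a baseline taken from the non-peak windows, and start/end points where the baselined signal returns above zero. The sketch below follows that description; the default thresholds echo the values quoted for the 11-micron pore trials, but the function is an illustrative reimplementation, not the authors' program:

        import numpy as np

        def detect_negative_peaks(signal, window=100, std_thresh=1.25, mag_thresh=600.0):
            """Return a list of negative peaks in a current trace (illustrative thresholds)."""
            x = np.asarray(signal, dtype=float)
            x = x - np.median(x)                        # rough zero-baseline first
            n_win = len(x) // window
            wins = x[: n_win * window].reshape(n_win, window)

            # A window is 'peak-like' if its scatter or its depth exceeds the thresholds
            is_peak_win = (wins.std(axis=1) > std_thresh) | (wins.min(axis=1) < -mag_thresh)

            # Refine the baseline from the mean of the non-peak windows, as described
            if (~is_peak_win).any():
                x = x - wins[~is_peak_win].mean()

            peaks = []
            for w in np.flatnonzero(is_peak_win):
                centre = w * window + int(np.argmin(x[w * window:(w + 1) * window]))
                start, end = centre, centre             # walk out to where x turns positive
                while start > 0 and x[start] <= 0:
                    start -= 1
                while end < len(x) - 1 and x[end] <= 0:
                    end += 1
                peaks.append({"sample": centre, "magnitude": float(x[centre]),
                              "start": start, "end": end})
            return peaks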

  3. Robust control design with real parameter uncertainty using absolute stability theory. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    How, Jonathan P.; Hall, Steven R.

    1993-01-01

    The purpose of this thesis is to investigate an extension of mu theory for robust control design by considering systems with linear and nonlinear real parameter uncertainties. In the process, explicit connections are made between mixed mu and absolute stability theory. In particular, it is shown that the upper bounds for mixed mu are a generalization of results from absolute stability theory. Both state space and frequency domain criteria are developed for several nonlinearities and stability multipliers using the wealth of literature on absolute stability theory and the concepts of supply rates and storage functions. The state space conditions are expressed in terms of Riccati equations and parameter-dependent Lyapunov functions. For controller synthesis, these stability conditions are used to form an overbound of the H2 performance objective. A geometric interpretation of the equivalent frequency domain criteria in terms of off-axis circles clarifies the important role of the multiplier and shows that both the magnitude and phase of the uncertainty are considered. A numerical algorithm is developed to design robust controllers that minimize the bound on an H2 cost functional and satisfy an analysis test based on the Popov stability multiplier. The controller and multiplier coefficients are optimized simultaneously, which avoids the iteration and curve-fitting procedures required by the D-K procedure of mu synthesis. Several benchmark problems and experiments on the Middeck Active Control Experiment at M.I.T. demonstrate that these controllers achieve good robust performance and guaranteed stability bounds.

  4. Improved statistical determination of absolute neutrino masses via radiative emission of neutrino pairs from atoms

    NASA Astrophysics Data System (ADS)

    Zhang, Jue; Zhou, Shun

    2016-06-01

    The atomic transition from an excited state |e⟩ to the ground state |g⟩ by emitting a neutrino pair and a photon, i.e., |e⟩ → |g⟩ + |γ⟩ + |ν_i⟩ + |ν̄_j⟩ with i, j = 1, 2, 3, has been proposed by Yoshimura and his collaborators as an alternative way to determine the absolute scale m0 of neutrino masses. More recently, a statistical analysis of the fine structure of the photon spectrum from this atomic process has been performed [N. Song et al. Phys. Rev. D 93, 013020 (2016)] to quantitatively examine the experimental requirements for a realistic determination of absolute neutrino masses. In this paper, we show how to improve the statistical analysis and demonstrate that the previously required detection time can be reduced by one order of magnitude for the case of a 3 σ determination of m0 ~ 0.01 eV with an accuracy better than 10%. Such an improvement is very encouraging for further investigations on measuring absolute neutrino masses through atomic processes.

  5. Supplementary and Enrichment Series: Absolute Value. Teachers' Commentary. SP-25.

    ERIC Educational Resources Information Center

    Bridgess, M. Philbrick, Ed.

    This is one in a series of manuals for teachers using SMSG high school supplementary materials. The pamphlet includes commentaries on the sections of the student's booklet, answers to the exercises, and sample test questions. Topics covered include addition and multiplication in terms of absolute value, graphs of absolute value in the Cartesian…

  6. Supplementary and Enrichment Series: Absolute Value. SP-24.

    ERIC Educational Resources Information Center

    Bridgess, M. Philbrick, Ed.

    This is one in a series of SMSG supplementary and enrichment pamphlets for high school students. This series is designed to make material for the study of topics of special interest to students readily accessible in classroom quantity. Topics covered include absolute value, addition and multiplication in terms of absolute value, graphs of absolute…

  7. Novalis' Poetic Uncertainty: A "Bildung" with the Absolute

    ERIC Educational Resources Information Center

    Mika, Carl

    2016-01-01

    Novalis, the Early German Romantic poet and philosopher, had at the core of his work a mysterious depiction of the "absolute." The absolute is Novalis' name for a substance that defies precise knowledge yet calls for a tentative and sensitive speculation. How one asserts a truth, represents an object, and sets about encountering things…

  8. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.
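
    The modelling idea, transmission rates tempered by observed absolute humidity, can be sketched as an SIRS-type system in which the basic reproductive number decreases with specific humidity. Everything below (the functional form, parameter values, and the sinusoidal humidity series) is a hypothetical stand-in rather than the study's calibrated model:

        import numpy as np

        def sirs_with_humidity(q, n_days, N=1e6, d_inf=4.0, l_imm=4 * 365,
                               r0_max=2.5, r0_min=1.2, a=-180.0):
            """Toy SIRS run in which R0(t) decreases with specific humidity q(t) [kg/kg]."""
            S, I, R = 0.6 * N, 100.0, 0.4 * N - 100.0      # hypothetical initial state
            infected = []
            for t in range(n_days):
                r0_t = (r0_max - r0_min) * np.exp(a * q[t % len(q)]) + r0_min
                beta = r0_t / d_inf                         # transmission rate per day
                new_inf = beta * S * I / N
                dS = -new_inf + R / l_imm                   # immunity wanes back into S
                dI = new_inf - I / d_inf
                dR = I / d_inf - R / l_imm
                S, I, R = S + dS, I + dI, R + dR
                infected.append(I)
            return np.array(infected)

        # Hypothetical seasonal humidity cycle: drier air in winter -> higher R0
        days = np.arange(4 * 365)
        q = 0.010 + 0.006 * np.sin(2 * np.pi * (days - 100) / 365.0)
        I_t = sirs_with_humidity(q, len(days))
        print("largest outbreak peaks on day", int(np.argmax(I_t)))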

  9. Comparison of local magnitude scales in Central Europe

    NASA Astrophysics Data System (ADS)

    Kysel, Robert; Kristek, Jozef; Moczo, Peter; Cipciar, Andrej; Csicsay, Kristian; Srbecky, Miroslav; Kristekova, Miriam

    2015-04-01

    Efficient monitoring of earthquakes and determination of their magnitudes are necessary for developing earthquake catalogues at regional and national levels. Unification and homogenization of the catalogues in terms of magnitudes is of great importance for seismic hazard assessment. Calibrated local earthquake magnitude scales are commonly used for determining magnitudes of regional earthquakes by all national seismological services in Central Europe. However, at the local scale, each seismological service uses its own magnitude determination procedure. There is no systematic comparison of the approaches and there is no unified procedure. We present a comparison of the local magnitude scales used by the national seismological services of Slovakia (Geophysical Institute, Slovak Academy of Sciences), Czech Republic (Institute of Geophysics, Academy of Sciences of the Czech Republic), Austria (ZAMG), Hungary (Geodetic and Geophysical Institute, Hungarian Academy of Sciences) and Poland (Institute of Geophysics, Polish Academy of Sciences), and by the local network of seismic stations located around the Nuclear Power Plant Jaslovske Bohunice, Slovakia. The comparison is based on the national earthquake catalogues and annually published earthquake bulletins for the period from 1985 to 2011. A data set of earthquakes has been compiled based on identification of common events in the national earthquake catalogues and bulletins. For each pair of seismic networks, magnitude differences have been determined and investigated as a function of time. The mean and standard deviations of the magnitude differences as well as regression coefficients between local magnitudes from the national seismological networks have been computed. Results show a relatively large scatter between different national local magnitudes and considerable variation of that scatter in time. A 1:1 conversion between different national local magnitudes therefore seems inappropriate, especially for the compilation of the
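
    For each pair of networks the comparison reduces to simple statistics of paired magnitudes: the mean and standard deviation of the differences and a regression between the two scales. A minimal sketch with made-up catalogue values:

        import numpy as np

        # Hypothetical local magnitudes of the same events from two national networks
        ml_a = np.array([2.1, 2.8, 3.4, 1.9, 4.0, 3.1, 2.5])
        ml_b = np.array([2.4, 3.0, 3.3, 2.3, 4.4, 3.5, 2.6])

        diff = ml_b - ml_a
        print(f"mean difference {diff.mean():+.2f}, std {diff.std(ddof=1):.2f}")

        # Regression ML_B = c0 + c1 * ML_A (a 1:1 conversion would need c1 ~ 1, c0 ~ 0)
        c1, c0 = np.polyfit(ml_a, ml_b, 1)
        print(f"ML_B ~ {c0:+.2f} + {c1:.2f} * ML_A")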

  10. Understanding the Magnitude Dependence of PGA and PGV: A look at differences between mainshocks and aftershocks in the NGA-West2 data and ground motion from small magnitude Anza data

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Hanks, T. C.

    2013-12-01

    We build an earthquake-source based model to explain the magnitude dependence of PGA (peak ground acceleration) and PGV (peak ground velocity) observable in the NGA-West2 ground motion database and empirically based ground motion prediction equations (GMPEs). This simple model is based on a point-source, constant stress drop (Δσ) Brune model, including the high-frequency attenuation parameter (fmax or κ), random vibration theory (RVT) and a finite-fault assumption. This simple approach explains the magnitude dependence of PGA and PGV in the NGA-West2 ground motion data, and matches the GMPEs well. Using this model as a reference condition, we explore secondary dependencies, such as the difference in ground motion between mainshocks and aftershocks, and the magnitude dependence of ground motion from small magnitude earthquakes. In the NGA-West2 database, which consists of over 20,000 records of events from 3 < M < 8, mainshocks are defined as Class 1 events, and on-fault aftershocks as Class 2. By comparing the median ground motion of these events with our source-based model, we can distinguish overall differences in stress drop associated with Class 1 vs. Class 2 events. We find that when taken all together, Class 2 events have slightly lower stress drop than Class 1 events. This suggests that the Class 2 events, on-fault aftershocks, may be re-rupturing damaged fault areas yielding lower stress drops. We observe more scatter in this model-based stress drop for the Class 2 events as compared to the mainshocks, similar to what is observed in seismologically based source studies. We also consider the magnitude dependence of ground motion from very small magnitude Anza data; to compare across magnitudes, a consistent magnitude scale is necessary. Anza data are typically reported with local magnitude, ML, so we make a theoretically based empirical correction to convert to moment magnitude M. The
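
    The building blocks of such a source-based model are standard: seismic moment from moment magnitude, a corner frequency set by a constant stress drop, and an omega-squared (Brune) acceleration spectrum rolled off at high frequency by kappa. The sketch below strings those conventional formulas together as a schematic of this class of model; it is not the authors' calibrated implementation, and the RVT step that converts the spectrum into PGA and PGV is omitted:

        import numpy as np

        def brune_acceleration_spectrum(f, Mw, stress_drop_MPa=5.0, beta_km_s=3.5, kappa=0.04):
            """Schematic omega-squared source acceleration spectrum (arbitrary overall scaling)."""
            M0 = 10 ** (1.5 * Mw + 9.05)                       # seismic moment, N*m
            # Brune corner frequency, fc = 0.4906 * beta * (dsigma / M0)^(1/3), in SI units
            fc = 0.4906 * (beta_km_s * 1e3) * (stress_drop_MPa * 1e6 / M0) ** (1.0 / 3.0)
            source = (2 * np.pi * f) ** 2 * M0 / (1.0 + (f / fc) ** 2)
            return source * np.exp(-np.pi * kappa * f), fc     # kappa high-frequency rolloff

        f = np.logspace(-1, 2, 200)
        for Mw in (4.0, 6.0):
            spec, fc = brune_acceleration_spectrum(f, Mw)
            print(f"Mw {Mw}: corner frequency ~ {fc:.2f} Hz")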

  11. Karst Water System Investigated by Absolute Gravimetry

    NASA Astrophysics Data System (ADS)

    Quinif, Y.; Meus, P.; van Camp, M.; Kaufmann, O.; van Ruymbeke, M.; Vandiepenbeeck, M.; Camelbeeck, T.

    2006-12-01

    The highly anisotropic and heterogeneous hydrogeological characteristics of karst aquifers are difficult to characterize and present challenges for modeling of storage capacities. Little is known about the surface and groundwater interconnection, about the connection between the porous formations and the draining cave and conduits, and about the variability of groundwater volume within the system. Usually, an aquifer is considered as a black box, where water fluxes are monitored as input and output. However, water inflow and outflow are highly variable and cannot be measured directly. A recent project, begun in 2006, sought to constrain the water budget in a Belgian karst aquifer and to assess the porosity and water dynamics, combining absolute gravity (AG) measurements and piezometric levels around the Rochefort cave. The advantage of gravity measurements is that they integrate all the subsystems in the karst system. This is not the case with traditional geophysical tools like boring or monitoring wells, which are soundings affected by their near environment and its heterogeneity. The investigated cave results from the meander cutoff system of the Lomme River. The main inputs are swallow holes of the river crossing the limestone massif. The river is canalized and the karst system is partly disconnected from the hydraulic system. In February and March 2006, when the river spilled over its dyke and sank into the most important swallow hole, this resulted in dramatic and nearly instantaneous increases in the piezometric levels in the cave, reaching up to 13 meters. Meanwhile, gravity increased by 50 and 90 nm/s² in February and March, respectively. A first conclusion is that during these sudden floods, the pores and fine fissures were poorly connected with the enlarged fractures, cave, and conduits. With a rise of 13 meters in the water level and a 5% porosity, a gravity change of 250 nm/s² should have been expected. This moderate gravity variation suggests either a
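
    The 'expected' figure quoted above is the Bouguer slab attraction of the added water, delta_g = 2*pi*G*rho*phi*h, for a water layer of thickness h filling a porosity phi. Plugging in the numbers given in the abstract reproduces a value close to the quoted ~250 nm/s²:

        import math

        G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
        rho_water = 1000.0       # kg m^-3
        porosity = 0.05          # 5 %, as assumed in the abstract
        rise_m = 13.0            # observed piezometric rise

        dg = 2 * math.pi * G * rho_water * porosity * rise_m   # infinite-slab approximation
        print(f"expected gravity change ~ {dg * 1e9:.0f} nm/s^2")   # ~ 270 nm/s^2
        # The observed 50-90 nm/s^2 is several times smaller, hence the conclusion that
        # pores and fine fissures were poorly connected during the sudden floods.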

  12. Cometary magnitude distribution and the ratio between the numbers of long- and short-period comets

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    1988-01-01

    Comets are presently divided into three groups on the basis of orbital period: periods of more than 200 years, of 15-200 years, and of less than 15 years. The number of bright, short-period comets presently visible from the Earth can be accounted for by postulating certain Jovian capture parameters, in conjunction with a flux of bright long-period comets entering the vicinity of the Earth equal to the observed flux of 0.83 ± 0.11 per year. Attention is given to the relationship between absolute magnitude, on the one hand, and on the other the size, shape, and surface structure of the nucleus.

  13. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  14. Testing the quasi-absolute method in photon activation analysis

    SciTech Connect

    Sun, Z. J.; Wells, D.; Starovoitova, V.; Segebade, C.

    2013-04-19

    In photon activation analysis (PAA), relative methods are widely used because of their accuracy and precision. Absolute methods, which are conducted without any assistance from calibration materials, are seldom applied because of the difficulty of determining the photon flux during measurements. This research is an attempt to perform a new absolute approach in PAA - the quasi-absolute method - by retrieving the photon flux in the sample through Monte Carlo simulation. With the simulated photon flux and a database of experimental cross sections, it is possible to calculate the concentration of target elements in the sample directly. The QA/QC procedures used to solidify the research are discussed in detail. Our results show that the accuracy of the method for certain elements is close to a useful level in practice. Furthermore, future results from the quasi-absolute method can also serve as a validation technique for experimental data on cross sections. The quasi-absolute method looks promising.

  15. Learning in the temporal bisection task: Relative or absolute?

    PubMed

    de Carvalho, Marilia Pinheiro; Machado, Armando; Tonneau, François

    2016-01-01

    We examined whether temporal learning in a bisection task is absolute or relational. Eight pigeons learned to choose a red key after a t-seconds sample and a green key after a 3t-seconds sample. To determine whether they had learned a relative mapping (short→Red, long→Green) or an absolute mapping (t-seconds→Red, 3t-seconds→Green), the pigeons then learned a series of new discriminations in which either the relative or the absolute mapping was maintained. Results showed that the generalization gradient obtained at the end of a discrimination predicted the pattern of choices made during the first session of a new discrimination. Moreover, most acquisition curves and generalization gradients were consistent with the predictions of the learning-to-time model, a Spencean model that instantiates absolute learning with temporal generalization. In the bisection task, the basis of temporal discrimination seems to be absolute, not relational. PMID:26752233

  16. Analysis of changes in the magnitude, frequency, and seasonality of heavy precipitation over the contiguous USA

    NASA Astrophysics Data System (ADS)

    Mallakpour, Iman; Villarini, Gabriele

    2016-08-01

    Gridded daily precipitation observations over the contiguous USA are used to investigate past observed changes in the frequency and magnitude of heavy precipitation, and to examine its seasonality. Analyses are based on the Climate Prediction Center (CPC) daily precipitation data from 1948 to 2012. We use a block maxima approach to identify changes in the magnitude of heavy precipitation and a peak-over-threshold (POT) approach for changes in the frequency. The results of this study show that there is a stronger signal of change in the frequency rather than in the magnitude of heavy precipitation events. Also, results show an increasing trend in the frequency of heavy precipitation over large areas of the contiguous USA, with the most notable exception of the US Northwest. These results indicate that over the last 65 years, the stronger storms are not getting stronger, but a larger number of heavy precipitation events have been observed. The annual maximum precipitation and annual frequency of heavy precipitation reveal a marked seasonality over the contiguous USA. However, we could not find any evidence of a shift in the seasonality of annual maximum precipitation when investigating whether the day of the year at which the maximum precipitation occurs has changed over time. Furthermore, we examine whether the year-to-year variations in the frequency and magnitude of heavy precipitation can be explained in terms of climate variability driven by the influence of the Atlantic and Pacific Oceans. Our findings indicate that the climate variability of both the Atlantic and Pacific Oceans can exert a large control on the precipitation frequency and magnitude over the contiguous USA. Also, the results indicate that part of the spatial and temporal features of the relationship between climate variability and heavy precipitation magnitude and frequency can be described by one or more of the climate indices considered here.
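
    The two sampling strategies named above are easy to state concretely: block maxima keeps the single largest daily total of each year (magnitude), while peak-over-threshold counts all days exceeding a fixed high quantile (frequency). A minimal sketch on a synthetic daily series; the 99th-percentile threshold and the gamma-distributed data are illustrative choices, not those of the study:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.repeat(np.arange(1948, 2013), 365)
        precip = rng.gamma(shape=0.4, scale=6.0, size=years.size)   # synthetic daily totals, mm
        yrs = np.unique(years)

        # Block maxima: one value per year -> trends in the *magnitude* of heavy precipitation
        block_maxima = np.array([precip[years == y].max() for y in yrs])

        # Peak-over-threshold: exceedance counts per year -> trends in the *frequency*
        threshold = np.quantile(precip, 0.99)                       # illustrative 99th percentile
        pot_counts = np.array([(precip[years == y] > threshold).sum() for y in yrs])

        # Simple least-squares trend in each series (slope per year)
        print("magnitude trend :", np.polyfit(yrs, block_maxima, 1)[0], "mm/yr")
        print("frequency trend :", np.polyfit(yrs, pot_counts, 1)[0], "events/yr")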

  17. Predicting Peak Flows following Forest Fires

    NASA Astrophysics Data System (ADS)

    Elliot, William J.; Miller, Mary Ellen; Dobre, Mariana

    2016-04-01

    Following forest fires, peak flows in perennial and ephemeral streams often increase by a factor of 10 or more. This increase in peak flow rate may overwhelm existing downstream structures, such as road culverts, causing serious damage to road fills at stream crossings. In order to predict peak flow rates following wildfires, we have applied two different tools. One is based on the USDA Natural Resources Conservation Service Curve Number (CN) method, and the other applies the Water Erosion Prediction Project (WEPP) model to the watershed. In our presentation, we will describe the science behind the two methods and present the main variables for each model. We will then provide an example comparing the two methods on a fire-prone watershed upstream of the City of Flagstaff, Arizona, USA, where a fire spread model was applied for current fuel loads and for likely fuel loads following a fuel reduction treatment. When applying the curve number method, determining the time to peak flow can be problematic for low severity fires because the runoff flow paths are both surface and shallow lateral flow. The WEPP watershed version incorporates shallow lateral flow into stream channels. However, the version of the WEPP model used for this study did not have channel routing capabilities, but rather relied on regression relationships to estimate peak flows from individual hillslope polygon peak runoff rates. We found that the two methods gave similar results if applied correctly, with the WEPP predictions somewhat greater than the CN predictions. Later releases of the WEPP model have incorporated alternative methods for routing peak flows that need to be evaluated.
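
    For reference, the curve number method mentioned here converts a storm depth P into a runoff depth Q through the standard NRCS relations Q = (P - 0.2S)^2 / (P + 0.8S) with S = 1000/CN - 10 (inches); fire effects enter by raising CN on burned hillslopes. A small sketch of that piece follows, with illustrative CN values; the unit-hydrograph step that turns runoff depth into a peak flow rate is not shown:

        def scs_runoff_depth(P_in, CN):
            """NRCS curve-number runoff depth (inches) for a storm depth P_in (inches)."""
            S = 1000.0 / CN - 10.0          # potential maximum retention
            Ia = 0.2 * S                    # initial abstraction
            return 0.0 if P_in <= Ia else (P_in - Ia) ** 2 / (P_in + 0.8 * S)

        storm = 2.5                          # inches, illustrative design storm
        for label, cn in [("unburned forest", 60), ("high-severity burn", 90)]:
            print(f"{label:18s} CN={cn}: runoff ~ {scs_runoff_depth(storm, cn):.2f} in")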

  18. Vibrotactile difference thresholds: effects of vibration frequency, vibration magnitude, contact area, and body location.

    PubMed

    Forta, Nazim Gizem; Griffin, Michael J; Morioka, Miyuki

    2012-01-01

    It has not been established whether the smallest perceptible change in the intensity of vibrotactile stimuli depends on the somatosensory channel mediating the sensation. This study investigated intensity difference thresholds for vibration using contact conditions (different frequencies, magnitudes, contact areas, body locations) selected so that perception would be mediated by more than one psychophysical channel. It was hypothesized that difference thresholds mediated by the non-Pacinian I (NPI) channel and the Pacinian (P) channel would differ. Using two different contactors (1-mm diameter contactor with 1-mm gap to a fixed surround; 10-mm diameter contactor with 2-mm gap to the surround) vibration was applied to the thenar eminence and the volar forearm at two frequencies (10 and 125 Hz). The up-down-transformed-response method with a three-down-one-up rule provided absolute thresholds and also difference thresholds at various levels above the absolute thresholds of 12 subjects (i.e., sensation levels, SLs) selected to activate preferentially either single channels or multiple channels. Median difference thresholds varied from 0.20 (thenar eminence with 125-Hz vibration at 10 dB SL) to 0.58 (thenar eminence with 10-Hz vibration at 20 dB SL). Median difference thresholds tended to be lower for the P channel than the NPI channel. The NPII channel may have reduced difference thresholds with the smaller contactor at 125 Hz. It is concluded that there are large and systematic variations in difference thresholds associated with the frequency, the magnitude, the area of contact, and the location of contact with vibrotactile stimuli that cannot be explained without increased understanding of the perception of supra-threshold vibrotactile stimuli. PMID:22416802

  19. The Construction of a Magnitude Estimation Scale of Adult Learning.

    ERIC Educational Resources Information Center

    Blunt, Adrian

    The psychophysical technique of magnitude estimation was used to develop a ratio scale of subjective estimations of adult learning in various adult education activities. A rank order of 26 learning activities and the magnitude estimations in "units of learning" that are expected to occur in each activity were obtained from 146 adult education…

  20. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  1. Congruency Effects between Number Magnitude and Response Force

    ERIC Educational Resources Information Center

    Vierck, Esther; Kiesel, Andrea

    2010-01-01

    Numbers are thought to be represented in space along a mental left-right oriented number line. Number magnitude has also been associated with the size of grip aperture, which might suggest a connection between number magnitude and intensity. The present experiment aimed to confirm this possibility more directly by using force as a response…

  2. Some Effects of Magnitude of Reinforcement on Persistence of Responding

    ERIC Educational Resources Information Center

    McComas, Jennifer J.; Hartman, Ellie C.; Jimenez, Angel

    2008-01-01

    The influence of magnitude of reinforcement was examined on both response rate and behavioral persistence. During Phase 1, a multiple schedule of concurrent reinforcement was implemented in which reinforcement for one response option was held constant at VI 30 s across both components, while magnitude of reinforcement for the other response option…

  3. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…

  4. Number Games, Magnitude Representation, and Basic Number Skills in Preschoolers

    ERIC Educational Resources Information Center

    Whyte, Jemma Catherine; Bull, Rebecca

    2008-01-01

    The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was…

  5. The Weight of Time: Affordances for an Integrated Magnitude System

    ERIC Educational Resources Information Center

    Lu, Aitao; Mo, Lei; Hodges, Bert H.

    2011-01-01

    In five experiments we explored the effects of weight on time in different action contexts to test the hypothesis that an integrated magnitude system is tuned to affordances. Larger magnitudes generally seem longer; however, Lu and colleagues (2009) found that if numbers were presented as weights in a range heavy enough to affect lifting, the…

  6. Reinforcement Magnitude: An Evaluation of Preference and Reinforcer Efficacy

    ERIC Educational Resources Information Center

    Trosclair-Lasserre, Nicole M.; Lerman, Dorothea C.; Call, Nathan A.; Addison, Laura R.; Kodak, Tiffany

    2008-01-01

    Consideration of reinforcer magnitude may be important for maximizing the efficacy of treatment for problem behavior. Nonetheless, relatively little is known about children's preferences for different magnitudes of social reinforcement or the extent to which preference is related to differences in reinforcer efficacy. The purpose of the current…

  7. Absolute cross sections for binary-encounter electron ejection by 95-MeV/u {sup 36}Ar{sup 18+} penetrating carbon foils

    SciTech Connect

    De Filippo, E.; Lanzano, G.; Aiello, S.; Arena, N.; Geraci, M.; Pagano, A.; Rothard, H.; Volant, C.; Anzalone, A.; Giustolisi, F.

    2003-08-01

    Doubly differential electron velocity spectra induced by 95-MeV/u {sup 36}Ar{sup 18+} from thin carbon foils were measured at GANIL (Caen, France) by means of the ARGOS multidetector and the time-of-flight technique. The spectra allow us to determine absolute singly differential cross sections as a function of the emission angle. Absolute doubly differential cross sections for binary encounter electron ejection from C targets are compared to a transport theory, which is based on the relativistic electron impact approximation for electron production and which accounts for angular deflection, energy loss, and also energy straggling of the transmitted electrons. For the thinnest targets, the measured peak width is in good agreement with experimental data obtained with a different detection technique. The theory underestimates the peak width but provides (within a factor of 2) the correct peak intensity. For the thickest target, even the peak shape is well reproduced by theory.

  8. Effects of urbanization on the magnitude and frequency of floods in northeastern Illinois

    USGS Publications Warehouse

    Allen, Howard E.; Bejcek, Richard M.

    1979-01-01

    Changes in land use associated with urbanization have increased flood-peak discharges in northeastern Illinois by factors up to 3.2. Techniques are presented for estimating the magnitude and frequency of floods in the urban environment of northeastern Illinois, and for estimating probable changes in flood characteristics that may be expected to accompany progressive urbanization. Suggestions also are offered for estimating the effects of urbanization on flood characteristics in areas other than northeastern Illinois. Three variables, drainage area, channel slope, and percent imperviousness (an urbanization factor), are used to estimate flood magnitudes for frequencies ranging from 2 to 500 years. Multiple regression analyses were used to relate flood-discharge data to the above watershed characteristics for 103 gaged watersheds. These watersheds ranged in drainage area from 0.07 to 630 square miles, in channel slope from 1.1 to 115 feet per mile, and in imperviousness from 1 to 39 percent. (Woodard-USGS)
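
    The regression framework behind estimates of this kind is typically log-linear: flood quantiles are regressed on the logarithms of drainage area, channel slope, and an urbanization measure such as imperviousness. A hedged sketch of such a fit on synthetic data; the coefficients and watershed values below are fabricated for illustration and are not the report's equations:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 40
        area = 10 ** rng.uniform(-1, 2.8, n)          # drainage area, mi^2
        slope = 10 ** rng.uniform(0, 2, n)            # channel slope, ft/mi
        imperv = rng.uniform(1, 39, n)                # imperviousness, percent
        X = np.column_stack([np.ones(n), np.log10(area), np.log10(slope), np.log10(imperv)])

        true_b = np.array([2.0, 0.75, 0.2, 0.35])     # made-up 'true' coefficients
        log_q = X @ true_b + rng.normal(0, 0.1, n)    # synthetic log10 of a flood quantile

        b, *_ = np.linalg.lstsq(X, log_q, rcond=None)
        print("recovered coefficients (b0..b3):", np.round(b, 2))

        def predict_q(A, S, I):
            return 10 ** (b @ np.array([1.0, np.log10(A), np.log10(S), np.log10(I)]))

        print("flood-peak ratio, 30% vs 5% impervious:",
              round(predict_q(50, 10, 30) / predict_q(50, 10, 5), 2))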

  9. An analysis of the magnitude and frequency of floods on Oahu, Hawaii

    USGS Publications Warehouse

    Nakahara, R.H.

    1980-01-01

    An analysis of available peak-flow data for the island of Oahu, Hawaii, was made by using multiple regression techniques which related flood-frequency data to basin and climatic characteristics for 74 gaging stations on Oahu. In the analysis, several different groupings of stations were investigated, including divisions by geographic location and size of drainage area. The grouping consisting of two leeward divisions and one windward division produced the best results. Drainage basins ranged in area from 0.03 to 45.7 square miles. Equations relating flood magnitudes of selected frequencies to basin characteristics were developed for the three divisions of Oahu. These equations can be used to estimate the magnitude and frequency of floods for any site, gaged or ungaged, for any desired recurrence interval from 2 to 100 years. Data on basin characteristics, flood magnitudes for various recurrence intervals from individual station-frequency curves, and computed flood magnitudes by use of the regression equation are tabulated to provide the needed data. (USGS)

  10. The Effects Of Reinforcement Magnitude On Functional Analysis Outcomes

    PubMed Central

    2005-01-01

    The duration or magnitude of reinforcement has varied and often appears to have been selected arbitrarily in functional analysis research. Few studies have evaluated the effects of reinforcement magnitude on problem behavior, even though basic findings indicate that this parameter may affect response rates during functional analyses. In the current study, 6 children with autism or developmental disabilities who engaged in severe problem behavior were exposed to three separate functional analyses, each of which varied in reinforcement magnitude. Results of these functional analyses were compared to determine if a particular reinforcement magnitude was associated with the most conclusive outcomes. In most cases, the same conclusion about the functions of problem behavior was drawn regardless of the reinforcement magnitude. PMID:16033163

  11. Multifractal detrended fluctuation analysis of Pannonian earthquake magnitude series

    NASA Astrophysics Data System (ADS)

    Telesca, Luciano; Toth, Laszlo

    2016-04-01

    The multifractality of the series of magnitudes of the earthquakes that occurred in the Pannonia region from 2002 to 2012 has been investigated. The shallow (depth less than 40 km) and deep (depth larger than 70 km) seismic catalogues were analysed by using the multifractal detrended fluctuation analysis. The shallow and deep catalogues are characterized by different multifractal properties: (i) the magnitudes of the shallow events are weakly persistent, while those of the deep ones are almost uncorrelated; (ii) the deep catalogue is more multifractal than the shallow one; (iii) the magnitudes of the deep catalogue are characterized by a right-skewed multifractal spectrum, while that of the shallow magnitudes is rather symmetric; (iv) a direct relationship between the b-value of the Gutenberg-Richter law and the multifractality of the magnitudes is suggested.
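
    Multifractal detrended fluctuation analysis itself is compact enough to sketch: integrate the demeaned magnitude series, detrend it in windows of varying size, and examine how the q-th order fluctuation function scales with window size; a spread of the resulting exponents h(q) across q indicates multifractality. A minimal implementation sketch (window sizes, q values, polynomial order and the random test series are illustrative choices):

        import numpy as np

        def mfdfa(series, scales, q_values, poly_order=1):
            """Return Fq(s): rows = q values, columns = scales (simplified MFDFA)."""
            profile = np.cumsum(series - np.mean(series))        # integrated (profile) series
            Fq = np.zeros((len(q_values), len(scales)))
            for j, s in enumerate(scales):
                n_seg = len(profile) // s
                rms = []
                for v in range(n_seg):
                    seg = profile[v * s:(v + 1) * s]
                    t = np.arange(s)
                    fit = np.polyval(np.polyfit(t, seg, poly_order), t)   # local detrending
                    rms.append(np.sqrt(np.mean((seg - fit) ** 2)))
                rms = np.array(rms)
                for i, q in enumerate(q_values):
                    if q == 0:                                    # logarithmic average for q = 0
                        Fq[i, j] = np.exp(0.5 * np.mean(np.log(rms ** 2)))
                    else:
                        Fq[i, j] = np.mean(rms ** q) ** (1.0 / q)
            return Fq

        # h(q) from the slopes of log Fq(s) vs log s, here for a random magnitude-like series
        mags = np.random.default_rng(2).normal(1.5, 0.4, 4000)
        scales = np.array([16, 32, 64, 128, 256])
        qs = np.array([-4.0, 0.0, 4.0])
        Fq = mfdfa(mags, scales, qs)
        h = [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0] for i in range(len(qs))]
        print("h(q) for q = -4, 0, 4:", np.round(h, 2))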

  12. Using Google Earth to Teach the Magnitude of Deep Time

    ERIC Educational Resources Information Center

    Parker, Joel D.

    2011-01-01

    Most timeline analogies of geologic and evolutionary time are fundamentally flawed. They trade off the problem of grasping very long times for the problem of grasping very short distances. The result is an understanding of relative time with little comprehension of absolute time. Earlier work has shown that the distances most easily understood by…

  13. Nasal peak inspiratory flow at altitude.

    PubMed

    Barry, P W; Mason, N P; Richalet, J P

    2002-01-01

    The present study investigated whether there are changes in nasal peak inspiratory flow (NPIF) during hypobaric hypoxia under controlled environmental conditions. During Operation Everest III (COMEX '97), eight subjects ascended to a simulated altitude of 8,848 m in a hypobaric chamber. NPIF was recorded at simulated altitudes of 0 m, 5,000 m and 8,000 m. Oral peak inspiratory and expiratory flow (OPIF, OPEF) were also measured. Ambient air temperature and humidity were controlled. NPIF increased by a mean ± SD of 16 ± 12% from sea level to 8,000 m, whereas OPIF increased by 47 ± 14%. NPIF rose by 0.085 ± 0.03 L·s⁻¹ per kilometre of ascent (p<0.05), significantly less than the rise in OPIF and OPEF of 0.35 ± 0.10 and 0.33 ± 0.04 L·s⁻¹ per kilometre (p<0.0005). Nasal peak inspiratory flow rises with ascent to altitude. The rise in nasal peak inspiratory flow with altitude was far less than that in oral peak inspiratory flow and less than the rise predicted from changes in air density. This suggests flow limitation at the nose, occurring under controlled environmental conditions, which refutes the hypothesis that nasal blockage at altitude is due to the inhalation of cold, dry air. Further work is needed to determine if nasal blockage limits activity at altitude. PMID:11843316

  14. Earthquake source inversion for moderate magnitude seismic events based on GPS simulated high-rate data

    NASA Astrophysics Data System (ADS)

    Psimoulis, Panos; Dalguer, Luis; Houlie, Nicolas; Zhang, Youbing; Clinton, John; Rothacher, Markus; Giardini, Domenico

    2013-04-01

    The development of GNSS technology, with the potential of high-rate (up to 100 Hz) GNSS (GPS, GLONASS, Galileo, Compass) records, allows the monitoring of seismic ground motions. In this study we show the potential of estimating the earthquake magnitude (Mw) and the fault geometry parameters (slip, depth, length, rake, dip, strike) during the propagation of the seismic waves, based on high-rate GPS network data and using a non-linear inversion algorithm. The examined area is the Valais (South-West Switzerland), where a permanent GPS network of 15 stations (COGEAR and AGNES GPS networks) is operational and where the occurrence of an earthquake of Mw ≈ 6 is possible every 80 years. We test our methodology using synthetic events of magnitude 6.0-6.5 corresponding to a normal fault, consistent with most of the fault mechanisms of the area, for surface and buried rupture. The epicentres are located in the Valais, close to the epicentres of previous historical earthquakes. For each earthquake, synthetic seismic data (velocity records) were produced for 15 sites corresponding to the current GPS network sites in Valais. The synthetic seismic data were integrated into displacement time-series. These time-series were combined with the (modified) Bernese GNSS Software 5.1 to generate 10 Hz GPS records, assuming noise with peak-to-peak amplitudes of ±1 cm for the horizontal and ±3 cm for the vertical components. The GPS records were processed into kinematic time series, from which the seismic displacements were derived and inverted for the magnitude and the fault geometry parameters. The inversion results indicate that it is possible to estimate both the earthquake magnitude and the fault geometry parameters in real-time (~10 seconds after the fault rupture). The accuracy of the results depends on the geometry of the GPS network and on the position of the earthquake epicentre.

  15. Magnitude and range of the RKKY interaction in SmRh4B4

    NASA Astrophysics Data System (ADS)

    Terris, B. D.; Gray, K. E.; Dunlap, B. D.

    1985-04-01

    The superconductive and magnetic transition temperatures taken together are shown to provide a unique probe which separately determines both the magnitude and range of the RKKY interaction in the RERh4B4 magnetic superconductors (RE = Er, Sm). Experimentally, an unexpected peak is found in the antiferromagnetic ordering temperature of SmRh4B4 vs. electron mean free path, while for ErRh4B4 the ferromagnetic ordering temperature decreases monotonically. These qualitative features, as well as the quantitative differences between SmRh4B4 and ErRh4B4, are in excellent agreement with calculations using a mean-free-path-dependent RKKY interaction.

  16. SPANISH PEAKS WILDERNESS STUDY AREA, COLORADO.

    USGS Publications Warehouse

    Budding, Karin E.; Kluender, Steven E.

    1984-01-01

    A geologic and geochemical investigation and a survey of mines and prospects were conducted to evaluate the mineral-resource potential of the Spanish Peaks Wilderness Study Area, Huerfano and Las Animas Counties, in south-central Colorado. Anomalous gold, silver, copper, lead, and zinc concentrations in rocks and in stream sediments from drainage basins in the vicinity of the old mines and prospects on West Spanish Peak indicate a substantiated mineral-resource potential for base and precious metals in the area surrounding this peak; however, the mineralized veins are sparse, small in size, and generally low in grade. There is a possibility that coal may underlie the study area, but it would be at great depth and it is unlikely that it would have survived the intense igneous activity in the area. There is little likelihood for the occurrence of oil and gas because of the lack of structural traps and the igneous activity.

  17. The PEAK experience in South Carolina

    SciTech Connect

    1998-11-01

    The PEAK Institute was developed to provide a linkage for formal (schoolteachers) and nonformal educators (extension agents) with agricultural scientists of Clemson University's South Carolina Agricultural Experiment Station System. The goal of the Institute was to enable teams of educators and researchers to develop and provide PEAK science and math learning experiences, related to relevant agricultural and environmental issues of local communities, for both classroom and 4-H Club settings. The PEAK Institute was conducted as a twenty-day residential Institute held in June for middle school and high school teachers, each teamed with an Extension agent from their community. These educators participated in hands-on, minds-on sessions conducted by agricultural researchers and Clemson University Cooperative Extension specialists. Participants were given the opportunity to see frontier science being conducted by scientists from a variety of agricultural laboratories.

  18. Paleomagnetism of the Beckler Peak stock

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Housen, B. A.

    2009-12-01

    Paleomagnetic studies of plutonic rocks, although subject to uncertainty due to the lack of paleohorizontal control, can provide important constraints on patterns of regional deformation, and can play a role in the evaluation of tectonic models and reconstructions. Many plutonic rocks of the Cascades have been well studied via paleomagnetism, but there are many that lack robust data sets. One such pluton, the Beckler Peak stock, is a late Cretaceous tonalitic stock, with biotite and amphibole K-Ar ages of 93 to 82 Ma (Engels and Crowder, 1971, Yeats and Engels, 1971). The Beckler Peak stock is considered to be a companion body to the larger Mt. Stuart Batholith, but is separated from the Mt. Stuart Batholith by the Evergreen Fault. For this study, five paleomagnetic sites were sampled from the Beckler Peak stock near Skykomish, Washington. After low-temperature and thermal demagnetization, site means were calculated for the four sites where at least two samples survived demagnetization. Unblocking temperatures were indicative of magnetite and hematite as the carriers of remanence. Two of the site means were disregarded due to anomalous directions, likely because those sites lie in very large slump blocks. The two acceptable site means, along with a Beckler Peak stock site mean from Beck and Noson (1972) and another from Housen et al. (2003), give a stock-wide mean of D = 3.8°, I = 41.9°, k = 32.9, and α95 = 16.2°. This direction is consistent with mean directions for the Mount Stuart batholith determined by Beck and Noson (1972), Beck et al. (1981), and Housen et al. (2003). This directional consistency supports an association between the Beckler Peak stock and the Mt. Stuart Batholith, or at least that these two plutonic bodies were emplaced in the same structural block, and that any post-magnetization deformation (such as rotation and/or tilt associated with the Evergreen Fault) between the Beckler Peak stock and the Mt. Stuart Batholith was minor.
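
    The site and stock-wide statistics quoted (D, I, k, alpha95) are Fisher statistics of unit vectors. A small sketch of how a mean of that kind is computed from individual directions; the four directions below are hypothetical and are not the published site data:

        import numpy as np

        def fisher_mean(decs_deg, incs_deg):
            """Fisher mean direction, precision parameter k and alpha95 (degrees)."""
            d, i = np.radians(decs_deg), np.radians(incs_deg)
            xyz = np.column_stack([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
            rx, ry, rz = xyz.sum(axis=0)
            R = np.sqrt(rx**2 + ry**2 + rz**2)          # length of the resultant vector
            N = len(d)
            mean_dec = np.degrees(np.arctan2(ry, rx)) % 360.0
            mean_inc = np.degrees(np.arcsin(rz / R))
            k = (N - 1) / (N - R)
            alpha95 = np.degrees(np.arccos(1 - (N - R) / R * ((1 / 0.05) ** (1 / (N - 1)) - 1)))
            return mean_dec, mean_inc, k, alpha95

        # Hypothetical site-mean directions (declination, inclination)
        decs = [358.0, 6.0, 10.0, 2.0]
        incs = [38.0, 45.0, 40.0, 44.0]
        print("stock-wide mean (D, I, k, a95):", [round(v, 1) for v in fisher_mean(decs, incs)])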

  19. Magnitude and Duration of the Baltic Ice Lake Drainage Based on Modeling and Sediment Characteristics

    NASA Astrophysics Data System (ADS)

    Johnson, M. D.; Bjork, G.; Elam, J.; Öhrling, C.

    2015-12-01

    At the time of the Pleistocene-Holocene transition, the Baltic Ice Lake catastrophically drained to the Atlantic Ocean over two narrow subaerial highlands (Mt. Billingen and Klyftamon) in south-central Sweden. The amount of water drained is known (7000 km3, lowering the water level by 25 m), but the duration of the drainage, and thus the magnitude of the discharge, is not securely known. Here we present geomorphic and sedimentologic observations and modeling results that allow estimates of the peak velocity, peak discharge, and overall drainage duration for the event. Newly available LiDAR-based digital elevation models of the highlands have allowed identification of the boulder bars and fields formed during the drainage. D90 measurements on sediment in these deposits were used in velocity equations available in the literature, and they predict velocities ranging from 8 to 18 m/sec. Grain size in these boulder deposits decreases downflow. The morphology of the boulder deposits varies with paleo-water depth. We also modeled the drainage duration assuming a simple ice-breach model. This model produced a peak velocity of 12 m/sec, agreeing well with the sediment data. This peak occurred about 5 months after the initial breach. The entire drainage took less than two years, although 75% of the drainage occurred within the first year.
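
    The step from a D90 grain size to a flow velocity typically relies on an empirical competence relation of power-law form. The sketch below uses that generic form with placeholder coefficients; the actual equations and coefficient values used in the study are not given in the abstract, so these numbers are purely illustrative.

```python
def velocity_from_d90(d90_mm, a=0.2, b=0.5):
    """Generic empirical competence relation v = a * D90**b (v in m/s,
    D90 in mm). The coefficients a and b are placeholders; published
    relations (and the ones used in the study) have their own
    calibrated values."""
    return a * d90_mm ** b

# Boulder deposits with D90 of roughly 1-3 m (1000-3000 mm)
for d in (1000.0, 2000.0, 3000.0):
    print(d, "mm ->", round(velocity_from_d90(d), 1), "m/s")
```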

  20. MOSES AND DENNISON PEAK ROADLESS AREAS, CALIFORNIA.

    USGS Publications Warehouse

    Goldfarb, Richard J.; Lipton, David A.

    1984-01-01

    A mineral-resource survey was conducted in the Moses and Dennison Peak Roadless Areas, southeastern Sierra Nevada, California. One area within the Moses Roadless Area is classified as having substantiated mineral-resource potential for small base-metal skarn deposits. Additionally, geochemical data indicate probable potential for small base-metal skarn deposits from one locality within Dennison Peak Roadless Area and for small tungsten skarn deposits from a region within Moses Roadless Area. The geologic setting precludes the presence of energy resources.

  1. [Accuracy of Mini-Wright peak expiratory flow meters]

    PubMed

    Camargos, P A; Ruchkys, V C; Dias, R M; Sakurai, E

    2000-01-01

    OBJECTIVE: To evaluate the accuracy of the Mini-Wright (Clement Clarke International Ltd.) peak-flow meters. METHODS: Twenty of these meters were checked with an electronic calibration syringe (Jones Flow-Volume Calibrator(R)). Nine of them had an old scale, with values displayed equidistantly, and eleven had a new mechanical scale with non-equidistant values. Each device was connected in series to the calibration syringe to perform eight hand-driven volume injections, with flows ranging from 100 to 700 l/min. Absolute and relative differences between meters and syringe were calculated, with the syringe values taken as the standard. The accuracy of the twenty Mini-Wright devices was assessed against the American Thoracic Society criteria (±10% or ±20 l/min) and/or the European Respiratory Society criteria (±5% or ±5 l/min). RESULTS: New-scale instruments were more accurate than old-scale meters (p < 0.001) by both ATS and ERS criteria. Every meter was rechecked after 600 measurements, and both the old- and new-scale instruments maintained the same level of performance. CONCLUSIONS: The results suggest that new-scale meters are accurate and can be safely used in clinical practice. The authors strongly recommend that they be rechecked regularly to ensure that they remain within the ATS and ERS variation limits. PMID:14647633
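
    A minimal sketch of applying tolerance criteria of this kind (a percentage limit or an absolute flow limit, whichever is met) to a meter reading against the syringe reference; the numeric example is invented.

```python
def meets_criterion(measured, reference, pct_tol, abs_tol):
    """True if the meter reading is within ±pct_tol (fractional) of the
    reference flow OR within ±abs_tol (L/min), mirroring ATS/ERS-style limits."""
    diff = abs(measured - reference)
    return diff <= pct_tol * reference or diff <= abs_tol

# Example: syringe delivers 400 L/min, meter reads 430 L/min
ref, meas = 400.0, 430.0
print("ATS:", meets_criterion(meas, ref, 0.10, 20.0))  # ±10% or ±20 L/min
print("ERS:", meets_criterion(meas, ref, 0.05, 5.0))   # ±5% or ±5 L/min
```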

  2. Multiscale mapping of completeness magnitude of earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Vorobieva, Inessa; Narteau, Clement; Shebalin, Peter; Beauducel, François; Nercessian, Alexandre; Clouard, Valérie; Bouin, Marie-Paule

    2013-04-01

    We propose a multiscale method to map spatial variations in the completeness magnitude Mc of earthquake catalogs. The Mc value may vary significantly in space because of changes in seismic network density. Here we suggest a way to use only earthquake catalogs to separate small areas of higher network density (lower Mc) from larger areas of lower network density (higher Mc). We restrict the analysis of the frequency-magnitude distributions (FMDs) to limited magnitude ranges, thus allowing the FMD to deviate from log-linearity outside each range. We associate ranges of larger magnitudes with increasing areas for data selection, based on a roughly constant average number of completely recorded earthquakes. Then, for each point in space, we document the earthquake frequency-magnitude distribution at all length scales within the corresponding earthquake magnitude ranges. High resolution of the Mc value is achieved through the determination of the smallest space-magnitude scale in which the Gutenberg-Richter law (i.e., an exponential decay) is verified. The multiscale procedure isolates the magnitude range that best matches the local seismicity and the local recording capacity. Using artificial catalogs and earthquake catalogs of the Lesser Antilles arc, this Mc mapping method is shown to be efficient in regions with mixed types of seismicity, a variable density of epicenters, and various levels of recording completeness.
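
    The building block of such a procedure is testing, for a given subset of events, whether the frequency-magnitude distribution is log-linear above a trial completeness magnitude. The sketch below is a single-scale illustration of that test (a maximum-likelihood b-value plus a Wiemer & Wyss-style goodness-of-fit score), not the authors' full multiscale algorithm.

```python
import numpy as np

def estimate_mc(mags, dm=0.1, gof_threshold=90.0):
    """For each trial Mc, fit a Gutenberg-Richter b-value by maximum
    likelihood (Aki, 1965) and score how well the synthetic GR curve
    reproduces the observed cumulative counts; return the first Mc
    whose fit exceeds the goodness-of-fit threshold."""
    mags = np.asarray(mags)
    for mc in np.arange(mags.min(), mags.max() - 0.5, dm):
        m = mags[mags >= mc]
        if len(m) < 50:
            break
        b = np.log10(np.e) / (m.mean() - (mc - dm / 2))       # ML b-value
        a = np.log10(len(m)) + b * mc
        bins = np.arange(mc, m.max() + dm, dm)
        obs = np.array([(m >= x).sum() for x in bins])
        syn = 10 ** (a - b * bins)
        gof = 100 * (1 - np.abs(obs - syn).sum() / obs.sum())  # % of counts explained
        if gof >= gof_threshold:
            return mc, b
    return None, None

# Synthetic test: complete above M = 2.0 with b = 1, incomplete below
rng = np.random.default_rng(1)
full = 2.0 + rng.exponential(1 / np.log(10), 5000)
small = rng.uniform(0.5, 2.0, 800)        # only a fraction of small events recorded
print(estimate_mc(np.concatenate([full, small])))
```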

  3. A scheme to set preferred magnitudes in the ISC Bulletin

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Storchak, Dmitry A.

    2016-04-01

    One of the main purposes of the International Seismological Centre (ISC) is to collect, integrate, and reprocess seismic bulletins provided by agencies around the world in order to produce the ISC Bulletin. This is regarded as the most comprehensive bulletin of the Earth's seismicity, and its production is based on a unique cooperation in the seismological community that allows the ISC to complement the work of seismological agencies operating at global and/or local-regional scales. In addition, by using the seismic wave measurements provided by reporting agencies, the ISC computes, where possible, its own event locations and magnitudes, such as the short-period body wave mb and surface wave MS. Therefore, the ISC Bulletin contains the results of the reporting agencies as well as the ISC's own solutions. Among the parameters listed in seismological bulletins, the magnitude is of particular importance for characterizing a seismic event. The selection of a magnitude value (or multiple values) for various research purposes or practical applications is not always a straightforward task for users of the ISC Bulletin and related products, since a multitude of magnitude types is currently computed by seismological agencies (sometimes using different standards for the same magnitude type). Here, we describe a scheme that we intend to implement in routine ISC operations to mark the preferred magnitudes and thereby help ISC users select events with magnitudes of interest.

  4. Quantifying Heartbeat Dynamics by Magnitude and Sign Correlations

    NASA Astrophysics Data System (ADS)

    Ivanov, Plamen Ch.; Ashkenazy, Yosef; Kantelhardt, Jan W.; Stanley, H. Eugene

    2003-05-01

    We review a recently developed approach for analyzing time series with long-range correlations by decomposing the signal increment series into magnitude and sign series and analyzing their scaling properties. We show that time series with identical long-range correlations can exhibit different time organization for the magnitude and sign. We apply our approach to series of time intervals between consecutive heartbeats. Using the detrended fluctuation analysis method we find that the magnitude series is long-range correlated, while the sign series is anticorrelated and that both magnitude and sign series may have clinical applications. Further, we study the heartbeat magnitude and sign series during different sleep stages — light sleep, deep sleep, and REM sleep. For the heartbeat sign time series we find short-range anticorrelations, which are strong during deep sleep, weaker during light sleep and even weaker during REM sleep. In contrast, for the heartbeat magnitude time series we find long-range positive correlations, which are strong during REM sleep and weaker during light sleep. Thus, the sign and the magnitude series provide information which is also useful for distinguishing between different sleep stages.
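
    The magnitude/sign decomposition itself is a one-line operation on the increment series; the sketch below shows it on a toy series of interbeat (RR) intervals. In the published analysis, each sub-series would then be passed to detrended fluctuation analysis, which is not reproduced here.

```python
import numpy as np

def magnitude_sign_series(x):
    """Decompose a time series into the magnitude and sign series of its
    increments, as in the magnitude/sign analysis of heartbeat intervals."""
    dx = np.diff(np.asarray(x, dtype=float))
    return np.abs(dx), np.sign(dx)

# Toy RR-interval series in seconds (invented values)
rr = np.array([0.81, 0.83, 0.80, 0.84, 0.82, 0.85, 0.79])
mag, sgn = magnitude_sign_series(rr)
print(mag)   # increment magnitudes
print(sgn)   # increment signs (+1 / -1)
```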

  5. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
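
    The first test quoted above (the largest observed event scaling with the logarithm of the number of induced events) follows directly from sampling an unbounded Gutenberg-Richter distribution. The sketch below compares that expectation with a small Monte Carlo experiment; the completeness magnitude, b-value, and sample size are arbitrary illustrative choices.

```python
import numpy as np

def expected_max_magnitude(n_events, m_complete, b_value=1.0):
    """Expected (modal) largest magnitude in a sample of n_events drawn
    from an unbounded Gutenberg-Richter distribution above m_complete;
    it grows with log10 of the number of events."""
    return m_complete + np.log10(n_events) / b_value

# Monte Carlo check with arbitrary illustrative parameters
rng = np.random.default_rng(0)
n, mc, b = 5000, 0.5, 1.0
beta = b * np.log(10)            # GR magnitudes above mc are exponential with rate beta
sims = [(mc + rng.exponential(1 / beta, n)).max() for _ in range(200)]
print(expected_max_magnitude(n, mc, b))   # modal value from the formula
print(np.median(sims))                    # simulated maxima scatter around (and above) it
```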

  6. Derivation of Johnson-Cousins Magnitudes from DSLR Camera Observations

    NASA Astrophysics Data System (ADS)

    Park, Woojin; Pak, Soojong; Shim, Hyunjin; Le, Huynh Anh N.; Im, Myungshin; Chang, Seunghyuk; Yu, Joonkyu

    2016-01-01

    The RGB Bayer filter system consists of a mosaic of R, G, and B filters on the grid of photosensors with which typical commercial DSLR (Digital Single Lens Reflex) cameras and CCD cameras are equipped. A large amount of unique astronomical data obtained with RGB Bayer filter systems is available, including transient objects, e.g., supernovae, variable stars, and solar system bodies. The utilization of such data in scientific research requires that reliable photometric transformation methods be available between the systems. In this work, we develop a series of equations to convert the observed magnitudes in the RGB Bayer filter system (RB, GB, and BB) into the Johnson-Cousins BVR filter system (BJ, VJ, and RC). The new transformation equations give the calculated magnitudes in the Johnson-Cousins filters (BJcal, VJcal, and RCcal) as functions of RGB magnitudes and colors. The mean differences between the transformed magnitudes and original magnitudes, i.e., the residuals, are (BJ - BJcal) = 0.064 mag, (VJ - VJcal) = 0.041 mag, and (RC - RCcal) = 0.039 mag. The calculated Johnson-Cousins magnitudes from the transformation equations show a good linear correlation with the observed Johnson-Cousins magnitudes.
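
    A transformation of this kind is typically a linear combination of an instrumental magnitude and one or two color terms. The sketch below shows that generic structure; the functional form and the coefficient values are placeholders for illustration and are not the equations or coefficients derived in the paper.

```python
def rgb_to_johnson(R_B, G_B, B_B, coeffs):
    """Transform RGB Bayer magnitudes into Johnson-Cousins B, V, R using
    linear color terms, e.g. V_J = G_B + c0 + c1*(B_B - G_B) + c2*(G_B - R_B).
    Coefficient values must come from a fit to standard stars."""
    c = coeffs
    B_J = B_B + c["B"][0] + c["B"][1] * (B_B - G_B)
    V_J = G_B + c["V"][0] + c["V"][1] * (B_B - G_B) + c["V"][2] * (G_B - R_B)
    R_C = R_B + c["R"][0] + c["R"][1] * (G_B - R_B)
    return B_J, V_J, R_C

# Placeholder coefficients for illustration only (not the published values)
coeffs = {"B": (0.10, 0.25), "V": (-0.02, 0.05, 0.10), "R": (0.03, 0.15)}
print(rgb_to_johnson(12.80, 12.35, 13.10, coeffs))
```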

  7. Has Athletic Performance Reached its Peak?

    PubMed

    Berthelot, Geoffroy; Sedeaud, Adrien; Marck, Adrien; Antero-Jacquemin, Juliana; Schipman, Julien; Saulière, Guillaume; Marc, Andy; Desgorces, François-Denis; Toussaint, Jean-François

    2015-09-01

    Limits to athletic performance have long been a topic of myth and debate. However, sport performance appears to have reached a state of stagnation in recent years, suggesting that the physical capabilities of humans and other athletic species, such as greyhounds and thoroughbreds, cannot progress indefinitely. Although the ultimate capabilities may be predictable, the exact path toward the absolute maximal performance values remains difficult to assess and relies on technical innovations, sport regulation, and other parameters that depend on current societal and economic conditions. The aim of this literature review was to assess the possible plateau of top physical capabilities in various events and to detail the historical backgrounds and sociocultural, anthropometrical, and physiological factors influencing the progress and regression of athletic performance. Time series of performances in Olympic disciplines, such as track and field and swimming events, from 1896 to 2012 reveal a major decrease in performance development. Such a saturation effect is simultaneous in greyhound, thoroughbred, and frog performances. The genetic condition, exhaustion of phenotypic pools, economic context, and the depletion of optimal morphological traits contribute to the observed limitation of physical capabilities. If present conditions prevail, we approach absolute physical limits and face a continued period of world-record scarcity. Optional scenarios for further improvement will mostly depend on sport technology and modifications of competition rules. PMID:26094000

  8. Inversion of Multi-Station Schumann Resonance Background Records for Global Lightning Activity in Absolute Units

    NASA Astrophysics Data System (ADS)

    Williams, E. R.; Mushtak, V. C.; Guha, A.; Boldi, R. A.; Bor, J.; Nagy, T.; Satori, G.; Sinha, A. K.; Rawat, R.; Hobara, Y.; Sato, M.; Takahashi, Y.; Price, C. G.; Neska, M.; Alexander, K.; Yampolski, Y.; Moore, R. C.; Mitchell, M. F.; Fraser-Smith, A. C.

    2014-12-01

    Every lightning flash contributes energy to the TEM mode of the natural global waveguide that contains the Earth's Schumann resonances. The modest attenuation at ELF (0.1 dB/Mm) allows for the continuous monitoring of global lightning with a small number of receiving stations worldwide. In this study, nine ELF receiving sites (in Antarctica (3 sites), Hungary, India, Japan, Poland, Spitsbergen and USA) are used to provide power spectra at 12-minute intervals in two absolutely calibrated magnetic fields and, occasionally, one electric field, with up to five resonance modes each. The observables are the extracted modal parameters (peak intensity, peak frequency and Q-factor) for each spectrum. The unknown quantities are the geographical locations of three continental lightning 'chimneys' and their lightning source strengths in absolute units (C² km²/s). The unknowns are calculated from the observables by the iterative inversion of an evolving 'sensitivity matrix' whose elements are the partial derivatives of each observable for all receiving sites with respect to each unknown quantity. The propagation model includes the important day-night asymmetry of the natural waveguide. To overcome the problem of multiple minima (common in inversion problems of this kind), location information from the World Wide Lightning Location Network has been used to make initial guess solutions based on centroids of stroke locations in each chimney. Results for five consecutive days in 2009 (Jan 7-11) show UT variations with the African chimney dominating on four of five days, and America dominating on the fifth day. The amplitude variations in absolute source strength exceed that of the 'Carnegie curve' of the DC global circuit by roughly twofold. Day-to-day variations in chimney source strength are of the order of tens of percent. Examination of forward calculations performed with the global inversion solution often show good agreement with the observed diurnal variations at

  9. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-06-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.

  11. Relationships between peak ground acceleration, peak ground velocity, and modified mercalli intensity in California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.

    1999-01-01

    We have developed regression relationships between Modified Mercalli Intensity (Imm) and peak ground acceleration (PGA) and velocity (PGV) by comparing horizontal peak ground motions to observed intensities for eight significant California earthquakes. For the limited range of Modified Mercalli intensities (Imm), we find that for peak acceleration with V ≤ Imm ≤ VIII, Imm = 3.66 log(PGA) - 1.66, and for peak velocity with V ≤ Imm ≤ IX, Imm = 3.47 log(PGV) + 2.35. From comparison with observed intensity maps, we find that a combined regression based on peak velocity for intensity > VII and on peak acceleration for intensity < VII is most suitable for reproducing observed Imm patterns, consistent with high intensities being related to damage (proportional to ground velocity) and with lower intensities determined by felt accounts (most sensitive to higher-frequency ground acceleration). These new Imm relationships are significantly different from the Trifunac and Brady (1975) correlations, which have been used extensively in loss estimation.
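
    The combined regression can be applied as stated: use the acceleration relation at lower shaking levels and switch to the velocity relation once the acceleration-based estimate reaches intensity VII. A minimal sketch, assuming PGA in cm/s² and PGV in cm/s (the units are not stated in the abstract):

```python
import math

def mmi_from_ground_motion(pga_cms2, pgv_cms):
    """Modified Mercalli intensity from the regressions quoted above,
    using the PGA relation below intensity VII and the PGV relation
    at and above VII."""
    mmi_pga = 3.66 * math.log10(pga_cms2) - 1.66
    mmi_pgv = 3.47 * math.log10(pgv_cms) + 2.35
    return mmi_pga if mmi_pga < 7.0 else mmi_pgv

# Example: PGA = 400 cm/s^2, PGV = 35 cm/s -> intensity of about 7.7
print(round(mmi_from_ground_motion(400.0, 35.0), 1))
```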

  12. Mini-implants and miniplates generate sub-absolute and absolute anchorage

    PubMed Central

    Consolaro, Alberto

    2014-01-01

    The functional demand imposed on bone promotes changes in the spatial properties of osteocytes as well as in their extensions, which are uniformly distributed throughout the mineralized surface. Once spatial deformation is established, osteocytes create the need for structural adaptations that result in bone formation and resorption to meet the functional demands. The endosteum and the periosteum are the effectors responsible for stimulating adaptive osteocytes on the inner and outer surfaces. Changes in shape, volume, and position of the jaws resulting from skeletal correction of the maxilla and mandible require anchorage so that bone remodeling, driven by the spatial deformation imposed by orthodontic appliances, can redefine morphology, esthetics, and function. Examining the degree of change in shape, volume, and structural relationship of the areas where mini-implants and miniplates are placed allows us to classify mini-implants as devices of sub-absolute anchorage and miniplates as devices of absolute anchorage. PMID:25162561

  13. Comparison of magnetic probe calibration at nano and millitesla magnitudes

    NASA Astrophysics Data System (ADS)

    Pahl, Ryan A.; Rovey, Joshua L.; Pommerenke, David J.

    2014-01-01

    Magnetic field probes are invaluable diagnostics for pulsed inductive plasma devices, where field magnitudes on the order of tenths of a tesla or larger are common. Typical methods of providing a broadband calibration of B-dot probes involve either a Helmholtz coil driven by a function generator or a network analyzer. Both calibration methods typically produce field magnitudes of tens of microtesla or less, at least three and as many as six orders of magnitude lower than their intended use. This calibration factor is then assumed constant regardless of magnetic field magnitude, and the effects of the experimental setup are ignored. This work quantifies the variation in calibration factor observed when calibrating magnetic field probes at low field magnitudes. Calibration of two B-dot probe designs as a function of frequency and field magnitude is presented. The first B-dot probe design is the most commonly used and is constructed from two hand-wound inductors in a differential configuration. The second probe uses surface-mounted inductors in a differential configuration with balanced shielding to further reduce common-mode noise. Calibration factors are determined experimentally using an 80.4 mm radius Helmholtz coil in two separate configurations over a frequency range of 100-1000 kHz. A conventional low-magnitude calibration using a vector network analyzer produced a field magnitude of 158 nT and yielded calibration factors of 15 663 ± 1.7% and 4920 ± 0.6% T/(V s) at 457 kHz for the surface-mounted and hand-wound probes, respectively. A relevant-magnitude calibration using a pulsed-power setup with field magnitudes of 8.7-354 mT yielded calibration factors of 14 615 ± 0.3% and 4507 ± 0.4% T/(V s) at 457 kHz for the surface-mounted inductor and hand-wound probe, respectively. Low-magnitude calibration resulted in a larger calibration factor, with an average difference of 9.7% for the surface-mounted probe and 12.0% for the hand-wound probe. The

  14. Absolute brightness temperature measurements at 2.1-mm wavelength

    NASA Technical Reports Server (NTRS)

    Ulich, B. L.

    1974-01-01

    Absolute measurements of the brightness temperatures of the Sun, new Moon, Venus, Mars, Jupiter, Saturn, and Uranus, and of the flux density of DR21 at 2.1-mm wavelength are reported. Relative measurements at 3.5-mm wavelength are also presented which resolve the absolute calibration discrepancy between The University of Texas 16-ft radio telescope and the Aerospace Corporation 15-ft antenna. The use of the bright planets and DR21 as absolute calibration sources at millimeter wavelengths is discussed in the light of recent observations.

  15. Absolute Antenna Calibration at the US National Geodetic Survey

    NASA Astrophysics Data System (ADS)

    Mader, G. L.; Bilich, A. L.

    2012-12-01

    Geodetic GNSS applications routinely demand millimeter precision and extremely high levels of accuracy. To achieve these accuracies, measurement and instrument biases at the centimeter to millimeter level must be understood. One of these biases is the antenna phase center, the apparent point of signal reception for a GNSS antenna. It has been well established that phase center patterns differ between antenna models and manufacturers; additional research suggests that the addition of a radome or the choice of antenna mount can significantly alter those a priori phase center patterns. For the more demanding GNSS positioning applications and especially in cases of mixed-antenna networks, it is all the more important to know antenna phase center variations as a function of both elevation and azimuth in the antenna reference frame and incorporate these models into analysis software. Determination of antenna phase center behavior is known as "antenna calibration". Since 1994, NGS has computed relative antenna calibrations for more than 350 antennas. In recent years, the geodetic community has moved to absolute calibrations - the IGS adopted absolute antenna phase center calibrations in 2006 for use in their orbit and clock products, and NGS's CORS group began using absolute antenna calibration upon the release of the new CORS coordinates in IGS08 epoch 2005.00 and NAD 83(2011,MA11,PA11) epoch 2010.00. Although NGS relative calibrations can be and have been converted to absolute, it is considered best practice to independently measure phase center characteristics in an absolute sense. Consequently, NGS has developed and operates an absolute calibration system. These absolute antenna calibrations accommodate the demand for greater accuracy and for 2-dimensional (elevation and azimuth) parameterization. NGS will continue to provide calibration values via the NGS web site www.ngs.noaa.gov/ANTCAL, and will publish calibrations in the ANTEX format as well as the legacy ANTINFO

  16. Direct comparisons between absolute and relative geomagnetic paleointensities: Absolute calibration of a relative paleointensity stack

    NASA Astrophysics Data System (ADS)

    Mochizuki, N.; Yamamoto, Y.; Hatakeyama, T.; Shibuya, H.

    2013-12-01

    Absolute geomagnetic paleointensities (APIs) have been estimated from igneous rocks, while relative paleomagnetic intensities (RPIs) have been reported from sediment cores. These two datasets have been treated separately, as correlations between APIs and RPIs are difficult on account of age uncertainties. High-resolution RPI stacks have been constructed from globally distributed sediment cores with high sedimentation rates. Previous studies often assumed that the RPI stacks have a linear relationship with geomagnetic axial dipole moments, and calibrated the RPI values to API values. However, the assumption of a linear relationship between APIs and RPIs has not been evaluated. Also, a quantitative calibration method for the RPI is lacking. We present a procedure for directly comparing API and RPI stacks, thus allowing reliable calibrations of RPIs. Direct comparisons between APIs and RPIs were conducted with virtually no associated age errors using both tephrochronologic correlations and RPI minima. Using the stratigraphic positions of tephra layers in oxygen isotope stratigraphic records, we directly compared the RPIs and APIs reported from welded tuffs contemporaneously extruded with the tephra layers. In addition, RPI minima during geomagnetic reversals and excursions were compared with APIs corresponding to the reversals and excursions. The comparison of APIs and RPIs at these exact points allowed a reliable calibration of the RPI values. We applied this direct comparison procedure to the global RPI stack PISO-1500. For six independent calibration points, virtual axial dipole moments (VADMs) from the corresponding APIs and RPIs of the PISO-1500 stack showed a near-linear relationship. On the basis of the linear relationship, RPIs of the stack were successfully calibrated to the VADMs. The direct comparison procedure provides an absolute calibration method that will contribute to the recovery of temporal variations and distributions of geomagnetic axial dipole
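
    The calibration step described here amounts to fitting a straight line through the paired tie-point values and applying it to the rest of the stack. A minimal sketch with invented tie-point numbers (the actual six calibration points are not tabulated in the abstract):

```python
import numpy as np

# Fit a line through paired (RPI, VADM) tie points, then map the whole
# RPI stack onto absolute VADMs. All numbers below are invented.
rpi_ties  = np.array([0.35, 0.55, 0.70, 0.90, 1.10, 1.30])
vadm_ties = np.array([2.1, 3.4, 4.4, 5.8, 7.0, 8.3])      # 10^22 A m^2

slope, intercept = np.polyfit(rpi_ties, vadm_ties, 1)

rpi_stack = np.array([0.4, 0.8, 1.2, 0.6])                 # part of an RPI record
vadm_stack = slope * rpi_stack + intercept                 # calibrated to VADM
print(slope, intercept, vadm_stack)
```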

  17. Influence of weak motion data to magnitude dependence of PGA prediction model in Austria

    NASA Astrophysics Data System (ADS)

    Jia, Yan

    2015-04-01

    Data recorded by the STS2 sensors of the Austrian Seismic Network were differentiated and used to derive the PGA prediction model for Austria (Jia and Lenhardt, 2010). Before using it in our hazard assessment and real-time shakemaps, it is necessary to validate this model and understand it thoroughly. In this paper, the influence of weak-motion data on the magnitude dependence of our prediction model is studied. In addition, spatial PGA residuals between the measurements and the predictions are investigated as well. A total of 127 earthquakes with magnitudes between 3 and 5.4 were used to derive the PGA prediction model published in 2011. Unfortunately, 90% of the PGA measurements used came from events with magnitudes smaller than 4; only ten earthquakes have magnitudes larger than 4, which is the magnitude range most important for hazard assessment. In this investigation, the 127 earthquakes were divided into two groups: the first group includes only events with magnitudes smaller than 4, while the second group contains earthquakes with magnitudes larger than 4. Using the same model formulation for PGA attenuation as in 2011, the coefficients of the model were inverted from the measurements of each group and compared with those based on the complete data set. It was found that the group of weak events returned results differing only slightly from those based on all 127 events, while the group of strong events (ml > 4) gave a stronger magnitude dependence than the model published in 2011. The distance coefficients remained nearly unchanged for all three inversions. As a second step, spatial PGA residuals between the measurements and the predictions of our model were investigated. As explained in Jia and Lenhardt (2013), there are some differences in site amplification between western and eastern Austria. For a fair comparison, residuals were normalized for each station before the investigation. Then normalized

  18. Correlated peak relative light intensity and peak current in triggered lightning subsequent return strokes

    NASA Technical Reports Server (NTRS)

    Idone, V. P.; Orville, R. E.

    1985-01-01

    The correlation between peak relative light intensity L(R) and stroke peak current I(R) is examined for 39 subsequent return strokes in two triggered lightning flashes. One flash contained 19 strokes and the other 20 strokes for which direct measurements were available of the return stroke peak current at ground. Peak currents ranged from 1.6 to 21 kA. The measurements of peak relative light intensity were obtained from photographic streak recordings using calibrated film and microsecond resolution. Correlations, significant at better than the 0.1 percent level, were found for several functional relationships. Although a relation between L(R) and I(R) is evident in these data, none of the analytical relations considered is clearly favored. The correlation between L(R) and the maximum rate of current rise is also examined, but less correlation than between L(R) and I(R) is found. In addition, the peak relative intensity near ground is evaluated for 22 dart leaders, and a mean ratio of peak dart leader to peak return stroke relative light intensity was found to be 0.1 with a range of 0.02-0.23. Using two different methods, the peak current near ground in these dart leaders is estimated to range from 0.1 to 6 kA.

  19. Peak Wind Tool for General Forecasting

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Short, David

    2008-01-01

    This report describes work done by the Applied Meteorology Unit (AMU) in predicting peak winds at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron requested the AMU develop a tool to help them forecast the speed and timing of the daily peak and average wind, from the surface to 300 ft on KSC/CCAFS, during the cool season. Based on observations from the KSC/CCAFS wind tower network, Shuttle Landing Facility (SLF) surface observations, and CCAFS soundings from the cool season months of October 2002 to February 2007, the AMU created multiple linear regression equations to predict the timing and speed of the daily peak wind speed, as well as the background average wind speed. Several possible predictors were evaluated, including persistence, the temperature inversion depth and strength, wind speed at the top of the inversion, wind gust factor (ratio of peak wind speed to average wind speed), synoptic weather pattern, occurrence of precipitation at the SLF, and strongest wind in the lowest 3000 ft, 4000 ft, or 5000 ft.
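
    A minimal sketch of the regression step: assemble candidate predictors into a design matrix and fit a multiple linear regression for the daily peak wind speed by least squares. The predictor subset and the numbers below are invented for illustration; they are not the AMU's data or final equations.

```python
import numpy as np

# Columns: average wind (kt), inversion strength (C), gust factor (invented data)
X = np.array([
    [12.0, 2.1, 1.4],
    [18.0, 1.2, 1.5],
    [ 9.0, 3.4, 1.3],
    [15.0, 0.8, 1.6],
    [20.0, 1.0, 1.5],
])
y = np.array([17.0, 27.0, 12.0, 24.0, 30.0])    # observed daily peak wind (kt)

A = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares fit
print("intercept and coefficients:", coef)
print("fitted peak for the first day:", A[0] @ coef)
```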

  20. Absorption, Creativity, Peak Experiences, Empathy, and Psychoticism.

    ERIC Educational Resources Information Center

    Mathes, Eugene W.; And Others

    Tellegen and Atkinson suggested that the trait of absorption may play a part in meditative skill, creativity, capacity for peak experiences, and empathy. Although the absorption-meditative skill relationship has been confirmed, other predictions have not been tested. Tellegen and Atkinson's Absorption Scale was completed by undergraduates in four…

  1. Six Ways To Foster Peak Performance.

    ERIC Educational Resources Information Center

    Sevilla, Christine; Wells, Timothy D.

    1999-01-01

    Discusses six initiatives that organizations can support to ensure peak performance: individual knowledge portfolios; mentoring and apprenticeship relationships; electronic conferencing systems; organizational knowledge repository; community of practice; reward and recognition. Defines each initiative and describes how to make each one work in an…

  2. Avoiding the False Peaks in Correlation Discrimination

    SciTech Connect

    Awwal, A S

    2009-07-31

    Fiducials imprinted on laser beams are used to perform video-image-based alignment of the 192 laser beams in the National Ignition Facility (NIF) of Lawrence Livermore National Laboratory. In many video images, matched filtering is used to detect the location of these fiducials. Generally, the highest correlation peak is used to determine the position of the fiducials. However, when the signal to be detected is very weak compared to the noise, this approach breaks down completely: the highest peaks act as traps for false detection. The active target images used for automatic alignment in the National Ignition Facility are examples of such images. In these images, the fiducials of interest exhibit extremely low intensity and contrast and are surrounded by high-intensity reflections from metallic objects. Consequently, the highest correlation peaks are caused by these bright objects. In this work, we show how the shape of the correlation peak is exploited to isolate the valid matches from hundreds of invalid correlation peaks, and thereby identify extremely faint fiducials under very challenging imaging conditions.
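
    A hedged sketch of the idea: run the matched filter, take several candidate peaks instead of trusting only the global maximum, and keep those whose correlation peak stands out sharply from its local neighborhood. The specific sharpness metric below is an illustrative stand-in, not the shape criterion used at NIF.

```python
import numpy as np
from scipy.signal import correlate2d

def shape_validated_peaks(image, template, n_candidates=10, sharpness_min=2.0):
    """Matched filtering followed by a simple peak-shape screen: among the
    top candidate correlation peaks, keep only those that clearly exceed
    the mean absolute correlation in their local neighborhood."""
    c = correlate2d(image - image.mean(), template - template.mean(), mode="same")
    candidates = np.argsort(c, axis=None)[::-1][:n_candidates]   # strongest peaks
    accepted = []
    for idx in candidates:
        r, col = np.unravel_index(idx, c.shape)
        r0, r1 = max(r - 5, 0), min(r + 6, c.shape[0])
        c0, c1 = max(col - 5, 0), min(col + 6, c.shape[1])
        local = c[r0:r1, c0:c1]
        if c[r, col] > sharpness_min * np.abs(local).mean():
            accepted.append((r, col))
    return accepted
```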

  3. Hubbert's Peak: the Impending World oil Shortage

    NASA Astrophysics Data System (ADS)

    Deffeyes, K. S.

    2004-12-01

    Global oil production will probably reach a peak sometime during this decade. After the peak, the world's production of crude oil will fall, never to rise again. The world will not run out of energy, but developing alternative energy sources on a large scale will take at least 10 years. The slowdown in oil production may already be beginning; the current price fluctuations for crude oil and natural gas may be the preamble to a major crisis. In 1956, the geologist M. King Hubbert predicted that U.S. oil production would peak in the early 1970s. Almost everyone, inside and outside the oil industry, rejected Hubbert's analysis. The controversy raged until 1970, when the U.S. production of crude oil started to fall. Hubbert was right. Around 1995, several analysts began applying Hubbert's method to world oil production, and most of them estimate that the peak year for world oil will be between 2004 and 2008. These analyses were reported in some of the most widely circulated sources: Nature, Science, and Scientific American. None of our political leaders seem to be paying attention. If the predictions are correct, there will be enormous effects on the world economy. Even the poorest nations need fuel to run irrigation pumps. The industrialized nations will be bidding against one another for the dwindling oil supply. The good news is that we will put less carbon dioxide into the atmosphere. The bad news is that my pickup truck has a 25-gallon tank.

  4. Peak structural response to nonstationary random excitations

    NASA Technical Reports Server (NTRS)

    Shinozuka, M.; Yang, J.-N.

    1971-01-01

    This study establishes the distribution function of peak response values, based on a frequency interpretation. The excitations considered include impact loading on landing gears and aircraft gust loading. Because of the relative severity of such excitations, prediction of fatigue and maximum response characteristics is an important part of the task of structural analysis and design.

  5. Double-peak subauroral ion drifts (DSAIDs)

    NASA Astrophysics Data System (ADS)

    He, Fei; Zhang, Xiao-Xin; Wang, Wenbin; Chen, Bo

    2016-06-01

    This paper reports double-peak subauroral ion drifts (DSAIDs), which are a unique subset of subauroral ion drifts (SAIDs). A statistical analysis has been carried out for the first time with a database of 454 DSAID events identified from Defense Meteorological Satellite Program observations from 1987 to 2012. Both case studies and statistical analyses show that the two velocity peaks of DSAIDs are associated with two ion temperature peaks and two region-2 field-aligned current (R2-FAC) peaks in the midlatitude ionospheric trough located in the low-conductance subauroral region. DSAIDs are regional and vary significantly with magnetic local time. DSAIDs can evolve from/to SAIDs during their lifetimes, which range from several minutes to tens of minutes. Comparisons between the ionospheric parameters of DSAIDs and SAIDs indicate that double-layer R2-FACs may be the main driver of DSAIDs. It is also found that DSAIDs occur during more disturbed conditions than SAIDs.

  6. Colour-magnitude diagrams of transiting Exoplanets - I. Systems with parallaxes

    NASA Astrophysics Data System (ADS)

    Triaud, Amaury H. M. J.

    2014-03-01

    Broad-band flux measurements centred around [3.6 μm] and [4.5 μm] obtained with Spitzer during the occultation of seven extrasolar planets by their host stars have been combined with parallax measurements to compute the absolute magnitudes of these planets. Those measurements are arranged in two colour-magnitude diagrams. Because most of the targets have sizes and temperatures similar to brown dwarfs, they can be compared to one another. In principle, this should permit inferences about exoatmospheres based on knowledge acquired through decades of observations of field brown dwarfs and ultracool stars' atmospheres. Such diagrams can assemble all measurements gathered so far and will help in the preparation of new observational programmes. In most cases, planets and brown dwarfs follow similar sequences. HD 209458b and GJ 436b are found to be outliers, as is the night side of HD 189733b. The photometric variability associated with the orbital phase of HD 189733b is particularly revealing. The planet exhibits what appears to be a spectral-type and chemical transition between its day and night sides: HD 189733b straddles the L-T spectral class transition, which would imply different cloud coverage on each hemisphere. Methane absorption could be absent at its hotspot but present over the rest of the planet.

  7. Spanish Peaks, Sangre de Cristo Range, Colorado

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Spanish Peaks, on the eastern flank of the Sangre de Cristo range, abruptly rise 7,000 feet above the western Great Plains. Settlers, treasure hunters, trappers, gold and silver miners have long sighted on these prominent landmarks along the Taos branch of the Santa Fe trail. Well before the westward migration, the mountains figured in the legends and history of the Ute, Apache, Comanche, and earlier tribes. 'Las Cumbres Espanolas' are also mentioned in chronicles of exploration by Spaniards including Ulibarri in 1706 and later by de Anza, who eventually founded San Francisco (California). This exceptional view (STS108-720-32), captured by the crew of Space Shuttle mission STS108, portrays the Spanish Peaks in the context of the southern Rocky Mountains. Uplift of the Sangre de Cristo began about 75 million years ago and produced the long north-trending ridges of faulted and folded rock to the west of the paired peaks. After uplift had ceased (26 to 22 million years ago), the large masses of igneous rock (granite, granodiorite, syenodiorite) that form the Peaks were emplaced (Penn, 1995-2001). East and West Spanish Peaks are 'stocks'-bodies of molten rock that intruded sedimentary layers, cooled and solidified, and were later exposed by erosion. East Peak (E), at 12,708 ft is almost circular and is about 5 1/2 miles long by 3 miles wide, while West Peak (W), at 13,623 ft is roughly 2 3/4 miles long by 1 3/4 miles wide. Great dikes-long stone walls-radiate outward from the mountains like spokes of a wheel, a prominent one forms a broad arc northeast of East Spanish Peak. As the molten rock rose, it forced its way into vertical cracks and joints in the sedimentary strata; the less resistant material was then eroded away, leaving walls of hard rock from 1 foot to 100 feet wide, up to 100 feet high, and as long as 14 miles. Dikes trending almost east-west are also common in the region. For more information visit: Sangres.com: The Spanish Peaks (accessed January 16

  8. Number games, magnitude representation, and basic number skills in preschoolers.

    PubMed

    Whyte, Jemma Catherine; Bull, Rebecca

    2008-03-01

    The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was compared following four 25-min intervention sessions. The linear number board game significantly improved children's performance in all posttest measures and facilitated a shift from a logarithmic to a linear representation of numerical magnitude, emphasizing the importance of spatial cues in estimation. Exposure to the number card games involving nonsymbolic magnitude judgments and association of symbolic and nonsymbolic quantities, but without any linear spatial cues, improved some aspects of children's basic number skills but not numerical estimation precision. PMID:18331146

  9. On the macroseismic magnitudes of the largest Italian earthquakes

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Vittori, T.; Mulargia, F.

    1987-07-01

    The macroseismic magnitudes MT of the largest Italian earthquakes (I0 ⩾ VIII, MCS) have been computed by using the intensity-magnitude relationships recently assessed by the authors (1986) for the Italian region. The Progetto Finalizzato Geodinamica (PFG) catalog of the Italian earthquakes, covering the period 1000-1980 (Postpischl, 1985), is the source database and is reproduced in the Appendix: here the estimated values of MT are given side by side with the catalog macroseismic magnitudes MK, i.e., the magnitudes computed according to the Karnik laws (Karnik, 1969). The one-sigma errors ΔMT are also given for each earthquake. The basic aim of the paper is to provide a handy and useful tool for researchers involved in seismicity and seismic-risk studies of the Italian territory.

  10. When Should Zero Be Included on a Scale Showing Magnitude?

    ERIC Educational Resources Information Center

    Kozak, Marcin

    2011-01-01

    This article addresses an important problem of graphing quantitative data: should one include zero on the scale showing magnitude? Based on a real time series example, the problem is discussed and some recommendations are proposed.

  11. Magnitude-frequency distribution of volcanic explosion earthquakes

    NASA Astrophysics Data System (ADS)

    Nishimura, Takeshi; Iguchi, Masato; Hendrasto, Mohammad; Aoyama, Hiroshi; Yamada, Taishi; Ripepe, Maurizio; Genco, Riccardo

    2016-07-01

    Magnitude-frequency distributions of volcanic explosion earthquakes that are associated with occurrences of vulcanian and strombolian eruptions, or gas burst activity, are examined at six active volcanoes. The magnitude-frequency distribution at Suwanosejima volcano, Japan, shows a power-law distribution, which implies self-similarity in the system, as is often observed in statistical characteristics of tectonic and volcanic earthquakes. On the other hand, the magnitude-frequency distributions at five other volcanoes, Sakurajima and Tokachi-dake in Japan, Semeru and Lokon in Indonesia, and Stromboli in Italy, are well explained by exponential distributions. The statistical features are considered to reflect source size, as characterized by a volcanic conduit or chamber. Earthquake generation processes associated with vulcanian, strombolian and gas burst events are different from those of eruptions ejecting large amounts of pyroclasts, since the magnitude-frequency distribution of the volcanic explosivity index is generally explained by the power law.
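
    Distinguishing a power-law from an exponential frequency distribution can be done by fitting both by maximum likelihood above a common threshold and comparing information criteria. The sketch below shows that generic comparison for a set of event sizes; it illustrates the statistical idea only and is not the analysis pipeline used in the study.

```python
import numpy as np

def compare_models(sizes):
    """Maximum-likelihood fits of a power-law (Pareto) and a shifted
    exponential model to event sizes above the smallest observed value,
    compared with AIC (one free parameter each)."""
    x = np.asarray(sizes, dtype=float)
    xmin, n = x.min(), len(x)
    # Power law p(x) ~ x^-alpha above xmin (Hill estimator for alpha)
    alpha = 1 + n / np.log(x / xmin).sum()
    ll_pow = n * np.log(alpha - 1) - n * np.log(xmin) - alpha * np.log(x / xmin).sum()
    # Exponential above xmin
    lam = 1 / (x - xmin).mean()
    ll_exp = n * np.log(lam) - lam * (x - xmin).sum()
    aic_pow, aic_exp = 2 - 2 * ll_pow, 2 - 2 * ll_exp
    return ("power law" if aic_pow < aic_exp else "exponential", aic_pow, aic_exp)

# Synthetic check: Pareto-distributed sizes should be classed as power law
rng = np.random.default_rng(2)
sizes = rng.pareto(1.5, 2000) + 1.0
print(compare_models(sizes))
```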

  12. Frequency-Magnitude Relationship of Hydraulic Fracture Microseismicity (Invited)

    NASA Astrophysics Data System (ADS)

    Maxwell, S.

    2009-12-01

    Microseismicity has become a common imaging technique for hydraulic fracture stimulations in the oil and gas industry, offering a wide range of microseismic data sets in different settings. Typically, arrays of 3C sensors are deployed in single monitoring wells, presenting processing challenges associated with the limited acquisition geometry. However, the proximity of the sensors to the fracture network results in good sensitivity for detecting small-magnitude microseisms (down to about moment magnitude -3 in some cases). This sensitivity allows a comparison of the magnitude-frequency relationship of microseisms attributed to hydraulic fracturing with that of microseisms related to the activation of a pre-existing fault. A case study will be presented showing a clear change in the frequency-magnitude characteristics as the injection interacts with a known fault.

  13. Absolute calibration of sniffer probes on Wendelstein 7-X.

    PubMed

    Moseev, D; Laqua, H P; Marsen, S; Stange, T; Braune, H; Erckmann, V; Gellert, F; Oosterbeek, J W

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m² per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m² per MW injected beam power is measured. PMID:27587121

  14. Absolute Value Boundedness, Operator Decomposition, and Stochastic Media and Equations

    NASA Technical Reports Server (NTRS)

    Adomian, G.; Miao, C. C.

    1973-01-01

    The research accomplished during this period is reported. Published abstracts and technical reports are listed. Articles presented include: boundedness of absolute values of generalized Fourier coefficients, propagation in stochastic media, and stationary conditions for stochastic differential equations.

  15. The conditions of absolute summability of multiple trigonometric series

    NASA Astrophysics Data System (ADS)

    Bitimkhan, Samat; Akishev, Gabdolla

    2015-09-01

    In this work, necessary and sufficient conditions for the absolute summability of multiple trigonometric Fourier series of functions from anisotropic Lebesgue spaces are found in terms of the best approximation, the modulus of smoothness, and the mixed modulus of smoothness.

  16. Absolute calibration of sniffer probes on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m² per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m² per MW injected beam power is measured.

  17. Magnitude knowledge: the common core of numerical development.

    PubMed

    Siegler, Robert S

    2016-05-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development. PMID:27074723

  18. A probabilistic neural network for earthquake magnitude prediction.

    PubMed

    Adeli, Hojjat; Panakkat, Ashif

    2009-09-01

    A probabilistic neural network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship known as the magnitude deficit, the rate of square root of seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent neural network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0. PMID:19502005
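
    As an illustration of how indicators of this kind are computed from an event window, the sketch below evaluates two of the eight inputs (the Gutenberg-Richter slope from a least-squares fit and the magnitude deficit) on a list of magnitudes; it is built from the descriptions above, not from the authors' feature-extraction code, and the event data are invented.

```python
import numpy as np

def gr_slope_and_deficit(mags, times_yr):
    """Two of the eight PNN input indicators: the slope b of the
    Gutenberg-Richter curve fitted by least squares to the cumulative
    frequency-magnitude data of the last n events, and the magnitude
    deficit (observed maximum minus the maximum expected from the
    fitted Gutenberg-Richter relation). The remaining indicators are
    built from the same event window."""
    mags = np.asarray(mags, dtype=float)
    bins = np.arange(mags.min(), mags.max(), 0.1)
    log_n = np.log10([(mags >= m).sum() for m in bins])
    slope, intercept = np.polyfit(bins, log_n, 1)   # log10 N = intercept + slope*M
    b_value = -slope
    m_expected = intercept / b_value                # magnitude where log10 N = 0
    deficit = mags.max() - m_expected
    elapsed = times_yr[-1] - times_yr[0]            # time spanned by the n events
    return b_value, deficit, elapsed

# Hypothetical event window (magnitudes and decimal years)
mags = [3.1, 3.4, 3.0, 4.2, 3.6, 3.3, 5.0, 3.8, 3.2, 4.5]
years = [2001.1, 2001.4, 2001.9, 2002.3, 2002.8,
         2003.0, 2003.5, 2003.9, 2004.2, 2004.6]
print(gr_slope_and_deficit(mags, years))
```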

  19. Local magnitude calibration of the Hellenic Unified Seismic Network

    NASA Astrophysics Data System (ADS)

    Scordilis, E. M.; Kementzetzidou, D.; Papazachos, B. C.

    2016-01-01

    A new relation is proposed for the accurate determination of local magnitudes in Greece. This relation is based on a large number of synthetic Wood-Anderson (SWA) seismograms corresponding to 782 regional shallow earthquakes which occurred during the period 2007-2013 and were recorded by 98 digital broad-band stations. These stations are installed and operated by: (a) the National Observatory of Athens (HL), (b) the Department of Geophysics of the Aristotle University of Thessaloniki (HT), (c) the Seismological Laboratory of the University of Athens (HA), and (d) the Seismological Laboratory of the University of Patras (HP). The seismological networks of these institutions constitute the recently (2004) established Hellenic Unified Seismic Network (HUSN). The records are used to calculate a refined geometrical spreading factor and an anelastic attenuation coefficient, representative of Greece and the surrounding areas and suitable for the accurate calculation of local magnitudes in this region. Individual station corrections, which depend on crustal structure variations in the vicinity of each station and on possible inconsistencies in instrument responses, are also considered in order to further improve the accuracy of magnitude estimation. Comparison of the local magnitudes calculated in this way with the corresponding original moment magnitudes, based on an independent dataset, revealed that these magnitude scales are equivalent over a wide range of values.
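
    Calibrations of this kind fit the coefficients of a generic local-magnitude relation of the form ML = log10(A) + n·log10(r) + k·r + S, where A is the synthetic Wood-Anderson amplitude, r the hypocentral distance, n the geometrical-spreading exponent, k the anelastic-attenuation coefficient, and S a station correction. The sketch below simply evaluates that form; the coefficient values are placeholders, not the ones derived for the HUSN.

```python
import numpy as np

def local_magnitude(amp_mm_swa, hypo_dist_km, n, k, station_corr=0.0):
    """Generic local-magnitude form ML = log10(A) + n*log10(r) + k*r + S,
    with A the synthetic Wood-Anderson amplitude (mm) and r the
    hypocentral distance (km). Coefficients here are illustrative only."""
    return (np.log10(amp_mm_swa) + n * np.log10(hypo_dist_km)
            + k * hypo_dist_km + station_corr)

# Illustration with placeholder coefficients (n = 1.1, k = 0.002)
print(round(local_magnitude(0.5, 120.0, 1.1, 0.002, station_corr=0.05), 2))
```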

  20. High-orbit satellite magnitude estimation using photometric measurement method

    NASA Astrophysics Data System (ADS)

    Zhang, Shixue

    2015-12-01

    Accurate estimation of high-orbit satellite magnitudes is important for space target surveillance. This paper proposes a satellite photometric measurement method based on image processing. The satellite magnitude is calculated by comparing the CCD output of the camera for a known reference star with that for the satellite. The brightness of an object in the acquired image is computed using a background-removal method. Using observation parameters such as azimuth, elevation, and altitude, together with the telescope configuration, a star map is overlaid on the image, so the catalog magnitude of a given reference star in the image is known. A new method is then derived to calculate the magnitude of the satellite from the magnitude of the reference star in the image. To assess the algorithm's stability, the measurement precision of the method is evaluated and the restrictions that apply in practical use are analyzed. Extensive experiments with a large telescope used for satellite surveillance verify the correctness of the algorithm. The experimental results show that the precision of the proposed algorithm in satellite magnitude measurement is 0.24 mv, and the method can be generalized to other related fields.
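
    The core of the approach is differential photometry: the satellite's magnitude follows from the ratio of its background-subtracted counts to those of a reference star of known catalog magnitude in the same frame. A minimal sketch with invented count values:

```python
import numpy as np

def satellite_magnitude(star_counts, sat_counts, star_catalog_mag):
    """Differential photometry: satellite magnitude from the ratio of
    background-subtracted CCD counts between the satellite and a
    reference star of known magnitude in the same frame."""
    return star_catalog_mag - 2.5 * np.log10(sat_counts / star_counts)

# Example: a magnitude-6.0 reference star gives 120000 counts,
# the satellite gives 15000 counts in the same exposure.
print(round(satellite_magnitude(120000.0, 15000.0, 6.0), 2))   # about 8.26
```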