Science.gov

Sample records for absolute peak magnitudes

  1. Asteroid absolute magnitudes and slope parameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1991-01-01

    A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; this same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values in the current list were derived from fits to data at the V band. All observations were reduced in the same fashion using, where appropriate, a single basis default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids were of sufficiently high quality to permit derivation of their H and G. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.
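
    As an illustration of the (H, G) system used in this listing, here is a minimal Python sketch (using the commonly quoted two-parameter phase-function approximations; the asteroid values below are hypothetical, not from the paper) that predicts an apparent V magnitude from H, G, the heliocentric and geocentric distances, and the phase angle:

      import numpy as np

      def predicted_v(H, G, r_au, delta_au, alpha_deg):
          """Apparent V magnitude from absolute magnitude H and slope G.

          Uses the standard (H, G) phase-function approximations phi1, phi2;
          alpha_deg is the solar phase angle in degrees.
          """
          a = np.radians(alpha_deg)
          phi1 = np.exp(-3.33 * np.tan(a / 2.0) ** 0.63)
          phi2 = np.exp(-1.87 * np.tan(a / 2.0) ** 1.22)
          # Reduced magnitude at phase angle alpha, plus the distance term.
          return (H + 5.0 * np.log10(r_au * delta_au)
                  - 2.5 * np.log10((1.0 - G) * phi1 + G * phi2))

      # Hypothetical main-belt asteroid: H = 12, default G = 0.15,
      # r = 2.5 au, delta = 1.6 au, alpha = 10 degrees.
      print(round(predicted_v(12.0, 0.15, 2.5, 1.6, 10.0), 2))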

  2. THE ABSOLUTE MAGNITUDES OF TYPE Ia SUPERNOVAE IN THE ULTRAVIOLET

    SciTech Connect

    Brown, Peter J.; Roming, Peter W. A.; Ciardullo, Robin; Gronwall, Caryl; Hoversten, Erik A.; Pritchard, Tyler; Milne, Peter; Bufano, Filomena; Mazzali, Paolo; Elias-Rosa, Nancy; Filippenko, Alexei V.; Li Weidong; Foley, Ryan J.; Hicken, Malcolm; Kirshner, Robert P.; Gehrels, Neil; Holland, Stephen T.; Immler, Stefan; Phillips, Mark M.; Still, Martin

    2010-10-01

    We examine the absolute magnitudes and light-curve shapes of 14 nearby (redshift z = 0.004-0.027) Type Ia supernovae (SNe Ia) observed in the ultraviolet (UV) with the Swift Ultraviolet/Optical Telescope. Colors and absolute magnitudes are calculated using both a standard Milky Way extinction law and one for the Large Magellanic Cloud that has been modified by circumstellar scattering. We find very different behavior in the near-UV filters (uvw1_rc, covering ~2600-3300 A after removing optical light, and u, ~3000-4000 A) compared to a mid-UV filter (uvm2, ~2000-2400 A). The uvw1_rc - b colors show a scatter of ~0.3 mag while uvm2 - b scatters by nearly 0.9 mag. Similarly, while the scatter in colors between neighboring filters is small in the optical and somewhat larger in the near-UV, the large scatter in the uvm2 - uvw1 colors implies significantly larger spectral variability below 2600 A. We find that in the near-UV the absolute magnitudes at peak brightness of normal SNe Ia in our sample are correlated with the optical decay rate with a scatter of 0.4 mag, comparable to that found for the optical in our sample. However, in the mid-UV the scatter is larger, ~1 mag, possibly indicating differences in metallicity. We find no strong correlation between either the UV light-curve shapes or the UV colors and the UV absolute magnitudes. With larger samples, the UV luminosity might be useful as an additional constraint to help determine distance, extinction, and metallicity in order to improve the utility of SNe Ia as standardized candles.

  3. Near-infrared absolute magnitudes of Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Avelino, Arturo; Friedman, Andrew S.; Mandel, Kaisey; Kirshner, Robert; Challis, Peter

    2017-01-01

    Type Ia supernova (SN Ia) light curves in the near infrared (NIR) exhibit low dispersion in their peak luminosities and are less vulnerable to extinction by interstellar dust in their host galaxies. The increasing number of high-quality NIR SN Ia light curves, including the recent CfAIR2 sample obtained with PAIRITEL, provides updated evidence for their utility as standard candles for cosmology. Using NIR YJHKs light curves of ~150 nearby SNe Ia from the CfAIR2 and CSP samples and from the literature, we determine the mean value and dispersion of the absolute magnitude in the range between -10 and 50 rest-frame days after the maximum luminosity in the B band. We present the mean light-curve templates and Hubble diagram for the YJHKs bands. This work contributes to a firm local anchor for supernova cosmology studies in the NIR, which will help to reduce the systematic uncertainties due to host-galaxy dust present in optical-only studies. This research is supported by NSF grants AST-156854, AST-1211196, Fundacion Mexico en Harvard, and CONACyT.

  4. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
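
    A minimal sketch of the maximum-likelihood idea described above, not the authors' full algorithm (the selection-function and censoring corrections that are the paper's main point are omitted): each star's observed parallax is compared with the parallax implied by its apparent magnitude and a trial absolute magnitude drawn from N(M0, sigma_M), so low-accuracy and negative parallaxes enter naturally. All data values below are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_like(params, m_obs, plx_obs, plx_err):
          """Negative log-likelihood for a Gaussian luminosity function
          N(M0, sigma_M), marginalizing over each star's true absolute
          magnitude on a grid; negative parallaxes are handled naturally."""
          M0, log_sigma = params
          sigma_M = np.exp(log_sigma)
          Mgrid = np.linspace(M0 - 5 * sigma_M, M0 + 5 * sigma_M, 201)
          prior = np.exp(-0.5 * ((Mgrid - M0) / sigma_M) ** 2)
          prior /= prior.sum()
          ll = 0.0
          for m, p, e in zip(m_obs, plx_obs, plx_err):
              plx_true = 10.0 ** ((Mgrid - m - 5.0) / 5.0)   # arcsec
              like = np.exp(-0.5 * ((p - plx_true) / e) ** 2) / e
              ll += np.log(np.sum(like * prior) + 1e-300)
          return -ll

      # Hypothetical data: apparent mags, parallaxes (arcsec), and errors.
      m_obs = np.array([9.0, 10.5, 11.2])
      plx_obs = np.array([0.020, 0.011, -0.002])   # one negative parallax
      plx_err = np.array([0.004, 0.005, 0.006])
      fit = minimize(neg_log_like, x0=[5.0, np.log(0.3)],
                     args=(m_obs, plx_obs, plx_err), method="Nelder-Mead")
      print(fit.x)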

  5. STANDARDIZING TYPE Ia SUPERNOVA ABSOLUTE MAGNITUDES USING GAUSSIAN PROCESS DATA REGRESSION

    SciTech Connect

    Kim, A. G.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Nordin, J.; Thomas, R. C.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Kerschhaggl, M.; Kowalski, M.; Chotard, N.; Copin, Y.; Gangler, E.; and others

    2013-04-01

    We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SEDs) and absolute magnitudes: they are each modeled as stochastic functions described by Gaussian processes. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series from the Nearby Supernova Factory. Absolute magnitudes at peak B brightness are calibrated to 0.13 mag in the g band and to as low as 0.09 mag in the z = 0.25 blueshifted i band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to simultaneously standardize supernovae together with fitting cosmological parameters.
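
    As a schematic of the regression idea, not the Nearby Supernova Factory pipeline itself, a Gaussian process can be conditioned directly on photometric points so that the data define the light-curve model and its uncertainty; a sketch with scikit-learn and hypothetical B-band points:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Hypothetical rest-frame phases (days from B maximum) and magnitudes.
      phase = np.array([-8.0, -4.0, 0.0, 5.0, 12.0, 20.0]).reshape(-1, 1)
      mag = np.array([19.4, 18.9, 18.7, 18.9, 19.6, 20.4])

      # Squared-exponential kernel plus a white-noise term for photometric error.
      kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(phase, mag)

      # Interpolated magnitude and uncertainty at peak (phase 0).
      mean, std = gp.predict(np.array([[0.0]]), return_std=True)
      print(mean[0], std[0])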

  6. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    SciTech Connect

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = -1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_pi, W_theta, W_z) = (12.0, -209.9, 3.0) km s^-1 in the radial, rotational, and vertical directions, with dispersions (sigma_Wpi, sigma_Wtheta, sigma_Wz) = (150.4, 106.1, 96.0) km s^-1. For the disk, we find (W_pi, W_theta, W_z) = (13.0, -42.0, -27.3) km s^-1 relative to the Sun, with dispersions (sigma_Wpi, sigma_Wtheta, sigma_Wz) = (67.7, 59.2, 54.9) km s^-1. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  7. The absolute magnitude distribution of Kuiper Belt objects

    SciTech Connect

    Fraser, Wesley C.; Brown, Michael E.; Morbidelli, Alessandro; Parker, Alex; Batygin, Konstantin

    2014-02-20

    Here we measure the absolute magnitude distributions (H-distributions) of the dynamically excited and quiescent (hot and cold) Kuiper Belt objects (KBOs), and test whether they share the same H-distribution as the Jupiter Trojans. From a compilation of all useable ecliptic surveys, we find that the KBO H-distributions are well described by broken power laws. The cold population has a bright-end slope α_1 = 1.5 (+0.4, -0.2) and break magnitude H_B = 6.9 (+0.1, -0.2) (r'-band). The hot population has a shallower bright-end slope of α_1 = 0.87 (+0.07, -0.2) and break magnitude H_B = 7.7 (+1.0, -0.5). Both populations share similar faint-end slopes of α_2 ~ 0.2. We estimate the masses of the hot and cold populations to be ~0.01 and ~3 × 10^-4 M_⊕, respectively. The broken power-law fit to the Trojan H-distribution has α_1 = 1.0 ± 0.2, α_2 = 0.36 ± 0.01, and H_B = 8.3. The Kolmogorov-Smirnov test reveals that the probability that the Trojans and cold KBOs share the same parent H-distribution is less than 1 in 1000. When the bimodal albedo distribution of the hot objects is accounted for, there is no evidence that the H-distributions of the Trojans and hot KBOs differ. Our findings are in agreement with the predictions of the Nice model in terms of both the mass and H-distribution of the hot and Trojan populations. Wide-field survey data suggest that the brightest few hot objects, with H_r' ≲ 3, do not fall on the steep power-law slope of fainter hot objects. Under the standard hierarchical model of planetesimal formation, it is difficult to account for the similar break diameters of the hot and cold populations given the low mass of the cold belt.
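
    The broken power laws quoted above have the differential form Sigma(H) proportional to 10^(α1 H) brightward of the break and 10^(α2 H) faintward of it, matched at H_B; a small sketch evaluating such a distribution, using the cold-population parameters from the abstract purely as illustrative inputs:

      import numpy as np

      def broken_power_law(H, alpha1, alpha2, H_break, norm=1.0):
          """Differential absolute-magnitude distribution Sigma(H),
          continuous at the break magnitude H_break."""
          H = np.asarray(H, dtype=float)
          bright = norm * 10.0 ** (alpha1 * (H - H_break))
          faint = norm * 10.0 ** (alpha2 * (H - H_break))
          return np.where(H <= H_break, bright, faint)

      # Illustrative cold-population parameters from the abstract.
      H = np.linspace(4.0, 9.0, 6)
      print(broken_power_law(H, alpha1=1.5, alpha2=0.2, H_break=6.9))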

  8. The absolute magnitude distribution of cold classical Kuiper belt objects

    NASA Astrophysics Data System (ADS)

    Petit, Jean-Marc; Bannister, Michele T.; Alexandersen, Mike; Chen, Ying-Tung; Gladman, Brett; Gwyn, Stephen; Kavelaars, JJ; Volk, Kathryn

    2016-10-01

    We report measurements of the low-inclination component of the main Kuiper Belt showing a size-frequency distribution that is very steep for sizes larger than H_r ~ 6.5-7.0, followed by a flattening to a shallower slope that is still steeper than the collisional equilibrium slope. The Outer Solar System Origins Survey (OSSOS) is ongoing and is expected to detect over 500 TNOs in a precisely calibrated and characterized survey. Combining our current sample with CFEPS and the Alexandersen et al. (2015) survey, we analyse a sample of ~180 low-inclination main classical (cold) TNOs, with absolute magnitude H_r (SDSS r'-like filter) in the range 5 to 8.8. We confirm that the H_r distribution can be approximated by an exponential with a very steep slope (>1) at the bright end of the distribution, as has long been recognized. A transition to a shallower slope occurs around H_r ~ 6.5-7.0, an H_r magnitude identified by Fraser et al. (2014). Faintward of this transition, we find a second exponential to be a good approximation at least until H_r ~ 8.5, but with a slope significantly steeper than the one proposed by Fraser et al. (2014) or even the collisional equilibrium value of 0.5. The transition in the cold TNO H_r distribution thus appears to occur at larger sizes than is observed in the high-inclination main classical (hot) belt, an important indicator of a different cosmogony for these two sub-components of the main classical Kuiper belt. Given the relatively steep slope faintward of the transition, the cold population at ~100 km diameter may dominate the mass of the Kuiper belt in the 40 au < a < 47 au region.

  9. Absolute magnitudes of asteroids and a revision of asteroid albedo estimates from WISE thermal observations

    NASA Astrophysics Data System (ADS)

    Pravec, Petr; Harris, Alan W.; Kušnirák, Peter; Galád, Adrián; Hornoch, Kamil

    2012-09-01

    We obtained estimates of the Johnson V absolute magnitudes (H) and slope parameters (G) for 583 main-belt and near-Earth asteroids observed at Ondřejov and Table Mountain Observatory from 1978 to 2011. Uncertainties of the absolute magnitudes in our sample are <0.21 mag, with a median value of 0.10 mag. We compared the H data with absolute magnitude values given in the MPCORB, Pisa AstDyS and JPL Horizons orbit catalogs. We found that while the catalog absolute magnitudes for large asteroids are relatively good on average, showing only small biases of less than 0.1 mag, there is a systematic offset of the catalog values for smaller asteroids that becomes prominent for H greater than ∼10 and is particularly large above H ∼ 12. The mean (Hcatalog - H) value is negative, i.e., the catalog H values are systematically too bright. This systematic negative offset of the catalog values reaches a maximum around H = 14, where the mean (Hcatalog - H) is -0.4 to -0.5. We also found smaller correlations of the offset of the catalog H values with taxonomic type and with lightcurve amplitude, up to ∼0.1 mag or less. We discuss a few possible observational causes for the observed correlations, but the reason for the large bias of the catalog absolute magnitudes peaking around H = 14 is unknown; we suspect that the problem lies in the magnitude estimates reported by asteroid surveys. With our photometric H and G data, we revised the preliminary WISE albedo estimates made by Masiero et al. (Masiero, J.R. et al. [2011]. Astrophys. J. 741, 68-89) and Mainzer et al. (Mainzer, A. et al. [2011b]. Astrophys. J. 743, 156-172) for asteroids in our sample. We found that the mean geometric albedo of Tholen/Bus/DeMeo C/G/B/F/P/D types with sizes of 25-300 km is pV = 0.057, with a standard deviation (dispersion) of the sample of 0.013, and the mean albedo of S/A/L types with sizes of 0.6-200 km is 0.197, with a standard deviation of the sample of 0.051. The standard errors of the

  10. Timing and magnitude of peak height velocity and peak tissue velocities for early, average, and late maturing boys and girls.

    PubMed

    Iuliano-Burns, S; Mirwald, R L; Bailey, D A

    2001-01-01

    Height, weight, and tissue accrual were determined in 60 male and 53 female adolescents measured annually over six years using standard anthropometry and dual-energy X-ray absorptiometry (DXA). Annual velocities were derived, and the ages and magnitudes of peak height and peak tissue velocities were determined using a cubic spline fit to individual data. Individuals were rank ordered on the basis of sex and age at peak height velocity (PHV) and then divided into quartiles: early (lowest quartile), average (middle two quartiles), and late (highest quartile) maturers. Sex- and maturity-related comparisons in ages and magnitudes of peak height and peak tissue velocities were made. Males reached peak velocities significantly later than females for all tissues and had significantly greater magnitudes at peak. The age at PHV was negatively correlated with the magnitude of PHV in both sexes. At a similar maturity point (age at PHV) there were no differences in weight or fat mass among maturity groups in both sexes. Late maturing males, however, accrued more bone mineral and lean mass and were taller at the age of PHV compared to early maturers. Thus, maturational status (early, average, or late maturity) as indicated by age at PHV is inversely related to the magnitude of PHV in both sexes. At a similar maturational point there are no differences between early and late maturers for weight and fat mass in boys and girls.
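
    A minimal sketch of the spline step described above, with hypothetical annual measurements: fit a cubic spline to height versus age, differentiate it to get a velocity curve, and locate the age and magnitude of peak height velocity.

      import numpy as np
      from scipy.interpolate import CubicSpline

      # Hypothetical annual measurements: age (years) and height (cm).
      age = np.array([10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0])
      height = np.array([138.0, 144.0, 151.0, 160.0, 168.0, 172.0, 174.0])

      spline = CubicSpline(age, height)
      velocity = spline.derivative()

      # Evaluate the velocity curve finely and pick its maximum.
      grid = np.linspace(age[0], age[-1], 1000)
      v = velocity(grid)
      i = np.argmax(v)
      print("age at PHV:", round(grid[i], 2), "PHV (cm/yr):", round(v[i], 2))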

  11. Independent coding of absolute duration and distance magnitudes in the prefrontal cortex.

    PubMed

    Marcos, Encarni; Tsujimoto, Satoshi; Genovesio, Aldo

    2017-01-01

    The estimation of space and time can interfere with each other, and neuroimaging studies have shown overlapping activation in parietal and prefrontal cortical areas. We used duration and distance discrimination tasks to determine whether space and time share resources in prefrontal cortex (PF) neurons. Monkeys were required to report which of two stimuli (a red circle or a blue square), presented sequentially, was longer or farther, respectively, in the duration and distance tasks. In a previous study, we showed that relative duration and distance are coded by different populations of neurons and that the only common representation is related to goal coding. Here, we examined the coding of absolute duration and distance. Our results support a model of independent coding of absolute duration and distance metrics by demonstrating that not only relative magnitude but also absolute magnitude is independently coded in the PF.

  12. Spectrophotometry of Wolf-Rayet stars - Intrinsic colors and absolute magnitudes

    NASA Technical Reports Server (NTRS)

    Torres-Dodgen, Ana V.; Massey, Philip

    1988-01-01

    Absolute spectrophotometry at about 10-A resolution in the range 3400-7300 A has been obtained for southern Wolf-Rayet stars, and line-free magnitudes and colors have been constructed. The emission-line contamination in the narrow-band ubvr systems of Westerlund (1966) and Smith (1968) is shown to be small for most WN stars, but quite significant for WC stars. It is suggested that the more severe differences in intrinsic color from star to star of the same spectral subtype noted at shorter wavelengths are due to differences in atmospheric extent. True continuum absolute visual magnitudes and intrinsic colors are obtained for the LMC WR stars. The most visually luminous WN6-WN7 stars are found to be located in the core of the 30 Doradus region.

  13. THE ABSOLUTE MAGNITUDES OF RED HORIZONTAL BRANCH STARS IN THE ugriz SYSTEM

    SciTech Connect

    Chen, Y. Q.; Zhao, G.; Zhao, J. K.

    2009-09-10

    Based on photometric data of the central parts of eight globular clusters and one open cluster presented by An and his collaborators, we select red horizontal branch (RHB) stars in the (g - r)_0 versus g_0 diagram and make a statistical study of the distributions of their colors and absolute magnitudes in the SDSS ugriz system. Meanwhile, absolute magnitudes in the Johnson VRI system are calculated through the transformation formulae between gri and VRI in the literature. The calibrations of absolute magnitude as functions of metallicity and age are established by linear regressions of the data. It is found that metallicity coefficients in these calibrations decrease, while age coefficients increase, from the blue u filter to the red z filter. The calibration M_i = 0.06[Fe/H] + 0.040t + 0.03 has the smallest scatter of 0.04 mag, and thus i is the best filter in the ugriz system when RHB stars are used as distance indicators. The comparison of the M_I calibration from our data with that from red clump stars indicates that the previous suggestion that the I filter is better than the V filter for distance determination may not be true because of its significant dependence on age.
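
    Applied as a distance indicator, the quoted calibration works as follows (illustrative values, not from the paper): for [Fe/H] = -1.5 and t = 12 Gyr, M_i = 0.06(-1.5) + 0.040(12) + 0.03 = 0.42, so an RHB star with a dereddened apparent magnitude i_0 = 15.42 has a distance modulus of 15.0 and lies at roughly 10 kpc.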

  14. Absolute magnitude estimation and relative judgement approaches to subjective workload assessment

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.; Tsang, Pamela S.

    1987-01-01

    Two rating-scale techniques employing an absolute magnitude estimation method were compared to a relative judgment method for assessing subjective workload. One of the absolute estimation techniques was a unidimensional overall workload scale and the other was the multidimensional NASA Task Load Index technique. Thomas Saaty's Analytic Hierarchy Process was the unidimensional relative judgment method used. These techniques were used to assess the subjective workload of various single- and dual-tracking conditions. The validity of the techniques was defined as their ability to detect the same phenomena observed in the tracking performance. Reliability was assessed by calculating test-retest correlations. Within the context of the experiment, the Saaty Analytic Hierarchy Process was found to be superior in validity and reliability. These findings suggest that the relative judgment method would be an effective addition to the currently available subjective workload assessment techniques.
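
    The Analytic Hierarchy Process mentioned above derives ratio-scale weights from a matrix of pairwise comparisons via its principal eigenvector; a minimal sketch with a hypothetical comparison matrix for three task conditions:

      import numpy as np

      # Hypothetical reciprocal pairwise-comparison matrix for three task
      # conditions (entry [i, j] = how much more workload condition i
      # imposes than condition j).
      A = np.array([[1.0,   3.0, 5.0],
                    [1/3.0, 1.0, 2.0],
                    [1/5.0, 1/2.0, 1.0]])

      # The normalized principal eigenvector gives the relative workload weights.
      w, v = np.linalg.eig(A)
      k = np.argmax(w.real)
      weights = np.abs(v[:, k].real)
      weights /= weights.sum()
      print(weights.round(3))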

  15. Analysis of the Magnitude and Frequency of Peak Discharge and Maximum Observed Peak Discharge in New Mexico and Surrounding Areas

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2008-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent.

  16. Distance and absolute magnitudes of the brightest stars in the dwarf galaxy Sextans A

    NASA Technical Reports Server (NTRS)

    Sandage, A.; Carlson, G.

    1982-01-01

    In an attempt to improve present bright star calibration, data were gathered for the brightest red and blue stars and the Cepheids in the Im V dwarf galaxy Sextans A. On the basis of a magnitude sequence measured to V and B values of about 22 and 23, respectively, the mean magnitudes of the three brightest blue stars are V=17.98 and B=17.88. The three brightest red supergiants have V=18.09 and B=20.14. The periods and magnitudes measured for five Cepheids yield an apparent blue distance modulus of 25.67 ± 0.2, via the P-L relation, and the mean absolute magnitudes of V=-7.56 and B=-5.53 for the red supergiants provide additional calibration of the brightest red stars as distance indicators. If Sextans A were placed at the distance of the Virgo cluster, it would appear to have a surface brightness of 23.5 mag/sq arcsec. This, together with the large angular diameter, would make such a galaxy easily discoverable in the Virgo cluster by means of ground-based surveys.

  17. Methods for estimating the magnitude and frequency of peak streamflows for unregulated streams in Oklahoma

    USGS Publications Warehouse

    Lewis, Jason M.

    2010-01-01

    Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable to watersheds with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
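
    Regional peak-flow equations of this kind are typically power-law products of the basin characteristics, fitted in log space; a generic sketch of how such an equation is evaluated for an ungaged basin (the coefficients below are placeholders, not values from the report):

      def regional_peak_flow(drainage_area_mi2, precip_in, slope_ft_per_mi,
                             a=100.0, b=0.7, c=0.5, d=0.2):
          """Generic regional regression form Q_T = a * A**b * P**c * S**d.

          The coefficients a, b, c, d are placeholders; actual values are
          fitted per region and recurrence interval by GLS regression.
          """
          return (a * drainage_area_mi2 ** b * precip_in ** c
                  * slope_ft_per_mi ** d)

      # Hypothetical estimate for a 50 square-mile basin.
      print(round(regional_peak_flow(50.0, 36.0, 40.0)))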

  18. Estimating the magnitude of peak flows at selected recurrence intervals for streams in Idaho

    USGS Publications Warehouse

    Berenbrock, Charles

    2002-01-01

    The region-of-influence method is not recommended for use in determining flood-frequency estimates for ungaged sites in Idaho because the results, overall, are less accurate and the calculations are more complex than those of regional regression equations. The regional regression equations were considered to be the primary method of estimating the magnitude and frequency of peak flows for ungaged sites in Idaho.

  19. Engine combustion control responsive to location and magnitude of peak combustion pressure

    SciTech Connect

    Tombley, D.E.

    1987-11-17

    A combustion control is described for an internal combustion engine of the type having combustion chambers, means for supplying a combustible charge to and igniting the combustible charge within each combustion chamber, power output apparatus including a rotating crankshaft, and means for sensing the crankshaft angle (LPP) and magnitude (MPP) of peak combustion pressure for each combustion chamber. The combustion control consists of: means for deriving the average magnitude of peak combustion pressure (AMPP); means for determining base values; memory means for storing tables of LPP ignition trim values, MPP ignition trim values and A/F trim values for each combustion chamber; means for comparing the sensed LPP value for each combustion chamber with a desired LPP value (DLPP) for that combustion chamber and adjusting the LPP ignition trim value for the predetermined engine operating parameters; means for comparing the MPP value for each combustion chamber with the average magnitude of peak combustion pressure; means to adjust the A/F trim value in the rich direction and reset the MPP ignition trim value; means to adjust the MPP ignition trim value in the advance direction; means to adjust the A/F trim value in the lean direction and reset the MPP ignition trim value; means for determining the combustible charge mixture for each combustion chamber from the base value thereof and the A/F trim value for the sensed predetermined engine operating parameters; means for determining the ignition timing for each combustion.

  20. Debiased Orbital and Absolute Magnitude Distribution of the Near-Earth Objects

    NASA Astrophysics Data System (ADS)

    Bottke, William F.; Morbidelli, Alessandro; Jedicke, Robert; Petit, Jean-Marc; Levison, Harold F.; Michel, Patrick; Metcalfe, Travis S.

    2002-04-01

    The orbital and absolute magnitude distribution of the near-Earth objects (NEOs) is difficult to compute, partly because only a modest fraction of the entire NEO population has been discovered so far, but also because the known NEOs are biased by complicated observational selection effects. To circumvent these problems, we created a model NEO population which was fit to known NEOs discovered or accidentally rediscovered by Spacewatch. Our method was to numerically integrate thousands of test particles from five source regions that we believe provide most NEOs to the inner Solar System. Four of these source regions are in or adjacent to the main asteroid belt, while the fifth one is associated with the transneptunian disk. The nearly isotropic comets, which include the Halley-type comets and the long-period comets, were not included in our model. Test bodies from our source regions that passed into the NEO region (perihelia q<1.3 AU and aphelia Q≥0.983 AU) were tracked until they were eliminated by striking the Sun or a planet or were ejected out of the inner Solar System. These integrations were used to create five residence time probability distributions in semimajor axis, eccentricity, and inclination space (one for each source). These distributions show where NEOs from a given source are statistically most likely to be located. Combining these five residence time probability distributions with an NEO absolute magnitude distribution computed from previous work and a probability function representing the observational biases associated with the Spacewatch NEO survey, we produced an NEO model population that could be fit to 138 NEOs discovered or accidentally rediscovered by Spacewatch. By testing a range of possible source combinations, a best-fit NEO model was computed which (i) provided the debiased orbital and absolute magnitude distributions for the NEO population and (ii) indicated the relative importance of each NEO source region. Our best-fit model is

  1. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.

  2. The orbit of Phi Cygni measured with long-baseline optical interferometry - Component masses and absolute magnitudes

    NASA Technical Reports Server (NTRS)

    Armstrong, J. T.; Hummel, C. A.; Quirrenbach, A.; Buscher, D. F.; Mozurkewich, D.; Vivekanand, M.; Simon, R. S.; Denison, C. S.; Johnston, K. J.; Pan, X.-P.

    1992-01-01

    The orbit of the double-lined spectroscopic binary Phi Cygni, the distance to the system, and the masses and absolute magnitudes of its components are determined from measurements with the Mark III Optical Interferometer. On the basis of a reexamination of the spectroscopic data of Rach & Herbig (1961), values and uncertainties for the period and the projected semimajor axes are adopted from the present fit to the spectroscopic data, and the values of the remaining elements are adopted from the present fit to the Mark III data. The elements of the true orbit are derived, and the masses and absolute magnitudes of the components and the distance to the system are calculated.

  3. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (T_op) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log T_op for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if T_op is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and M_Top (M estimated from T_op) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. T_op depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that T_op of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
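
    The scaling described above amounts to M_Top = 2 log10(T_op) + C, with C an empirically calibrated constant; a sketch with a placeholder constant (the study's calibration is not reproduced here):

      import numpy as np

      def magnitude_from_top(top_seconds, c=5.0):
          """Magnitude proxy from the onset-to-peak time T_op (seconds).

          Implements M = 2*log10(T_op) + C; the constant c = 5.0 is a
          placeholder, not the calibrated value from the study.
          """
          return 2.0 * np.log10(top_seconds) + c

      print(magnitude_from_top(100.0))   # 9.0 with the placeholder constant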

  4. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  5. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York University Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
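
    The method reduces to evaluating K as a quadratic in a single observed color, with redshift- and filter-dependent coefficients taken from the published parameter tables; a schematic sketch with hypothetical coefficients:

      def k_correction(color, coeffs):
          """K-correction as a quadratic in one observed color:
          K = c0 + c1*color + c2*color**2.  The coefficients depend on
          redshift and filter and are read from the published tables;
          the values used below are placeholders."""
          c0, c1, c2 = coeffs
          return c0 + c1 * color + c2 * color ** 2

      def absolute_magnitude(apparent_mag, dist_modulus, color, coeffs):
          # M = m - DM(z) - K(z, color)
          return apparent_mag - dist_modulus - k_correction(color, coeffs)

      # Hypothetical r-band example at z ~ 0.1.
      print(absolute_magnitude(17.8, 38.3, color=0.9, coeffs=(0.02, 0.10, 0.05)))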

  6. Estimating flood-peak discharge magnitudes and frequencies for rural streams in Illinois

    USGS Publications Warehouse

    Soong, David T.; Ishii, Audrey; Sharpe, Jennifer B.; Avery, Charles F.

    2004-01-01

    Flood-peak discharge magnitudes and frequencies at streamflow-gaging sites were developed with the annual maximum series (AMS) and the partial duration series (PDS) in this study. Regional equations for both flood series were developed for estimating flood-peak discharge magnitudes at specified recurrence intervals of rural Illinois streams. The regional equations provide techniques for estimating flood quantiles at ungaged sites or for improving estimated flood quantiles at gaged sites with short records or unrepresentative data. Besides updating at-site flood-frequency estimates using flood data up to water year 1999, this study updated the generalized skew coefficients for Illinois to be used with the Log-Pearson III probability distribution for analyzing the AMS, developed a program for analyzing the partial duration series with the Generalized Pareto probability distribution, and applied the BASINSOFT program with digital datasets in soil, topography, land cover, and precipitation to develop a set of basin characteristics. Multiple regression analysis was used to develop the regional equations with subsets of the basin characteristics and the updated at-site flood frequencies. Seven hydrologic regions were delineated using physiographic and hydrologic characteristics of drainage basins of Illinois. The seven hydrologic regions were used for both the AMS and PDS analyses. Examples are presented to illustrate the use of the AMS regional equations to estimate flood quantiles at an ungaged site and to improve flood-quantile estimates at and near a gaged site. Flood-quantile estimates in four regulated channel reaches of Illinois also are approximated by linear interpolation. Documentation of the flood data preparation and evaluation, procedures for determining the flood quantiles, basin characteristics, generalized skew coefficients, hydrologic region delineations, and the multiple regression analyses used to determine the regional equations are presented in the

  7. Estimating magnitude and frequency of floods using the PeakFQ 7.0 program

    USGS Publications Warehouse

    Veilleux, Andrea G.; Cohn, Timothy A.; Flynn, Kathleen M.; Mason, Jr., Robert R.; Hummel, Paul R.

    2014-01-01

    Flood-frequency analysis provides information about the magnitude and frequency of flood discharges based on records of annual maximum instantaneous peak discharges collected at streamgages. The information is essential for defining flood-hazard areas, for managing floodplains, and for designing bridges, culverts, dams, levees, and other flood-control structures. Bulletin 17B (B17B) of the Interagency Advisory Committee on Water Data (IACWD, 1982) codifies the standard methodology for conducting flood-frequency studies in the United States. B17B specifies that annual peak-flow data are to be fit to a log-Pearson Type III distribution. Specific methods are also prescribed for improving skew estimates using regional skew information, tests for high and low outliers, adjustments for low outliers and zero flows, and procedures for incorporating historical flood information. The authors of B17B identified various needs for methodological improvement and recommended additional study. In response to these needs, the Advisory Committee on Water Information (ACWI, successor to IACWD; http://acwi.gov/), Subcommittee on Hydrology (SOH), Hydrologic Frequency Analysis Work Group (HFAWG) has recommended modest changes to B17B. These changes include adoption of a generalized method-of-moments estimator denoted the Expected Moments Algorithm (EMA) (Cohn and others, 1997) and a generalized version of the Grubbs-Beck test for low outliers (Cohn and others, 2013). The SOH requested that the USGS implement these changes in a user-friendly, publicly accessible program.
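
    The core of the Bulletin 17B procedure, fitting annual peaks to a log-Pearson Type III distribution, can be sketched with scipy as follows (method of moments on log10 peaks only, without EMA, regional skew weighting, or low-outlier tests; the peak values are hypothetical):

      import numpy as np
      from scipy import stats

      # Hypothetical annual maximum instantaneous peak discharges (cfs).
      peaks = np.array([1200, 3400, 890, 2100, 5600, 1500, 980, 4300,
                        2600, 1900, 760, 3100, 2250, 1700, 5100])
      logq = np.log10(peaks)

      mean, std = logq.mean(), logq.std(ddof=1)
      skew = stats.skew(logq, bias=False)   # station skew only (no weighting)

      # 100-year flood: the 0.99 non-exceedance quantile of the LP3 fit.
      q100 = 10.0 ** stats.pearson3.ppf(0.99, skew, loc=mean, scale=std)
      print(round(q100))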

  8. Establishing ion ratio thresholds based on absolute peak area for absolute protein quantification using protein cleavage isotope dilution mass spectrometry.

    PubMed

    Loziuk, Philip L; Sederoff, Ronald R; Chiang, Vincent L; Muddiman, David C

    2014-11-07

    Quantitative mass spectrometry has become central to the fields of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method provides high specificity by using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation have been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that they should be assessed as close as possible to the time at which data are collected for quantification.

  9. Flux of optical meteors down to M_pg = +12 [photographic absolute magnitude]

    NASA Technical Reports Server (NTRS)

    Cook, A. F.; Weekes, T. C.; Williams, J. T.; Omongain, E.

    1980-01-01

    Observations of the flux of optical meteors down to photographic magnitudes of +12 are reported. The meteors were detected by photometry using a 10-m optical reflector from December 12-15, 1974, during the Geminid shower. A total of 2222 light pulses is identified as coming from meteors within the 1 deg field of view of the detector, most of which correspond to sporadic meteors traversing the detector beam at various angles and velocities and do not differ with the date, indicating that the Geminid contribution at faint luminosities is small compared to the sporadic contribution. A rate of 1.1 to 3.3 × 10^-12 meteors/sq cm per sec is obtained together with a power law meteor spectrum which is used to derive a relationship between cumulative meteor flux and magnitude which is linear for magnitudes from -2.4 through +12. Expressions for the cumulative flux upon the earth's atmosphere and at a test surface at 1 AU far from the earth as a function of magnitude are also obtained along with an estimate of the cumulative number density of particles.

  10. Decarbonization rate and the timing and magnitude of the CO2 concentration peak

    NASA Astrophysics Data System (ADS)

    Seshadri, Ashwin K.

    2016-11-01

    Carbon dioxide (CO2) is the main contributor to anthropogenic global warming, and the timing of its peak concentration in the atmosphere is likely to be the major factor in the timing of maximum radiative forcing. Other forcers such as aerosols and non-CO2 greenhouse gases may also influence the timing of maximum radiative forcing. This paper approximates solutions to a linear model of atmospheric CO2 dynamics with four time constants to identify factors governing the timing of its concentration peak. The most important emissions-related factor is the ratio between the average rates at which emissions increase and decrease, which in turn is related to the rate at which the emissions intensity of CO2 is reduced. Rapid decarbonization of CO2 can not only limit global warming but also achieve an early CO2 concentration peak. The most important carbon cycle parameters are the long multi-century time constant of atmospheric CO2, and the ratio of contributions to the impulse response function of atmospheric CO2 from the infinitely long-lived and the multi-century contributions, respectively. Reducing uncertainties in these parameters can reduce uncertainty in forecasts of the radiative forcing peak. A simple approximation for peak CO2 concentration, valid especially if decarbonization is slow, is developed. Peak concentration is approximated as a function of cumulative emissions and emissions at the time of the concentration peak. Furthermore, peak concentration is directly proportional to cumulative CO2 emissions for a wide range of emissions scenarios. Therefore, limiting the peak CO2 concentration is equivalent to limiting cumulative emissions. These relationships need to be verified using more complex models of the Earth system's carbon cycle.
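
    The proportionality noted above can be illustrated with a rough back-of-the-envelope estimate (illustrative numbers, not from the paper): if a fraction f of cumulative emissions E (in GtC) is still airborne near the time of the peak, then C_peak ≈ C_preindustrial + f × E / 2.12 ppm, using the standard 2.12 GtC-per-ppm conversion; for f ≈ 0.45 and E = 1000 GtC this gives roughly 280 + 212 ≈ 490 ppm.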

  11. Estimating the Magnitude and Frequency of Peak Streamflows for Ungaged Sites on Streams in Alaska and Conterminous Basins in Canada

    USGS Publications Warehouse

    Curran, Janet H.; Meyer, David F.; Tasker, Gary D.

    2003-01-01

    Estimates of the magnitude and frequency of peak streamflow are needed across Alaska for floodplain management, cost-effective design of floodway structures such as bridges and culverts, and other water-resource management issues. Peak-streamflow magnitudes for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were computed for 301 streamflow-gaging and partial-record stations in Alaska and 60 stations in conterminous basins of Canada. Flows were analyzed from data through the 1999 water year using a log-Pearson Type III analysis. The State was divided into seven hydrologically distinct streamflow analysis regions for this analysis, in conjunction with a concurrent study of low and high flows. New generalized skew coefficients were developed for each region using station skew coefficients for stations with at least 25 years of systematic peak-streamflow data. Equations for estimating peak streamflows at ungaged locations were developed for Alaska and conterminous basins in Canada using a generalized least-squares regression model. A set of predictive equations for estimating the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak streamflows was developed for each streamflow analysis region from peak-streamflow magnitudes and physical and climatic basin characteristics. These equations may be used for unregulated streams without flow diversions, dams, periodically releasing glacial impoundments, or other streamflow conditions not correlated to basin characteristics. Basin characteristics should be obtained using methods similar to those used in this report to preserve the statistical integrity of the equations.

  12. OSSOS. II. A Sharp Transition in the Absolute Magnitude Distribution of the Kuiper Belt’s Scattering Population

    NASA Astrophysics Data System (ADS)

    Shankman, C.; Kavelaars, JJ.; Gladman, B. J.; Alexandersen, M.; Kaib, N.; Petit, J.-M.; Bannister, M. T.; Chen, Y.-T.; Gwyn, S.; Jakubik, M.; Volk, K.

    2016-02-01

    We measure the absolute magnitude, H, distribution, dN(H) ∝ 10^(αH), of the scattering Trans-Neptunian Objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around H_g ~ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys (the Canada-France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds) combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4-8.3) × 10^5 for H_r < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while predicting a large enough scattering TNO population to act as the sole supply of the Jupiter-Family Comets.

  13. Methods for estimating the magnitude and frequency of peak discharges of rural, unregulated streams in Virginia

    USGS Publications Warehouse

    Bisese, James A.

    1995-01-01

    Methods are presented for estimating the peak discharges of rural, unregulated streams in Virginia. A Pearson Type III distribution was fitted to the logarithms of annual peak-discharge records from 363 stream-gaging stations in Virginia to estimate the peak discharge at these stations for recurrence intervals of 2 to 500 years. Peak-discharge characteristics for 284 stations were regressed on potential explanatory variables, including drainage area, main channel length, main channel slope, mean basin elevation, percentage of forest cover, mean annual precipitation, and maximum rainfall intensity, by using generalized least-squares multiple-regression analysis. Stations were grouped into eight peak-discharge regions based on the five physiographic provinces in the State, and equations are presented for each region. Alternative equations using drainage area only also are presented for each region. Methods and sample computations are provided to estimate peak discharges for recurrence intervals of 2 to 500 years at gaged and ungaged sites in Virginia, and to adjust the regression estimates for sites where nearby gaged-site data are available.

  14. OSSOS. II. A SHARP TRANSITION IN THE ABSOLUTE MAGNITUDE DISTRIBUTION OF THE KUIPER BELT’S SCATTERING POPULATION

    SciTech Connect

    Shankman, C.; Kavelaars, JJ.; Bannister, M. T.; Gwyn, S.; Gladman, B. J.; Alexandersen, M.; Kaib, N.; Petit, J.-M.; Chen, Y.-T.; Jakubik, M.; Volk, K.

    2016-02-15

    We measure the absolute magnitude, H, distribution, dN(H) ∝ 10^(αH), of the scattering Trans-Neptunian Objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around H_g ~ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys (the Canada-France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds) combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4-8.3) × 10^5 for H_r < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while predicting a large enough scattering TNO population to act as the sole supply of the Jupiter-Family Comets.

  15. Estimating stellar atmospheric parameters, absolute magnitudes and elemental abundances from the LAMOST spectra with Kernel-based principal component analysis

    NASA Astrophysics Data System (ADS)

    Xiang, M.-S.; Liu, X.-W.; Shi, J.-R.; Yuan, H.-B.; Huang, Y.; Luo, A.-L.; Zhang, H.-W.; Zhao, Y.-H.; Zhang, J.-N.; Ren, J.-J.; Chen, B.-Q.; Wang, C.; Li, J.; Huo, Z.-Y.; Zhang, W.; Wang, J.-L.; Zhang, Y.; Hou, Y.-H.; Wang, Y.-F.

    2017-01-01

    Accurate determination of stellar atmospheric parameters and elemental abundances is crucial for Galactic archaeology via large-scale spectroscopic surveys. In this paper, we estimate stellar atmospheric parameters - effective temperature Teff, surface gravity log g and metallicity [Fe/H], absolute magnitudes MV and MKs, α-element to metal (and iron) abundance ratio [α/M] (and [α/Fe]), as well as carbon and nitrogen abundances [C/H] and [N/H] from the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) spectra with a multivariate regression method based on kernel-based principal component analysis, using stars in common with other surveys (Hipparcos, Kepler, Apache Point Observatory Galactic Evolution Experiment) as training data sets. Both internal and external examinations indicate that given a spectral signal-to-noise ratio (SNR) better than 50, our method is capable of delivering stellar parameters with a precision of ˜100 K for Teff, ˜0.1 dex for log g, 0.3-0.4 mag for MV and MKs, 0.1 dex for [Fe/H], [C/H] and [N/H], and better than 0.05 dex for [α/M] ([α/Fe]). The results are satisfactory even for a spectral SNR of 20. The work presents first determinations of [C/H] and [N/H] abundances from a vast data set of LAMOST, and, to our knowledge, the first reported implementation of absolute magnitude estimation directly based on a vast data set of observed spectra. The derived stellar parameters for millions of stars from the LAMOST surveys will be publicly available in the form of value-added catalogues.
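
    A schematic of the regression approach, not the authors' pipeline: project the spectra onto kernel principal components and regress the training labels on those components; a sketch with scikit-learn and synthetic stand-in data:

      import numpy as np
      from sklearn.decomposition import KernelPCA
      from sklearn.linear_model import Ridge
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)

      # Synthetic stand-ins: 500 "spectra" of 1000 pixels and one label (Teff).
      spectra = rng.normal(size=(500, 1000))
      teff = 5000.0 + 40.0 * spectra[:, :5].sum(axis=1) + rng.normal(0, 50, 500)

      model = make_pipeline(KernelPCA(n_components=50, kernel="rbf", gamma=1e-3),
                            Ridge(alpha=1.0))
      model.fit(spectra[:400], teff[:400])

      pred = model.predict(spectra[400:])
      print("scatter (K):", round(np.std(pred - teff[400:]), 1))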

  16. Estimating magnitude and frequency of peak discharges for rural, unregulated, streams in West Virginia

    USGS Publications Warehouse

    Wiley, J.B.; Atkins, John T.; Tasker, Gary D.

    2000-01-01

    Multiple and simple least-squares regression models for the log10-transformed 100-year discharge with independent variables describing the basin characteristics (log10-transformed and untransformed) for 267 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions of the State, designated East, North, and South. Exploratory data analysis procedures identified 31 gaging stations at which discharges are different than would be expected for West Virginia. Regional equations for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak discharges were determined by generalized least-squares regression using data from 236 gaging stations. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracy of the estimating equations is quantified by measuring the average prediction error (from 27.7 to 44.7 percent) and equivalent years of record (from 1.6 to 20.0 years).

  17. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells.

    PubMed

    Gerencser, Akos A; Mookerjee, Shona A; Jastroch, Martin; Brand, Martin D

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions.

  18. Using A New Model for Main Sequence Turnoff Absolute Magnitudes to Measure Stellar Streams in the Milky Way Halo

    NASA Astrophysics Data System (ADS)

    Weiss, Jake; Newberg, Heidi Jo; Arsenault, Matthew; Bechtel, Torrin; Desell, Travis; Newby, Matthew; Thompson, Jeffery M.

    2016-01-01

    Statistical photometric parallax is a method for using the distribution of absolute magnitudes of stellar tracers to statistically recover the underlying density distribution of these tracers. In previous work, statistical photometric parallax was used to trace the Sagittarius Dwarf tidal stream, the so-called bifurcated piece of the Sagittarius stream, and the Virgo Overdensity through the Milky Way. We use an improved knowledge of this distribution in a new algorithm that accounts for the changes in the stellar population of color-selected stars near the photometric limit of the Sloan Digital Sky Survey (SDSS). Although we select bluer main sequence turnoff (MSTO) stars as tracers, large color errors near the survey limit cause many stars to be scattered out of our selection box and many fainter, redder stars to be scattered into our selection box. We show that we are able to recover parameters for analogues of these streams in simulated data using a maximum likelihood optimization on MilkyWay@home. We also present the preliminary results of fitting the density distribution of major Milky Way tidal streams in SDSS data. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.

  19. Color excesses, intrinsic colors, and absolute magnitudes of Galactic and Large Magellanic Cloud Wolf-Rayet stars

    NASA Technical Reports Server (NTRS)

    Vacca, William D.; Torres-Dodgen, Ana V.

    1990-01-01

    A new method of determining the color excesses of WR stars in the Galaxy and the LMC has been developed and is used to determine the excesses for 44 Galactic and 32 LMC WR stars. The excesses are combined with line-free, narrow-band spectrophotometry to derive intrinsic colors of the WR stars of nearly all spectral subtypes. No correlation of UV spectral index or intrinsic colors with spectral subtype is found for the samples of single WN or WC stars. There is evidence that early WN stars in the LMC have flatter UV continua and redder intrinsic colors than early WN stars in the Galaxy. No separation is found between the values derived for Galactic WC stars and those obtained for LMC WC stars. The intrinsic colors are compared with those calculated from model atmospheres of WR stars and generally good agreement is found. Absolute magnitudes are derived for WR stars in the LMC and for those Galactic WR stars located in clusters and associations for which there are reliable distance estimates.

  20. Absolute magnitudes and slope parameters for 250,000 asteroids observed by Pan-STARRS PS1 - Preliminary results

    NASA Astrophysics Data System (ADS)

    Vereš, Peter; Jedicke, Robert; Fitzsimmons, Alan; Denneau, Larry; Granvik, Mikael; Bolin, Bryce; Chastel, Serge; Wainscoat, Richard J.; Burgett, William S.; Chambers, Kenneth C.; Flewelling, Heather; Kaiser, Nick; Magnier, Eugen A.; Morgan, Jeff S.; Price, Paul A.; Tonry, John L.; Waters, Christopher

    2015-11-01

    We present the results of a Monte Carlo technique to calculate the absolute magnitudes (H) and slope parameters (G) of ∼240,000 asteroids observed by the Pan-STARRS1 telescope during the first 15 months of its 3-year all-sky survey mission. The system's exquisite photometry with photometric errors ≲0.04 mag, and well-defined filter and photometric system, allowed us to derive accurate H and G even with a limited number of observations and restricted range in phase angles. Our Monte Carlo method simulates each asteroid's rotation period, amplitude and color to derive the most-likely H and G, but its major advantage is in estimating realistic statistical + systematic uncertainties and errors on each parameter. The method was tested by comparison with the well-established and accurate results for about 500 asteroids provided by Pravec et al. (Pravec, P. et al. [2012]. Icarus 221, 365-387) and then applied to determining H and G for the Pan-STARRS1 asteroids using both the Muinonen et al. (Muinonen, K. et al. [2010]. Icarus 209, 542-555) and Bowell et al. (Bowell, E. et al. [1989]. Asteroids II, Application of Photometric Models to Asteroids. University of Arizona Press, pp. 524-555) phase functions. Our results confirm the bias in MPC photometry discovered by Jurić et al. (Jurić, M. et al. [2002]. Astron. J. 124, 1776-1787).
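
    For context, the two-parameter (H, G) system relates the reduced magnitude to the phase angle through two empirical phase functions. The sketch below uses the widely quoted two-exponential approximation to the Bowell et al. (1989) functions; it illustrates the photometric system only, not the paper's Monte Carlo machinery.

        import numpy as np

        def hg_reduced_magnitude(H, G, alpha_deg):
            """Reduced magnitude V(alpha) in the (H, G) system, using the standard
            two-exponential approximation to the Bowell et al. (1989) phase functions."""
            a = np.radians(alpha_deg)
            phi1 = np.exp(-3.33 * np.tan(a / 2.0) ** 0.63)
            phi2 = np.exp(-1.87 * np.tan(a / 2.0) ** 1.22)
            return H - 2.5 * np.log10((1.0 - G) * phi1 + G * phi2)

        def apparent_magnitude(H, G, alpha_deg, r_au, delta_au):
            # Add the distance term, with r and Delta the heliocentric and
            # geocentric distances in au.
            return hg_reduced_magnitude(H, G, alpha_deg) + 5.0 * np.log10(r_au * delta_au)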

  1. Interference peak detection based on FPGA for real-time absolute distance ranging with dual-comb lasers

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Dong, Hao; Zhou, Qian; Xu, Mingfei; Li, Xinghui; Wu, Guanhao

    2015-08-01

    Absolute distance measurement using dual femtosecond comb lasers can achieve high accuracy and fast measurement speed, which makes it increasingly attractive. The data processing flow consists of four steps: interference peak detection, fast Fourier transform (FFT), phase fitting, and compensation for the index of refraction. A real-time data processing system based on a Field-Programmable Gate Array (FPGA) for dual-comb ranging has been newly developed. This paper introduces the design and implementation of the interference peak detection algorithm in FPGA using the Verilog language; peak detection is the most complicated part of the processing flow and is essential to system precision and reliability. An adaptive sliding window is used to scan for peaks. During detection, the algorithm stores 16 samples as a detection unit and calculates the average of each unit; this average determines the vertical center height of the sliding window. The algorithm also estimates the noise intensity of each detection unit and averages the noise strength over 128 successive units. The noise average yields the signal-to-noise ratio of the current working environment, which is used to adjust the height of the sliding window. The adaptive sliding window helps to eliminate false peaks caused by noise. The whole design is pipelined, which improves the real-time throughput of the peak detection module. The design runs at up to 140 MHz in the FPGA, and a peak is detected within 16 clock cycles of its appearance.
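
    A software sketch of the adaptive sliding-window idea (16-sample detection units, a noise estimate accumulated over up to 128 preceding units, and a threshold scaled by the estimated noise) is shown below. The real system implements this as a pipelined FPGA design; the constant k and the statistics used here are assumed illustrative choices.

        import numpy as np

        def detect_peaks(signal, unit=16, noise_units=128, k=5.0):
            """Flag detection units whose mean rises k times above the recent
            noise level; returns the starting sample index of each flagged unit."""
            signal = np.asarray(signal, dtype=float)
            n_units = len(signal) // unit
            blocks = signal[:n_units * unit].reshape(n_units, unit)
            means = blocks.mean(axis=1)
            spreads = blocks.std(axis=1)          # per-unit noise estimate
            peaks = []
            for i in range(n_units):
                lo = max(0, i - noise_units)
                noise = spreads[lo:i].mean() if i > lo else spreads[i]
                baseline = means[lo:i].mean() if i > lo else means[i]
                if means[i] - baseline > k * noise:
                    peaks.append(i * unit)
            return peaks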

  2. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells

    PubMed Central

    Gerencser, Akos A.; Mookerjee, Shona A.; Jastroch, Martin; Brand, Martin D.

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions. PMID:27404273

  3. [Study on the axial strain sensor of birefringence photonic crystal fiber loop mirror based on the absolute integral of the monitoring peak].

    PubMed

    Jiang, Ying; Zeng, Jie; Liang, Da-Kai; Wang, Xue-Liang; Ni, Xiao-Yu; Zhang, Xiao-Yan; Li, Ji-Feng; Luo, Wen-Yong

    2013-12-01

    In this paper, a theoretical expression relating the wavelength shift to the axial strain of a birefringence fiber loop mirror is developed. The theoretical result shows that the axial strain sensitivity of a birefringence photonic crystal fiber loop mirror is much lower than that of a conventional birefringence fiber loop mirror. It is therefore difficult to measure axial strain by monitoring the wavelength shift of a birefringence photonic crystal fiber loop mirror, and measurement errors arise easily because the output spectrum is not perfectly smooth. Output spectra of a birefringence photonic crystal fiber loop mirror under different strains were measured experimentally with an optical spectrum analyzer and analysed. The results show that the absolute integral of the monitoring peak decreases with increasing strain and that the absolute integral is linear in strain. Based on these results, it is proposed that the axial strain can be measured by monitoring the absolute integral of the monitoring peak. The absolute integral of the monitoring peak is a comprehensive index of the light intensity across wavelengths. Measuring the axial strain by monitoring the absolute integral of the monitoring peak not only overcomes the difficulty of monitoring the wavelength shift of a birefringence photonic crystal fiber loop mirror, but also reduces the measurement error caused by the unsmooth output spectrum.
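
    A minimal sketch of the proposed measurement, assuming a sampled output spectrum and user-chosen integration limits around the monitored peak: compute the absolute integral over the peak window, then apply a linear calibration of integral against strain. The calibration numbers below are examples, not the paper's data.

        import numpy as np

        def peak_absolute_integral(wavelength_nm, intensity, lo_nm, hi_nm):
            """Integrate |intensity| over the wavelength window bracketing the
            monitored peak; the window limits are user-chosen."""
            sel = (wavelength_nm >= lo_nm) & (wavelength_nm <= hi_nm)
            return np.trapz(np.abs(intensity[sel]), wavelength_nm[sel])

        # Calibration sketch: fit a line to (strain, integral) pairs, then invert.
        strains = np.array([0.0, 200.0, 400.0, 600.0])    # microstrain (example)
        integrals = np.array([12.1, 11.3, 10.6, 9.8])     # example peak integrals
        slope, intercept = np.polyfit(strains, integrals, 1)
        strain_from_integral = lambda I: (I - intercept) / slope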

  4. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use at stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.

  5. Techniques for Estimating the Magnitude and Frequency of Peak Flows on Small Streams in Minnesota Based on Data through Water Year 2005

    USGS Publications Warehouse

    Lorenz, David L.; Sanocki, Chris A.; Kocian, Matthew J.

    2010-01-01

    Knowledge of the peak flow of floods of a given recurrence interval is essential for regulation and planning of water resources and for design of bridges, culverts, and dams along Minnesota's rivers and streams. Statistical techniques are needed to estimate peak flow at ungaged sites because long-term streamflow records are available at relatively few places. Because of the need to have up-to-date peak-flow frequency information in order to estimate peak flows at ungaged sites, the U.S. Geological Survey (USGS) conducted a peak-flow frequency study in cooperation with the Minnesota Department of Transportation and the Minnesota Pollution Control Agency. Estimates of peak-flow magnitudes for 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are presented for 330 streamflow-gaging stations in Minnesota and adjacent areas in Iowa and South Dakota based on data through water year 2005. The peak-flow frequency information was subsequently used in regression analyses to develop equations relating peak flows for selected recurrence intervals to various basin and climatic characteristics. Two statistically derived techniques-regional regression equation and region of influence regression-can be used to estimate peak flow on ungaged streams smaller than 3,000 square miles in Minnesota. Regional regression equations were developed for selected recurrence intervals in each of six regions in Minnesota: A (northwestern), B (north central and east central), C (northeastern), D (west central and south central), E (southwestern), and F (southeastern). The regression equations can be used to estimate peak flows at ungaged sites. The region of influence regression technique dynamically selects streamflow-gaging stations with characteristics similar to a site of interest. Thus, the region of influence regression technique allows use of a potentially unique set of gaging stations for estimating peak flow at each site of interest. Two methods of selecting streamflow

  6. Absolute total and partial cross sections for ionization of nucleobases by proton impact in the Bragg peak velocity range

    SciTech Connect

    Tabet, J.; Eden, S.; Feil, S.; Abdoul-Carime, H.; Farizon, B.; Farizon, M.; Ouaskit, S.; Maerk, T. D.

    2010-08-15

    We present experimental results for proton ionization of nucleobases (adenine, cytosine, thymine, and uracil) based on an event-by-event analysis of the different ions produced combined with an absolute target density determination. We are able to disentangle in detail the various proton ionization channels from mass-analyzed product ion signals in coincidence with the charge-analyzed projectile. In addition we are able to determine a complete set of cross sections for the ionization of these molecular targets by 20-150 keV protons including the total and partial cross sections and the direct-ionization and electron-capture cross sections.

  7. Absolute reliability of hamstring to quadriceps strength imbalance ratios calculated using peak torque, joint angle-specific torque and joint ROM-specific torque values.

    PubMed

    Ayala, F; De Ste Croix, M; Sainz de Baranda, P; Santonja, F

    2012-11-01

    The main purpose of this study was to determine the absolute reliability of conventional (H/Q(CONV)) and functional (H/Q(FUNC)) hamstring to quadriceps strength imbalance ratios calculated using peak torque values, 3 different joint angle-specific torque values (10°, 20° and 30° of knee flexion) and 4 different joint ROM-specific average torque values (0-10°, 11-20°, 21-30° and 0-30° of knee flexion) adopting a prone position in recreational athletes. A total of 50 recreational athletes completed the study. H/Q(CONV) and H/Q(FUNC) ratios were recorded at 3 different angular velocities (60, 180 and 240°/s) on 3 different occasions with a 72-96 h rest interval between consecutive testing sessions. Absolute reliability was examined through typical percentage error (CV(TE)), percentage change in the mean (CM) and intraclass correlations (ICC) as well as their respective confidence limits. H/Q(CONV) and H/Q(FUNC) ratios calculated using peak torque values showed moderate reliability values, with CM scores lower than 2.5%, CV(TE) values ranging from 16 to 20% and ICC values ranging from 0.3 to 0.7. However, poor absolute reliability scores were shown for H/Q(CONV) and H/Q(FUNC) ratios calculated using joint angle-specific torque values and joint ROM-specific average torque values, especially for H/Q(FUNC) ratios (CM: 1-23%; CV(TE): 22-94%; ICC: 0.1-0.7). Therefore, the present study suggests that the CV(TE) values reported for H/Q(CONV) and H/Q(FUNC) (≈18%) calculated using peak torque values may be sensitive enough to detect large changes usually observed after rehabilitation programmes but not acceptable to examine the effect of preventive training programmes in healthy individuals. The clinical reliability of hamstring to quadriceps strength ratios calculated using joint angle-specific torque values and joint ROM-specific average torque values is questioned and should be re-evaluated in future research studies.
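
    For readers unfamiliar with these reliability statistics, the sketch below computes the typical error, typical percentage error (CV(TE)), and change in the mean from two testing sessions using the standard definitions; it is a generic illustration, not the authors' analysis code.

        import numpy as np

        def test_retest_reliability(day1, day2):
            """Typical error and typical percentage error (CV_TE) from two sessions,
            using the usual definitions: TE = SD(differences) / sqrt(2)."""
            day1, day2 = np.asarray(day1, float), np.asarray(day2, float)
            diffs = day2 - day1
            typical_error = diffs.std(ddof=1) / np.sqrt(2.0)
            grand_mean = np.concatenate([day1, day2]).mean()
            cv_te = 100.0 * typical_error / grand_mean
            change_in_mean = 100.0 * diffs.mean() / day1.mean()
            return typical_error, cv_te, change_in_mean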

  8. Space density distribution of galaxies in the absolute magnitude - rotation velocity plane: a volume-complete Tully-Fisher relation from CALIFA stellar kinematics

    NASA Astrophysics Data System (ADS)

    Bekeraité, S.; Walcher, C. J.; Falcón-Barroso, J.; Garcia Lorenzo, B.; Lyubenova, M.; Sánchez, S. F.; Spekkens, K.; van de Ven, G.; Wisotzki, L.; Ziegler, B.; Aguerri, J. A. L.; Barrera-Ballesteros, J.; Bland-Hawthorn, J.; Catalán-Torrecilla, C.; García-Benito, R.

    2016-10-01

    We measured the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating galaxies of the Calar Alto Legacy Integral Field Area Survey (CALIFA) using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission line kinematics allows including rapidly rotating early-type galaxies. Our initial sample contains 277 galaxies with available stellar velocity fields and growth curve r-band photometry. After rejecting 51 velocity fields that could not be modelled because of the low number of bins, foreground contamination, or significant interaction, we performed Markov chain Monte Carlo modelling of the velocity fields, from which we obtained the rotation curve and kinematic parameters and their realistic uncertainties. We performed an extinction correction and calculated the circular velocity vcirc accounting for the pressure support of a given galaxy. The resulting galaxy distribution on the Mr-vcirc plane was then modelled as a mixture of two distinct populations, allowing robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that we were able to correct for the incompleteness of the sample. The 199 galaxies were weighted by volume and large-scale structure factors, which enabled us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the Mr-vcirc plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular velocity space densities, representative over the range of -20 > Mr > -22 mag, can place more stringent constraints on the galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone. Galaxies main

  9. Bias Properties of Extragalactic Distance Indicators. XI. Methods to Correct for Observational Selection Bias for RR Lyrae Absolute Magnitudes from Trigonometric Parallaxes Expected from the Full-Sky Astrometric Mapping Explorer Satellite

    NASA Astrophysics Data System (ADS)

    Sandage, Allan; Saha, A.

    2002-04-01

    A short history is given of the development of the correction for observation selection bias inherent in the calibration of absolute magnitudes using trigonometric parallaxes. The developments have been due to Eddington, Jeffreys, Trumpler & Weaver, Wallerstein, Ljunggren & Oja, West, Lutz & Kelker, after whom the bias is named, Turon Lacarrieu & Crézé, Hanson, Smith, and many others. As a tutorial to gain an intuitive understanding of several complicated trigonometric bias problems, we study a toy bias model of a parallax catalog that incorporates assumed parallax measuring errors of various severities. The two effects of bias errors on the derived absolute magnitudes are (1) the Lutz-Kelker correction itself, which depends on the relative parallax error δπ/π and the spatial distribution, and (2) a Malmquist-like "incompleteness" correction of opposite sign due to various apparent magnitude cutoffs as they are progressively imposed on the catalog. We calculate the bias properties using simulations involving 3 × 10^6 stars of fixed absolute magnitude, using Mv = +0.6 to imitate RR Lyrae variables in the mean. These stars are spread over a spherical volume bounded by a radius 50,000 pc with different spatial density distributions. The bias is demonstrated by first using a fixed rms parallax uncertainty per star of 50 μas and then using a variable rms accuracy that ranges from 50 μas at apparent magnitude V=9 to 500 μas at V=15 according to the specifications for the Full-Sky Astrometric Mapping Explorer (FAME) satellite to be launched in 2004. The effects of imposing magnitude limits and limits on the "observer's" error, δπ/π, are displayed. We contrast the method of calculating mean absolute magnitude directly from the parallaxes where bias corrections are mandatory, with an inverse method using maximum likelihood that is free of the Lutz-Kelker bias, although a Malmquist bias is present. Simulations show the power of the inverse method. Nevertheless, we
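
    The following small Monte Carlo sketch reproduces the spirit of the toy model described: a fixed absolute magnitude, a uniform-density sphere of stars, Gaussian parallax errors, and absolute magnitudes recomputed directly from the noisy parallaxes. The star count is scaled down and the apparent-magnitude cutoffs are omitted, so the resulting numbers are illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)
        n_stars = 100_000                    # scaled down from the paper's 3e6
        M_true = 0.6                         # fixed absolute magnitude (RR Lyrae-like)

        # Uniform spatial density inside a 50,000 pc sphere: d = R * U**(1/3)
        d_pc = 50_000.0 * rng.uniform(size=n_stars) ** (1.0 / 3.0)
        true_parallax = 1.0 / d_pc           # arcsec
        obs_parallax = true_parallax + rng.normal(0.0, 50e-6, size=n_stars)  # 50 uas errors

        # Direct method: M = m + 5*log10(pi) + 5, keeping only stars with a
        # positive measured parallax (itself a selection effect).
        m_app = M_true + 5.0 * np.log10(d_pc) - 5.0
        ok = obs_parallax > 0
        M_direct = m_app[ok] + 5.0 * np.log10(obs_parallax[ok]) + 5.0
        bias = M_direct.mean() - M_true      # nonzero: the Lutz-Kelker-type bias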

  10. Estimated Magnitudes and Recurrence Intervals of Peak Flows on the Mousam and Little Ossipee Rivers for the Flood of April 2007 in Southern Maine

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Stewart, Gregory J.; Cohn, Timothy A.; Dudley, Robert W.

    2007-01-01

    Large amounts of rain fell on southern Maine from the afternoon of April 15, 2007, to the afternoon of April 16, 2007, causing substantial damage to houses, roads, and culverts. This report provides an estimate of the peak flows on two rivers in southern Maine--the Mousam River and the Little Ossipee River--because of their severe flooding. The April 2007 estimated peak flow of 9,230 ft3/s at the Mousam River near West Kennebunk had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 25 years to greater than 500 years. The April 2007 estimated peak flow of 8,220 ft3/s at the Little Ossipee River near South Limington had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 50 years to greater than 500 years.

  11. Estimating the magnitude of annual peak discharges with recurrence intervals between 1.1 and 3.0 years for rural, unregulated streams in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.; Atkins, John T.; Newell, Dawn A.

    2002-01-01

    Multiple and simple least-squares regression models for the log10-transformed 1.5- and 2-year recurrence intervals of peak discharges with independent variables describing the basin characteristics (log10-transformed and untransformed) for 236 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions in West Virginia designated as East, North, and South. Regional equations for the 1.1-, 1.2-, 1.3-, 1.4-, 1.5-, 1.6-, 1.7-, 1.8-, 1.9-, 2.0-, 2.5-, and 3-year recurrence intervals of peak discharges were determined by generalized least-squares regression. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracies of estimating equations are quantified by measuring the average prediction error (from 27.4 to 52.4 percent) and equivalent years of record (from 1.1 to 3.4 years).

  12. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  13. Peak strain magnitudes and rates in the tibia exceed greatly those in the skull: An in vivo study in a human subject.

    PubMed

    Hillam, Richard A; Goodship, Allen E; Skerry, Tim M

    2015-09-18

    Bone mass and architecture are the result of a genetically determined baseline structure, modified by the effect of internal hormonal/biochemical regulators and the effect of mechanical loading. Bone strain is thought to drive a feedback mechanism to regulate bone formation and resorption to maintain an optimal, but not excessive mass and organisation of material at each skeletal location. Because every site in the skeleton has different functions, we have measured bone strains induced by physiological and more unusual activities, at two different sites, the tibia and cranium of a young human male in vivo. During the most vigorous activities, tibial strains were shown to exceed 0.2%, when ground reaction exceeded 5 times body weight. However in the skull the highest strains recorded were during heading a heavy medicine/exercise ball where parietal strains were up to 0.0192%. Interestingly parietal strains during more physiological activities were much lower, often below 0.01%. Strains during biting were not dependent upon bite force, but could be induced by facial contortions of similar appearance without contact between the teeth. Rates of strain change in the two sites were also very different, where peak tibial strain rate exceeded rate in the parietal bone by more than 5 fold. These findings suggest that the skull and tibia are subject to quite different regulatory influences, as strains that would be normal in the human skull would be likely to lead to profound bone loss by disuse in the long bones.

  14. Peak strain magnitudes and rates in the tibia exceed greatly those in the skull: An in vivo study in a human subject

    PubMed Central

    Hillam, Richard A; Goodship, Allen E; Skerry, Tim M

    2015-01-01

    Bone mass and architecture are the result of a genetically determined baseline structure, modified by the effect of internal hormonal/biochemical regulators and the effect of mechanical loading. Bone strain is thought to drive a feedback mechanism to regulate bone formation and resorption to maintain an optimal, but not excessive mass and organisation of material at each skeletal location. Because every site in the skeleton has different functions, we have measured bone strains induced by physiological and more unusual activities, at two different sites, the tibia and cranium of a young human male in vivo. During the most vigorous activities, tibial strains were shown to exceed 0.2%, when ground reaction exceeded 5 times body weight. However in the skull the highest strains recorded were during heading a heavy medicine/exercise ball where parietal strains were up to 0.0192%. Interestingly parietal strains during more physiological activities were much lower, often below 0.01%. Strains during biting were not dependent upon bite force, but could be induced by facial contortions of similar appearance without contact between the teeth. Rates of strain change in the two sites were also very different, where peak tibial strain rate exceeded rate in the parietal bone by more than 5 fold. These findings suggest that the skull and tibia are subject to quite different regulatory influences, as strains that would be normal in the human skull would be likely to lead to profound bone loss by disuse in the long bones. PMID:26232812

  15. Discovery of Cepheids in NGC 5253: Absolute peak brightness of SN Ia 1895B and SN Ia 1972E and the value of H(sub 0)

    NASA Technical Reports Server (NTRS)

    Saha, A.; Sandage, Allan; Labhardt, Lukas; Schwengeler, Hans; Tammann, G. A.; Panagia, N.; Macchetto, F. D.

    1995-01-01

    Observations with the Hubble Space Telescope (HST) between 1993 May 31 and 1993 July 19 in 20 epochs in the F555W passband and five epochs in the F785LP passband have led to the discovery of 14 Cepheids in the Amorphous galaxy NGC 5253. The apparent V distance modulus is (m-M)(sub AV) = 28.08 +/- 0.10 determined from the 12 Cepheids with normal amplitudes. The distance modulus using the F785LP data is consistent with the V value to within the errors. Five methods used to determine the internal reddening are consistent with zero differential reddening, accurate to a level of E(B-V) less than 0.05 mag, over the region occupied by Cepheids and the two supernovae (SNe) produced by NGC 5253. The apparent magnitudes at maximum for the two SNe in NGC 5253 are adopted as B(sub max) = 8.33 +/- 0.2 mag for SN 1895B, and B(sub max) = 8.56 +/- 0.1 and V(sub max) = 8.60 +/- 0.1 for SN 1972E, which is a prototype SN of Type Ia. The apparent magnitude system used by Walker (1923) for SN 1895B has been corrected to the modern B scale and zero point to determine its adopted B(sub max) value.

  16. Absolute Rovibrational Intensities for the Chi(sup 1)Sigma(sup +) v=3 <-- 0 Band of (12)C(16)O Obtained with Kitt Peak and BOMEM FTS Instruments

    NASA Technical Reports Server (NTRS)

    Chackerian, Charles, Jr.; Kshirsagar, R. J.; Giver, L. P.; Brown, L. R.; Condon, Estelle P. (Technical Monitor)

    1999-01-01

    This work was initiated to compare absolute line intensities retrieved with the Kitt Peak FTS (Fourier Transform Spectrometer) and the Ames BOMEM FTS. Since thermal contamination can be a problem with the BOMEM instrument if proper precautions are not taken, it was thought that measurements made near 6300 cm^-1 would more easily yield satisfactory intercomparisons. Very recent measurements of the CO 3 <-- 0 band line intensities confirm the results reported here, namely that the intensities listed in HITRAN (High Resolution Molecular Absorption Database) for this band are on the order of six to seven percent too low. All of the infrared intensities in the current HITRAN tabulation are based on the electric dipole moment function reported fifteen years ago. The latter in turn was partly based on intensities for the 3 <-- 0 band reported thirty years ago. We have, therefore, redetermined the electric dipole moment function of ground electronic state CO.

  17. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen, as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here, and we hope interested teachers will draw their students' attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  18. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  19. Absolute Photometry

    NASA Astrophysics Data System (ADS)

    Hartig, George

    1990-12-01

    The absolute sensitivity of the FOS will be determined in SV by observing 2 stars at 3 epochs, first in 3 apertures (1.0", 0.5", and 0.3" circular) and then in 1 aperture (1.0" circular). In cycle 1, one star, BD+28D4211 will be observed in the 1.0" aperture to establish the stability of the sensitivity and flat field characteristics and improve the accuracy obtained in SV. This star will also be observed through the paired apertures since these are not calibrated in SV. The stars will be observed in most detector/grating combinations. The data will be averaged to form the inverse sensitivity functions required by RSDP.

  20. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  1. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  2. Twin Peaks

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The image was taken by the Imager for Mars Pathfinder (IMP) after its deployment on Sol 3. Mars Pathfinder was developed and managed by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration. The IMP was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  3. Absolute multilateration between spheres

    NASA Astrophysics Data System (ADS)

    Muelaner, Jody; Wadsworth, William; Azini, Maria; Mullineux, Glen; Hughes, Ben; Reichold, Armin

    2017-04-01

    Environmental effects typically limit the accuracy of large scale coordinate measurements in applications such as aircraft production and particle accelerator alignment. This paper presents an initial design for a novel measurement technique with analysis and simulation showing that it could overcome the environmental limitations to provide a step change in large scale coordinate measurement accuracy. Referred to as absolute multilateration between spheres (AMS), it involves using absolute distance interferometry to directly measure the distances between pairs of plain steel spheres. A large portion of each sphere remains accessible as a reference datum, while the laser path can be shielded from environmental disturbances. As a single scale bar this can provide accurate scale information to be used for instrument verification or network measurement scaling. Since spheres can be simultaneously measured from multiple directions, it also allows highly accurate multilateration-based coordinate measurements to act as a large scale datum structure for localized measurements, or to be integrated within assembly tooling, coordinate measurement machines or robotic machinery. Analysis and simulation show that AMS can be self-aligned to achieve a theoretical combined standard uncertainty for the independent uncertainties of an individual 1 m scale bar of approximately 0.49 µm. It is also shown that, combined with a 1 µm/m standard uncertainty in the central reference system, this could result in coordinate standard uncertainty magnitudes of 42 µm over a slender 1 m by 20 m network. This would be a sufficient step change in accuracy to enable next generation aerospace structures with natural laminar flow and part-to-part interchangeability.
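
    As a generic illustration of the multilateration step (not the AMS self-alignment procedure), the sketch below recovers a point's coordinates by nonlinear least squares from measured distances to reference spheres at known positions; SciPy is assumed, and the reference geometry is made up.

        import numpy as np
        from scipy.optimize import least_squares

        def multilaterate(centers, distances, x0=None):
            """Solve for the 3D point whose distances to the known reference
            centers best match the measured distances (nonlinear least squares)."""
            centers = np.asarray(centers, float)
            distances = np.asarray(distances, float)
            if x0 is None:
                x0 = centers.mean(axis=0)
            residuals = lambda p: np.linalg.norm(centers - p, axis=1) - distances
            return least_squares(residuals, x0).x

        # Example: four reference spheres and distances to an unknown point
        centers = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        point = multilaterate(centers, [0.866, 0.866, 0.866, 0.866])  # ~(0.5, 0.5, 0.5)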

  4. Absolutely classical spin states

    NASA Astrophysics Data System (ADS)

    Bohnet-Waldraff, F.; Giraud, O.; Braun, D.

    2017-01-01

    We introduce the concept of "absolutely classical" spin states, in analogy to absolutely separable states of bipartite quantum systems. Absolutely classical states are states that remain classical (i.e., a convex sum of projectors on coherent states of a spin j ) under any unitary transformation applied to them. We investigate the maximal size of the ball of absolutely classical states centered on the maximally mixed state and derive a lower bound for its radius as a function of the total spin quantum number. We also obtain a numerical estimate of this maximal radius and compare it to the case of absolutely separable states.

  5. Automaticity of Conceptual Magnitude.

    PubMed

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-02-16

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object's conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggesting that different types of magnitude processing and representation share the same core system.

  6. Automaticity of Conceptual Magnitude

    PubMed Central

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object’s conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggesting that different types of magnitude processing and representation share the same core system. PMID:26879153

  7. Color and magnitude dependence of galaxy clustering

    NASA Astrophysics Data System (ADS)

    Müller, Volker

    2016-10-01

    A quantitative study of the clustering properties of galaxies in the cosmic web as a function of absolute magnitude and colour is presented using the SDSS Data Release 7 galaxy redshift survey. We compare our results with mock galaxy samples obtained with four different semi-analytical models of galaxy formation imposed on the merger trees of the Millennium simulation.

  8. PEAK READING VOLTMETER

    DOEpatents

    Dyer, A.L.

    1958-07-29

    An improvement in peak reading voltmeters is described, which provides for storing an electrical charge representative of the magnitude of a transient voltage pulse and thereafter measuring the stored charge, drawing only negligible energy from the storage element. The incoming voltage is rectified and stored in a condenser. The voltage of the capacitor is applied across a piezoelectric crystal between two parallel plates. Any change in the voltage of the capacitor is reflected in a change in the dielectric constant of the crystal, and the capacitance between a second pair of plates affixed to the crystal is altered. The latter capacitor forms part of the frequency determining circuit of an oscillator, and means is provided for indicating the frequency deviation, which is a measure of the peak voltage applied to the voltmeter.

  9. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  10. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  11. Absolute calibration of optical tweezers

    SciTech Connect

    Viana, N.B.; Mazolli, A.; Maia Neto, P.A.; Nussenzveig, H.M.; Rocha, M.S.; Mesquita, O.N.

    2006-03-27

    As a step toward absolute calibration of optical tweezers, a first-principles theory of trapping forces with no adjustable parameters, corrected for spherical aberration, is experimentally tested. Employing two very different setups, we find generally very good agreement for the transverse trap stiffness as a function of microsphere radius for a broad range of radii, including the values employed in practice, and at different sample chamber depths. The domain of validity of the WKB ('geometrical optics') approximation to the theory is verified. Theoretical predictions for the trapping threshold, peak position, depth variation, multiple equilibria, and 'jump' effects are also confirmed.

  12. Are Earthquake Magnitudes Clustered?

    SciTech Connect

    Davidsen, Joern; Green, Adam

    2011-03-11

    The question of earthquake predictability is a long-standing and important challenge. Recent results [Phys. Rev. Lett. 98, 098501 (2007); ibid.100, 038501 (2008)] have suggested that earthquake magnitudes are clustered, thus indicating that they are not independent in contrast to what is typically assumed. Here, we present evidence that the observed magnitude correlations are to a large extent, if not entirely, an artifact due to the incompleteness of earthquake catalogs and the well-known modified Omori law. The latter leads to variations in the frequency-magnitude distribution if the distribution is constrained to those earthquakes that are close in space and time to the directly following event.

  13. Misconceptions about astronomical magnitudes

    NASA Astrophysics Data System (ADS)

    Schulman, Eric; Cox, Caroline V.

    1997-10-01

    The present system of astronomical magnitudes was created as an inverse scale by Claudius Ptolemy in about 140 A.D. and was defined to be logarithmic in 1856 by Norman Pogson, who believed that human eyes respond logarithmically to the intensity of light. Although scientists have known for some time that the response is instead a power law, astronomers continue to use the Pogson magnitude scale. The peculiarities of this system make it easy for students to develop numerous misconceptions about how and why to use magnitudes. We present a useful exercise in the use of magnitudes to derive a cosmologically interesting quantity (the mass-to-light ratio for spiral galaxies), with potential pitfalls pointed out and explained.
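
    The sketch below illustrates the kind of exercise mentioned, assuming placeholder galaxy values rather than the authors' worked example: an absolute V magnitude is converted to a luminosity in solar units, a simple dynamical mass is formed from a flat rotation velocity and radius, and the two are combined into a mass-to-light ratio.

        import numpy as np

        G = 4.301e-3            # gravitational constant in pc (km/s)^2 / M_sun
        M_SUN_V = 4.83          # absolute V magnitude of the Sun

        def luminosity_solar(M_v):
            """Luminosity in solar units from an absolute V magnitude."""
            return 10.0 ** (-0.4 * (M_v - M_SUN_V))

        def dynamical_mass_solar(v_rot_kms, radius_pc):
            """Simple rotating-disk mass estimate M = v^2 R / G, in solar masses."""
            return v_rot_kms ** 2 * radius_pc / G

        # Placeholder values for a bright spiral galaxy
        L = luminosity_solar(-21.0)
        M = dynamical_mass_solar(220.0, 15_000.0)
        mass_to_light = M / L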

  14. Telescopic limiting magnitudes

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1990-01-01

    The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction algorithm problem is attacked with two complementary approaches: (1) First, a theoretical algorithm was developed based on physiological data for the sensitivity of the eye. This algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) Second, 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula does accurately predict the average observed limiting magnitudes under all conditions.

  15. Absolute and relative blindsight.

    PubMed

    Balsdon, Tarryn; Azzopardi, Paul

    2015-03-01

    The concept of relative blindsight, referring to a difference in conscious awareness between conditions otherwise matched for performance, was introduced by Lau and Passingham (2006) as a way of identifying the neural correlates of consciousness (NCC) in fMRI experiments. By analogy, absolute blindsight refers to a difference between performance and awareness regardless of whether it is possible to match performance across conditions. Here, we address the question of whether relative and absolute blindsight in normal observers can be accounted for by response bias. In our replication of Lau and Passingham's experiment, the relative blindsight effect was abolished when performance was assessed by means of a bias-free 2AFC task or when the criterion for awareness was varied. Furthermore, there was no evidence of either relative or absolute blindsight when both performance and awareness were assessed with bias-free measures derived from confidence ratings using signal detection theory. This suggests that both relative and absolute blindsight in normal observers amount to no more than variations in response bias in the assessment of performance and awareness. Consideration of the properties of psychometric functions reveals a number of ways in which relative and absolute blindsight could arise trivially and elucidates a basis for the distinction between Type 1 and Type 2 blindsight.

  16. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    PubMed

    Ou, Linjun; Cao, Jian

    2014-09-01

    In chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to identify peaks. To address its shortcomings of limited automation and susceptibility to distortion, the first-order derivative method was improved by applying an iterative moving average and normalization techniques to identify the peaks. Accurate identification of the chromatographic peaks was achieved by applying multiple iterations of the moving average to the signal and square-wave curves to determine the optimal values of the normalized peak-identification parameters, combined with the absolute peak retention times and the peak window. The experimental results show that the algorithm identifies peaks accurately and is not sensitive to noise, chromatographic peak width, or changes in peak shape. It is adaptable enough to meet the on-site requirements of online devices that monitor dissolved gases in transformer oil.
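
    A minimal software sketch of the core steps described (iterated moving-average smoothing, normalization, and thresholded peak picking within a window) is given below; the smoothing width, iteration count, threshold, and window are illustrative parameters, not the paper's optimized values.

        import numpy as np

        def moving_average(x, width=5, iterations=3):
            """Iterated moving average used to suppress noise before peak picking."""
            kernel = np.ones(width) / width
            for _ in range(iterations):
                x = np.convolve(x, kernel, mode="same")
            return x

        def find_peaks(signal, threshold=0.1, window=5):
            """Normalize the smoothed signal to [0, 1] and report local maxima
            that exceed the threshold; 'window' sets the minimum peak spacing."""
            smooth = moving_average(np.asarray(signal, dtype=float))
            norm = (smooth - smooth.min()) / (smooth.max() - smooth.min())
            peaks = []
            for i in range(window, len(norm) - window):
                local = norm[i - window:i + window + 1]
                if norm[i] >= threshold and norm[i] == local.max():
                    peaks.append(i)
            return peaks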

  17. REINFORCER MAGNITUDE ATTENUATES

    PubMed Central

    Pinkston, Jonathan W.; Lamb, R. J.

    2012-01-01

    When given to pigeons, the direct-acting dopamine agonist apomorphine elicits pecking. The response has been likened to foraging pecking because it bears remarkable similarity to foraging behavior, and it is enhanced by food deprivation. On the other hand, other data suggest the response is not related to foraging behavior and may even interfere with food ingestion. Although elicited pecking interferes with food capture, it may selectively alter procurement phases of feeding, which can be isolated in operant preparations. To explore the relation between operant and elicited pecking, we provided pigeons the opportunity to earn different reinforcer magnitudes during experimental sessions. During signaled components, each of 4 pigeons could earn 2-, 4-, or 8-s access to grain for a single peck made at the end of a 5-min interval. In general, responding increased as a function of reinforcer magnitude. Apomorphine increased pecking for 2 pigeons and decreased pecking for the other 2. In both cases, apomorphine was more potent under the component providing the smallest reinforcer magnitude. Analysis of the pattern of pecking across the interval indicated that behavior lost its temporal organization as dose increased. Because apomorphine-induced pecking varied inversely with reinforcer magnitude, we conclude that elicited pecks are not functionally related to food procurement. The data are consistent with the literature on behavioral resistance to change and suggest that the effects of apomorphine may be modulated by prevailing stimulus–reinforcer relationships. PMID:23144505

  18. Absolute neutrino mass scale

    NASA Astrophysics Data System (ADS)

    Capelli, Silvia; Di Bari, Pasquale

    2013-04-01

    Neutrino oscillation experiments have firmly established non-vanishing neutrino masses, a result that can be regarded as a strong motivation to extend the Standard Model. Despite being the lightest massive particles, neutrinos likely represent an important bridge to new physics at very high energies and offer new opportunities to address some of the current cosmological puzzles, such as the matter-antimatter asymmetry of the Universe and Dark Matter. In this context, the determination of the absolute neutrino mass scale is a key issue within modern High Energy Physics. The talks in this parallel session describe the current experimental activity aimed at determining the absolute neutrino mass scale and offer an overview of a few models beyond the Standard Model that have been proposed to explain the neutrino masses, each giving a prediction for the absolute neutrino mass scale and addressing the cosmological puzzles.

  19. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link in the symlink chain, and the permissions and ownership of each directory component in the final path. Its functionality is similar to "which", except that it shows the final path instead of the first path found. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
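
    As a rough illustration of the underlying idea (resolving every symlink in a name to its final absolute path), the following Python sketch uses only the standard library; it is not the ap implementation, and the PATH-search behavior shown is an assumption modeled on the "which" comparison above.

        # Minimal sketch of the core idea behind "ap": resolve every symlink in a name
        # to its final absolute path. Not the ap implementation itself.
        import os
        import sys

        def final_absolute_path(name):
            """Return the fully resolved absolute path of a file, directory, or executable.

            Like `which`, a bare command name is first searched on PATH; unlike `which`,
            the result has all symlinks resolved."""
            if os.sep not in name:
                for directory in os.environ.get("PATH", "").split(os.pathsep):
                    candidate = os.path.join(directory, name)
                    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                        name = candidate
                        break
            return os.path.realpath(os.path.abspath(name))

        if __name__ == "__main__":
            print(final_absolute_path(sys.argv[1] if len(sys.argv) > 1 else "."))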

  20. Reward Value Effects on Timing in the Peak Procedure

    ERIC Educational Resources Information Center

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2009-01-01

    Three experiments examined the effect of motivational variables on timing in the peak procedure. In Experiment 1, rats received a 60-s peak procedure that was coupled with long-term, between-phase changes in reinforcer magnitude. Increases in reinforcer magnitude produced a leftward shift in the peak that persisted for 20 sessions of training. In…

  1. Landslide seismic magnitude

    NASA Astrophysics Data System (ADS)

    Lin, C. H.; Jan, J. C.; Pu, H. C.; Tu, Y.; Chen, C. C.; Wu, Y. M.

    2015-11-01

    Landslides have become one of the deadliest natural disasters on Earth, due both to a significant increase in extreme weather driven by global warming and to rapid economic development in areas of high topographic relief. How to detect landslides with a real-time system has therefore become an important question for reducing their impact on human society. Traditional detection of landslides, whether through direct field surveys or remote-sensing images obtained from aircraft or satellites, is highly time consuming. Here we analyze very-long-period seismic signals (20-50 s) generated by large landslides, such as those triggered by Typhoon Morakot, which passed through Taiwan in August 2009. In addition to successfully locating 109 large landslides, we define a landslide seismic magnitude based on an empirical formula: Lm = log10(A) + 0.55 log10(Δ) + 2.44, where A is the maximum displacement (μm) recorded at one seismic station and Δ is its distance (km) from the landslide. We conclude that both the location and the seismic magnitude of large landslides can be rapidly estimated from broadband seismic networks for both academic and applied purposes, much as earthquakes are monitored. We suggest that a real-time algorithm be set up for routine monitoring of landslides in places where they pose a frequent threat.
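
    A minimal sketch of how the quoted empirical formula could be evaluated is given below; the multi-station averaging is an assumption added for illustration and is not taken from the abstract.

        # Evaluate the empirical landslide-magnitude formula quoted above:
        # Lm = log10(A) + 0.55 * log10(D) + 2.44, with A the maximum displacement (um)
        # at a station and D its distance (km) from the landslide.
        import math

        def landslide_magnitude(displacement_um, distance_km):
            """Single-station landslide seismic magnitude."""
            return math.log10(displacement_um) + 0.55 * math.log10(distance_km) + 2.44

        # Example (made-up station readings): combine stations by a simple mean.
        stations = [(12.0, 80.0), (7.5, 150.0), (3.2, 320.0)]  # (A in um, distance in km)
        estimates = [landslide_magnitude(a, d) for a, d in stations]
        print(sum(estimates) / len(estimates))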

  2. Limiting magnitude of hypertelescopes

    NASA Astrophysics Data System (ADS)

    Surya, Arun

    Optical stellar interferometers have demonstrated milli-arcsecond resolution with a few apertures spaced hundreds of meters apart. To obtain rich direct images, many apertures will be needed for better sampling of the incoming wavefront. The coherent imaging thus achievable improves the sensitivity compared with the incoherent combination of successive fringed exposures achieved so far in optical aperture synthesis. Highly diluted apertures can be used efficiently with pupil densification, a technique also called "Hypertelescope Imaging". Using numerical simulations, we have determined the limiting magnitude of hypertelescopes for different baselines and pupil densifications. Here we discuss the advantages of hypertelescope systems over classical pairwise optical interferometry.

  3. Absolute airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Baumann, Henri

    This work is a feasibility study of a first-stage prototype airborne absolute gravimeter system. In contrast to relative systems, which use spring gravimeters, the measurements acquired by absolute systems are uncorrelated and the instrument does not suffer from problems such as instrumental drift, the frequency response of the spring, and possible variation of the calibration factor. The major problem we had to resolve was to reduce the influence of the non-gravitational accelerations included in the measurements. We studied two different approaches: direct mechanical filtering, and post-processing digital compensation. The first part of the work describes in detail the different mechanical passive vibration filters, which were studied and tested in the laboratory and later in a small truck in motion. For these tests, as well as for the airborne measurements, an absolute gravimeter FG5-L from Micro-G Ltd was used together with a Litton-200 inertial navigation system, an EpiSensor vertical accelerometer, and GPS receivers for positioning. These tests showed that only the use of an optical table gives acceptable results; however, it is unable to compensate for the effects of the accelerations of the drag-free chamber. The second part describes the data-processing strategy, which is based on modeling the perturbing accelerations by means of GPS, EpiSensor, and INS data. In the third part the airborne experiment is described in detail, from the mounting in the aircraft and the data processing to the different problems encountered during the evaluation of the quality and accuracy of the results. In the data-processing part, the different steps from the raw apparent gravity data and the trajectories to the estimation of the true gravity are explained. A comparison between the estimated airborne data and those obtained by ground upward continuation at flight altitude allows us to state that airborne absolute gravimetry is feasible and

  4. Two classes of speculative peaks

    NASA Astrophysics Data System (ADS)

    Roehner, Bertrand M.

    2001-10-01

    Speculation not only occurs in financial markets but also in numerous other markets, e.g. commodities, real estate, collectibles, and so on. Such speculative movements result in price peaks which share many common characteristics: same order of magnitude of duration with respect to amplitude, same shape (the so-called sharp-peak pattern). Such similarities suggest (at least as a first approximation) a common speculative behavior. However, a closer examination shows that in fact there are (at least) two distinct classes of speculative peaks. For the first, referred to as class U, (i) the amplitude of the peak is negatively correlated with the price at the start of the peak (ii) the ensemble coefficient of variation exhibits a trough. Opposite results are observed for the second class that we refer to as class S. Once these empirical observations have been made we try to understand how they should be interpreted. First, we show that the two properties are in fact related in the sense that the second is a consequence of the first. Secondly, by listing a number of cases belonging to each class we observe that the markets in the S-class offer collection of items from which investors can select those they prefer. On the contrary, U-markets consist of undifferentiated products for which a selection cannot be made in the same way. All prices considered in the paper are real (i.e., deflated) prices.

  5. Peak acceleration limiter

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.

    1972-01-01

    Device is described that limits accelerations by shutting off shaker table power very rapidly in acceleration tests. Absolute value of accelerometer signal is used to trigger electronic switch which terminates test and sounds alarm.

  6. Absolute-structure reports.

    PubMed

    Flack, Howard D

    2013-08-01

    All the 139 noncentrosymmetric crystal structures published in Acta Crystallographica Section C between January 2011 and November 2012 inclusive have been used as the basis of a detailed study of the reporting of absolute structure. These structure determinations cover a wide range of space groups, chemical composition and resonant-scattering contribution. Defining A and D as the average and difference of the intensities of Friedel opposites, their level of fit has been examined using 2AD and selected-D plots. It was found, regardless of the expected resonant-scattering contribution to Friedel opposites, that the Friedel-difference intensities are often dominated by random uncertainty and systematic error. An analysis of data collection strategy is provided. It is found that crystal-structure determinations resulting in a Flack parameter close to 0.5 may not necessarily be from crystals twinned by inversion. Friedifstat is shown to be a robust estimator of the resonant-scattering contribution to Friedel opposites, very little affected by the particular space group of a structure nor by the occupation of special positions. There is considerable confusion in the text of papers presenting achiral noncentrosymmetric crystal structures. Recommendations are provided for the optimal way of treating noncentrosymmetric crystal structures for which the experimenter has no interest in determining the absolute structure.

  7. PeakWorks

    SciTech Connect

    2016-11-30

    The PeakWorks software is designed to assist in the quantitative analysis of mass spectra generated by atom probe tomography (APT). Specifically, through an interactive user interface, mass peaks can be identified automatically (defined by a threshold) and/or manually. The software then provides a means to assign specific elemental isotopes (including more than one) to each peak. It also allows the user to choose a background subtraction for each peak based on background fitting functions, the choice of which is left to the user's discretion. Peak ranging (the mass range over which peaks are integrated) is also automated, allowing the user to choose a quantitative range (e.g., full width at half maximum). The software then integrates all identified peaks, providing a background-subtracted composition that also includes the deconvolution of peaks (i.e., those peaks that happen to have overlapping isotopic masses). The software can output a 'range file' that can be used in other software packages, such as IVAS. A range file lists the peak identities, the mass range of each identified peak, and a color code for each peak. The software can also generate 'dummy' peak ranges within an output range file that can be used within IVAS to provide a means for background-subtracted proximity histogram analysis.
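
    The sketch below illustrates generic background-subtracted peak integration over a full-width-at-half-maximum range, the kind of ranging and quantification described above; it is not PeakWorks code, and all names and values in it are hypothetical.

        # Generic illustration (not PeakWorks itself) of background-subtracted peak
        # integration over a full-width-half-maximum range.
        import numpy as np

        def integrate_peak_fwhm(mass, counts, peak_mass, background_level):
            """Integrate a mass-spectrum peak over its FWHM after subtracting a flat background."""
            net = np.clip(counts - background_level, 0, None)
            i_peak = np.argmin(np.abs(mass - peak_mass))
            half_max = net[i_peak] / 2.0
            # Walk outward from the peak apex until the net signal drops below half maximum.
            lo = i_peak
            while lo > 0 and net[lo - 1] >= half_max:
                lo -= 1
            hi = i_peak
            while hi < len(net) - 1 and net[hi + 1] >= half_max:
                hi += 1
            dx = mass[1] - mass[0]                    # uniform mass bin width
            return float(net[lo:hi + 1].sum() * dx)   # simple rectangle-rule integral

        # Example: a synthetic isotope peak at 27 Da on a flat background of 50 counts.
        mass = np.linspace(26.5, 27.5, 500)
        counts = 50.0 + 1000.0 * np.exp(-(mass - 27.0) ** 2 / (2 * 0.01 ** 2))
        print(integrate_peak_fwhm(mass, counts, peak_mass=27.0, background_level=50.0))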

  8. Peak flow meter (image)

    MedlinePlus

    A peak flow meter is commonly used by a person with asthma to measure the amount of air that can be ... become narrow or blocked due to asthma, peak flow values will drop because the person cannot blow ...

  9. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  10. Magnitude and frequency of floods in Alabama

    USGS Publications Warehouse

    Atkins, J. Brian

    1996-01-01

    Methods of estimating flood magnitudes for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years are described for rural streams in Alabama that are not affected by regulation or urbanization. Flood-frequency characteristics are presented for 198 gaging stations in Alabama having 10 or more years of record through September 1991 that were used in the regional analysis. Regression relations were developed using generalized least-squares regression techniques to estimate flood magnitude and frequency on ungaged streams as a function of the drainage area of a basin. Estimates for sites on gaged streams should be weighted with the gaging-station data presented in the report. Graphical relations of peak discharges to drainage areas are also presented for sites along the Alabama, Black Warrior, Cahaba, Choctawhatchee, Conecuh, and Tombigbee Rivers. Equations for estimating flood magnitudes on ungaged urban streams (taken from a previous report) that use drainage area and percentage of impervious cover as independent variables also are given.

  11. Asteroid magnitudes, UBV colors, and IRAS albedos and diameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1989-01-01

    This paper lists absolute magnitudes and slope parameters for known asteroids numbered through 3318. The values presented are those used in reducing asteroid IR flux data obtained with the IRAS. U-B colors are given for 938 asteroids, and B-V colors are given for 945 asteroids. The IRAS albedos and diameters are tabulated for 1790 asteroids.

  12. The Effects of Reinforcer Magnitude on Timing in Rats

    ERIC Educational Resources Information Center

    Ludvig, Elliot A.; Conover, Kent; Shizgal, Peter

    2007-01-01

    The relation between reinforcer magnitude and timing behavior was studied using a peak procedure. Four rats received multiple consecutive sessions with both low and high levels of brain stimulation reward (BSR). Rats paused longer and had later start times during sessions when their responses were reinforced with low-magnitude BSR. When estimated…

  13. Are Bragg Peaks Gaussian?

    PubMed Central

    Hammouda, Boualem

    2014-01-01

    It is common practice to assume that Bragg scattering peaks have a Gaussian shape. The Gaussian shape function is used to perform most instrumental smearing corrections. Using Monte Carlo ray tracing simulation, the resolution of a realistic small-angle neutron scattering (SANS) instrument is generated reliably. Including a single-crystal sample with large d-spacing, Bragg peaks are produced. Bragg peaks contain contributions from the resolution function and from spread in the sample structure. Results show that Bragg peaks are Gaussian in the resolution-limited condition (with negligible sample spread), while this is not the case when spread in the sample structure is non-negligible. When sample spread contributes, the exponentially modified Gaussian function describes the Bragg peak shape better. This function is characterized by a non-zero third moment (skewness), which makes Bragg peaks asymmetric for broad neutron wavelength spreads. PMID:26601025
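
    For reference, the sketch below evaluates the exponentially modified Gaussian line shape mentioned above: a Gaussian of width sigma convolved with a one-sided exponential of decay constant tau, whose non-zero skewness makes the peak asymmetric. The parameter values are illustrative only.

        # Exponentially modified Gaussian (EMG): Gaussian of width sigma convolved with a
        # one-sided exponential of decay tau, giving the asymmetric shape discussed above.
        import numpy as np
        from scipy.special import erfc

        def emg(x, mu, sigma, tau, amplitude=1.0):
            """Exponentially modified Gaussian evaluated at x."""
            arg = (mu + sigma ** 2 / tau - x) / (np.sqrt(2.0) * sigma)
            return (amplitude / (2.0 * tau)
                    * np.exp((mu - x) / tau + sigma ** 2 / (2.0 * tau ** 2))
                    * erfc(arg))

        x = np.linspace(-5, 10, 1000)
        profile = emg(x, mu=0.0, sigma=1.0, tau=2.0)
        # The mode sits to the right of mu, reflecting the positive skewness.
        print(x[np.argmax(profile)])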

  14. Maximum magnitude earthquakes induced by fluid injection

    NASA Astrophysics Data System (ADS)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated and brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore-pressure increase of the injection operation, and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
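
    A hedged sketch of the stated bound follows: the maximum seismic moment is taken as the injected volume times the modulus of rigidity and converted to moment magnitude with the standard Hanks-Kanamori relation; the rigidity value used is an assumed typical crustal value, not one quoted in the abstract.

        # Upper-bound moment magnitude from the bound described above:
        # M0_max = rigidity * injected volume, then Mw = (2/3) * (log10(M0) - 9.1).
        import math

        def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
            """Upper-bound moment magnitude for a given injected fluid volume.
            The rigidity default is an assumed typical crustal value."""
            max_moment_nm = rigidity_pa * injected_volume_m3        # seismic moment, N*m
            return (2.0 / 3.0) * (math.log10(max_moment_nm) - 9.1)  # Hanks-Kanamori Mw

        # Example: one million cubic meters of injected wastewater (~Mw 5).
        print(round(max_induced_magnitude(1.0e6), 2))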

  15. Estimating Absolute Site Effects

    SciTech Connect

    Malagnini, L; Mayeda, K M; Akinci, A; Bragato, P L

    2004-07-15

    The authors use previously determined direct-wave attenuation functions as well as stable, coda-derived source excitation spectra to isolate the absolute S-wave site effect for the horizontal and vertical components of weak ground motion. They used selected stations in the seismic network of the eastern Alps and find the following: (1) all "hard rock" sites exhibited deamplification phenomena due to absorption at frequencies ranging between 0.5 and 12 Hz (the available bandwidth), on both the horizontal and vertical components; (2) "hard rock" site transfer functions showed large variability at high frequency; (3) vertical-motion site transfer functions show strong frequency dependence; (4) H/V spectral ratios do not reproduce the characteristics of the true horizontal site transfer functions; and (5) traditional, relative site terms obtained by using reference "rock sites" can be misleading in inferring the behaviors of true site transfer functions, since most rock sites have non-flat responses due to shallow heterogeneities resulting from varying degrees of weathering. They also use their stable source spectra to estimate total radiated seismic energy and compare against previous results. They find that the earthquakes in this region exhibit non-constant dynamic stress drop scaling, which gives further support for a fundamental difference in rupture dynamics between small and large earthquakes. To correct the vertical and horizontal S-wave spectra for attenuation, they used detailed regional attenuation functions derived by Malagnini et al. (2002), who determined frequency-dependent geometrical spreading and Q for the region. These corrections account for the gross path effects (i.e., all distance-dependent effects), although the source and site effects are still present in the distance-corrected spectra. The main goal of this study is to isolate the absolute site effect (as a function of frequency) by removing the source spectrum (moment-rate spectrum) from

  16. Peak Experience Project

    ERIC Educational Resources Information Center

    Scott, Daniel G.; Evans, Jessica

    2010-01-01

    This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…

  17. Magnitude and frequency of floods in Arkansas

    USGS Publications Warehouse

    Hodge, Scott A.; Tasker, Gary D.

    1995-01-01

    Methods are presented for estimating the magnitude and frequency of peak discharges of streams in Arkansas. Regression analyses were developed in which a stream's physical and flood characteristics were related. Four sets of regional regression equations were derived to predict peak discharges with selected recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years on streams draining less than 7,770 square kilometers. The regression analyses indicate that size of drainage area, main channel slope, mean basin elevation, and the basin shape factor were the most significant basin characteristics that affect magnitude and frequency of floods. The region of influence method is included in this report. This method is still being improved and is to be considered only as a second alternative to the standard method of producing regional regression equations. This method estimates unique regression equations for each recurrence interval for each ungaged site. The regression analyses indicate that size of drainage area, main channel slope, mean annual precipitation, mean basin elevation, and the basin shape factor were the most significant basin and climatic characteristics that affect magnitude and frequency of floods for this method. Certain recommendations on the use of this method are provided. A method is described for estimating the magnitude and frequency of peak discharges of streams for urban areas in Arkansas. The method is from a nationwide U.S. Geological Survey flood frequency report, which uses urban basin characteristics combined with rural discharges to estimate urban discharges. Annual peak discharges from 204 gaging stations, with drainage areas less than 7,770 square kilometers and at least 10 years of unregulated record, were used in the analysis. These data provide the basis for this analysis and are published in the Appendix of this report as supplemental data. Large rivers such as the Red, Arkansas, White, Black, St. Francis, Mississippi, and

  18. Magnitude and frequency of floods in Washington

    USGS Publications Warehouse

    Cummans, J.E.; Collings, Michael R.; Nasser, Edmund George

    1975-01-01

    Relations are provided to estimate the magnitude and frequency of floods on Washington streams. Annual-peak-flow data from stream gaging stations on unregulated streams having 10 or more years of record were used to determine a log-Pearson Type III frequency curve for each station. Flood magnitudes having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were then related to physical and climatic indices of the drainage basins by multiple-regression analysis using the Biomedical Computer Program BMDO2R. These regression relations are useful for estimating flood magnitudes of the specified recurrence intervals at ungaged or short-record sites. Separate sets of regression equations were defined for western and eastern parts of the State, and the State was further subdivided into 12 regions in which the annual floods exhibit similar flood characteristics. Peak flows are related most significantly in western Washington to drainage-area size and mean annual precipitation. In eastern Washington, they are related most significantly to drainage-area size, mean annual precipitation, and percentage of forest cover. Standard errors of estimate of the estimating relations range from 25 to 129 percent, and the smallest errors are generally associated with the more humid regions.

  19. The Type Ia Supernova Color-Magnitude Relation and Host Galaxy Dust: A Simple Hierarchical Bayesian Model

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey; Scolnic, Daniel; Shariff, Hikmatali; Foley, Ryan; Kirshner, Robert

    2017-01-01

    Inferring peak optical absolute magnitudes of Type Ia supernovae (SN Ia) from distance-independent measures such as their light curve shapes and colors underpins the evidence for cosmic acceleration. SN Ia with broader, slower declining optical light curves are more luminous (“broader-brighter”) and those with redder colors are dimmer. But the “redder-dimmer” color-luminosity relation widely used in cosmological SN Ia analyses confounds its two separate physical origins. An intrinsic correlation arises from the physics of exploding white dwarfs, while interstellar dust in the host galaxy also makes SN Ia appear dimmer and redder. Conventional SN Ia cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (MB vs. B-V) slope βint differs from the host galaxy dust law RB, this convolution results in a specific curve of mean extinguished absolute magnitude vs. apparent color. The derivative of this curve smoothly transitions from βint in the blue tail to RB in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope βapp between βint and RB. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a dataset of SALT2 optical light curve fits of 277 nearby SN Ia at z < 0.10. The conventional linear fit obtains βapp ≈ 3. Our model finds a βint = 2.2 ± 0.3 and a distinct dust law of RB = 3.7 ± 0
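
    The toy Monte Carlo below illustrates the generative picture described above under assumed distributions (Gaussian intrinsic color scatter, exponential host reddening); it is not the authors' fitted model, but it shows how a single linear fit of magnitude on apparent color yields a slope between the intrinsic slope and the dust law.

        # Toy Monte Carlo of the generative picture described above (assumed distributions,
        # not the authors' fitted model): intrinsic color-magnitude slope beta_int plus
        # exponential host-galaxy reddening with dust law R_B, so the apparent
        # magnitude-color slope lands between beta_int and R_B.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000
        beta_int, r_b = 2.2, 3.7           # values quoted above
        c_int = rng.normal(0.0, 0.06, n)   # intrinsic B-V color scatter (assumed width)
        e_bv = rng.exponential(0.07, n)    # host dust reddening (assumed mean)
        m_int = rng.normal(0.0, 0.1, n) + beta_int * c_int   # intrinsic absolute magnitude
        c_app = c_int + e_bv               # apparent color
        m_app = m_int + r_b * e_bv         # extinguished absolute magnitude

        # A single linear fit of magnitude on apparent color gives an intermediate slope.
        beta_app = np.polyfit(c_app, m_app, 1)[0]
        print(f"beta_int={beta_int}, R_B={r_b}, apparent slope ~ {beta_app:.2f}")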

  20. Negative absolute temperature for mobile particles

    NASA Astrophysics Data System (ADS)

    Braun, Simon; Ronzheimer, Philipp; Schreiber, Michael; Hodgman, Sean; Bloch, Immanuel; Schneider, Ulrich

    2013-05-01

    Absolute temperature is usually bound to be strictly positive. However, negative absolute temperature states, where the occupation probability of states increases with their energy, are possible in systems with an upper energy bound. So far, such states have only been demonstrated in localized spin systems with finite, discrete spectra. We realized a negative absolute temperature state for motional degrees of freedom with ultracold bosonic 39K atoms in an optical lattice, by implementing the attractive Bose-Hubbard Hamiltonian. This new state strikingly revealed itself by a quasimomentum distribution that is peaked at maximum kinetic energy. The measured kinetic energy distribution and the extracted negative temperature indicate that the ensemble is close to degeneracy, with coherence over several lattice sites. The state is as stable as a corresponding positive temperature state: The negative temperature stabilizes the system against mean-field collapse driven by negative pressure. Negative temperatures open up new parameter regimes for cold atoms, enabling fundamentally new many-body states. Additionally, they give rise to several counterintuitive effects such as heat engines with above unity efficiency.

  1. System for absolute measurements by interferometric sensors

    NASA Astrophysics Data System (ADS)

    Norton, Douglas A.

    1993-03-01

    The most common problem of interferometric sensors is their inability to measure absolute path imbalance. Presented in this paper is a signal processing system that gives absolute, unambiguous reading of optical path difference for almost any style of interferometric sensor. Key components are a wide band (incoherent) optical source, a polychromator, and FFT electronics. Advantages include no moving parts in the signal processor, no active components at the sensor location, and the use of standard single mode fiber for sensor illumination and signal transmission. Actual absolute path imbalance of the interferometer is determined without using fringe counting or other inferential techniques. The polychromator extracts the interference information that occurs at each discrete wavelength within the spectral band of the optical source. The signal processing consists of analog and digital filtering, Fast Fourier analysis, and a peak detection and interpolation algorithm. This system was originally designed for use in a remote pressure sensing application that employed a totally passive fiber optic interferometer. A performance qualification was made using a Fabry-Perot interferometer and a commercially available laser interferometer to measure the reference displacement.
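
    The following sketch illustrates the principle with assumed numbers (not the instrument described above): a broadband channeled spectrum is Fourier transformed over wavenumber, and the location of the resulting peak, refined by a simple parabolic interpolation, gives the absolute optical path difference directly.

        # Illustrative sketch with assumed numbers: a channeled spectrum I(k) ~ 1 + cos(k*d)
        # is Fourier transformed over wavenumber k; the peak location gives the absolute
        # optical path difference d, refined here by parabolic interpolation.
        import numpy as np

        d_true = 85.0e-6                                    # optical path difference, meters
        k = np.linspace(6.0e6, 8.0e6, 4096)                 # wavenumbers (rad/m) across the band
        spectrum = 1.0 + np.cos(k * d_true)                 # channeled spectrum at the polychromator

        # FFT over wavenumber: the fringe appears at "frequency" d / (2*pi).
        amp = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
        freqs = np.fft.rfftfreq(k.size, d=(k[1] - k[0]))    # cycles per (rad/m)
        i = int(np.argmax(amp))

        # Parabolic interpolation of the peak for sub-bin accuracy.
        y0, y1, y2 = amp[i - 1], amp[i], amp[i + 1]
        shift = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        d_est = (freqs[i] + shift * (freqs[1] - freqs[0])) * 2 * np.pi
        print(f"true OPD {d_true * 1e6:.2f} um, estimated {d_est * 1e6:.2f} um")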

  2. Microfabricated Collector-Generator Electrode Sensor for Measuring Absolute pH and Oxygen Concentrations.

    PubMed

    Dengler, Adam K; Wightman, R Mark; McCarty, Gregory S

    2015-10-20

    Fast-scan cyclic voltammetry (FSCV) has attracted attention for studying in vivo neurotransmission due to its subsecond temporal resolution, selectivity, and sensitivity. Traditional FSCV measurements use background subtraction to isolate changes in the local electrochemical environment, providing detailed information on fluctuations in the concentration of electroactive species. This background subtraction removes information about constant or slowly changing concentrations. However, determination of background concentrations is still important for understanding functioning brain tissue. For example, neural activity is known to consume oxygen and produce carbon dioxide which affects local levels of oxygen and pH. Here, we present a microfabricated microelectrode array which uses FSCV to detect the absolute levels of oxygen and pH in vitro. The sensor is a collector-generator electrode array with carbon microelectrodes spaced 5 μm apart. In this work, a periodic potential step is applied at the generator producing transient local changes in the electrochemical environment. The collector electrode continuously performs FSCV enabling these induced changes in concentration to be recorded with the sensitivity and selectivity of FSCV. A negative potential step applied at the generator produces a transient local pH shift at the collector. The generator-induced pH signal is detected using FSCV at the collector and correlated to absolute solution pH by postcalibration of the anodic peak position. In addition, in oxygenated solutions a negative potential step at the generator produces hydrogen peroxide by reducing oxygen. Hydrogen peroxide is detected with FSCV at the collector electrode, and the magnitude of the oxidative peak is proportional to absolute oxygen concentrations. Oxygen interference on the pH signal is minimal and can be accounted for with a postcalibration.

  3. Significance of periodogram peaks

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Guy, Leanne; Zucker, Shay

    2016-10-01

    Three versions of significance measures, or False Alarm Probabilities (FAPs), for periodogram peaks are presented and compared for sinusoidal and box-like signals, with specific application to large-scale surveys in mind.
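
    As a concrete illustration of attaching a false-alarm probability to the highest periodogram peak, the sketch below uses astropy's Lomb-Scargle implementation with its "baluev" FAP approximation; the specific significance measures compared in the paper may differ, and the data here are synthetic.

        # One common way to attach a false-alarm probability to the highest periodogram peak,
        # using astropy's Lomb-Scargle implementation on synthetic irregularly sampled data.
        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0, 100, 300))                  # irregular sampling times (days)
        y = 0.3 * np.sin(2 * np.pi * t / 7.5) + rng.normal(0, 0.5, t.size)

        ls = LombScargle(t, y)
        frequency, power = ls.autopower()
        fap = ls.false_alarm_probability(power.max(), method="baluev")
        print(f"best period: {1 / frequency[np.argmax(power)]:.2f} d, FAP: {fap:.2e}")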

  4. Peak power ratio generator

    DOEpatents

    Moyer, R.D.

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  5. Peak power ratio generator

    DOEpatents

    Moyer, Robert D.

    1985-01-01

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  6. Pikes Peak, Colorado

    USGS Publications Warehouse

    Brunstein, Craig; Quesenberry, Carol; Davis, John; Jackson, Gene; Scott, Glenn R.; D'Erchia, Terry D.; Swibas, Ed; Carter, Lorna; McKinney, Kevin; Cole, Jim

    2006-01-01

    For 200 years, Pikes Peak has been a symbol of America's Western Frontier--a beacon that drew prospectors during the great 1859-60 Gold Rush to the 'Pikes Peak country,' the scenic destination for hundreds of thousands of visitors each year, and an enduring source of pride for cities in the region, the State of Colorado, and the Nation. November 2006 marks the 200th anniversary of the Zebulon M. Pike expedition's first sighting of what has become one of the world's most famous mountains--Pikes Peak. In the decades following that sighting, Pikes Peak became symbolic of America's Western Frontier, embodying the spirit of Native Americans, early explorers, trappers, and traders who traversed the vast uncharted wilderness of the Western Great Plains and the Southern Rocky Mountains. High-quality printed paper copies of this poster are available at no cost from Information Services, U.S. Geological Survey (1-888-ASK-USGS).

  7. Peak Oil, Peak Coal and Climate Change

    NASA Astrophysics Data System (ADS)

    Murray, J. W.

    2009-05-01

    Research on future climate change is driven by the family of scenarios developed for the IPCC assessment reports. These scenarios create projections of future energy demand using different story lines consisting of government policies, population projections, and economic models. None of these scenarios consider resources to be limiting. In many of these scenarios oil production is still increasing to 2100. Resource limitation (in a geological sense) is a real possibility that needs more serious consideration. The concept of 'Peak Oil' has been discussed since M. King Hubbert proposed in 1956 that US oil production would peak in 1970. His prediction was accurate. This concept is about production rate not reserves. For many oil producing countries (and all OPEC countries) reserves are closely guarded state secrets and appear to be overstated. Claims that the reserves are 'proven' cannot be independently verified. Hubbert's Linearization Model can be used to predict when half the ultimate oil will be produced and what the ultimate total cumulative production (Qt) will be. US oil production can be used as an example. This conceptual model shows that 90% of the ultimate US oil production (Qt = 225 billion barrels) will have occurred by 2011. This approach can then be used to suggest that total global production will be about 2200 billion barrels and that the half way point will be reached by about 2010. This amount is about 5 to 7 times less than assumed by the IPCC scenarios. The decline of Non-OPEC oil production appears to have started in 2004. Of the OPEC countries, only Saudi Arabia may have spare capacity, but even that is uncertain, because of lack of data transparency. The concept of 'Peak Coal' is more controversial, but even the US National Academy Report in 2007 concluded only a small fraction of previously estimated reserves in the US are actually minable reserves and that US reserves should be reassessed using modern methods. British coal production can be
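
    A sketch of the Hubbert linearization mentioned above is given below: for logistic growth, the ratio of annual to cumulative production P/Q falls linearly with Q, so a straight-line fit extrapolated to P/Q = 0 estimates the ultimate cumulative production Qt. The production history used is synthetic, not real data.

        # Hubbert linearization: P/Q = a * (1 - Q/Qt) is linear in Q, so a straight-line fit
        # extrapolated to P/Q = 0 gives the ultimate cumulative production Qt.
        import numpy as np

        def hubbert_ultimate(annual_production, cumulative_production):
            """Estimate ultimate recoverable resource Qt from a P/Q versus Q straight-line fit."""
            q = np.asarray(cumulative_production, dtype=float)
            ratio = np.asarray(annual_production, dtype=float) / q
            slope, intercept = np.polyfit(q, ratio, 1)
            return -intercept / slope      # Q at which the fitted line reaches P/Q = 0

        # Synthetic logistic production history (billions of barrels), not real data.
        qt_true, a = 225.0, 0.06
        q = np.linspace(40.0, 160.0, 30)            # cumulative production over the fitted years
        p = a * q * (1.0 - q / qt_true)             # implied annual production
        print(hubbert_ultimate(p, q))               # recovers ~225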

  8. Comparison of TV magnitudes and visual magnitudes of meteors

    NASA Astrophysics Data System (ADS)

    Shigeno, Yoshihiko; Toda, Masayuki

    2008-08-01

    It is generally believed that a meteor, which emits a large amount of infrared radiation, can be recorded as brighter than it actually is by infrared-sensitive image intensifiers (I.I.) or CCDs. We observed meteors with three methodologies to determine their magnitudes: 1) I.I. with an attached filter that has the same spectral response as the human eye in night vision, 2) I.I. without the filter, and 3) visual observation. A total of 31 members of the astronomical club at Meiji University observed 50 Perseid meteors, 19 Geminid meteors, and 44 sporadic meteors, and the results were tabulated. On average, I.I. recorded meteors as brighter than visual observation by the equivalent of 0.5 magnitude for Perseids, 1.0 for Geminids, and 0.5 for sporadic meteors. With the filter matching the night-vision spectral response of the human eye, I.I. gave almost the same magnitudes as visual observation. We also found that a bright meteor of negative magnitude can be recorded by I.I. as brighter than it appears to the eye; in several examples, I.I. recorded meteors of about -1 visual magnitude as brighter by about three magnitudes. This is probably because bright meteors of negative magnitude emit more infrared radiation, so their recorded brightness is amplified.

  9. Integrated Circuit Stellar Magnitude Simulator

    ERIC Educational Resources Information Center

    Blackburn, James A.

    1978-01-01

    Describes an electronic circuit which can be used to demonstrate the stellar magnitude scale. Six rectangular light-emitting diodes with independently adjustable duty cycles represent stars of magnitudes 1 through 6. Experimentally verifies the logarithmic response of the eye. (Author/GA)

  10. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, accurately estimating earthquake magnitude in the early nucleation stage of an earthquake is difficult because only a few stations have been triggered and the recorded seismic waveforms are short. One feasible way to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust, and unsaturated approach to estimating earthquake magnitudes. In this new method, the EEW system initially gives a lower-bound magnitude within a time window of a few seconds and then updates the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: a pure P-wave time window (PTW) and a whole-wave time window after the P-wave arrival (WTW). The peak displacement amplitude in the vertical component was measured in PTWs and WTWs of 1- to 10-s length. Linear regression analyses were performed to find the empirical relationships between peak displacement, hypocentral distance, and magnitude using earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-second time window. Therefore, for magnitude estimation we suggest that the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTWs.
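
    The sketch below illustrates the kind of empirical relation described above, log10(Pd) = a·M + b·log10(R) + c, fitted by least squares and inverted to estimate magnitude from an observed peak displacement; the coefficients and the synthetic "catalog" are assumptions, not the values derived in the study.

        # Hedged sketch of an empirical peak-displacement magnitude relation:
        # log10(Pd) = a*M + b*log10(R) + c, fitted by least squares and then inverted.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 400
        mag = rng.uniform(5.5, 7.5, n)                       # synthetic catalog magnitudes
        dist = rng.uniform(20.0, 200.0, n)                   # hypocentral distances (km)
        a_true, b_true, c_true = 0.8, -1.4, -3.0             # assumed "true" coefficients
        log_pd = a_true * mag + b_true * np.log10(dist) + c_true + rng.normal(0, 0.2, n)

        # Least-squares fit of the three coefficients.
        design = np.column_stack([mag, np.log10(dist), np.ones(n)])
        (a, b, c), *_ = np.linalg.lstsq(design, log_pd, rcond=None)

        def estimate_magnitude(peak_displacement, distance_km):
            """Invert the fitted relation to estimate magnitude from one station."""
            return (np.log10(peak_displacement) - b * np.log10(distance_km) - c) / a

        print(estimate_magnitude(peak_displacement=0.5, distance_km=60.0))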

  11. Monochromator-Based Absolute Calibration of Radiation Thermometers

    NASA Astrophysics Data System (ADS)

    Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Hartmann, J.

    2011-08-01

    A monochromator integrating-sphere-based spectral comparator facility has been developed to calibrate standard radiation thermometers in terms of the absolute spectral radiance responsivity, traceable to the PTB cryogenic radiometer. The absolute responsivity calibration has been improved using a 75 W xenon lamp with a reflective mirror and imaging optics to a relative standard uncertainty at the peak wavelength of approximately 0.17 % (k = 1). Via a relative measurement of the out-of-band responsivity, the spectral responsivity of radiation thermometers can be fully characterized. To verify the calibration accuracy, the absolutely calibrated radiation thermometer is used to measure Au and Cu freezing-point temperatures and then to compare the obtained results with the values obtained by absolute methods, resulting in T − T90 values of +52 mK and -50 mK for the gold and copper fixed points, respectively.

  12. Correlation-Peak Imaging

    NASA Astrophysics Data System (ADS)

    Ziegler, A.; Metzler, A.; Köckenberger, W.; Izquierdo, M.; Komor, E.; Haase, A.; Décorps, M.; von Kienlin, M.

    1996-08-01

    Identification and quantitation in conventional 1H spectroscopic imaging in vivo is often hampered by the small chemical-shift range. To improve the spectral resolution of spectroscopic imaging, homonuclear two-dimensional correlation spectroscopy has been combined with phase encoding of the spatial dimensions. From the theoretical description of the coherence-transfer signal in the Fourier-transform domain, a comprehensive acquisition and processing strategy is presented that includes optimization of the width and the position of the acquisition windows, matched filtering of the signal envelope, and graphical presentation of the cross peak of interest. The procedure has been applied to image the spatial distribution of the correlation peaks from specific spin systems in the hypocotyl of castor bean (Ricinus communis) seedlings. Despite the overlap of many resonances, correlation-peak imaging made it possible to observe a number of proton resonances, such as those of sucrose, β-glucose, glutamine/glutamate, lysine, and arginine.

  13. Cryogenic, Absolute, High Pressure Sensor

    NASA Technical Reports Server (NTRS)

    Chapman, John J. (Inventor); Shams, Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high-pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute-pressure-sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate, which has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  14. Hale Central Peak

    NASA Technical Reports Server (NTRS)

    2004-01-01

    19 September 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the mountains that make up the central peak region of Hale Crater, located near 35.8oS, 36.5oW. Dark, smooth-surfaced sand dunes are seen to be climbing up the mountainous slopes. The central peak of a crater consists of rock brought up during the impact from below the crater floor. This autumn image is illuminated from the upper left and covers an area approximately 3 km (1.9 mi) across.

  15. Make peak flow a habit!

    MedlinePlus

    Asthma - make peak flow a habit; Reactive airway disease - peak flow; Bronchial asthma - peak flow ... your airways are narrowed and blocked due to asthma, your peak flow values drop. You can check ...

  16. Bidirectional Modulation of Numerical Magnitude.

    PubMed

    Arshad, Qadeer; Nigmatullina, Yuliya; Nigmatullin, Ramil; Asavarut, Paladd; Goga, Usman; Khan, Sarah; Sander, Kaija; Siddiqui, Shuaib; Roberts, R E; Cohen Kadosh, Roi; Bronstein, Adolfo M; Malhotra, Paresh A

    2016-05-01

    Numerical cognition is critical for modern life; however, the precise neural mechanisms underpinning numerical magnitude allocation in humans remain obscure. Based upon previous reports demonstrating the close behavioral and neuro-anatomical relationship between number allocation and spatial attention, we hypothesized that these systems would be subject to similar control mechanisms, namely dynamic interhemispheric competition. We employed a physiological paradigm, combining visual and vestibular stimulation, to induce interhemispheric conflict and subsequent unihemispheric inhibition, as confirmed by transcranial direct current stimulation (tDCS). This allowed us to demonstrate the first systematic bidirectional modulation of numerical magnitude toward either higher or lower numbers, independently of either eye movements or spatial attention mediated biases. We incorporated both our findings and those from the most widely accepted theoretical framework for numerical cognition to present a novel unifying computational model that describes how numerical magnitude allocation is subject to dynamic interhemispheric competition. That is, numerical allocation is continually updated in a contextual manner based upon relative magnitude, with the right hemisphere responsible for smaller magnitudes and the left hemisphere for larger magnitudes.

  17. Bidirectional Modulation of Numerical Magnitude

    PubMed Central

    Arshad, Qadeer; Nigmatullina, Yuliya; Nigmatullin, Ramil; Asavarut, Paladd; Goga, Usman; Khan, Sarah; Sander, Kaija; Siddiqui, Shuaib; Roberts, R. E.; Cohen Kadosh, Roi; Bronstein, Adolfo M.; Malhotra, Paresh A.

    2016-01-01

    Numerical cognition is critical for modern life; however, the precise neural mechanisms underpinning numerical magnitude allocation in humans remain obscure. Based upon previous reports demonstrating the close behavioral and neuro-anatomical relationship between number allocation and spatial attention, we hypothesized that these systems would be subject to similar control mechanisms, namely dynamic interhemispheric competition. We employed a physiological paradigm, combining visual and vestibular stimulation, to induce interhemispheric conflict and subsequent unihemispheric inhibition, as confirmed by transcranial direct current stimulation (tDCS). This allowed us to demonstrate the first systematic bidirectional modulation of numerical magnitude toward either higher or lower numbers, independently of either eye movements or spatial attention mediated biases. We incorporated both our findings and those from the most widely accepted theoretical framework for numerical cognition to present a novel unifying computational model that describes how numerical magnitude allocation is subject to dynamic interhemispheric competition. That is, numerical allocation is continually updated in a contextual manner based upon relative magnitude, with the right hemisphere responsible for smaller magnitudes and the left hemisphere for larger magnitudes. PMID:26879093

  18. The Color-Magnitude Distribution of Small Jupiter Trojans

    NASA Astrophysics Data System (ADS)

    Wong, Ian; Brown, Michael E.

    2015-12-01

    We present an analysis of survey observations targeting the leading L4 Jupiter Trojan cloud near opposition using the wide-field Suprime-Cam CCD camera on the 8.2 m Subaru Telescope. The survey covered about 38 deg² of sky and imaged 147 fields spread across a wide region of the L4 cloud. Each field was imaged in both the g′ and the i′ band, allowing for the measurement of g - i color. We detected 557 Trojans in the observed fields, ranging in absolute magnitude from H = 10.0 to H = 20.3. We fit the total magnitude distribution to a broken power law and show that the power-law slope rolls over from 0.45 ± 0.05 to 0.36 (+0.05, −0.09) at a break magnitude of Hb = 14.93 (+0.73, −0.88). Combining the best-fit magnitude distribution of faint objects from our survey with an analysis of the magnitude distribution of bright objects listed in the Minor Planet Center catalog, we obtain the absolute magnitude distribution of Trojans over the entire range from H = 7.2 to H = 16.4. We show that the g - i color of Trojans decreases with increasing magnitude. In the context of the less-red and red color populations, as classified in Wong et al. using photometric and spectroscopic data, we demonstrate that the observed trend in color for the faint Trojans is consistent with the expected trend derived from extrapolation of the best-fit color population magnitude distributions for bright cataloged Trojans. This indicates a steady increase in the relative number of less-red objects with decreasing size. Finally, we interpret our results using collisional modeling and propose several hypotheses for the color evolution of the Jupiter Trojan population. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
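
    The sketch below evaluates a broken power-law magnitude distribution with the slopes and break magnitude quoted above; the normalization, and the choice to write it as a simple two-branch form, are illustrative assumptions.

        # Broken power-law magnitude distribution, N(H) ~ 10**(alpha*H), with the slope
        # changing at the break magnitude quoted above. Normalization is arbitrary.
        import numpy as np

        def broken_power_law(h, alpha1=0.45, alpha2=0.36, h_break=14.93, n_break=1.0):
            """Relative number of objects at absolute magnitude h, with a slope break at h_break."""
            h = np.asarray(h, dtype=float)
            bright = n_break * 10.0 ** (alpha1 * (h - h_break))   # branch for h <= h_break
            faint = n_break * 10.0 ** (alpha2 * (h - h_break))    # branch for h > h_break
            return np.where(h <= h_break, bright, faint)

        for h in (10.0, 14.93, 16.4):
            print(h, broken_power_law(h))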

  19. Impact Crater with Peak

    NASA Technical Reports Server (NTRS)

    2002-01-01

    (Released 14 June 2002) The Science This THEMIS visible image shows a classic example of a martian impact crater with a central peak. Central peaks are common in large, fresh craters on both Mars and the Moon. This peak formed during the extremely high-energy impact cratering event. In many martian craters the central peak has been either eroded or buried by later sedimentary processes, so the presence of a peak in this crater indicates that the crater is relatively young and has experienced little degradation. Observations of large craters on the Earth and the Moon, as well as computer modeling of the impact process, show that the central peak contains material brought from deep beneath the surface. The material exposed in these peaks will provide an excellent opportunity to study the composition of the martian interior using THEMIS multi-spectral infrared observations. The ejecta material around the crater is well preserved, again indicating relatively little modification of this landform since its initial creation. The inner walls of this approximately 18 km diameter crater show complex slumping that likely occurred during the impact event. Since that time there has been some downslope movement of material to form the small chutes and gullies that can be seen on the inner crater wall. Small (50-100 m) mega-ripples composed of mobile material can be seen on the floor of the crater. Much of this material may have come from the walls of the crater itself, or may have been blown into the crater by the wind. The Story When a meteor smacked into the surface of Mars with extremely high energy, pow! Not only did it punch an 11-mile-wide crater in the smoother terrain, it created a central peak in the middle of the crater. This peak forms kind of on the 'rebound.' You can see this same effect if you drop a single drop of milk into a glass of milk. With craters, in the heat and fury of the impact, some of the land material can even liquefy. Central peaks like the one

  20. An Integrated Model of Choices and Response Times in Absolute Identification

    ERIC Educational Resources Information Center

    Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew

    2008-01-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…

  1. The Effects of Exercise Intensity vs. Metabolic State on the Variability and Magnitude of Left Ventricular Twist Mechanics during Exercise.

    PubMed

    Armstrong, Craig; Samuel, Jake; Yarlett, Andrew; Cooper, Stephen-Mark; Stembridge, Mike; Stöhr, Eric J

    2016-01-01

    Increased left ventricular (LV) twist and untwisting rate (LV twist mechanics) are essential responses of the heart to exercise. However, a large variability in LV twist mechanics during exercise has previously been observed, which complicates the interpretation of results. This study aimed to determine some of the physiological sources of variability in LV twist mechanics during exercise. Sixteen healthy males (age: 22 ± 4 years, V̇O2peak: 45.5 ± 6.9 ml∙kg-1∙min-1, range of individual anaerobic threshold (IAT): 32-69% of V̇O2peak) were assessed at rest and during exercise at i) the same relative exercise intensity (40%peak), ii) 2% above IAT, and iii) 40%peak with hypoxia (40%peak+HYP). LV volumes were not significantly different between exercise conditions (P > 0.05). However, the mean margin of error of LV twist was significantly lower (F2,47 = 2.08, P < 0.05) during 40%peak compared with IAT (3.0 vs. 4.1 degrees). Despite the same workload and similar LV volumes, hypoxia increased LV twist and untwisting rate (P < 0.05), but the mean margin of error remained similar to that during 40%peak (3.2 degrees, P > 0.05). Overall, LV twist mechanics were linearly related to rate pressure product. During exercise, the intra-individual variability of LV twist mechanics is smaller at the same relative exercise intensity compared with IAT. However, the absolute magnitude (degrees) of LV twist mechanics appears to be associated with the prevailing rate pressure product. Exercise tests that evaluate LV twist mechanics should be standardised by relative exercise intensity, and rate pressure product should be taken into account when interpreting results.

  2. INDIAN PEAKS WILDERNESS, COLORADO.

    USGS Publications Warehouse

    Pearson, Robert C.; Speltz, Charles N.

    1984-01-01

    The Indian Peaks Wilderness northwest of Denver is partly within the Colorado Mineral Belt, and the southeast part of it contains all the geologic characteristics associated with the several nearby mining districts. Two deposits have demonstrated mineral resources, one of copper and the other of uranium; both are surrounded by areas with probable potential. Two other areas have probable resource potential for copper, gold, and possibly molybdenum. Detailed gravity and magnetic studies in the southeast part of the Indian Peaks Wilderness might detect subsurface igneous bodies that may be mineralized. Physical exploration such as drilling would be necessary to determine more precisely the copper resources at the Roaring Fork locality and the uranium resources at Wheeler Basin.

  3. PEAK LIMITING AMPLIFIER

    DOEpatents

    Goldsworthy, W.W.; Robinson, J.B.

    1959-03-31

    A peak voltage amplitude limiting system adapted for use with a cascade type amplifier is described. In its detailed aspects, the invention includes an amplifier having at least a first triode tube and a second triode tube, the cathode of the second tube being connected to the anode of the first tube. A peak limiter triode tube has its control grid coupled to the anode of the second tube and its anode connected to the cathode of the second tube. The operation of the limiter is controlled by a bias voltage source connected to the control grid of the limiter tube, and the output of the system is taken from the anode of the second tube.

  4. Database application for absolute spectrophotometry

    NASA Astrophysics Data System (ADS)

    Bochkov, Valery V.; Shumko, Sergiy

    2002-12-01

    A 32-bit database application with a multi-document interface for Windows has been developed to calculate absolute energy distributions of observed spectra. The original database contains wavelength-calibrated observed spectra that have already passed through apparatus reductions such as flat-fielding and subtraction of background and apparatus noise. Absolute energy distributions of observed spectra are defined on a unique scale by registering them simultaneously with an artificial intensity standard. Observations of a sequence of spectrophotometric standards are used to define the absolute energy of the artificial standard, and observations of spectrophotometric standards are used to define the optical extinction at selected moments. An FFT algorithm implemented in the application allows convolution (or deconvolution) of spectra with a user-defined PSF. The object-oriented interface has been created using C++ libraries. A client/server model with Windows Socket functionality based on the TCP/IP protocol is used; the application supports Dynamic Data Exchange conversation in server mode and uses Microsoft Exchange communication facilities.
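
    The following sketch shows a minimal FFT-based convolution of a spectrum with a user-defined PSF, the kind of operation the application performs; the Gaussian PSF and the spectrum are synthetic, and the helper name is hypothetical.

        # Minimal FFT-based convolution of a spectrum with a user-defined PSF.
        # The Gaussian PSF and the absorption-line spectrum below are synthetic.
        import numpy as np

        def convolve_with_psf(spectrum, psf):
            """Circular convolution via FFT; the PSF is normalized to unit area and
            centered at index 0 so the convolution does not shift the spectrum."""
            psf = np.asarray(psf, dtype=float) / np.sum(psf)
            kernel = np.zeros(len(spectrum))
            kernel[:len(psf)] = psf
            kernel = np.roll(kernel, -int(np.argmax(psf)))   # put the PSF peak at index 0
            return np.real(np.fft.ifft(np.fft.fft(spectrum) * np.fft.fft(kernel)))

        wavelength = np.linspace(4000.0, 5000.0, 2000)       # angstroms
        spectrum = 1.0 - 0.8 * np.exp(-(wavelength - 4500.0) ** 2 / (2 * 1.5 ** 2))  # narrow line
        psf = np.exp(-(np.arange(-50, 51)) ** 2 / (2 * 8.0 ** 2))                    # instrumental blur
        smeared = convolve_with_psf(spectrum, psf)
        print(f"line depth before/after smearing: {1 - spectrum.min():.2f} / {1 - smeared.min():.2f}")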

  5. Absolute classification with unsupervised clustering

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, D. A.

    1992-01-01

    An absolute classification algorithm is proposed in which the class definition through training samples or otherwise is required only for a particular class of interest. The absolute classification is considered as a problem of unsupervised clustering when one cluster is known initially. The definitions and statistics of the other classes are automatically developed through the weighted unsupervised clustering procedure, which is developed to keep the cluster corresponding to the class of interest from losing its identity as the class of interest. Once all the classes are developed, a conventional relative classifier such as the maximum-likelihood classifier is used in the classification.

  6. Understanding Magnitudes to Understand Fractions

    ERIC Educational Resources Information Center

    Gabriel, Florence

    2016-01-01

    Fractions are known to be difficult to learn and difficult to teach, yet they are vital for students to have access to further mathematical concepts. This article uses evidence to support teachers employing teaching methods that focus on the conceptual understanding of the magnitude of fractions.

  7. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.

  8. Peak-Finding Algorithms.

    PubMed

    Hung, Jui-Hung; Weng, Zhiping

    2017-03-01

    Microarray and next-generation sequencing technologies have greatly expedited the discovery of genomic DNA that can be enriched using various biochemical methods. Chromatin immunoprecipitation (ChIP) is a general method for enriching chromatin fragments that are specifically recognized by an antibody. The resulting DNA fragments can be assayed by microarray (ChIP-chip) or sequencing (ChIP-seq). This introduction focuses on ChIP-seq data analysis. The first step of analyzing ChIP-seq data is identifying regions in the genome that are enriched in a ChIP sample; these regions are called peaks.
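
    As a toy illustration of what calling peaks means operationally (real peak callers model background, duplicate reads, and fragment shift far more carefully), one can scan a read-coverage track and report contiguous regions above a threshold; all names and numbers below are hypothetical:

      import numpy as np

      def call_peaks(coverage, threshold):
          """Return (start, end) index intervals where coverage exceeds threshold (end exclusive)."""
          above = np.concatenate(([False], coverage > threshold, [False]))
          edges = np.flatnonzero(np.diff(above.astype(int)))
          return list(zip(edges[::2], edges[1::2]))

      # Synthetic ChIP-seq coverage with an enriched region around position 500.
      rng = np.random.default_rng(0)
      cov = rng.poisson(3, 1000).astype(float)
      cov[480:520] += rng.poisson(20, 40)

      print(call_peaks(cov, threshold=10))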

  9. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  10. Stress magnitudes in the crust: constraints from stress orientation and relative magnitude data

    USGS Publications Warehouse

    Zoback, M.L.; Magee, M.

    1991-01-01

    The World Stress Map Project is a global cooperative effort to compile and interpret data on the orientation and relative magnitudes of the contemporary in situ tectonic stress field in the Earth's lithosphere. The intraplate stress field in both the oceans and continents is largely compressional with one or both of the horizontal stresses greater than the vertical stress. The regionally uniform horizontal intraplate stress orientations are generally consistent with either relative or absolute plate motions indicating that plate-boundary forces dominate the stress distribution within the plates. Current models of stresses due to whole mantle flow inferred from seismic tomography models predict a general compressional stress state within continents but do not match the broad-scale horizontal stress orientations. The broad regionally uniform intraplate stress orientations are best correlated with compressional plate-boundary forces and the geometry of the plate boundaries. -from Authors

  11. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    A gas-driven shock tube was used to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  12. Relativistic Absolutism in Moral Education.

    ERIC Educational Resources Information Center

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  13. Absolute Standards for Climate Measurements

    NASA Astrophysics Data System (ADS)

    Leckey, J.

    2016-10-01

    In a world of changing climate, political uncertainty, and ever-changing budgets, the benefit of measurements traceable to SI standards increases by the day. To truly resolve climate change trends on a decadal time scale, on-orbit measurements need to be referenced to something that is both absolute and unchanging. One such mission is the Climate Absolute Radiance and Refractivity Observatory (CLARREO), which will measure a variety of climate variables with unprecedented accuracy to definitively quantify climate change. In the CLARREO mission, we will utilize phase-change cells, in which a material is melted, to calibrate the temperature of a blackbody that can then be observed by a spectrometer. A material's melting point is an unchanging physical constant that, through a series of transfers, can ultimately calibrate a spectrometer on an absolute scale. CLARREO consists of two primary instruments: an infrared (IR) spectrometer and a reflected solar (RS) spectrometer. The mission will contain orbiting radiometers with sufficient accuracy to calibrate other space-based instrumentation and thus transfer the absolute traceability. The status of various mission options will be presented.

  14. An empirical evolutionary magnitude estimation for early warning of earthquakes

    NASA Astrophysics Data System (ADS)

    Chen, Da-Yi; Wu, Yih-Min; Chin, Tai-Lin

    2017-03-01

    The earthquake early warning (EEW) system has difficulty providing consistent magnitude estimates in the early stage of an earthquake because only a few stations have been triggered and only a few seismic signals have been recorded. One feasible way to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the recorded waveforms after the P-wave arrival. However, for a large-magnitude earthquake (Mw > 7.0), the time needed to complete the whole rupture of the corresponding fault may be very long, and the magnitude may not be correctly predicted from the initial portion of the seismograms. To estimate the magnitude of a large earthquake in real time, the amplitude parameters should be updated with the ongoing waveforms instead of adopting amplitude contents in a predefined fixed-length time window, since the latter may underestimate the magnitude of large events. In this paper, we propose a fast, robust and less-saturated approach to estimate earthquake magnitudes. The EEW system initially gives a lower bound of the magnitude within a time window of a few seconds and then updates the magnitude with less saturation by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: a P-wave time window (PTW) after the P-wave arrival, and a whole-wave time window (WTW) after the P-wave arrival, which may include both P and S waves. One- to ten-second time windows for both PTW and WTW are considered to measure the peak ground displacement from the vertical component of the waveforms. Linear regression analyses are run at each time step (1- to 10-s time intervals) to find the empirical relationships among peak ground displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 in Taiwan with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation than the PTW.
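
    As an illustrative sketch of the kind of regression described above, a log-linear relation of the assumed form M = a log10(Pd) + b log10(R) + c can be fitted by least squares; the coefficients, data values, and variable names here are hypothetical and are not results from the paper:

      import numpy as np

      # Hypothetical training data: peak ground displacement Pd (cm),
      # hypocentral distance R (km), and catalog magnitude M for past events.
      Pd = np.array([0.8, 2.5, 0.1, 5.0, 0.4])
      R  = np.array([30.0, 25.0, 80.0, 15.0, 60.0])
      M  = np.array([5.8, 6.4, 5.5, 6.9, 5.7])

      # Assumed regression form: M = a*log10(Pd) + b*log10(R) + c
      X = np.column_stack([np.log10(Pd), np.log10(R), np.ones_like(Pd)])
      coef, *_ = np.linalg.lstsq(X, M, rcond=None)
      a, b, c = coef

      # Predict magnitude for a new observation as the time window is extended
      # and Pd is updated with the ongoing waveform.
      M_est = a * np.log10(1.2) + b * np.log10(40.0) + c
      residuals = M - X @ coef
      print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, M_est={M_est:.2f}")
      print("std of residuals:", residuals.std(ddof=3))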

  15. Tectonic stress - Models and magnitudes

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Bergman, E. A.; Richardson, R. M.

    1980-01-01

    It is shown that global data on directions of principal stresses in plate interiors can serve as a test of possible plate tectonic force models. Such tests performed to date favor force models in which ridge pushing forces play a significant role. For such models the general magnitude of regional deviatoric stresses is comparable to the 200-300 bar compressive stress exerted by spreading ridges. An alternative approach to estimating magnitudes of regional deviatoric stresses from stress orientations is to seek regions of local stress either demonstrably smaller than or larger than the regional stresses. The regional stresses in oceanic intraplate regions are larger than the 100-bar compression exerted by the Ninetyeast Ridge and less than the bending stresses (not less than 1 kbar) beneath Hawaii.

  16. Subject position affects EEG magnitudes.

    PubMed

    Rice, Justin K; Rorden, Christopher; Little, Jessica S; Parra, Lucas C

    2013-01-01

    EEG (electroencephalography) has been used for decades in thousands of research studies and is today a routine clinical tool despite the small magnitude of measured scalp potentials. It is widely accepted that the currents originating in the brain are strongly influenced by the high resistivity of skull bone, but it is less well known that the thin layer of CSF (cerebrospinal fluid) has perhaps an even more important effect on EEG scalp magnitude by spatially blurring the signals. Here it is shown that brain shift and the resulting small changes in CSF layer thickness, induced by changing the subject's position, have a significant effect on EEG signal magnitudes in several standard visual paradigms. For spatially incoherent high-frequency activity the effect produced by switching from prone to supine can be dramatic, increasing occipital signal power by several times for some subjects (on average 80%). MRI measurements showed that the occipital CSF layer between the brain and skull decreases by approximately 30% in thickness when a subject moves from prone to supine position. A multiple dipole model demonstrated that this can indeed lead to occipital EEG signal power increases in the same direction and order of magnitude as those observed here. These results suggest that future EEG studies should control for subjects' posture, and that some studies may consider placing their subjects into the most favorable position for the experiment. These findings also imply that special consideration should be given to EEG measurements from subjects with brain atrophy due to normal aging or neurodegenerative diseases, since the resulting increase in CSF layer thickness could profoundly decrease scalp potential measurements.

  17. A catalog of observed nuclear magnitudes of Jupiter family comets

    NASA Astrophysics Data System (ADS)

    Tancredi, G.; Fernández, J. A.; Rickman, H.; Licandro, J.

    2000-10-01

    A catalog of a sample of 105 Jupiter family (JF) comets (defined as those with Tisserand constants T > 2 and orbital periods P < 20 yr) is presented with our "best estimates" of their absolute nuclear magnitudes H_N = V(1,0,0). The catalog includes all the nuclear magnitudes reported after 1950 until August 1998 that appear in the International Comet Quarterly Archive of Cometary Photometric Data, the Minor Planet Center (MPC) data base, IAU Circulars, International Comet Quarterly, and a few papers devoted to some particular comets, together with our own observations. Photometric data previous to 1990 have mainly been taken from the Comet Light Curve Catalogue (CLICC) compiled by Kamél. We discuss the reliability of the reported nuclear magnitudes in relation to the inherent sources of errors and uncertainties, in particular the coma contamination often present even at large heliocentric distances. A large fraction of the JF comets of our sample indeed shows various degrees of activity at large heliocentric distances, which is correlated with recent downward jumps in their perihelion distances. The reliability of coma subtraction methods to compute the nuclear magnitude is also discussed. Most absolute nuclear magnitudes are found in the range 15-18, with no magnitudes fainter than H_N ~ 19.5. The catalog can be found at: http://www.fisica.edu.uy/~gonzalo/catalog/. Table 2 and Appendix B are only available in electronic form at http://www.edpsciences.org. Table 5 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

  18. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.

  19. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^-3 and Compton distortion y < 10^-6. We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  20. Physics of negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Abraham, Eitan; Penrose, Oliver

    2017-01-01

    Negative absolute temperatures were introduced into experimental physics by Purcell and Pound, who successfully applied this concept to nuclear spins; nevertheless, the concept has proved controversial: a recent article aroused considerable interest by its claim, based on a classical entropy formula (the "volume entropy") due to Gibbs, that negative temperatures violated basic principles of statistical thermodynamics. Here we give a thermodynamic analysis that confirms the negative-temperature interpretation of the Purcell-Pound experiments. We also examine the principal arguments that have been advanced against the negative temperature concept; we find that these arguments are not logically compelling, and moreover that the underlying "volume" entropy formula leads to predictions inconsistent with existing experimental results on nuclear spins. We conclude that, despite the counterarguments, negative absolute temperatures make good theoretical sense and did occur in the experiments designed to produce them.

  1. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in the cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift of the cavity mode is measured at the interferometer output to estimate the angular velocity of the absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation detection sensitivity is studied.

  2. Absolute pitch and pupillary response: effects of timbre and key color.

    PubMed

    Schlemmer, Kathrin B; Kulke, Franziska; Kuchinke, Lars; Van Der Meer, Elke

    2005-07-01

    The pitch identification performance of absolute pitch possessors has previously been shown to depend on pitch range, key color, and timbre of presented tones. In the present study, the dependence of pitch identification performance on key color and timbre of musical tones was examined by analyzing hit rates, reaction times, and pupillary responses of absolute pitch possessors (n = 9) and nonpossessors (n = 12) during a pitch identification task. Results revealed a significant dependence of pitch identification hit rate but not reaction time on timbre and key color in both groups. Among absolute pitch possessors, peak dilation of the pupil was significantly dependent on key color whereas the effect of timbre was marginally significant. Peak dilation of the pupil differed significantly between absolute pitch possessors and nonpossessors. The observed effects point to the importance of learning factors in the acquisition of absolute pitch.

  3. The representation of numerical magnitude

    PubMed Central

    Brannon, Elizabeth M

    2006-01-01

    The combined efforts of many fields are advancing our understanding of how number is represented. Researchers studying numerical reasoning in adult humans, developing humans and non-human animals are using a suite of behavioral and neurobiological methods to uncover similarities and differences in how each population enumerates and compares quantities to identify the neural substrates of numerical cognition. An important picture emerging from this research is that adult humans share with non-human animals a system for representing number as language-independent mental magnitudes and that this system emerges early in development. PMID:16546373

  4. Kitt Peak Observes Comet

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Kitt Peak National Observatory's 2.1-meter telescope observed comet Tempel 1 on April 11, 2005, when the comet was near its closest approach to the Earth. A pinkish dust jet is visible to the southwest, with the broader neutral gas coma surrounding it. North is up, East is to the left, and the field of view is about 80,000 km (50,000 miles) wide. The Sun was almost directly behind the observer at this time. The red, green and blue bars in the background are stars that moved between the individual images.

    This pseudo-color picture was created by combining three black and white images obtained with different filters. The images were obtained with the HB Narrowband Comet Filters, using CN (3870 A - shown in blue), C2 (5140 A - shown in green) and RC (7128 A - shown in red). The CN and C2 filters capture different gas species (along with the underlying dust) while the RC filter captures just the dust.

  5. Networks of Absolute Calibration Stars for SST, AKARI, and WISE

    NASA Astrophysics Data System (ADS)

    Cohen, M.

    2007-04-01

    I describe the Cohen-Walker-Witteborn (CWW) network of absolute calibration stars built to support ground-based, airborne, and space-based sensors, and how they are used to calibrate instruments on the Spitzer Space Telescope (SST) and Japan's AKARI (formerly ASTRO-F), and to support NASA's planned MidEx WISE (the Wide-field Infrared Survey Explorer). All missions using this common calibration share a self-consistent framework embracing photometry and low-resolution spectroscopy. CWW also underpins COBE/DIRBE, several instruments used on the Kuiper Airborne Observatory (KAO), the joint Japan-USA "IR Telescope in Space" (IRTS) near-IR and mid-IR spectrometers, the European Space Agency's IR Space Observatory (ISO), and the US Department of Defense's Midcourse Space eXperiment (MSX). This calibration now spans the far-UV to mid-infrared range with Sirius (one specific Kurucz synthetic spectrum) as its basis, and zero magnitude defined from another Kurucz spectrum intended to represent an ideal Vega (not the actual star, with its pole-on orientation and mid-infrared dust excess emission). Precision 4-29 μm radiometric measurements on MSX validate CWW's absolute Kurucz spectrum of Sirius, the primary, and a set of bright K/MIII secondary standards. Sirius is measured to be 1.0% higher than predicted. CWW's definitions of IR zero magnitudes lie within 1.1% absolute of MSX measurements. The US Air Force Research Laboratory's independent analysis of on-orbit MSX stellar observations compared with emissive reference spheres shows that the CWW primary and empirical secondary spectra lie well within the ±1.45% absolute uncertainty associated with this 15-year effort. Our associated absolute calibration for the InfraRed Array Camera (IRAC) on the SST lies within ~2% of the recent extension of the calibration of the Hubble Space Telescope's STIS instrument to NICMOS (Bohlin, these Proceedings), showing the closeness of these two independent approaches to calibration.

  6. Peak resolution by semiderivative voltammetry

    SciTech Connect

    Toman, Jeffrey J.; Brown, Steven D.

    1981-08-01

    One of the limitations of dynamic electrochemistry, when used as a quantitative analytical technique, is the resolution of overlapping waves. Approaches used in the past have either been time-intensive methods using many blanks, or have relied on many empirical peak parameters. Using an approach based on semidifferential voltammetry, two new techniques have been developed for rapid peak deconvolution. The first technique, NIFITl, is an iterative stripping routine, while the second, BIMFIT, is based on sequential simplex optimization. Both approaches were characterized by deconvolution of synthetic fused-peak systems. Subsequently, both were applied to semi-differentiated linear scan voltammograms of Cd2+, Pb2+ and In3+ and to semi-differentiated linear scan anodic stripping voltammograms of Cd2+, In3+ and Tl+. Deconvolutions were directly characterized by peak height, peak potential and peak halfwidth, in addition to the total squared deviation of the fitted peaks from the real fused peaks. Studies of individual peaks as well as of standard additions to fused peaks showed that both methods worked well, with excellent deconvolution efficiencies. Synthetic data were totally deconvoluted with peak separations as small as 25 mV, while real systems were deconvoluted with separations below 40 mV. Peak parameters obtained from these deconvolutions allow observations of electrode processes, even in systems containing overlapping peaks.
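
    As a generic illustration of fused-peak deconvolution (not the NIFITl or BIMFIT routines themselves), overlapping peaks can be resolved by least-squares fitting of a sum of assumed peak shapes; the Gaussian shape and all parameter values below are assumptions for the example only:

      import numpy as np
      from scipy.optimize import curve_fit

      def two_peaks(E, h1, E1, w1, h2, E2, w2):
          """Sum of two Gaussian peaks: heights h, positions E, widths w."""
          return (h1 * np.exp(-0.5 * ((E - E1) / w1) ** 2)
                  + h2 * np.exp(-0.5 * ((E - E2) / w2) ** 2))

      # Synthetic fused-peak voltammogram with ~30 mV peak separation plus noise.
      E = np.linspace(-0.65, -0.45, 400)             # potential, V
      truth = (1.0, -0.570, 0.012, 0.6, -0.540, 0.012)
      rng = np.random.default_rng(0)
      i_obs = two_peaks(E, *truth) + rng.normal(0, 0.01, E.size)

      # Deconvolute: recover height, position, and width of each peak.
      p0 = (0.8, -0.575, 0.015, 0.5, -0.535, 0.015)  # initial guesses
      popt, _ = curve_fit(two_peaks, E, i_obs, p0=p0)
      print("fitted peak parameters:", np.round(popt, 4))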

  7. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
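
    The two bounds contrasted above can be written compactly: under Gutenberg-Richter sampling the expected largest of N induced events grows with log10 N, while the McGarr-style cap converts the injected volume to a maximum seismic moment G ΔV. A sketch under those assumptions (the numerical values are illustrative, not from the study):

      import numpy as np

      def gr_expected_max(N, Mmin=0.0, b=1.0):
          """Expected largest magnitude among N events above Mmin under a GR distribution."""
          return Mmin + np.log10(N) / b

      def mcgarr_max_magnitude(dV_m3, G_Pa=3e10):
          """Deterministic cap: maximum moment G*dV, converted to moment magnitude (M0 in N*m)."""
          M0 = G_Pa * dV_m3
          return (2.0 / 3.0) * (np.log10(M0) - 9.05)   # standard moment-magnitude conversion

      N = 5000      # illustrative number of induced events above Mmin
      dV = 1e6      # illustrative injected volume, m^3
      print("GR expected max:", round(gr_expected_max(N, Mmin=0.5), 2))
      print("McGarr-style cap:", round(mcgarr_max_magnitude(dV), 2))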

  8. Effects of Numerical Versus Foreground-Only Icon Displays on Understanding of Risk Magnitudes.

    PubMed

    Stone, Eric R; Gabard, Alexis R; Groves, Aislinn E; Lipkus, Isaac M

    2015-01-01

    The aim of this work is to advance knowledge of how to measure gist and verbatim understanding of risk magnitude information and to apply this knowledge to address whether graphics that focus on the number of people affected (the numerator of the risk ratio, i.e., the foreground) are effective displays for increasing (a) understanding of absolute and relative risk magnitudes and (b) risk avoidance. In 2 experiments, the authors examined the effects of a graphical display that used icons to represent the foreground information on measures of understanding (Experiments 1 and 2) and on perceived risk, affect, and risk aversion (Experiment 2). Consistent with prior findings, this foreground-only graphical display increased perceived risk and risk aversion; however, it also led to decreased understanding of absolute (although not relative) risk magnitudes. Methodologically, this work shows the importance of distinguishing understanding of absolute risk from understanding of relative risk magnitudes, and the need to assess gist knowledge of both types of risk. Substantively, this work shows that although using foreground-only graphical displays is an appealing risk communication strategy to increase risk aversion, doing so comes at the cost of decreased understanding of absolute risk magnitudes.

  9. Climate Absolute Radiance and Refractivity Observatory (CLARREO)

    NASA Technical Reports Server (NTRS)

    Leckey, John P.

    2015-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a mission, led and developed by NASA, that will measure a variety of climate variables with an unprecedented accuracy to quantify and attribute climate change. CLARREO consists of three separate instruments: an infrared (IR) spectrometer, a reflected solar (RS) spectrometer, and a radio occultation (RO) instrument. The mission will contain orbiting radiometers with sufficient accuracy, including on orbit verification, to calibrate other space-based instrumentation, increasing their respective accuracy by as much as an order of magnitude. The IR spectrometer is a Fourier Transform spectrometer (FTS) working in the 5 to 50 microns wavelength region with a goal of 0.1 K (k = 3) accuracy. The FTS will achieve this accuracy using phase change cells to verify thermistor accuracy and heated halos to verify blackbody emissivity, both on orbit. The RS spectrometer will measure the reflectance of the atmosphere in the 0.32 to 2.3 microns wavelength region with an accuracy of 0.3% (k = 2). The status of the instrumentation packages and potential mission options will be presented.

  10. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  11. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
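
    For reference, the tapered Gutenberg-Richter distribution mentioned above is usually written as a survival function of seismic moment M above a threshold moment M_t, with index β and corner moment M_c (standard form from the general literature, not a result specific to this abstract):

      P(\mathrm{moment} \ge M) = \left(\frac{M_t}{M}\right)^{\beta} \exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t.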

  12. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950's (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  13. Effect of reservoir storage on peak flow

    USGS Publications Warehouse

    Mitchell, William D.

    1962-01-01

    For observation of small-basin flood peaks, numerous crest-stage gages now are operated at culverts in roadway embankments. To the extent that they obstruct the natural flood plains of the streams, these embankments serve to create detention reservoirs, and thus to reduce the magnitude of observed peak flows. Hence, it is desirable to obtain a factor, I/O, by which the observed outflow peaks may be adjusted to corresponding inflow peaks. The problem is made more difficult by the fact that, at most of these observation sites, only peak stages and discharges are observed, and complete hydrographs are not available. It is postulated that the inflow hydrographs may be described in terms of Q, the instantaneous discharge; A, the size of drainage area; Pe, the amount of rainfall excess; H, the time from beginning of rainfall excess; D, the duration of rainfall excess; and T and k, characteristic times for the drainage area, and indicative of the time lag between rainfall and runoff. These factors are combined into the dimensionless ratios (QT/APe), (H/T), (k/T), and (D/T), leading to families of inflow hydrographs in which the first ratio is the ordinate, the second is the abscissa, and the third and fourth are distinguishing parameters. Sixteen dimensionless inflow hydrographs have been routed through reservoir storage to obtain 139 corresponding outflow hydrographs. In most of the routings it has been assumed that the storage-outflow relation is linear; that is, that storage is some constant, K, times the outflow. The existence of nonlinear storage is recognized, and exploratory nonlinear routings are described, but analyses and conclusions are confined to the problems of linear storage. Comparisons between inflow hydrographs and outflow hydrographs indicate that, at least for linear storage, I/O=f(k/T, D/T, K/T) in which I and O are, respectively, the magnitudes of the inflow and the outflow peaks, and T, k, D, and K are as defined above. Diagrams are presented to
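
    For the linear-storage case described above, S = K·O combined with continuity dS/dt = I − O gives a simple level-pool routing recursion. A minimal sketch under that assumption (the inflow hydrograph and the value of K are illustrative only):

      import numpy as np

      def route_linear_reservoir(inflow, K, dt=1.0):
          """Route an inflow hydrograph through linear storage S = K * O.

          Discretising dS/dt = I - O with S = K*O gives
          O[t+1] = ((K - dt/2)*O[t] + dt/2*(I[t] + I[t+1])) / (K + dt/2).
          """
          outflow = np.zeros_like(inflow)
          for t in range(len(inflow) - 1):
              outflow[t + 1] = ((K - dt / 2) * outflow[t]
                                + dt / 2 * (inflow[t] + inflow[t + 1])) / (K + dt / 2)
          return outflow

      # Illustrative triangular inflow hydrograph (arbitrary units).
      I = np.concatenate([np.linspace(0, 100, 6), np.linspace(100, 0, 11)])
      O = route_linear_reservoir(I, K=3.0, dt=1.0)
      print("peak ratio I/O =", I.max() / O.max())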

  14. Magnitude and Frequency of Floods in Alabama, 2003

    USGS Publications Warehouse

    Hedgecock, T.S.; Feaster, Toby D.

    2007-01-01

    Methods of estimating flood magnitudes for recurrence intervals of 1.5, 2, 5, 10, 25, 50, 100, 200, and 500 years have been developed for rural streams in Alabama that are not affected by regulation or urbanization. Regression relations were developed using generalized least-squares regression techniques to estimate flood magnitude and frequency on ungaged streams as a function of the basin drainage area. These methods are based on flood-frequency characteristics for 169 gaging stations in Alabama and 47 gaging stations in adjacent states having 10 or more years of record through September 2003. Graphical relations of peak flows to drainage areas are presented for sites along the Alabama, Coosa, Tallapoosa, Tennessee, Tombigbee, and Black Warrior Rivers. Equations that account for drainage area and percentage of impervious cover as independent variables also are provided for estimating flood magnitudes on ungaged urban streams (taken from a previous report).
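
    Regional regression relations of this kind are typically power functions of drainage area A, linear in log space; the coefficients and exponents are region- and recurrence-interval-specific and the values used in the report are not reproduced here:

      Q_T = a\,A^{b} \quad\Longleftrightarrow\quad \log Q_T = \log a + b\,\log A.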

  15. Determination of absolute internal conversion coefficients using the SAGE spectrometer

    NASA Astrophysics Data System (ADS)

    Sorri, J.; Greenlees, P. T.; Papadakis, P.; Konki, J.; Cox, D. M.; Auranen, K.; Partanen, J.; Sandzelius, M.; Pakarinen, J.; Rahkila, P.; Uusitalo, J.; Herzberg, R.-D.; Smallcombe, J.; Davies, P. J.; Barton, C. J.; Jenkins, D. G.

    2016-03-01

    A non-reference based method to determine internal conversion coefficients using the SAGE spectrometer is carried out for transitions in the nuclei of 154Sm, 152Sm and 166Yb. The Normalised-Peak-to-Gamma method is in general an efficient tool to extract internal conversion coefficients. However, in many cases the required well-known reference transitions are not available. The data analysis steps required to determine absolute internal conversion coefficients with the SAGE spectrometer are presented. In addition, several background suppression methods are introduced and an example of how ancillary detectors can be used to select specific reaction products is given. The results obtained for ground-state band E2 transitions show that the absolute internal conversion coefficients can be extracted using the methods described with a reasonable accuracy. In some cases of less intense transitions only an upper limit for the internal conversion coefficient could be given.
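
    For context, the internal conversion coefficient being extracted is, by definition, the ratio of the electron emission intensity to the gamma-ray emission intensity for a given transition (definition only; the SAGE-specific efficiency and background corrections are described in the paper):

      \alpha = \frac{I_{e^-}}{I_{\gamma}}.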

  16. Absolute Gravity Datum in the Age of Cold Atom Gravimeters

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Eckl, M. C.

    2014-12-01

    The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network were measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic absolute meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since that time, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric absolute gravimeters. This latest development is anticipated to provide an improvement of roughly two orders of magnitude over the measurement accuracy of the technology used to develop IGSN-71. In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned with establishing gravity control to establish and maintain high-order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure: National Requirements for a Shared Resource," National Academy of Science, 2010. The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. Absolute gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple absolute gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e., yield a sufficiently consistent datum when measured in disparate locales). New atomic interferometric gravimeters promise a significant

  17. [Experimental test of the ideal free distribution in humans: the effects of reinforcer magnitude and group size].

    PubMed

    Yamaguchi, Tetsuo; Ito, Masato

    2006-02-01

    The ideal free distribution (IFD) theory describes how animals living in the wild distribute themselves between two different resource sites. The IFD theory predicts that the ratio of animals in the two resource sites is equal to the ratio of resources available in those sites. The present study investigated the effects of absolute reinforcer magnitude and group size on the distribution of humans between two resource sites. Two groups of undergraduate students (N = 10 and N = 20) chose blue or red cards to earn points. The ratio of points assigned to each color varied from 1 : 1 to 4 : 1 across five conditions. In each condition, absolute reinforcer magnitude was varied. The generalized ideal free distribution equation was fit to the data obtained under the different magnitude and group size conditions. These results suggest that larger absolute reinforcer magnitude and smaller group size produce higher sensitivity to resource distribution.
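
    One common way to write the generalized ideal free distribution, by analogy with the generalized matching law, relates the ratio of individuals at the two sites to the ratio of resources through a sensitivity exponent s and a bias term b; this form is taken from the general operant literature and may differ in detail from the exact equation fitted in this study:

      \frac{N_1}{N_2} = b\left(\frac{r_1}{r_2}\right)^{s} \quad\Longleftrightarrow\quad \log\frac{N_1}{N_2} = s\,\log\frac{r_1}{r_2} + \log b.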

  18. Multiple Peaks in SABER Mesospheric OH Emission Altitude Profiles

    NASA Astrophysics Data System (ADS)

    Rozum, J. C.; Ware, G. A.; Baker, D. J.; Mlynczak, M. G.; Russell, J. M.

    2012-12-01

    Since January 2002, the SABER instrument aboard the TIMED satellite has been performing limb-scan measurements of the altitude distribution of the hydroxyl airglow. The majority of the SABER 1.6 μm and 2.0 μm OH volume emission rate (VER) profiles manifest a single peak at around 90 km and are roughly Gaussian in shape. However, a significant number (approximately 10% at nighttime) of these VER profiles have an irregular characteristic of multiple peaks that are comparable in brightness to the absolute maximum. The origin of these multiple peaks in SABER profiles is currently being studied. Single-peak and irregular SABER OH VER profiles are compared with OH VER altitude curves obtained from theoretical vertical distribution models. In addition, we compare SABER profiles with OH VER altitude profiles obtained from rocket-borne radiometric experiments. The techniques of Liu and Shepherd's analysis of double-peaked emission profiles obtained by the Wind Imaging Interferometer (WINDII), which uses a similar scan geometry, are applied. The geographical distribution of the SABER nighttime multiple-peak VER profiles in the 1.6 μm and 2.0 μm channels is presented, as are the distributions of these profiles with respect to instrument scan-geometry parameters. It is noted that during the night, multiple-peak profiles are more common at equatorial latitudes. A relationship has been found between the geographical distribution of two-peaked profiles and the spatial orientation of the SABER instrument's viewing field.

  19. A rise in peak performance age in female athletes.

    PubMed

    Elmenshawy, Ahmed R; Machin, Daniel R; Tanaka, Hirofumi

    2015-06-01

    It was reported in the 1980s that the ages at which peak performance was observed had remained remarkably stable over the past century, although absolute levels of athletic performance had increased dramatically over the same time span. The emergence of older (masters) athletes in the past few decades has changed the demographics and age spectrum of Olympic athletes. The primary aim of the present study was to determine whether the ages at which peak performance is observed have increased in recent decades. Data spanning 114 years, from the first Olympics (1898) to the most recent Olympics (2014), were collected from publicly available sources. In the present study, the ages at which Olympic medals (gold, silver, and bronze) were won were used as indicators of peak performance age. Track and field, swimming, rowing, and ice skating events were analyzed. In men, peak performance age did not change significantly in most sporting events (except 100 m sprint running). In contrast, peak performance ages in women have increased significantly since the 1980s, and consistently across all the athletic events examined. Interestingly, as women's peak performance age increased, it became similar to men's peak ages in many events. In the last 20-30 years, the ages at which peak athletic performance is observed have increased in women but not in men.

  20. Absolute stress measurements at the rangely anticline, Northwestern Colorado

    USGS Publications Warehouse

    de la Cruz, R. V.; Raleigh, C.B.

    1972-01-01

    Five different methods of measuring absolute state of stress in rocks in situ were used at sites near Rangely, Colorado, and the results compared. For near-surface measurements, overcoring of the borehole-deformation gage is the most convenient and rapid means of obtaining reliable values for the magnitude and direction of the state of stress in rocks in situ. The magnitudes and directions of the principal stresses are compared to the geologic features of the different areas of measurement. The in situ stresses are consistent in orientation with the stress direction inferred from the earthquake focal-plane solutions and existing joint patterns but inconsistent with stress directions likely to have produced the Rangely anticline. ?? 1972.

  1. Magnitude and sign correlations in heartbeat fluctuations

    NASA Technical Reports Server (NTRS)

    Ashkenazy, Y.; Ivanov, P. C.; Havlin, S.; Peng, C. K.; Goldberger, A. L.; Stanley, H. E.

    2001-01-01

    We propose an approach for analyzing signals with long-range correlations by decomposing the signal increment series into magnitude and sign series and analyzing their scaling properties. We show that signals with identical long-range correlations can exhibit different time organization for the magnitude and sign. We find that the magnitude series relates to the nonlinear properties of the original time series, while the sign series relates to the linear properties. We apply our approach to the heartbeat interval series and find that the magnitude series is long-range correlated, while the sign series is anticorrelated and that both magnitude and sign series may have clinical applications.
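
    The decomposition itself is straightforward: the increment series is split into its absolute values (magnitude) and its signs, and each series is then handed to a scaling analysis such as detrended fluctuation analysis. A minimal sketch of the decomposition step, using a synthetic series in place of real RR-interval data:

      import numpy as np

      # Synthetic "heartbeat interval" series; in practice this would be measured RR intervals.
      rng = np.random.default_rng(1)
      rr = 0.8 + 0.001 * np.cumsum(rng.normal(0, 1, 1000))

      increments = np.diff(rr)            # signal increment series
      magnitude  = np.abs(increments)     # magnitude series (carries the nonlinear properties)
      sign       = np.sign(increments)    # sign series (carries the linear properties)

      # Each series would then be analysed separately, e.g. with detrended fluctuation
      # analysis, to compare their long-range correlation exponents.
      print(magnitude[:5], sign[:5])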

  2. Determination of the total absorption peak in an electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Cheng, Jia-Hua; Wang, Zhe; Lebanowski, Logan; Lin, Guey-Lin; Chen, Shaomin

    2016-08-01

    A physically motivated function was developed to accurately determine the total absorption peak in an electromagnetic calorimeter and to overcome biases present in many commonly used methods. The function is the convolution of a detector resolution function with the sum of a delta function, which represents the complete absorption of energy, and a tail function, which describes the partial absorption of energy and depends on the detector materials and structures. Its performance was tested with the simulation of three typical cases. The accuracy of the extracted peak value, resolution, and peak area was improved by an order of magnitude on average, relative to the Crystal Ball function.
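
    In symbols, the fitted model described above is the convolution of the detector resolution function R with a weighted sum of a delta function at the full-absorption energy E_0 and a tail term T; this is a schematic transcription of the abstract, not the paper's exact notation:

      f(E) = \int R(E - E')\,\big[\alpha\,\delta(E' - E_0) + (1-\alpha)\,T(E')\big]\,dE'
           = \alpha\,R(E - E_0) + (1-\alpha)\,(R * T)(E).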

  3. The discovery and comparison of symbolic magnitudes.

    PubMed

    Chen, Dawn; Lu, Hongjing; Holyoak, Keith J

    2014-06-01

    Humans and other primates are able to make relative magnitude comparisons, both with perceptual stimuli and with symbolic inputs that convey magnitude information. Although numerous models of magnitude comparison have been proposed, the basic question of how symbolic magnitudes (e.g., size or intelligence of animals) are derived and represented in memory has received little attention. We argue that symbolic magnitudes often will not correspond directly to elementary features of individual concepts. Rather, magnitudes may be formed in working memory based on computations over more basic features stored in long-term memory. We present a model of how magnitudes can be acquired and compared based on BARTlet, a representationally simpler version of Bayesian Analogy with Relational Transformations (BART; Lu, Chen, & Holyoak, 2012). BARTlet operates on distributions of magnitude variables created by applying dimension-specific weights (learned with the aid of empirical priors derived from pre-categorical comparisons) to more primitive features of objects. The resulting magnitude distributions, formed and maintained in working memory, are sensitive to contextual influences such as the range of stimuli and polarity of the question. By incorporating psychological reference points that control the precision of magnitudes in working memory and applying the tools of signal detection theory, BARTlet is able to account for a wide range of empirical phenomena involving magnitude comparisons, including the symbolic distance effect and the semantic congruity effect. We discuss the role of reference points in cognitive and social decision-making, and implications for the evolution of relational representations.

  4. Magnitude systems in old star catalogues

    NASA Astrophysics Data System (ADS)

    Fujiwara, Tomoko; Yamaoka, Hitoshi

    2005-06-01

    The current system of stellar magnitudes originally introduced by Hipparchus was strictly defined by Norman Pogson in 1856. He based his system on Ptolemy's star catalogue, the Almagest, recorded in about AD 137, and defined the magnitude-intensity relationship on a logarithmic scale. Stellar magnitudes observed with the naked eye recorded in seven old star catalogues were analyzed in order to examine the visual magnitude systems. Although psychophysicists have proposed that human visual sensitivity follows a power-law scale, it is shown here that the degree of agreement is far better for a logarithmic scale than for a power-law scale. It is also found that light ratios in each star catalogue are nearly equal to 2.512, if the brightest (1st magnitude) and the faintest (6th magnitude and dimmer) stars are excluded from the study. This means that the visual magnitudes in the old star catalogues agree fully with Pogson's logarithmic scale.
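
    Pogson's definition fixes the magnitude-intensity relation so that five magnitudes correspond to an intensity ratio of exactly 100, giving the light ratio of 100^(1/5) ≈ 2.512 per magnitude referred to above:

      m_1 - m_2 = -2.5\,\log_{10}\!\left(\frac{I_1}{I_2}\right), \qquad \frac{I_1}{I_2} = 100^{(m_2 - m_1)/5}.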

  5. Associations of maternal macronutrient intake during pregnancy with infant BMI peak characteristics and childhood BMI.

    PubMed

    Chen, Ling-Wei; Aris, Izzuddin M; Bernard, Jonathan Y; Tint, Mya-Thway; Colega, Marjorelee; Gluckman, Peter D; Tan, Kok Hian; Shek, Lynette Pei-Chi; Chong, Yap-Seng; Yap, Fabian; Godfrey, Keith M; van Dam, Rob M; Chong, Mary Foong-Fong; Lee, Yung Seng

    2017-03-01

    Background: Infant body mass index (BMI) peak characteristics and early childhood BMI are emerging markers of future obesity and cardiometabolic disease risk, but little is known about their maternal nutritional determinants. Objective: We investigated the associations of maternal macronutrient intake with infant BMI peak characteristics and childhood BMI in the Growing Up in Singapore Towards healthy Outcomes study. Design: With the use of infant BMI data from birth to age 18 mo, infant BMI peak characteristics [age (in months) and magnitude (BMIpeak; in kg/m2) at peak, and prepeak velocity] were derived from subject-specific BMI curves fitted with a mixed-effects model with a natural cubic spline function. Associations of maternal macronutrient intake (assessed by a 24-h recall during late gestation) with infant BMI peak characteristics (n = 910) and BMI z scores at ages 2, 3, and 4 y were examined with multivariable linear regression. Results: Mean absolute maternal macronutrient intakes (percentages of energy) were 72 g protein (15.6%), 69 g fat (32.6%), and 238 g carbohydrate (51.8%). A 25-g (∼100-kcal) increase in maternal carbohydrate intake was associated with a 0.01/mo (95% CI: 0.0003, 0.01/mo) higher prepeak velocity and a 0.04 (95% CI: 0.01, 0.08) higher BMIpeak. These associations were mainly driven by sugar intake, whereby a 25-g increment of maternal sugar intake was associated with a 0.02/mo (95% CI: 0.01, 0.03/mo) higher infant prepeak velocity and a 0.07 (95% CI: 0.01, 0.13) higher BMIpeak. Higher maternal carbohydrate and sugar intakes were associated with a higher offspring BMI z score at ages 2-4 y. Maternal protein and fat intakes were not consistently associated with the studied outcomes. Conclusion: Higher maternal carbohydrate and sugar intakes are associated with unfavorable infancy BMI peak characteristics and higher early childhood BMI. This trial was registered at clinicaltrials.gov as NCT01174875.

  6. The magnitude-redshift relation in a realistic inhomogeneous universe

    SciTech Connect

    Hada, Ryuichiro; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2014-12-01

    The light rays from a source are subject to a local inhomogeneous geometry generated by the inhomogeneous matter distribution as well as the existence of collapsed objects. In this paper we investigate the effect of inhomogeneities and the existence of collapsed objects on the propagation of light rays and evaluate changes in the magnitude-redshift relation from the standard relationship found in a homogeneous FRW universe. We give the expression of the correlation function and the variance for the perturbation of the apparent magnitude, and calculate it numerically by using the non-linear matter power spectrum. We use the lognormal probability distribution function for the density contrast and the spherical collapse model to truncate the power spectrum in order to estimate the blocking effect by collapsed objects. We find that the uncertainty in Ω_m is ∼0.02, and that in w is ∼0.04. We also discuss a possible method to extract these effects from real data, which contain intrinsic ambiguities associated with the absolute magnitude.
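
    The apparent-magnitude perturbation studied above is defined relative to the homogeneous FRW relation between apparent magnitude, absolute magnitude, and luminosity distance; a small fractional change in d_L maps linearly onto a magnitude shift:

      m(z) = M + 5\,\log_{10}\!\left(\frac{d_L(z)}{10\,\mathrm{pc}}\right), \qquad \delta m = \frac{5}{\ln 10}\,\frac{\delta d_L}{d_L}.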

  7. Peak Oil: Diverging Discursive Pipelines

    NASA Astrophysics Data System (ADS)

    Doctor, Jeff

    Peak oil is the claimed moment in time when global oil production reaches its maximum rate and henceforth forever declines. It is highly controversial whether or not peak oil represents cause for serious concern. My thesis explores how this controversy unfolds but brackets the ontological status of the reality indexed by the peak-oil concept. I do not choose a side in the debate; I look at the debate itself. I examine the energy outlook documents of ExxonMobil, Shell, BP, Chevron, Total and the International Energy Agency (IEA) as well as academic articles and documentaries. Through an in-depth analysis of the peak-oil controversy via tenets of actor-network theory (ANT), I show that what is at stake are competing framings of reality itself, which must be understood when engaging with the contentious idea of peak oil.

  8. Flu Season Starting to Peak

    MedlinePlus

    More severe strain of ... FRIDAY, Jan. 6, 2017 (HealthDay News) -- Flu season is in full swing and it's starting ... Full story: https://medlineplus.gov/news/fullstory_162917.html

  9. Developmental Foundations of Children's Fraction Magnitude Knowledge.

    PubMed

    Mou, Yi; Li, Yaoran; Hoard, Mary K; Nugent, Lara D; Chu, Felicia W; Rouder, Jeffrey N; Geary, David C

    2016-01-01

    The conceptual insight that fractions represent magnitudes is a critical yet daunting step in children's mathematical development, and the knowledge of fraction magnitudes influences children's later mathematics learning including algebra. In this study, longitudinal data were analyzed to identify the mathematical knowledge and domain-general competencies that predicted 8th and 9th graders' (n = 122) knowledge of fraction magnitudes and its cross-grade gains. Performance on the fraction magnitude measures predicted 9th grade algebra achievement. Understanding and fluently identifying the numerator-denominator relation in 7th grade emerged as the key predictor of later fraction magnitudes knowledge in both 8th and 9th grades. Competence at using fraction procedures, knowledge of whole number magnitudes, and the central executive contributed to 9th but not 8th graders' fraction magnitude knowledge, and knowledge of whole number magnitude contributed to cross-grade gains. The key results suggest fluent processing of numerator-denominator relations presages students' understanding of fractions as magnitudes and that the integration of whole number and fraction magnitudes occurs gradually.

  10. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of the developments in the field of high-accuracy absolute optical metrology with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  11. ON A SUFFICIENT CONDITION FOR ABSOLUTE CONTINUITY.

    DTIC Science & Technology

    The formulation of a condition which yields absolute continuity when combined with continuity and bounded variation is the problem considered in the...Briefly, the formulation is achieved through a discussion which develops a proof by contradiction of a sufficiency theorem for absolute continuity which uses in its hypothesis the condition of continuity and bounded variation.

  12. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…

  13. Monolithically integrated absolute frequency comb laser system

    SciTech Connect

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  14. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  15. A Bayesian perspective on magnitude estimation.

    PubMed

    Petzschner, Frederike H; Glasauer, Stefan; Stephan, Klaas E

    2015-05-01

    Our representation of the physical world requires judgments of magnitudes, such as loudness, distance, or time. Interestingly, magnitude estimates are often not veridical but subject to characteristic biases. These biases are strikingly similar across different sensory modalities, suggesting common processing mechanisms that are shared by different sensory systems. However, the search for universal neurobiological principles of magnitude judgments requires guidance by formal theories. Here, we discuss a unifying Bayesian framework for understanding biases in magnitude estimation. This Bayesian perspective enables a re-interpretation of a range of established psychophysical findings, reconciles seemingly incompatible classical views on magnitude estimation, and can guide future investigations of magnitude estimation and its neurobiological mechanisms in health and in psychiatric diseases, such as schizophrenia.
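
    The central-tendency bias described above falls out of even the simplest Bayesian combination of a Gaussian prior over (log) magnitude with a noisy observation: the posterior mean is a precision-weighted average, so large magnitudes are underestimated and small ones overestimated. The sketch below illustrates only that special case; the function name and parameter values are ours, not taken from the paper.

```python
import numpy as np

def posterior_log_magnitude(obs, obs_sigma, prior_mu, prior_sigma):
    """Gaussian prior times Gaussian likelihood in log-magnitude space.

    Returns the posterior mean and standard deviation. The posterior mean is a
    precision-weighted average of prior and observation, so estimates are
    pulled toward the prior mean, reproducing the regression-to-the-mean bias
    discussed in the abstract.
    """
    w_prior = 1.0 / prior_sigma**2   # precision of the prior
    w_obs = 1.0 / obs_sigma**2       # precision of the observation
    post_var = 1.0 / (w_prior + w_obs)
    post_mu = post_var * (w_prior * prior_mu + w_obs * obs)
    return post_mu, np.sqrt(post_var)

# Example: a stimulus of true log-magnitude 2.0 observed with noise, under a
# prior centred at 1.5; the estimate is biased toward 1.5.
print(posterior_log_magnitude(obs=2.0, obs_sigma=0.3, prior_mu=1.5, prior_sigma=0.4))
```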

  16. Influence of menopause and Type 2 diabetes on pulmonary oxygen uptake kinetics and peak exercise performance during cycling.

    PubMed

    Kiely, Catherine; Rocha, Joel; O'Connor, Eamonn; O'Shea, Donal; Green, Simon; Egaña, Mikel

    2015-10-15

    We investigated if the magnitude of the Type 2 diabetes (T2D)-induced impairments in peak oxygen uptake (V̇O2) and V̇O2 kinetics was affected by menopausal status. Twenty-two women with T2D (8 premenopausal, 14 postmenopausal), and 22 nondiabetic (ND) women (11 premenopausal, 11 postmenopausal) matched by age (range = 30-59 yr) were recruited. Participants completed four bouts of constant-load cycling at 80% of their ventilatory threshold for the determination of V̇O2 kinetics. Cardiac output (CO) (inert gas rebreathing) was recorded at rest and at 30 s and 240 s during two additional bouts. Peak V̇O2 was significantly (P < 0.05) reduced in both groups with T2D compared with ND counterparts (premenopausal, 1.79 ± 0.16 vs. 1.55 ± 0.32 l/min; postmenopausal, 1.60 ± 0.30 vs. 1.45 ± 0.24 l/min). The time constant of phase II of the V̇O2 response was slowed (P < 0.05) in both groups with T2D compared with healthy counterparts (premenopausal, 29.1 ± 11.2 vs. 43.0 ± 12.2 s; postmenopausal, 33.0 ± 9.1 vs. 41.8 ± 17.7 s). At rest and during submaximal exercise absolute CO responses were lower, but the "gains" in CO larger (both P < 0.05) in both groups with T2D. Our results suggest that the magnitude of T2D-induced impairments in peak V̇O2 and V̇O2 kinetics is not affected by menopausal status in participants younger than 60 yr of age.

  17. Absolute quantitation of protein posttranslational modification isoform.

    PubMed

    Yang, Zhu; Li, Ning

    2015-01-01

    Mass spectrometry has been widely applied in the characterization and quantification of proteins from complex biological samples. Because absolute amounts of proteins are needed in the construction of mathematical models for the molecular systems of various biological phenotypes and phenomena, a number of quantitative proteomic methods have been adopted to measure absolute quantities of proteins using mass spectrometry. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) coupled with internal peptide standards, i.e., stable isotope-coded peptide dilution series, which originated in the field of analytical chemistry, has become a widely applied method in absolute quantitative proteomics research. This approach provides more and more absolute protein quantitation results of high confidence. As the quantitative study of posttranslational modifications (PTMs), which modulate the biological activity of proteins, is crucial for biological science, and each isoform may contribute a unique biological function, degradation, and/or subcellular location, the absolute quantitation of protein PTM isoforms has become more relevant to their biological significance. In order to obtain the absolute cellular amount of a PTM isoform of a protein accurately, the impacts of protein fractionation, protein enrichment, and proteolytic digestion yield should be taken into consideration, and those effects occurring before the differentially stable isotope-coded PTM peptide standards are spiked into the sample peptides have to be corrected. Assisted with stable isotope-labeled peptide standards, the absolute quantitation of isoforms of posttranslationally modified protein (AQUIP) method takes all these factors into account and determines the absolute amount of a protein PTM isoform from the absolute amount of the protein of interest and the PTM occupancy at the site of the protein. The absolute amount of the protein of interest is inferred by quantifying both the absolute amounts of a few PTM

  18. Absolute realization of low BRDF value

    NASA Astrophysics Data System (ADS)

    Liu, Zilong; Liao, Ningfang; Li, Ping; Wang, Yu

    2010-10-01

    Low BRDF values are widely used in critical domains such as space and military applications. These values are below 0.1 sr^-1, so their absolute realization is the most critical issue in the absolute measurement of BRDF. To develop an absolute-value realization theory for BRDF, we define an arithmetic operator for BRDF and obtain an absolute measurement equation for BRDF based on radiance. This is a new theoretical method for solving the realization problem of low BRDF values. The method is realized on a self-designed common double-orientation structure in space. By designing an additional structure to extend the range of the measurement system, together with control and processing software, absolute realization of low BRDF values is achieved. A material of low BRDF value was measured in this measurement system, and its spectral BRDF values at different angles over the whole space are all below 0.4 sr^-1. This process is a representative procedure for the measurement of low BRDF values. A corresponding uncertainty analysis of the measurement data is given, based on the new theory of absolute realization and the performance of the measurement system. The relative expanded uncertainty of the measurement data is 0.078. This uncertainty analysis is suitable for all measurements that use the new theory of absolute realization and the corresponding measurement system.

  19. Relatively high motivation for context-evoked reward produces the magnitude effect in rats.

    PubMed

    Yuki, Shoko; Okanoya, Kazuo

    2014-09-01

    Using a concurrent-chain schedule, we demonstrated the effect of absolute reinforcement (i.e., the magnitude effect) on choice behavior in rats. In general, animals' simultaneous choices conform to a relative reinforcement ratio between alternatives. However, studies in pigeons and rats have found that on a concurrent-chain schedule, the overall reinforcement ratio, or absolute amount, also influences choice. The effect of reinforcement amount has also been studied in inter-temporal choice situations, and this effect has been referred to as the magnitude effect. The magnitude effect has been observed in humans under various conditions, but little research has assessed it in animals (e.g., pigeons and rats). The present study confirmed the effect of reinforcement amount in rats during simultaneous and inter-temporal choice situations. We used a concurrent-chain procedure to examine the cause of the magnitude effect during inter-temporal choice. Our results suggest that rats can use differences in reinforcement amount as a contextual cue during choice, and the direction of the magnitude effect in rats might be similar to humans when using the present procedure. Furthermore, our results indicate that the magnitude effect was caused by the initial-link effect when the reinforcement amount was relatively small, while a loss aversion tendency was observed when the reinforcement amount changed within a session. The emergence of the initial-link effect and loss aversion suggests that rats make choices through cognitive processes predicted by prospect theory.

  20. Representations of the Magnitudes of Fractions

    ERIC Educational Resources Information Center

    Schneider, Michael; Siegler, Robert S.

    2010-01-01

    We tested whether adults can use integrated, analog, magnitude representations to compare the values of fractions. The only previous study on this question concluded that even college students cannot form such representations and instead compare fraction magnitudes by representing numerators and denominators as separate whole numbers. However,…

  1. Reward Magnitude Effects on Temporal Discrimination

    ERIC Educational Resources Information Center

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2010-01-01

    Changes in reward magnitude or value have been reported to produce effects on timing behavior, which have been attributed to changes in the speed of an internal pacemaker in some instances and to attentional factors in other cases. The present experiments therefore aimed to clarify the effects of reward magnitude on timing processes. In Experiment…

  2. Magnitude Anomalies and Propagation of Local Phases

    DTIC Science & Technology

    1983-01-31

    statistically significant variation of magnitude anomalies versus one of the above parameters. A contrario, we observed a significant dependence between...enough to demand a more detailed analysis. III - Local dependence of magnitude anomalies. A smoothing of our data on all quakes originating in the same

  3. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigations) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.
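
    The authors' specific construction (biorthogonal scaling functions shaped like Lorentzians) is not reproduced here; as a generic illustration of wavelet-based peak finding on noisy Lorentzian data, a continuous-wavelet-transform peak search such as SciPy's find_peaks_cwt can be sketched as follows (the synthetic data and width range are arbitrary choices, not the paper's filters):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def lorentzian(x, x0, gamma):
    """Unit-height Lorentzian centred at x0 with half-width gamma."""
    return gamma**2 / ((x - x0)**2 + gamma**2)

# Synthetic input: a sum of two Lorentzians plus noise, as in the paper's setting.
x = np.linspace(0.0, 100.0, 2000)
clean = lorentzian(x, 30.0, 1.5) + 0.6 * lorentzian(x, 62.0, 2.0)
noisy = clean + np.random.normal(0.0, 0.05, x.size)

# Wavelet-based peak detection over a range of expected peak widths (in samples).
peak_idx = find_peaks_cwt(noisy, widths=np.arange(10, 80))
print(x[peak_idx])   # approximate peak positions
```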

  4. How to use your peak flow meter

    MedlinePlus

    ... get your child used to them. Find Your Personal Best To find your personal best peak flow ... peak flow meter; Bronchial asthma - peak flow meter Images How to measure peak flow References Durrani SR, ...

  5. Magnitude and Frequency of Floods on Nontidal Streams in Delaware

    USGS Publications Warehouse

    Ries, Kernell G.; Dillow, Jonathan J.A.

    2006-01-01

    Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for return frequencies ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented. The equations incorporate drainage area, forest cover, impervious
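
    The regionalization described above ends in log-linear regression equations relating peak flows to basin characteristics. The sketch below shows only the general form of such an equation applied at an ungaged site; the coefficients and the choice of explanatory variables are placeholders, not the published Delaware equations (which vary by physiographic province and also use forest cover):

```python
import math

def q100_regional_estimate(drainage_area_sqmi, impervious_pct,
                           b0=2.0, b1=0.75, b2=0.01):
    """Illustrative regional regression of the general form

        log10(Q100) = b0 + b1*log10(DA) + b2*IMP

    with placeholder coefficients. Returns an estimated 100-year (1-percent
    annual exceedance probability) peak flow in cubic feet per second.
    """
    log_q = b0 + b1 * math.log10(drainage_area_sqmi) + b2 * impervious_pct
    return 10.0 ** log_q

# Hypothetical ungaged site: 25 square miles, 10 percent impervious cover.
print(round(q100_regional_estimate(25.0, 10.0)))
```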

  6. Reward magnitude effects on temporal discrimination

    PubMed Central

    Galtress, Tiffany; Kirkpatrick, Kimberly

    2014-01-01

    Changes in reward magnitude or value have been reported to produce effects on timing behavior, which have been attributed to changes in the speed of an internal pacemaker in some instances and to attentional factors in other cases. The present experiments therefore aimed to clarify the effects of reward magnitude on timing processes. In Experiment 1, rats were trained to discriminate a short (2 s) vs. a long (8 s) signal followed by testing with intermediate durations. Then, the reward on short or long trials was increased from 1 to 4 pellets in separate groups. Experiment 2 measured the effect of different reward magnitudes associated with the short vs. long signals throughout training. Finally, Experiment 3 controlled for satiety effects during the reward magnitude manipulation phase. A general flattening of the psychophysical function was evident in all three experiments, suggesting that unequal reward magnitudes may disrupt attention to duration. PMID:24965705

  7. Local magnitudes of small contained explosions.

    SciTech Connect

    Chael, Eric Paul

    2009-12-01

    The relationship between explosive yield and seismic magnitude has been extensively studied for underground nuclear tests larger than about 1 kt. For monitoring smaller tests over local ranges (within 200 km), we need to know whether the available formulas can be extrapolated to much lower yields. Here, we review published information on amplitude decay with distance, and on the seismic magnitudes of industrial blasts and refraction explosions in the western U. S. Next we measure the magnitudes of some similar shots in the northeast. We find that local magnitudes ML of small, contained explosions are reasonably consistent with the magnitude-yield formulas developed for nuclear tests. These results are useful for estimating the detection performance of proposed local seismic networks.
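
    The magnitude-yield formulas referred to above are log-linear in yield. The sketch below shows that functional form only; the coefficients are placeholders of roughly the magnitude reported for nuclear-test relations, not the values fitted in this report.

```python
import math

def ml_from_yield(yield_kt, a=4.4, b=0.75):
    """Generic magnitude-yield relation ML = a + b*log10(Y), Y in kilotons.

    Placeholder coefficients for illustration; the report's point is that
    small contained chemical shots remain reasonably consistent with
    relations of this form derived from nuclear tests.
    """
    return a + b * math.log10(yield_kt)

# Example: a contained 10-ton (0.01 kt) shot under the placeholder fit.
print(round(ml_from_yield(0.01), 2))
```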

  8. Constraining cosmology with shear peak statistics: tomographic analysis

    NASA Astrophysics Data System (ADS)

    Martinet, Nicolas; Bartlett, James G.; Kiessling, Alina; Sartoris, Barbara

    2015-09-01

    The abundance of peaks in weak gravitational lensing maps is a potentially powerful cosmological tool, complementary to measurements of the shear power spectrum. We study peaks detected directly in shear maps, rather than convergence maps, an approach that has the advantage of working directly with the observable quantity, the galaxy ellipticity catalog. Using large numbers of numerical simulations to accurately predict the abundance of peaks and their covariance, we quantify the cosmological constraints attainable by a large-area survey similar to that expected from the Euclid mission, focusing on the density parameter, Ωm, and on the power spectrum normalization, σ8, for illustration. We present a tomographic peak counting method that improves the conditional (marginal) constraints by a factor of 1.2 (2) over those from a two-dimensional (i.e., non-tomographic) peak-count analysis. We find that peak statistics provide constraints an order of magnitude less accurate than those from the cluster sample in the ideal situation of a perfectly known observable-mass relation; however, when the scaling relation is not known a priori, the shear-peak constraints are twice as strong and orthogonal to the cluster constraints, highlighting the value of using both clusters and shear-peak statistics.

  9. Peak Stress Testing Protocol Framework

    EPA Science Inventory

    Treatment of peak flows during wet weather is a common challenge across the country for municipal wastewater utilities with separate and/or combined sewer systems. Increases in wastewater flow resulting from infiltration and inflow (I/I) during wet weather events can result in op...

  10. Hubbert's Peak -- A Physicist's View

    NASA Astrophysics Data System (ADS)

    McDonald, Richard

    2011-04-01

    Oil, as used in agriculture and transportation, is the lifeblood of modern society. It is finite in quantity and will someday be exhausted. In 1956, Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Bartlett extended this work in publications and lectures on the finite nature of oil and its production peak and depletion. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. Central to these analyses are estimates of total ``oil in place'' obtained from engineering studies of oil reservoirs, as this quantity determines the area under Hubbert's peak. Knowing the production history and the total oil in place allows us to make estimates of reserves, and therefore future oil availability. We will then examine reserves data for various countries, in particular OPEC countries, and see if these data tell us anything about the future availability of oil. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
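
    Hubbert's model treats cumulative production as a logistic curve, so the production rate is a symmetric bell whose area equals the total oil in place (the ultimately recoverable resource) and whose peak occurs when half of that resource has been produced. A minimal sketch with purely illustrative, unfitted parameter values:

```python
import numpy as np

def hubbert_production(t, urr, k, t_peak):
    """Hubbert's logistic model: cumulative production Q(t) is a logistic
    curve with ultimate recoverable resource `urr`, so the production rate
    dQ/dt = k*Q*(1 - Q/urr) is a symmetric bell peaking at t_peak, when
    exactly half of the resource has been produced.
    """
    q = urr / (1.0 + np.exp(-k * (t - t_peak)))   # cumulative production
    return k * q * (1.0 - q / urr)                # production rate dQ/dt

# Illustrative parameters: urr in billions of barrels, k in 1/yr.
years = np.arange(1900, 2101)
rate = hubbert_production(years, urr=2000.0, k=0.06, t_peak=2005.0)
print(int(years[np.argmax(rate)]))   # recovers the assumed peak year
```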

  11. Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska

    USGS Publications Warehouse

    Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.

    1999-01-01

    Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1 percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more

  12. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

    A five-step procedure is provided to help students in making the assignment absolute configuration less bothersome. Examples for both single (2-butanol) and multi-chiral carbon (3-chloro-2-butanol) molecules are included. (JN)

  13. Magnifying absolute instruments for optically homogeneous regions

    SciTech Connect

    Tyc, Tomas

    2011-09-15

    We propose a class of magnifying absolute optical instruments with a positive isotropic refractive index. They create magnified stigmatic images, either virtual or real, of optically homogeneous three-dimensional spatial regions within geometrical optics.

  14. The Simplicity Argument and Absolute Morality

    ERIC Educational Resources Information Center

    Mijuskovic, Ben

    1975-01-01

    In this paper the author has maintained that there is a similarity of thought to be found in the writings of Cudworth, Emerson, and Husserl in his investigation of an absolute system of morality. (Author/RK)

  15. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data

    SciTech Connect

    N. Seth Carpenter; Suzette J. Payne; Annette L. Schafer

    2012-06-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range, U.S.A. faults. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths (L{sub seg}) where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements (D) along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e. M {approx} L{sub seg}, should equal M {approx} D). Typically, the empirical relationships are derived from historical earthquake data and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating L{sub seg} with surface rupture length (SRL). Many large historical events yielded secondary and/or sympathetic faulting (e.g. 1983 Borah Peak, Idaho earthquake) which are included in the measurement of SRL and used to derive empirical relationships. Therefore, calculating magnitude from the M {approx} SRL relationship using L{sub seg} as SRL leads to an underestimation of magnitude and the M {approx} L{sub seg} and M {approx} D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude (Mw) and length, where length is L{sub seg} instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw {approx} L{sub seg} results are strikingly consistent
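
    The discrepancy the abstract describes can be reproduced with regressions of the Wells and Coppersmith (1994) form, M = a + b*log10(dimension). The coefficients below are approximate all-slip-type values quoted for illustration only, and L{sub seg} is deliberately substituted for SRL, which is exactly the substitution the abstract questions:

```python
import math

def mw_from_length(length_km, a=5.08, b=1.16):
    """M versus log10(rupture length, km), Wells & Coppersmith (1994) form;
    coefficients approximate, for illustration only."""
    return a + b * math.log10(length_km)

def mw_from_avg_displacement(disp_m, a=6.93, b=0.82):
    """M versus log10(average displacement, m); coefficients again
    approximate, for illustration only."""
    return a + b * math.log10(disp_m)

# A 25-km segment carrying about 2 m of displacement per event yields a
# noticeably larger magnitude from D than from L_seg, as in the abstract.
print(round(mw_from_length(25.0), 2), round(mw_from_avg_displacement(2.0), 2))
```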

  16. Absolute cross sections of compound nucleus reactions

    NASA Astrophysics Data System (ADS)

    Capurro, O. A.

    1993-11-01

    The program SEEF is a Fortran IV computer code for the extraction of absolute cross sections of compound nucleus reactions. When the evaporation residue is fed by its parents, only cumulative cross sections will be obtained from off-line gamma ray measurements. But, if one has the parent excitation function (experimental or calculated), this code will make it possible to determine absolute cross sections of any exit channel.

  17. Kelvin and the absolute temperature scale

    NASA Astrophysics Data System (ADS)

    Erlichson, Herman

    2001-07-01

    This paper describes the absolute temperature scale of Kelvin (William Thomson). Kelvin found that Carnot's axiom about heat being a conserved quantity had to be abandoned. Nevertheless, he found that Carnot's fundamental work on heat engines was correct. Using the concept of a Carnot engine Kelvin found that Q1/Q2 = T1/T2. Thermometers are not used to obtain absolute temperatures since they are calculated temperatures.
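
    Written out, Kelvin's definition ties the ratio of absolute temperatures to the ratio of heats exchanged by a reversible Carnot engine, from which the Carnot efficiency follows directly:

$$
\frac{Q_1}{Q_2} = \frac{T_1}{T_2}
\quad\Longrightarrow\quad
T_2 = T_1\,\frac{Q_2}{Q_1},
\qquad
\eta_{\mathrm{Carnot}} = 1 - \frac{T_2}{T_1}.
$$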

  18. Modeled future peak streamflows in four coastal Maine rivers

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Dudley, Robert W.

    2013-01-01

    To safely and economically design bridges and culverts, it is necessary to compute the magnitude of peak streamflows that have specified annual exceedance probabilities (AEPs). These peak flows are also needed for effective floodplain management. Annual precipitation and air temperature in the northeastern United States are in general projected to increase during the 21st century (Hayhoe and others, 2007). It is therefore important for engineers and resource managers to understand how peak flows may change in the future. This Fact Sheet, prepared in cooperation with the Maine Department of Transportation, presents a summary of modeled changes in peak flows at four basins in coastal Maine on the basis of projected changes in air temperature and precipitation. The full Scientific Investigations Report (Hodgkins and Dudley, 2013) is available at http://pubs.usgs.gov/sir/2013/5080/.

  19. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, historical earthquakes reveal that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate the potential seismic hazard, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM; 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used for finding the magnitude relationships. The peak displacement of a seismogram recorded on a broadband seismometer shows less scatter than the peak velocity, while the scatter of the peak displacement and that of the peak velocity of an accelerogram are similar to each other. The peak displacement of a seismogram differs from that of an accelerogram, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated after applying low-pass filters at 3 Hz and 10 Hz, and the 10 Hz low-pass filter yields a better estimate than the 3 Hz filter. It is found that most of the peak amplitudes and maximum predominant periods are estimated within 1 s after triggering.
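
    One widely used recipe for the maximum predominant period of the initial P wave is the recursive tau_p estimator employed by several early-warning systems; the study's exact filters and time windows are not reproduced here, so the following is an illustrative sketch only.

```python
import numpy as np

def tau_p_max(velocity, dt, alpha=0.99):
    """Recursive predominant-period estimate of the early P wave.

    `velocity` is the vertical-component velocity seismogram (array), `dt`
    the sample interval in seconds, and `alpha` a smoothing constant.
    Returns the maximum of tau_p(t) = 2*pi*sqrt(X/D), where X and D are
    recursively smoothed powers of the signal and of its time derivative.
    """
    velocity = np.asarray(velocity, dtype=float)
    deriv = np.gradient(velocity, dt)
    x = d = 0.0
    tau = np.zeros(velocity.size)
    for i, (v, a) in enumerate(zip(velocity, deriv)):
        x = alpha * x + v * v    # smoothed signal power
        d = alpha * d + a * a    # smoothed derivative power
        tau[i] = 2.0 * np.pi * np.sqrt(x / d) if d > 0.0 else 0.0
    return tau.max()
```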

  20. Magnitude and frequency of floods in Nebraska

    USGS Publications Warehouse

    Beckman, Emil W.

    1976-01-01

    Observed maximum flood peaks at 303 gaging stations with 13 or more years of record and significant peaks at 57 short-term stations and 31 miscellaneous sites are useful in designing flood-control works for maximum safety from flood damage. Comparison is made with maximum observed floods in the United States.

  1. Complete identification of the Parkes half-Jansky sample of GHz peaked spectrum radio galaxies

    NASA Astrophysics Data System (ADS)

    de Vries, N.; Snellen, I. A. G.; Schilizzi, R. T.; Lehnert, M. D.; Bremer, M. N.

    2007-03-01

    Context: Gigahertz Peaked Spectrum (GPS) radio galaxies are generally thought to be the young counterparts of classical extended radio sources. Statistically complete samples of GPS sources are vital for studying the early evolution of radio-loud AGN and the trigger of their nuclear activity. The "Parkes half-Jansky" sample of GPS radio galaxies is such a sample, representing the southern counterpart of the 1998 Stanghellini sample of bright GPS sources. Aims: As a first step of the investigation of the sample, the host galaxies need to be identified and their redshifts determined. Methods: Deep R-band VLT-FORS1 and ESO 3.6 m EFOSC II images and long slit spectra have been taken for the unidentified sources in the sample. Results: We have identified all twelve previously unknown host galaxies of the radio sources in the sample. Eleven have host galaxies in the range 21.0 < RC < 23.0, while one object, PKS J0210+0419, is identified in the near infrared with a galaxy with Ks = 18.3. The redshifts of 21 host galaxies have been determined in the range 0.474 < z < 1.539, bringing the total number of redshifts to 39 (80%). Analysis of the absolute magnitudes of the GPS host galaxies shows that at z>1 they are on average a magnitude fainter than classical 3C radio galaxies, as found in earlier studies. However, their rest-frame UV luminosities indicate that there is an extra light contribution from the AGN, or from a population of young stars. Based on observations collected at the European Southern Observatory Very Large Telescope, Paranal, Chile (ESO prog. ID No. 073.B-0289(B)) and the European Southern Observatory 3.6 m Telescope, La Silla, Chile (prog. ID No. 073.B-0289(A)). Appendices are only available in electronic form at http://www.aanda.org

  2. The effect of background galaxy contamination on the absolute magnitude and light curve speed class of type Ia supernovae

    NASA Technical Reports Server (NTRS)

    Boisseau, John R.; Wheeler, J. Craig

    1991-01-01

    Observational data are presented in support of the hypothesis that background galaxy contamination is present in the photometric data of Type Ia supernovae and that this effect can account for the observed dispersion in the light curve speeds of most Type Ia supernovae. The implication is that the observed dispersion in beta is artificial and that most Type Ia supernovae have nearly homogeneous light curves. The result supports the notion that Type Ia supernovae are good standard candles.

  3. A potential for overestimating the absolute magnitudes of second virial coefficients by small-angle X-ray scattering.

    PubMed

    Scott, David J; Patel, Trushar R; Winzor, Donald J

    2013-04-15

    Theoretical consideration is given to the effect of cosolutes (including buffer and electrolyte components) on the determination of second virial coefficients for proteins by small-angle X-ray scattering (SAXS)-a factor overlooked in current analyses in terms of expressions for a two-component system. A potential deficiency of existing practices is illustrated by reassessment of published results on the effect of polyethylene glycol concentration on the second virial coefficient for urate oxidase. This error reflects the substitution of I(0,c3,0), the scattering intensity in the limit of zero scattering angle and solute concentration, for I(0,0,0), the corresponding parameter in the limit of zero cosolute concentration (c3) as well. Published static light scattering results on the dependence of the apparent molecular weight of ovalbumin on buffer concentration are extrapolated to zero concentration to obtain the true value (M2) and thereby establish the feasibility of obtaining the analogous SAXS parameter, I(0,0,0), experimentally.

  4. Absolute quantum yield measurement of powder samples.

    PubMed

    Moreno, Luis A

    2012-05-12

    quantum yield calculation. 5. Corrected quantum yield calculation. 6. Chromaticity coordinates calculation using Report Generator program. The Hitachi F-7000 Quantum Yield Measurement System offer advantages for this application, as follows: High sensitivity (S/N ratio 800 or better RMS). Signal is the Raman band of water measured under the following conditions: Ex wavelength 350 nm, band pass Ex and Em 5 nm, response 2 sec), noise is measured at the maximum of the Raman peak. High sensitivity allows measurement of samples even with low quantum yield. Using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate. Highly accurate measurement with a dynamic range of 6 orders of magnitude allows for measurements of both sharp scattering peaks with high intensity, as well as broad fluorescence peaks of low intensity under the same conditions. High measuring throughput and reduced light exposure to the sample, due to a high scanning speed of up to 60,000 nm/minute and automatic shutter function. Measurement of quantum yield over a wide wavelength range from 240 to 800 nm. Accurate quantum yield measurements are the result of collecting instrument spectral response and integrating sphere correction factors before measuring the sample. Large selection of calculated parameters provided by dedicated and easy to use software. During this video we will measure sodium salicylate in powder form which is known to have a quantum yield value of 0.4 to 0.5.
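
    At its core the measurement reduces to a photon balance: the quantum yield is the number of photons emitted by the sample divided by the number of photons it absorbs from the excitation beam, both obtained by integrating spectrally corrected intensities from the integrating sphere. A minimal sketch of that relation (the instrument software additionally applies sphere and detector correction factors, which are not modelled here):

```python
import numpy as np

def absolute_quantum_yield(em_sample, exc_blank, exc_sample):
    """Generic integrating-sphere quantum yield.

    `em_sample` is the sample's emission band, `exc_blank` and `exc_sample`
    the excitation (scatter) peaks measured without and with the sample.
    All three are spectrally corrected intensities on the same uniform
    wavelength grid, so plain sums stand in for the spectral integrals.
    """
    emitted = np.sum(em_sample)                        # photons emitted
    absorbed = np.sum(exc_blank) - np.sum(exc_sample)  # photons absorbed
    return emitted / absorbed
```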

  5. Determination of the Meteor Limiting Magnitude

    NASA Technical Reports Server (NTRS)

    Kingery, A.; Blaauw, R.; Cooke, W. J.

    2016-01-01

    The limiting meteor magnitude of a meteor camera system will depend on the camera hardware and software, sky conditions, and the location of the meteor radiant. Some of these factors are constants for a given meteor camera system, but many change between meteor showers or sporadic sources and on both long and short timescales. Since the limiting meteor magnitude ultimately gets used to calculate the limiting meteor mass for a given data set, it is important to have an understanding of these factors and to monitor how they change throughout the night, as a 0.5 magnitude uncertainty in limiting magnitude translates to an uncertainty in limiting mass by a factor of two.

  6. METHOD OF PEAK CURRENT MEASUREMENT

    DOEpatents

    Baker, G.E.

    1959-01-20

    The measurement and recording of peak electrical currents are described, and a method is presented for utilizing the magnetic field of the current to erase a portion of an alternating constant-frequency and constant-amplitude signal from a magnetic medium such as a magnetic tape. A portion of the flux from the current-carrying conductor is concentrated into a magnetic path of defined area on the tape. After the current has been recorded, the tape is played back. The amplitude of the signal from the portion of the tape immediately adjacent to the defined flux area and the amplitude of the signal from the portion of the tape within the area are compared with the amplitude of the signal from an unerased portion of the tape to determine the percentage of signal erasure, and thereby obtain the peak value of currents flowing in the conductor.

  7. SPANISH PEAKS PRIMITIVE AREA, MONTANA.

    USGS Publications Warehouse

    Calkins, James A.; Pattee, Eldon C.

    1984-01-01

    A mineral survey of the Spanish Peaks Primitive Area, Montana, disclosed a small low-grade deposit of demonstrated chromite and asbestos resources. The chances for discovery of additional chrome resources are uncertain and the area has little promise for the occurrence of other mineral or energy resources. A reevaluation, sampling at depth, and testing for possible extensions of the Table Mountain asbestos and chromium deposit should be undertaken in the light of recent interpretations regarding its geologic setting.

  8. Twin Peaks (B/W)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Twin Peaks are modest-size hills to the southwest of the Mars Pathfinder landing site. They were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The peaks are approximately 30-35 meters (~100 feet) tall. North Twin is approximately 860 meters (2800 feet) from the lander, and South Twin is about a kilometer away (3300 feet). The scene includes bouldery ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of the South Twin Peak. The large rock at the right edge of the scene is nicknamed 'Hippo'. This rock is about a meter (3 feet) across and 25 meters (80 feet) distant.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The IMP was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  9. Peak width issues with generalised 2D correlation NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Kirwan, Gemma M.; Adams, Michael J.

    2008-12-01

    Two-dimensional spectral correlation analysis is shown to be sensitive to fluctuations in spectral peak width as a function of the perturbation variable. This is particularly significant where peak width fluctuations are of a similar order of magnitude to the peak width values themselves and where changes in peak width are not random but are, for example, proportional to intensity. In such cases these trends appear in the asynchronous matrix as false peaks that interfere with interpretation of the data. Complex, narrow-band spectra such as those provided by 1H NMR spectroscopy are demonstrated to be prone to such interference. 2D correlation analysis was applied to a series of NMR spectra corresponding to a commercial wine fermentation, in which the samples collected over a period of several days exhibit dramatic changes in the concentration of minor and major components. The interference due to changing peak width effects is eliminated by synthesizing the recorded spectra using a constant peak width value prior to performing 2D correlation analysis.
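
    For reference, the generalized 2D correlation maps discussed here are built from the mean-centred (dynamic) spectra via Noda's synchronous and asynchronous matrices; a minimal sketch follows (it omits the constant-peak-width resynthesis step the authors use to suppress the false asynchronous peaks):

```python
import numpy as np

def noda_2d_correlation(spectra):
    """Generalized 2D correlation (Noda).

    `spectra` is an (m, n) array of m perturbation-ordered spectra over n
    spectral points. Returns the synchronous and asynchronous matrices.
    """
    m, _ = spectra.shape
    dyn = spectra - spectra.mean(axis=0)        # dynamic (mean-centred) spectra
    sync = dyn.T @ dyn / (m - 1)
    # Hilbert-Noda transform matrix: N[j, k] = 1/(pi*(k-j)) for j != k, else 0.
    j, k = np.indices((m, m))
    with np.errstate(divide="ignore"):
        noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = dyn.T @ noda @ dyn / (m - 1)
    return sync, asyn
```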

  10. Reduction in peak oxygen uptake after prolonged bed rest

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. E.; Kozlowski, S.

    1982-01-01

    The hypothesis that the magnitude of the reduction in peak oxygen uptake (VO2) after bed rest is directly proportional to the level of pre-bed rest peak VO2 is tested. Complete pre- and post-bed rest working capacity and body weight data were obtained from studies involving 24 men (19-24 years old) and 8 women (23-34 years old) who underwent bed rest for 14-20 days with no remedial treatments. Results of regression analyses of the percent change in post-bed rest peak VO2 on pre-bed rest peak VO2 with 32 subjects show correlation coefficients of -0.03 (NS) for data expressed in l/min and -0.17 for data expressed in ml/min-kg. In addition, significant correlations are found that support the hypothesis only when peak VO2 data are analyzed separately from studies that utilized the cycle ergometer, particularly with subjects in the supine position, as opposed to data obtained from treadmill peak VO2 tests. It is concluded that orthostatic factors, associated with the upright body position and relatively high levels of physical fitness from endurance training, appear to increase the variability of pre- and particularly post-bed rest peak VO2 data, which would lead to rejection of the hypothesis.

  11. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.

  12. Absolute cross-section measurements for ionization of He Rydberg atoms in collisions with K

    NASA Astrophysics Data System (ADS)

    Deng, F.; Renwick, S.; Martínez, H.; Morgan, T. J.

    1995-11-01

    Absolute cross sections for ionization of 1.5-10.0 keV/amu Rydberg helium atoms in principal quantum states 12<=n<=15 due to collisions with potassium have been measured. The data are compared with the free-electron cross section at equal velocity. Our results for the collisional ionization cross sections (σi) agree both in shape and absolute magnitude with the data available for the total electron-scattering cross sections (σe) and support recent theoretical models for ionization of Rydberg atoms with neutral perturbers.

  13. Strong nonlinear dependence of the spectral amplification factors of deep Vrancea earthquakes on magnitude

    NASA Astrophysics Data System (ADS)

    Marmureanu, Gheorghe; Ortanza Cioflan, Carmen; Marmureanu, Alexandru

    2010-05-01

    Nonlinear effects in ground motion during large earthquakes have long been a controversial issue between seismologists and geotechnical engineers. Aki wrote in 1993: "Nonlinear amplification at sediment sites appears to be more pervasive than seismologists used to think… Any attempt at seismic zonation must take into account the local site condition and this nonlinear amplification" (Local site effects on weak and strong ground motion, Tectonophysics, 218, 93-111). In other words, the seismological detection of nonlinear site effects requires a simultaneous understanding of the effects of the earthquake source, the propagation path, and the local geological site conditions. The difficulty for seismologists in demonstrating nonlinear site effects has been that the effect is overshadowed by the overall patterns of shock generation and path propagation. In order to provide quantitative evidence of large nonlinear effects, the researchers from the National Institute for Earth Physics introduced the spectral amplification factor (SAF), defined as the ratio between the maximum spectral absolute acceleration (Sa), relative velocity (Sv), or relative displacement (Sd) from response spectra at a given fraction of critical damping at the fundamental period and the corresponding peak values of acceleration (a-max), velocity (v-max), and displacement (d-max) from the processed strong-motion record, and pointed out that there is a strong nonlinear dependence on earthquake magnitude and site conditions. The spectral amplification factors (SAF) are finally computed for absolute accelerations at a 5% fraction of critical damping (β=5%) at five seismic stations: Bucharest-INCERC (soft soils, Quaternary layers with a total thickness of 800 m); Bucharest-Magurele (dense sand and loess on 350 m); the Cernavoda Nuclear Power Plant site (marl, loess, limestone on 270 m); Bacau (gravel and loess on 20 m); and Iassy (loess, sand, clay, gravel on 60 m), for the last strong and deep Vrancea earthquakes: March 4, 1977 (MGR = 7.2 and h = 95 km); August 30
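
    In code, the acceleration-based SAF at the fundamental period reduces to dividing the 5%-damped spectral acceleration at T0 by the record's peak ground acceleration; the sketch below assumes the response spectrum itself has already been computed upstream (e.g., by Newmark integration), which is not shown:

```python
import numpy as np

def spectral_amplification_factor(periods, sa, accel_time_history, t0):
    """SAF as defined in the abstract, for the acceleration case.

    `periods` and `sa` give the 5%-damped absolute-acceleration response
    spectrum of the record, `accel_time_history` is the processed
    strong-motion accelerogram, and `t0` is the site fundamental period.
    """
    sa_t0 = np.interp(t0, periods, sa)           # Sa(T0, beta = 5%)
    pga = np.max(np.abs(accel_time_history))     # peak ground acceleration
    return sa_t0 / pga
```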

  14. Magnitudes and timescales of total solar irradiance variability

    NASA Astrophysics Data System (ADS)

    Kopp, Greg

    2016-07-01

    The Sun's net radiative output varies on timescales of minutes to gigayears. Direct measurements of the total solar irradiance (TSI) show changes in the spatially- and spectrally-integrated radiant energy on timescales as short as minutes to as long as a solar cycle. Variations of ~0.01% over a few minutes are caused by the ever-present superposition of convection and oscillations with very large solar flares on rare occasion causing slightly-larger measurable signals. On timescales of days to weeks, changing photospheric magnetic activity affects solar brightness at the ~0.1% level. The 11-year solar cycle shows variations of comparable magnitude with irradiances peaking near solar maximum. Secular variations are more difficult to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Historical reconstructions of the Sun's irradiance based on indicators of solar-surface magnetic activity, such as sunspots, faculae, and cosmogenic isotope records, suggest solar brightness changes over decades to millennia, although the magnitudes of these variations have high uncertainties due to the indirect historical records on which they rely. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities. In this manuscript I summarize the Sun's variability magnitudes over different temporal regimes and discuss the irradiance record's relevance for solar and climate studies as well as for detections of exo-solar planets transiting Sun-like stars.

  15. Absolute Radiation Measurements in Earth and Mars Entry Conditions

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.

    2014-01-01

    This paper reports on the measurement of radiative heating for shock heated flows which simulate conditions for Mars and Earth entries. Radiation measurements are made in NASA Ames' Electric Arc Shock Tube at velocities from 3-15 km/s in mixtures of N2/O2 and CO2/N2/Ar. The technique and limitations of the measurement are summarized in some detail. The absolute measurements will be discussed in regards to spectral features, radiative magnitude and spatiotemporal trends. Via analysis of spectra it is possible to extract properties such as electron density, and rotational, vibrational and electronic temperatures. Relaxation behind the shock is analyzed to determine how these properties relax to equilibrium and are used to validate and refine kinetic models. It is found that, for some conditions, some of these values diverge from non-equilibrium indicating a lack of similarity between the shock tube and free flight conditions. Possible reasons for this are discussed.

  16. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official Sciences, especially all natural sciences, respect in their research the principle of methodic naturalism, i.e. they consider all phenomena as entirely natural and therefore never adduce or cite supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e. the entire Nature as a Whole, that justifies the scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered as the own scientifically justified Natural Absolute of Science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e. most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, as well as all other sentient and conscious individuals. At the turn of the 20th century, Science began to look for a theory of everything, for a final theory, for a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will step by step replace the traditional supernatural personal Absolute.

  17. Magnitude M w in metropolitan France

    NASA Astrophysics Data System (ADS)

    Cara, Michel; Denieul, Marylin; Sèbe, Olivier; Delouis, Bertrand; Cansi, Yves; Schlupp, Antoine

    2016-12-01

    The recent seismicity catalogue of metropolitan France, Sismicité Instrumentale de l'Hexagone (SI-Hex), covers the period 1962-2009. It is the outcome of a multipartner project conducted between 2010 and 2013. In this catalogue, moment magnitudes (M w) are mainly determined from short-period velocimetric records, the same records as those used by the Laboratoire de Détection Géophysique (LDG) for issuing local magnitudes (M L) since 1962. Two distinct procedures are used, depending on whether M L-LDG is larger or smaller than 4. For M L-LDG >4, M w is computed by fitting the coda-wave amplitude on the raw records. Station corrections and regional properties of coda-wave attenuation are taken into account in the computations. For M L-LDG ≤4, M w is converted from M L-LDG through linear regression rules. In the smallest magnitude range, M L-LDG <3.1, special attention is paid to the non-unity slope of the relation between the local magnitudes and M w. All M w values determined during the SI-Hex project are calibrated against reference M w values of recent events. For some small events, no M L-LDG has been determined; local magnitudes issued by other French networks, or the LDG duration magnitude (M D), are first converted into M L-LDG before applying the conversion rules. This paper shows how the different sources of information and the different magnitude ranges are combined in order to determine an unbiased set of M w for all 38,027 events of the catalogue.

  18. Limiting Maximum Magnitude by Fault Dimensions (Invited)

    NASA Astrophysics Data System (ADS)

    Stirling, M. W.

    2010-12-01

    A standard practice of seismic hazard modeling is to combine fault and background seismicity sources to produce a multidisciplinary source model for a region. Background sources are typically modeled with a Gutenberg-Richter magnitude-frequency distribution developed from historical seismicity catalogs, and fault sources are typically modeled with earthquakes that are limited in size by the mapped fault rupture dimensions. The combined source model typically exhibits a Gutenberg-Richter-like distribution due to there being many short faults relative to the number of longer faults. The assumption that earthquakes are limited by the mapped fault dimensions therefore appears to be consistent with the Gutenberg-Richter relationship, one of the fundamental laws of seismology. Recent studies of magnitude-frequency distributions for California and New Zealand have highlighted an excess of fault-derived earthquakes relative to the log-linear extrapolation of the Gutenberg-Richter relationship from the smaller magnitudes (known as the “bulge”). Relaxing the requirement of maximum magnitude being limited by fault dimensions is a possible solution for removing the “bulge” to produce a perfectly log-linear Gutenberg-Richter distribution. An alternative perspective is that the “bulge” does not represent a significant departure from a Gutenberg-Richter distribution, and may simply be an artefact of a small earthquake dataset relative to the more plentiful data at the smaller magnitudes. In other words the uncertainty bounds of the magnitude-frequency distribution at the moderate-to-large magnitudes may be far greater than the size of the “bulge”.

  19. Tracing multiple scattering patterns in absolute (e,2e) cross sections for H{sub 2} and He over a 4{pi} solid angle

    SciTech Connect

    Ren, X.; Senftleben, A.; Pflueger, T.; Dorn, A.; Ullrich, J.; Colgan, J.; Pindzola, M. S.; Al-Hagan, O.; Madison, D. H.; Bray, I.; Fursa, D. V.

    2010-09-15

    Absolutely normalized (e,2e) measurements for H{sub 2} and He covering the full solid angle of one ejected electron are presented for 16 eV sum energy of both final state continuum electrons. For both targets rich cross-section structures in addition to the binary and recoil lobes are identified and studied as a function of the fixed electron's emission angle and the energy sharing among both electrons. For H{sub 2} their behavior is consistent with multiple scattering of the projectile as discussed before [Al-Hagan et al., Nature Phys. 5, 59 (2009)]. For He the binary and recoil lobes are significantly larger than for H{sub 2} and partly cover the multiple scattering structures. To highlight these patterns we propose an alternative representation of the triply differential cross section. Nonperturbative calculations are in good agreement with the He results and show discrepancies for H{sub 2} in the recoil peak region. For H{sub 2} a perturbative approach reasonably reproduces the cross-section shape but deviates in absolute magnitude.

  20. Tracing multiple scattering patterns in absolute (e,2e) cross sections for H2 and He over a 4π solid angle

    NASA Astrophysics Data System (ADS)

    Ren, X.; Senftleben, A.; Pflüger, T.; Dorn, A.; Colgan, J.; Pindzola, M. S.; Al-Hagan, O.; Madison, D. H.; Bray, I.; Fursa, D. V.; Ullrich, J.

    2010-09-01

    Absolutely normalized (e,2e) measurements for H2 and He covering the full solid angle of one ejected electron are presented for 16 eV sum energy of both final state continuum electrons. For both targets rich cross-section structures in addition to the binary and recoil lobes are identified and studied as a function of the fixed electron's emission angle and the energy sharing among both electrons. For H2 their behavior is consistent with multiple scattering of the projectile as discussed before [Al-Hagan et al., Nature Phys. 5, 59 (2009)]. For He the binary and recoil lobes are significantly larger than for H2 and partly cover the multiple scattering structures. To highlight these patterns we propose an alternative representation of the triply differential cross section. Nonperturbative calculations are in good agreement with the He results and show discrepancies for H2 in the recoil peak region. For H2 a perturbative approach reasonably reproduces the cross-section shape but deviates in absolute magnitude.

  1. Kitt Peak measurements of P/Halley positions

    NASA Technical Reports Server (NTRS)

    Belton, M. J. S.

    1984-01-01

    Techniques used for the acquisition and reduction of imaging data for astrometric positions of comet Halley at Kitt Peak National Observatory are described. These techniques are applicable to the comet while it is fainter than magnitude V approximately 21. They yield positions that are uncertain by ±0.9 arcsec. The reliability and consistency of the positions already derived could be improved by as much as a factor of four in a more ambitious astrometric program.

  2. WHEELER PEAK ROADLESS AREA, NEVADA.

    USGS Publications Warehouse

    Whitebread, Donald H.; Kluender, Steven E.

    1984-01-01

    Field investigations to evaluate the mineral-resource potential of the Wheeler Peak Roadless Area in east-central Nevada were conducted. The field studies included geologic mapping, geochemical sampling, geophysical surveys, and a survey of mines and prospects. Several areas in the sedimentary and granitic rocks in the lower plate of the Snake Range decollement have probable mineral-resource potential for tungsten, beryllium, and lead. A small area of gravels near the north border of the area has a probable mineral-resource potential for placer gold. The geologic setting is not conducive to the occurrence of energy resources.

  3. GRANITE PEAK ROADLESS AREA, CALIFORNIA.

    USGS Publications Warehouse

    Huber, Donald F.; Thurber, Horace K.

    1984-01-01

    The Granite Peak Roadless Area occupies an area of about 5 sq mi in the southern part of the Trinity Alps of the Klamath Mountains, about 12 mi north-northeast of Weaverville, California. Rock and stream-sediment samples were analyzed. All streams draining the roadless area were sampled, and representative samples of the rock types in the area were collected. Background values were established for each element, and anomalous values were examined within their geologic settings and evaluated for their significance. On the basis of the mineral surveys, there appears to be little likelihood of the occurrence of mineral or energy resources.

  4. GLACIER PEAK ROADLESS AREA, WASHINGTON.

    USGS Publications Warehouse

    Church, S.E.; Johnson, F.L.

    1984-01-01

    A mineral survey outlined areas of mineral-resource potential in the Glacier Peak Roadless Area, Washington. Substantiated resource potential for base and precious metals has been identified in four mining districts included in whole or in part within the boundary of the roadless area. Several million tons of demonstrated base- and precious-metal resources occur in numerous mines in these districts. Probable resource potential for precious metals exists along a belt of fractured and locally mineralized rock extending northeast from Monte Cristo to the northeast edge of the roadless area.

  5. Quantitative standards for absolute linguistic universals.

    PubMed

    Piantadosi, Steven T; Gibson, Edward

    2014-01-01

    Absolute linguistic universals are often justified by cross-linguistic analysis: If all observed languages exhibit a property, the property is taken to be a likely universal, perhaps specified in the cognitive or linguistic systems of language learners and users. In many cases, these patterns are then taken to motivate linguistic theory. Here, we show that cross-linguistic analysis will very rarely be able to statistically justify absolute, inviolable patterns in language. We formalize two statistical methods, frequentist and Bayesian, and show that in both it is possible to find strict linguistic universals, but that the number of independent languages necessary to do so is generally unachievable. This suggests that methods other than typological statistics are necessary to establish absolute properties of human language, and thus that many of the purported universals in linguistics have not received sufficient empirical justification.

  6. Automatic fitting of Gaussian peaks using abductive machine learning

    SciTech Connect

    Abdel-Aal, R.E.

    1998-02-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive network machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1,000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum.
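
    For contrast with the analytical curve-fitting baseline mentioned in the abstract, the sketch below performs an ordinary least-squares fit of a single Gaussian peak on a linear background. The channel range, peak parameters, and Poisson noise model are hypothetical, and the sketch does not reproduce the AIM abductive networks themselves.

        import numpy as np
        from scipy.optimize import curve_fit

        def peak_model(x, height, position, width, bg0, bg1):
            """Single Gaussian peak on a linear background."""
            return height * np.exp(-0.5 * ((x - position) / width) ** 2) + bg0 + bg1 * x

        # Hypothetical spectrum: 256 channels, one peak, Poisson counting noise.
        rng = np.random.default_rng(1)
        channels = np.arange(256.0)
        truth = peak_model(channels, height=100.0, position=128.0, width=4.0, bg0=5.0, bg1=0.01)
        counts = rng.poisson(truth).astype(float)

        # Least-squares fit with rough initial guesses taken from the data.
        p0 = [counts.max(), float(np.argmax(counts)), 3.0, counts.min(), 0.0]
        popt, _ = curve_fit(peak_model, channels, counts, p0=p0)
        height, position, width = popt[:3]
        print(f"height = {height:.1f}, position = {position:.2f}, width = {width:.2f}")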

  7. Poor compensatory hyperventilation in morbidly obese women at peak exercise.

    PubMed

    Zavorsky, Gerald S; Murias, Juan M; Kim, Do Jun; Gow, Jennifer; Christou, Nicolas V

    2007-11-15

    This study was designed to compare differences in pulmonary gas exchange at rest and at peak exercise in two groups of women: (1) physically active, non-obese women and (2) women with morbid obesity. Fourteen morbidly obese women (body mass index, BMI = 49 ± 7 kg/m²; peak oxygen consumption, VO2 peak = 14 ± 2 ml/(kg·min)) and 14 physically active non-obese women (BMI = 22 ± 2 kg/m²; VO2 peak = 50 ± 6 ml/(kg·min)) performed an incremental, ramped exercise test to exhaustion on a cycle ergometer. Arterial blood was sampled at rest and at peak exercise. At rest, the alveolar to arterial oxygen partial pressure difference was three times higher in the obese women (14 ± 10 mmHg) compared to non-obese women (5 ± 4 mmHg). Arterial carbon dioxide pressure (PaCO2) was identical in both groups at rest (37 ± 4 mmHg). Only the non-obese women showed a decrease in PaCO2 from rest to peak exercise (-5 ± 3 mmHg). The slope between heart rate and VO2 during exercise was higher in the morbidly obese compared to non-obese women, indicating that for the same absolute increase in VO2 a larger increase in heart rate is needed, demonstrating poorer cardiac efficiency in obese women. In conclusion, morbidly obese women have poorer exercise capacity, cardiac efficiency, and compensatory hyperventilation at peak exercise, and poorer gas exchange at rest compared to physically active, non-obese women.

  8. Absolute Distance Measurement with the MSTAR Sensor

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian

    2003-01-01

    The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers and making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beam-launching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.
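
    The abstract does not spell out the MSTAR signal processing, so the sketch below only illustrates the generic idea behind resolving an integer-cycle ambiguity: a coarse range derived from a long synthetic wavelength selects the integer number of fine-scale cycles, and the fine phase supplies the resolution. The wavelengths, phases, and target range are all hypothetical.

        def resolve_range(phase_fine, phase_coarse, lam_fine, lam_synth):
            """Combine a coarse (synthetic-wavelength) and a fine (carrier) phase reading.

            phase_fine, phase_coarse : fractional phases in cycles, each in [0, 1)
            lam_fine  : fine measurement wavelength (m)
            lam_synth : synthetic wavelength (m); the coarse reading must be good to
                        better than half of lam_fine for the integer to be chosen correctly
            """
            coarse_range = phase_coarse * lam_synth           # unambiguous but low resolution
            n = round(coarse_range / lam_fine - phase_fine)   # integer number of fine cycles
            return (n + phase_fine) * lam_fine                # fine resolution, ambiguity resolved

        # Hypothetical example: 1.5 um carrier, 15 mm synthetic wavelength (noise-free).
        lam_fine, lam_synth = 1.5e-6, 15e-3
        true_range = 3.217654321e-3
        phase_fine = (true_range / lam_fine) % 1.0
        phase_coarse = (true_range / lam_synth) % 1.0
        print(resolve_range(phase_fine, phase_coarse, lam_fine, lam_synth))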

  9. Improving Children's Knowledge of Fraction Magnitudes.

    PubMed

    Fazio, Lisa K; Kennedy, Casey A; Siegler, Robert S

    2016-01-01

    We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards' suggestions for teaching fractions, would improve children's fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played Catch the Monster with Fractions, a game in which they estimated fraction locations on a number line and received feedback on the accuracy of their estimates. The intervention lasted less than 15 minutes. In our initial study, children showed large gains from pretest to posttest in their fraction number line estimates, magnitude comparisons, and recall accuracy. In a more rigorous second study, the experimental group showed similarly large improvements, whereas a control group showed no improvement from practicing fraction number line estimates without feedback. The results provide evidence for the effectiveness of interventions emphasizing fraction magnitudes and indicate how psychological theories and research can be used to evaluate specific recommendations of the Common Core State Standards.

  10. Representations of the magnitudes of fractions.

    PubMed

    Schneider, Michael; Siegler, Robert S

    2010-10-01

    We tested whether adults can use integrated, analog, magnitude representations to compare the values of fractions. The only previous study on this question concluded that even college students cannot form such representations and instead compare fraction magnitudes by representing numerators and denominators as separate whole numbers. However, atypical characteristics of the presented fractions might have provoked the use of atypical comparison strategies in that study. In our 3 experiments, university and community college students compared more balanced sets of single-digit and multi-digit fractions and consistently exhibited a logarithmic distance effect. Thus, adults used integrated, analog representations, akin to a mental number line, to compare fraction magnitudes. We interpret differences between the past and present findings in terms of different stimuli eliciting different solution strategies.

  11. A Preliminary Analysis on Empirical Attenuation of Absolute Velocity Response Spectra (1 to 10s) in Japan

    NASA Astrophysics Data System (ADS)

    Dhakal, Y. P.; Kunugi, T.; Suzuki, W.; Aoi, S.

    2013-12-01

    log10 Y(T) = c + aMw - log10 R - bR + ∑gS + hD, where Y(T) is the 5% damped peak vector response in cm/s derived from two horizontal component records for a natural period T in seconds; in (2), S is a dummy variable which is one if a site is located inside a sedimentary basin and zero otherwise. In (3), D is the depth to the top of the layer having a particular S-wave velocity. We used the deep underground S-wave velocity model available from the Japan Seismic Hazard Information Station (J-SHIS). In (5), sites are classified into various sedimentary basins. Analyses show that the standard deviations decrease in the order of the models listed and that all coefficients are significant. Interestingly, the coefficients g are found to be different from basin to basin at most periods, and the depth to the top of the layer having an S-wave velocity of 1.7 km/s gives the smallest standard deviation of 0.31 at T = 4.4 s in (5). This study shows the possibility of describing the observed peak absolute velocity response values using simple model parameters like site location and sedimentary depth soon after the location and magnitude of an earthquake are known.
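
    A minimal evaluation of an attenuation model of the form quoted above is sketched below. The coefficient values are placeholders chosen for illustration; they are not the regression results of this study.

        import numpy as np

        def log10_velocity_response(mw, r_km, c, a, b, g_basin, h, in_basin, depth_km):
            """Attenuation model of the form
               log10 Y(T) = c + a*Mw - log10 R - b*R + g*S + h*D,
            with S a basin dummy variable and D a depth to a reference S-wave velocity."""
            s = 1.0 if in_basin else 0.0
            return c + a * mw - np.log10(r_km) - b * r_km + g_basin * s + h * depth_km

        # Placeholder coefficients (illustrative only).
        pred = log10_velocity_response(mw=7.0, r_km=80.0, c=-1.2, a=0.6, b=0.002,
                                       g_basin=0.25, h=0.1, in_basin=True, depth_km=1.5)
        print(f"predicted 5%-damped peak velocity response: {10**pred:.1f} cm/s")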

  12. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  13. Absolute UV absorption cross sections of dimethyl substituted Criegee intermediate (CH3)2COO

    NASA Astrophysics Data System (ADS)

    Chang, Yuan-Pin; Chang, Chun-Hung; Takahashi, Kaito; Lin, Jim-Min, Jr.

    2016-06-01

    The absolute absorption cross sections of (CH3)2COO under a jet-cooled condition were measured via laser depletion to be (1.32 ± 0.10) × 10^-17 cm^2 molecule^-1 at 308 nm and (9.6 ± 0.8) × 10^-18 cm^2 molecule^-1 at 352 nm. The peak UV cross section is estimated to be (1.75 ± 0.14) × 10^-17 cm^2 molecule^-1 at 330 nm, according to the UV spectrum of (CH3)2COO (Huang et al., 2015) scaled to the absolute cross section at 308 nm.
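
    The quoted peak value follows from scaling a relative UV spectrum to the absolute cross section measured at 308 nm. A minimal sketch of that scaling is below; the relative spectrum here is made up and stands in for the published one.

        import numpy as np

        # Hypothetical relative absorption spectrum (arbitrary units) vs wavelength (nm).
        wavelength = np.array([290.0, 300.0, 308.0, 320.0, 330.0, 340.0, 352.0, 360.0])
        relative = np.array([0.55, 0.68, 0.75, 0.92, 1.00, 0.95, 0.55, 0.40])

        sigma_308 = 1.32e-17   # measured absolute cross section at 308 nm (cm^2 molecule^-1)
        scale = sigma_308 / np.interp(308.0, wavelength, relative)
        sigma_abs = relative * scale

        i_peak = np.argmax(sigma_abs)
        print(f"peak cross section ~ {sigma_abs[i_peak]:.2e} cm^2 at {wavelength[i_peak]:.0f} nm")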

  14. ABSOLUTE PROPERTIES OF THE ECLIPSING BINARY STAR HY VIRGINIS

    SciTech Connect

    Sandberg Lacy, Claud H.; Fekel, Francis C. E-mail: fekel@evans.tsuniv.edu

    2011-12-15

    HY Vir is found to be a double-lined F0m+F5 binary star with relatively shallow (0.3 mag) partial eclipses. Previous studies of the system are improved with 7509 differential photometric observations from the URSA WebScope and 8862 from the NFO WebScope, together with 68 high-resolution spectroscopic observations from the Tennessee State University 2 m automatic spectroscopic telescope and the 1 m coudé-feed spectrometer at Kitt Peak National Observatory. Very accurate (better than 0.5%) masses and radii are determined from analysis of the new light curves and radial velocity curves. Theoretical models match the absolute properties of the stars at an age of about 1.35 Gyr.

  15. Negative absolute temperature for motional degrees of freedom.

    PubMed

    Braun, S; Ronzheimer, J P; Schreiber, M; Hodgman, S S; Rom, T; Bloch, I; Schneider, U

    2013-01-04

    Absolute temperature is usually bound to be positive. Under special conditions, however, negative temperatures, in which high-energy states are more occupied than low-energy states, are also possible. Such states have been demonstrated in localized systems with finite, discrete spectra. Here, we prepared a negative temperature state for motional degrees of freedom. By tailoring the Bose-Hubbard Hamiltonian, we created an attractively interacting ensemble of ultracold bosons at negative temperature that is stable against collapse for arbitrary atom numbers. The quasimomentum distribution develops sharp peaks at the upper band edge, revealing thermal equilibrium and bosonic coherence over several lattice sites. Negative temperatures imply negative pressures and open up new parameter regimes for cold atoms, enabling fundamentally new many-body states.

  16. Absolute measurement of undulator radiation in the extreme ultraviolet

    NASA Astrophysics Data System (ADS)

    Maezawa, H.; Mitani, S.; Suzuki, Y.; Kanamori, H.; Tamamushi, S.; Mikuni, A.; Kitamura, H.; Sasaki, T.

    1983-04-01

    The spectral brightness of undulator radiation emitted by the model PMU-1 incorporated in the SOR-RING, the dedicated synchrotron radiation source in Tokyo, has been studied in the extreme ultraviolet region from 21.6 to 72.9 eV as a function of the electron energy γ, the field parameter K, and the angle of observation θ on an absolute scale. A series of measurements covering the first and the second harmonic component of undulator radiation was compared with the fundamental formula λ_n = λ_0/(2nγ^2) (1 + K^2/2 + γ^2θ^2), and the effects of finite emittance were studied. The brightness at the first peak was smaller than the theoretical value, while an enhanced second harmonic component was observed.
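
    A minimal evaluation of the undulator equation quoted above is sketched below. The beam energy, undulator period, and deflection parameter are hypothetical and are not the PMU-1 values.

        def undulator_wavelength(lambda0_m, gamma, K, theta_rad, n=1):
            """n-th harmonic wavelength of an undulator:
               lambda_n = lambda0 / (2 n gamma^2) * (1 + K^2/2 + gamma^2 theta^2)"""
            return lambda0_m / (2.0 * n * gamma**2) * (1.0 + K**2 / 2.0 + (gamma * theta_rad)**2)

        # Hypothetical parameters: 380 MeV electrons, 4 cm period, K = 1.0, on axis.
        gamma = 380e6 / 0.511e6
        lam1 = undulator_wavelength(0.04, gamma, K=1.0, theta_rad=0.0, n=1)
        print(f"fundamental wavelength ~ {lam1 * 1e9:.1f} nm "
              f"(photon energy ~ {1239.84 / (lam1 * 1e9):.1f} eV)")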

  17. A second type of magnitude effect: Reinforcer magnitude differentiates delay discounting between substance users and controls.

    PubMed

    Mellis, Alexandra M; Woodford, Alina E; Stein, Jeffrey S; Bickel, Warren K

    2017-01-01

    Basic research on delay discounting, examining preference for smaller-sooner or larger-later reinforcers, has demonstrated a variety of findings of considerable generality. One of these, the magnitude effect, is the observation that individuals tend to exhibit greater preference for the immediate with smaller magnitude reinforcers. Delay discounting has also proved to be a useful marker of addiction, as demonstrated by the highly replicated finding of greater discounting rates in substance users compared to controls. However, some research on delay discounting rates in substance users, particularly research examining discounting of small-magnitude reinforcers, has not found significant differences compared to controls. Here, we hypothesize that the magnitude effect could produce ceiling effects at small magnitudes, thus obscuring differences in delay discounting between groups. We examined differences in discounting between high-risk substance users and controls over a broad range of magnitudes of monetary amounts ($0.10, $1.00, $10.00, $100.00, and $1000.00) in 116 Amazon Mechanical Turk workers. We found no significant differences in discounting rates between users and controls at the smallest reinforcer magnitudes ($0.10 and $1.00) and further found that differences became more pronounced as magnitudes increased. These results provide an understanding of a second form of the magnitude effect: That is, differences in discounting between populations can become more evident as a function of reinforcer magnitude.
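
    The abstract does not commit to a particular discounting model, but the commonly used single-parameter hyperbolic form V = A / (1 + kD) makes the magnitude effect easy to picture: fitted k values tend to be larger for smaller amounts. The k values below are made up for illustration only.

        def hyperbolic_value(amount, delay_days, k):
            """Mazur-style hyperbolic discounting: V = A / (1 + k * D)."""
            return amount / (1.0 + k * delay_days)

        # Illustrative (made-up) k values: steeper discounting for smaller amounts.
        for amount, k in [(1.0, 0.10), (100.0, 0.02), (1000.0, 0.005)]:
            v = hyperbolic_value(amount, delay_days=30, k=k)
            print(f"${amount:>7.2f} delayed 30 days retains ~{100 * v / amount:.0f}% of its value")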

  18. Global survey of star clusters in the Milky Way. V. Integrated JHKS magnitudes and luminosity functions

    NASA Astrophysics Data System (ADS)

    Kharchenko, N. V.; Piskunov, A. E.; Schilbach, E.; Röser, S.; Scholz, R.-D.

    2016-01-01

    Aims: In this study we determine absolute integrated magnitudes in the J,H,KS passbands for Galactic star clusters from the Milky Way Star Clusters survey. In the wide solar neighbourhood, we derive the open cluster luminosity function (CLF) for different cluster ages. Methods: The integrated magnitudes are based on uniform cluster membership derived from the 2MAst catalogue (a merger of the PPMXL and 2MASS) and are computed by summing up the individual luminosities of the most reliable cluster members. We discuss two different techniques of constructing the CLF, a magnitude-limited and a distance-limited approach. Results: Absolute J,H,KS integrated magnitudes are obtained for 3061 open clusters, and 147 globular clusters. The integrated magnitudes and colours are accurate to about 0.8 and 0.2 mag, respectively. Based on the sample of open clusters we construct the general cluster luminosity function in the solar neighbourhood in the three passbands. In each passband the CLF shows a linear part covering a range of 6 to 7 mag at the bright end. The CLFs reach their maxima at an absolute magnitude of -2 mag, then drop by one order of magnitude. During cluster evolution, the CLF changes its slope within tight, but well-defined limits. The CLF of the youngest clusters has a steep slope of about 0.4 at bright magnitudes and a quasi-flat portion for faint clusters. For the oldest population, we find a flatter function with a slope of about 0.2. The CLFs at Galactocentric radii smaller than that of the solar circle differ from those in the direction of the Galactic anti-centre. The CLF in the inner area is flatter and the cluster surface density higher than the local one. In contrast, the CLF is somewhat steeper than the local one in the outer disk, and the surface density is lower. The corresponding catalogue of integrated magnitudes is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc
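
    Summing member luminosities into an integrated magnitude, as described above, amounts to adding fluxes in magnitude space. A minimal sketch with made-up member magnitudes follows; the membership selection and photometric treatment of the actual survey are not reproduced here.

        import numpy as np

        def integrated_magnitude(member_abs_mags):
            """Integrated absolute magnitude from members' absolute magnitudes:
            convert to fluxes, sum, convert back."""
            m = np.asarray(member_abs_mags, dtype=float)
            return -2.5 * np.log10(np.sum(10.0 ** (-0.4 * m)))

        # Hypothetical Ks-band absolute magnitudes of the most reliable members.
        members = [-1.8, -1.2, -0.9, -0.3, 0.1, 0.4, 0.8, 1.5, 2.0]
        print(f"integrated M_Ks = {integrated_magnitude(members):.2f} mag")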

  19. Evaluation of containment peak pressure and structural response for a large-break loss-of-coolant accident in a VVER-440/213 NPP

    SciTech Connect

    Spencer, B.W.; Sienicki, J.J.; Kulak, R.F.; Pfeiffer, P.A.; Voeroess, L.; Techy, Z.; Katona, T.

    1998-07-01

    A collaborative effort between US and Hungarian specialists was undertaken to investigate the response of a VVER-440/213-type NPP to a maximum design-basis accident, defined as a guillotine rupture with double-ended flow from the largest pipe (500 mm) in the reactor coolant system. Analyses were performed to evaluate the magnitude of the peak containment pressure and temperature for this event; additional analyses were performed to evaluate the ultimate strength capability of the containment. Separate cases were evaluated assuming 100% effectiveness of the bubbler-condenser pressure suppression system as well as zero effectiveness. The pipe break energy release conditions were evaluated from three sources: (1) FSAR release rate based on Soviet safety calculations, (2) RETRAN-03 analysis and (3) ATHLET analysis. The findings indicated that for 100% bubbler-condenser effectiveness the peak containment pressures were less than the containment design pressure of 0.25 MPa. For the BDBA case of zero effectiveness of the bubbler-condenser system, the peak pressures were less than the calculated containment failure pressure of 0.40 MPa absolute.

  20. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

    The mega-thrust, Mw 9.0, 2011 Tohoku earthquake has re-opened the discussion among the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on the real-time measurement of ground motion parameters in a few-second window after the P-wave arrival. Currently, we are using the initial Peak Displacement (Pd) and the Predominant Period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well known problem with the real-time estimation of the magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the East coast of Honshu, Japan, on the subduction boundary between the Pacific and the Okhotsk plates. The high quality KiK-net and K-NET networks provided a large quantity of strong motion records of the mainshock, with a wide azimuthal coverage both along the Japan coast and inland. More than 300 3-component accelerograms have been available, with an epicentral distance ranging from about 100 km up to more than 500 km. This earthquake thus presents an optimal case study for testing the physical bases of early warning and for investigating the feasibility of a real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the main shock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude.
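
    The early warning parameters named above are usually computed from a short window of P-wave displacement. The sketch below shows one common formulation (peak displacement Pd, and tau_c from the ratio of displacement to velocity power), assuming the record already starts at the P arrival and has been integrated to displacement and high-pass filtered; the test signal, window length, and sampling rate are hypothetical.

        import numpy as np

        def pd_and_tau_c(displacement, dt, window_s=3.0):
            """Pd and tau_c from the first `window_s` seconds after the P arrival:
               tau_c = 2*pi * sqrt( integral(u^2 dt) / integral(u_dot^2 dt) )"""
            n = int(window_s / dt)
            u = np.asarray(displacement[:n], dtype=float)
            u_dot = np.gradient(u, dt)
            pd = np.max(np.abs(u))
            tau_c = 2.0 * np.pi * np.sqrt(np.trapz(u ** 2, dx=dt) / np.trapz(u_dot ** 2, dx=dt))
            return pd, tau_c

        # Hypothetical record: 1 Hz sinusoidal displacement sampled at 100 Hz.
        dt = 0.01
        t = np.arange(0.0, 3.0, dt)
        u = 0.02 * np.sin(2.0 * np.pi * 1.0 * t)
        pd, tau_c = pd_and_tau_c(u, dt)
        print(f"Pd = {pd:.3f} m, tau_c = {tau_c:.2f} s")   # tau_c ~ 1 s for a 1 Hz signal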

  1. Probability of inducing given-magnitude earthquakes by perturbing finite volumes of rocks

    NASA Astrophysics Data System (ADS)

    Shapiro, Serge A.; Krüger, Oliver S.; Dinske, Carsten

    2013-07-01

    Fluid-induced seismicity results from an activation of finite rock volumes. The finiteness of perturbed volumes influences frequency-magnitude statistics. Previously we observed that induced large-magnitude events at geothermal and hydrocarbon reservoirs are frequently underrepresented in comparison with the Gutenberg-Richter law. This is an indication that the events are more probable on rupture surfaces contained within the stimulated volume. Here we theoretically and numerically analyze this effect. We consider different possible scenarios of event triggering: rupture surfaces located completely within or intersecting only the stimulated volume. We approximate the stimulated volume by an ellipsoid or cuboid and derive the statistics of induced events from the statistics of random thin flat discs modeling rupture surfaces. We derive lower and upper bounds of the probability to induce a given-magnitude event. The bounds depend strongly on the minimum principal axis of the stimulated volume. We compare the bounds with data on seismicity induced by fluid injections in boreholes. Fitting the bounds to the frequency-magnitude distribution provides estimates of a largest expected induced magnitude and a characteristic stress drop, in addition to improved estimates of the Gutenberg-Richter a and b parameters. The observed frequency-magnitude curves seem to follow mainly the lower bound. However, in some case studies there are individual large-magnitude events clearly deviating from this statistic. We propose that such events can be interpreted as triggered ones, in contrast to the absolute majority of the induced events following the lower bound.

  2. The color-magnitude distribution of small Jupiter Trojans

    NASA Astrophysics Data System (ADS)

    Wong, Ian; Brown, Michael E.; Emery, Joshua P.

    2014-11-01

    The Jupiter Trojans constitute a population of minor bodies that are situated in a 1:1 mean motion resonance with Jupiter and are concentrated in two swarms centered about the L4 and L5 Lagrangian points. Current theories of Solar System evolution describe a scenario in which the Trojans originated in a region beyond the primordial orbit of Neptune. It is hypothesized that during a subsequent period of chaotic dynamical disruptions in the outer Solar System, the primordial trans-Neptunian planetesimals were disrupted, and a fraction of them were scattered inwards and captured by Jupiter as Trojan asteroids, while the remaining objects were thrown outwards to larger heliocentric distances and eventually formed the Kuiper belt. If this is the case, a detailed study of the characteristics of Trojans may shed light on the relationships between the Trojans and other minor body populations in the outer Solar System, and more broadly, constrain models of late Solar System evolution. Several past studies of Trojans have revealed significant bimodalities with respect to various spectroscopic and photometric quantities, indicating the existence of two groupings among the Trojans - the so-called red and less-red sub-populations. In a previous work, we used primarily photometric data from the Sloan Digital Sky Survey to categorize several hundred Trojans with absolute magnitudes in the range H<12.3 into the two sub-populations. We demonstrated that the magnitude distributions of the color sub-populations are distinct to a high confidence level, suggesting that the red and less-red Trojans were formed in different locations and/or experienced different evolutionary histories. Most notably, we found that the discrepancy between the two color-magnitude distributions is concentrated at the faint end. Here, we present the results of a follow-up study, in which we analyze color measurements of a large number of small Trojans collected using the Suprime-Cam instrument on the Subaru

  3. Characterizing Earthquake Rupture Properties Using Peak High-Frequency Offset

    NASA Astrophysics Data System (ADS)

    Wen, L.; Meng, L.

    2014-12-01

    Teleseismic array back-projection (BP) of high frequency (~1 Hz) seismic waves has recently been applied to image the aftershock sequence of the Tohoku-Oki earthquake. The BP method proves to be effective in capturing early aftershocks that are difficult to detect due to the contamination of the mainshock coda wave. Furthermore, since the event detection is based on the identification of the local peaks in time series of the BP power, the resulting event location corresponds to the peak high-frequency energy rather than the hypocenter. In this work, we show that the comparison between the BP-determined catalog and a conventional phase-picking catalog provides estimates of the spatial and temporal offset between the hypocenter and the peak high-frequency radiation. We propose to measure this peak high-frequency shift for global earthquakes between M4.0 and M7.0. We average the BP locations calibrated by multiple reference events to minimize the uncertainty due to the variation of 3D path effects. In our initial effort focusing on the foreshock and aftershock sequence of the 2014 Iquique earthquake, we find systematic shifts of the peak high-frequency energy towards the down-dip direction. We find that the amount of the shift is a good indication of rupture length, which scales with the earthquake magnitude. Further investigations of the peak high frequency offset may provide constraints on earthquake source properties such as rupture directivity, rupture duration, rupture speed, and stress drop.

  4. The peak electromagnetic power radiated by lightning return strokes

    NASA Technical Reports Server (NTRS)

    Krider, E. P.; Guo, C.

    1983-01-01

    Estimates of the peak electromagnetic (EM) power radiated by return strokes have been made by integrating the Poynting vector of measured fields over an imaginary hemispherical surface that is centered on the lightning source, assuming that ground losses are negligible. Values of the peak EM power from first and subsequent strokes have means and standard deviations of (2 ± 2) × 10^10 and (3 ± 4) × 10^9 W, respectively. The average EM power that is radiated by subsequent strokes, at the time of the field peak, is about 2 orders of magnitude larger than the optical power that is radiated by these strokes in the wavelength interval from 0.4 to 1.1 micron; hence an upper limit to the radiative efficiency of a subsequent stroke is of the order of 1 percent or less at this time.
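
    A much-simplified version of the hemisphere integration described above is sketched below: it assumes the measured peak radiation field applies uniformly over the hemisphere and that ground losses are negligible, so the peak power is just the Poynting flux E^2/eta0 times 2*pi*R^2. The field value and range are hypothetical, and the actual estimate would use the measured angular distribution of the fields.

        import numpy as np

        ETA0 = 376.73   # impedance of free space, ohms

        def peak_radiated_power(e_peak_v_per_m, range_m):
            """Crude peak-power estimate: uniform Poynting flux E^2/eta0 over a hemisphere."""
            return (e_peak_v_per_m ** 2 / ETA0) * 2.0 * np.pi * range_m ** 2

        # Hypothetical values: 6 V/m peak radiation field normalized to 100 km range.
        print(f"P_peak ~ {peak_radiated_power(6.0, 100e3):.1e} W")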

  5. The effects of increasing heel height on forefoot peak pressure.

    PubMed

    Mandato, M G; Nester, E

    1999-02-01

    The purpose of this study was to determine the effect of increasing heel height on peak forefoot pressure. Thirty-five women were examined while wearing sneakers and shoes with 2-inch and 3-inch heels. An in-shoe pressure-measurement system was used to document the magnitude and location of plantar peak pressures. Pressure under the forefoot was found to increase significantly with increasing heel height. As the heel height increased, the peak pressure shifted toward the first metatarsal and the hallux. The reproducibility of data obtained with the in-shoe pressure-measurement system was tested in five subjects; the data were found to be reproducible to within approximately 3% of measured pressures.

  6. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. in the Bulletin of the Seismological Society of America 103 (2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (Vs30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using

  7. Comparative vs. Absolute Judgments of Trait Desirability

    ERIC Educational Resources Information Center

    Hofstee, Willem K. B.

    1970-01-01

    Reversals of trait desirability are studied. Terms indicating conservative behavior appeared to be judged relatively desirable in comparative judgement, while traits indicating dynamic and expansive behavior benefited from absolute judgement. The reversal effect was shown to be a general one, i.e. reversals were not dependent upon the specific…

  8. New Techniques for Absolute Gravity Measurements.

    DTIC Science & Technology

    1983-01-07

    Hammond, J.A. (1978) Bollettino di Geofisica Teorica ed Applicata, Vol. XX. Hammond, J.A., and Iliff, R.L. (1979) The AFGL absolute gravity system...International Gravimetric Bureau, No. L:I-43.

  9. An Absolute Electrometer for the Physics Laboratory

    ERIC Educational Resources Information Center

    Straulino, S.; Cartacci, A.

    2009-01-01

    A low-cost, easy-to-use absolute electrometer is presented: two thin metallic plates and an electronic balance, usually available in a laboratory, are used. We report on the very good performance of the device that allows precise measurements of the force acting between two charged plates. (Contains 5 footnotes, 2 tables, and 6 figures.)
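
    For an ideal parallel-plate geometry (fringing fields neglected), the force read by the balance follows F = eps0 * A * V^2 / (2 d^2), which is presumably the relation such a device exploits; the plate size, gap, and voltage below are hypothetical.

        EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

        def plate_force(voltage_v, plate_area_m2, gap_m):
            """Attractive force between ideal parallel plates at potential difference V:
               F = eps0 * A * V^2 / (2 * d^2)   (fringing neglected)."""
            return EPS0 * plate_area_m2 * voltage_v ** 2 / (2.0 * gap_m ** 2)

        # Hypothetical geometry: 10 cm x 10 cm plates, 5 mm gap, 1 kV applied.
        f = plate_force(1000.0, 0.01, 0.005)
        print(f"force = {f * 1e3:.2f} mN  (~{f / 9.81 * 1000:.2f} g on the balance)")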

  10. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  11. Absolute Positioning Using the Global Positioning System

    DTIC Science & Technology

    1994-04-01

    The Global Positioning System (GPS) has become a useful tool in providing relative survey positioning... Includes the development of a low cost navigator for wheeled vehicles. The technique of absolute or point positioning involves the use of a single Global Positioning System (GPS) receiver to determine the three-dimensional...

  12. Improving Children's Knowledge of Fraction Magnitudes

    ERIC Educational Resources Information Center

    Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.

    2016-01-01

    We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards' suggestions for teaching fractions, would improve children's fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played "Catch…

  13. Incentive theory: IV. Magnitude of reward

    PubMed Central

    Killeen, Peter R.

    1985-01-01

    Incentive theory is successfully applied to data from experiments in which the amount of food reward is varied. This is accomplished by assuming that incentive value is a negatively accelerated function of reward duration. The interaction of the magnitude of a reward with its delay is confirmed, and the causes and implications of this interaction are discussed. PMID:16812421

  14. An integrated model of choices and response times in absolute identification.

    PubMed

    Brown, Scott D; Marley, A A J; Donkin, Christopher; Heathcote, Andrew

    2008-04-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a model (SAMBA: selective attention, mapping, and ballistic accumulation) that integrates shorter and longer term memory processes and accounts for both the choices made and the associated response time distributions, including sequential effects in each. The model's predictions arise as a consequence of its architecture and require estimation of only a few parameters with values that are consistent across numerous data sets. The authors show that SAMBA provides a quantitative account of benchmark choice phenomena in classical absolute identification experiments and in contemporary data involving both choice and response time.

  15. Absolute Radiation Thermometry in the NIR

    NASA Astrophysics Data System (ADS)

    Bünger, L.; Taubert, R. D.; Gutschwager, B.; Anhalt, K.; Briaudeau, S.; Sadli, M.

    2017-04-01

    A near infrared (NIR) radiation thermometer (RT) for temperature measurements in the range from 773 K up to 1235 K was characterized and calibrated in terms of the "Mise en Pratique for the definition of the Kelvin" (MeP-K) by measuring its absolute spectral radiance responsivity. Using Planck's law of thermal radiation allows the direct measurement of the thermodynamic temperature independently of any ITS-90 fixed-point. To determine the absolute spectral radiance responsivity of the radiation thermometer in the NIR spectral region, an existing PTB monochromator-based calibration setup was upgraded with a supercontinuum laser system (0.45 μm to 2.4 μm) resulting in a significantly improved signal-to-noise ratio. The RT was characterized with respect to its nonlinearity, size-of-source effect, distance effect, and the consistency of its individual temperature measuring ranges. To further improve the calibration setup, a new tool for the aperture alignment and distance measurement was developed. Furthermore, the diffraction correction as well as the impedance correction of the current-to-voltage converter is considered. The calibration scheme and the corresponding uncertainty budget of the absolute spectral responsivity are presented. A relative standard uncertainty of 0.1 % (k=1) for the absolute spectral radiance responsivity was achieved. The absolute radiometric calibration was validated at four temperature values with respect to the ITS-90 via a variable temperature heatpipe blackbody (773 K ...1235 K) and at a gold fixed-point blackbody radiator (1337.33 K).
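
    Radiometric temperature measurement in the MeP-K sense amounts to predicting the detector signal from Planck's law weighted by the absolute spectral responsivity and inverting for temperature. The numerical sketch below is much simplified: the Gaussian responsivity band is made up and stands in for the calibrated responsivity, and the size-of-source, nonlinearity, and diffraction corrections discussed above are ignored.

        import numpy as np
        from scipy.optimize import brentq

        H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

        def planck_radiance(lam_m, temp_k):
            """Blackbody spectral radiance, W m^-3 sr^-1 (per metre of wavelength)."""
            return (2.0 * H * C ** 2 / lam_m ** 5) / np.expm1(H * C / (lam_m * KB * temp_k))

        # Made-up absolute spectral responsivity: Gaussian band around 950 nm.
        lam = np.linspace(800e-9, 1100e-9, 601)
        responsivity = np.exp(-0.5 * ((lam - 950e-9) / 30e-9) ** 2)

        def predicted_signal(temp_k):
            return np.trapz(responsivity * planck_radiance(lam, temp_k), lam)

        # Invert a "measured" signal for thermodynamic temperature.
        measured = predicted_signal(1100.0)
        t_solved = brentq(lambda t: predicted_signal(t) - measured, 700.0, 1400.0)
        print(f"recovered temperature: {t_solved:.2f} K")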

  16. Analysis of the magnitude and frequency of floods in Colorado

    USGS Publications Warehouse

    Vaill, J.E.

    2000-01-01

    Regionalized flood-frequency relations need to be updated on a regular basis (about every 10 years). The latest study on regionalized flood-frequency equations for Colorado used data collected through water year 1981. A study was begun in 1994 by the U.S. Geological Survey, in cooperation with the Colorado Department of Transportation and the Bureau of Land Management, to include streamflow data collected since water year 1981 in the regionalized flood-frequency relations for Colorado. Longer periods of streamflow data and improved statistical analysis methods were used to define regression relations for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for unregulated streams in Colorado. The regression relations can be applied to sites of interest on gaged and ungaged streams. Ordinary least-squares regression was used to determine the best explanatory basin or climatic characteristic variables for each peak-discharge characteristic, and generalized least-squares regression was used to determine the best regression relation. Drainage-basin area, mean annual precipitation, and mean basin slope were determined to be statistically significant explanatory variables in the regression relations. Separate regression relations were developed for each of five distinct hydrologic regions in the State. The mean standard errors of estimate and average standard error of prediction associated with the regression relations generally ranged from 40 to 80 percent, except for one hydrologic region where the errors ranged from about 200 to 300 percent. Methods are presented for determining the magnitude of peak discharges for sites located at gaging stations, for sites located near gaging stations on the same stream when the ratio of drainage-basin areas is between about 0.5 and 1.5, and for sites where the drainage basin crosses a flood-region boundary or a State boundary. Methods are presented for determining the magnitude of peak

  17. The effects of wildfire on the peak streamflow magnitude and frequency, Frijoles and Capulin Canyons, Bandelier National Monument, New Mexico

    USGS Publications Warehouse

    Veenhuis, J.E.

    2004-01-01

    In June of 1977, the La Mesa fire burned 15,270 acres in and around Frijoles Canyon, Bandelier National Monument and the adjacent Santa Fe National Forest, New Mexico. The Dome fire occurred in April of 1996 in Bandelier National Monument, burned 16,516 acres in Capulin Canyon and the surrounding Dome Wilderness area. Both canyons are characterized by extensive archeological artifacts, which could be threatened by increased runoff and accelerated rates of erosion after a wildfire. The U.S. Geological Survey (USGS) in cooperation with the National Park Service monitored the fires' effects on streamflow in both canyons. Copyright 2004 ASCE.

  18. Evaluation of the magnitude and frequency of floods in urban watersheds in Phoenix and Tucson, Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Paretti, Nicholas V.

    2014-01-01

    Flooding in urban areas routinely causes severe damage to property and often results in loss of life. To investigate the effect of urbanization on the magnitude and frequency of flood peaks, a flood frequency analysis was carried out using data from urbanized streamgaging stations in Phoenix and Tucson, Arizona. Flood peaks at each station were predicted using the log-Pearson Type III distribution, fitted using the expected moments algorithm and the multiple Grubbs-Beck low outlier test. The station estimates were then compared to flood peaks estimated by rural-regression equations for Arizona, and to flood peaks adjusted for urbanization using a previously developed procedure for adjusting U.S. Geological Survey rural regression peak discharges in an urban setting. Only smaller, more common flood peaks at the 50-, 20-, 10-, and 4-percent annual exceedance probabilities (AEPs) demonstrate any increase in magnitude as a result of urbanization; the 1-, 0.5-, and 0.2-percent AEP flood estimates are predicted without bias by the rural-regression equations. Percent imperviousness was determined not to account for the difference in estimated flood peaks between stations, either when adjusting the rural-regression equations or when deriving urban-regression equations to predict flood peaks directly from basin characteristics. Comparison with urban adjustment equations indicates that flood peaks are systematically overestimated if the rural-regression-estimated flood peaks are adjusted upward to account for urbanization. At nearly every streamgaging station in the analysis, adjusted rural-regression estimates were greater than the estimates derived using station data. One likely reason for the lack of increase in flood peaks with urbanization is the presence of significant stormwater retention and detention structures within the watershed used in the study.
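
    As background to the station estimates mentioned above, a simplified log-Pearson Type III quantile can be computed by the method of moments on the log-transformed annual peaks; the study itself uses the expected moments algorithm with a low-outlier test, which this sketch does not reproduce. The peak series is synthetic.

        import numpy as np
        from scipy.stats import pearson3, skew

        def lp3_quantile(annual_peaks, aep):
            """Textbook log-Pearson Type III quantile by method of moments on log10 peaks."""
            logs = np.log10(np.asarray(annual_peaks, dtype=float))
            g = skew(logs, bias=False)
            q_log = pearson3.ppf(1.0 - aep, skew=g, loc=logs.mean(), scale=logs.std(ddof=1))
            return 10.0 ** q_log

        # Hypothetical 40-year annual peak series (cubic feet per second).
        rng = np.random.default_rng(2)
        peaks = 10.0 ** rng.normal(3.0, 0.25, size=40)
        print(f"1-percent AEP flood ~ {lp3_quantile(peaks, 0.01):,.0f} cfs")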

  19. Technique for estimating magnitude and frequency of floods in Illinois

    USGS Publications Warehouse

    Curtis, George W.

    1977-01-01

    A technique is presented for estimating flood magnitudes at recurrence intervals ranging from 2 to 500 years, for unregulated rural streams in Illinois, with drainage areas ranging from 0.02 to 10,000 square miles. Multiple regression analyses, using streamflow data from 241 sampling sites, were used to define the flood-frequency relationships. The independent variables drainage area, slope, rainfall intensity, and an areal factor are used in the estimating equations to determine flood peaks. Examples are given to demonstrate a step-by-step procedure in computing a 100-year flood for a site on an ungaged stream and a site on a gaged stream in Illinois. The report is oriented toward planners and designers of engineering projects such as highways, bridges, culverts, flood-control structures, and drainage systems, and toward planners responsible for planning flood-plain use and establishing flood-insurance rates. (Woodard-USGS)

  20. Modeled future peak streamflows in four coastal Maine rivers

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Dudley, Robert W.

    2013-01-01

    To safely and economically design bridges and culverts, it is necessary to compute the magnitude of peak streamflows that have specified annual exceedance probabilities (AEPs). Annual precipitation and air temperature in the northeastern United States are, in general, projected to increase during the 21st century. It is therefore important for engineers and resource managers to understand how peak flows may change in the future. This report, prepared in cooperation with the Maine Department of Transportation (MaineDOT), presents modeled changes in peak flows at four basins in coastal Maine on the basis of projected changes in air temperature and precipitation. To estimate future peak streamflows at the four basins in this study, historical values for climate (temperature and precipitation) in the basins were adjusted by different amounts and input to a hydrologic model of each study basin. To encompass the projected changes in climate in coastal Maine by the end of the 21st century, air temperatures were adjusted by four different amounts, from -3.6 degrees Fahrenheit (°F) (-2 degrees Celsius (°C)) to +10.8 °F (+6 °C) of observed temperatures. Precipitation was adjusted by three different percentage values from -15 percent to +30 percent of observed precipitation. The resulting 20 combinations of temperature and precipitation changes (includes the no-change scenarios) were input to Precipitation-Runoff Modeling System (PRMS) watershed models, and annual daily maximum peak flows were calculated for each combination. Modeled peak flows from the adjusted changes in temperature and precipitation were compared to unadjusted (historical) modeled peak flows. Annual daily maximum peak flows increase or decrease, depending on whether temperature or precipitation is adjusted; increases in air temperature (with no change in precipitation) lead to decreases in peak flows, whereas increases in precipitation (with no change in temperature) lead to increases in peak flows. As

  1. Observations on the magnitude-frequency distribution of Earth-crossing asteroids

    NASA Technical Reports Server (NTRS)

    Shoemaker, Eugene M.; Shoemaker, Carolyn S.

    1987-01-01

    During the past decade, discovery of Earth-crossing asteroids has continued at the pace of several per year; the total number of known Earth crossers reached 70 as of September, 1986. The sample of discovered Earth crossers has become large enough to provide a fairly strong statistical basis for calculations of mean probabilities of asteroid collision with the Earth, the Moon, and Venus. It is also now large enough to begin to address the more difficult question of the magnitude-frequency distribution and size distribution of the Earth-crossing asteroids. Absolute V magnitude, H, was derived from reported magnitudes for each Earth crosser on the basis of a standard algorithm that utilizes a physically realistic phase function. The derived values of H range from 12.88 for (1627) Ivar to 21.6 for the Palomar-Leiden object 6344, which is the faintest and smallest asteroid discovered.
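
    The standard reduction from an observed V magnitude to H uses the IAU (H, G) phase function; a compact version of the usual two-exponential form (Bowell et al. 1989) is sketched below with made-up observing geometry, taking the default slope parameter G = 0.15.

        import numpy as np

        def absolute_magnitude_H(v_obs, r_au, delta_au, alpha_deg, G=0.15):
            """Absolute magnitude H from an observed V magnitude via the (H, G) system.
            r_au, delta_au : heliocentric and geocentric distances (AU)
            alpha_deg      : solar phase angle (degrees)"""
            a = np.radians(alpha_deg)
            phi1 = np.exp(-3.33 * np.tan(a / 2.0) ** 0.63)
            phi2 = np.exp(-1.87 * np.tan(a / 2.0) ** 1.22)
            v_reduced = v_obs - 5.0 * np.log10(r_au * delta_au)
            return v_reduced + 2.5 * np.log10((1.0 - G) * phi1 + G * phi2)

        # Hypothetical observation of an Earth-crossing asteroid.
        print(f"H = {absolute_magnitude_H(v_obs=17.8, r_au=1.20, delta_au=0.35, alpha_deg=40.0):.2f}")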

  2. A method for determining the V magnitude of asteroids from CCD images

    NASA Astrophysics Data System (ADS)

    Dymock, R.; Miles, R.

    2009-06-01

    We describe a method of obtaining the V magnitude of an asteroid using differential photometry, with the magnitudes of comparison stars derived from Carlsberg Meridian Catalogue 14 (CMC14) data. The availability of a large number of suitable CMC14 stars enables a reasonably accurate magnitude (±0.05 mag) to be determined without having to resort to more complicated absolute or all-sky photometry. An improvement in accuracy to ±0.03 mag is possible if an ensemble of several CMC14 stars is used. This method is expected to be less accurate for stars located within ±10° of the galactic equator owing to excessive interstellar reddening and stellar crowding.
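
    A minimal sketch of the ensemble differential-photometry step is given below: each comparison star yields an estimate V_cat(comp) + (m_inst(asteroid) - m_inst(comp)), and the ensemble is averaged. The instrumental and catalogue magnitudes here are made up, and the transformation from CMC14 photometry to V used in the actual method is not reproduced.

        import numpy as np

        def asteroid_v_magnitude(ast_inst, comp_inst, comp_catalog_v):
            """Ensemble differential photometry: average V estimates over comparison stars."""
            comp_inst = np.asarray(comp_inst, dtype=float)
            comp_catalog_v = np.asarray(comp_catalog_v, dtype=float)
            estimates = comp_catalog_v + (ast_inst - comp_inst)
            return estimates.mean(), estimates.std(ddof=1) / np.sqrt(estimates.size)

        # Hypothetical instrumental magnitudes and catalogue-derived V magnitudes.
        v_mean, v_err = asteroid_v_magnitude(
            ast_inst=-8.412,
            comp_inst=[-9.105, -8.870, -9.532],
            comp_catalog_v=[12.34, 12.57, 11.92])
        print(f"V = {v_mean:.3f} +/- {v_err:.3f}")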

  3. From 'sense of number' to 'sense of magnitude' - The role of continuous magnitudes in numerical cognition.

    PubMed

    Leibovich, Tali; Katzin, Naama; Harel, Maayan; Henik, Avishai

    2016-08-17

    In this review, we are pitting two theories against each other: the more accepted theory, the 'number sense' theory, suggesting that a sense of number is innate and non-symbolic numerosity is being processed independently of continuous magnitudes (e.g., size, area, density); and the newly emerging theory suggesting that (1) both numerosities and continuous magnitudes are processed holistically when comparing numerosities, and (2) a sense of number might not be innate. In the first part of this review, we discuss the 'number sense' theory. Against this background, we demonstrate how the natural correlation between numerosities and continuous magnitudes makes it nearly impossible to study non-symbolic numerosity processing in isolation from continuous magnitudes, and therefore the results of behavioral and imaging studies with infants, adults and animals can be explained, at least in part, by relying on continuous magnitudes. In the second part, we explain the 'sense of magnitude' theory and review studies that directly demonstrate that continuous magnitudes are more automatic and basic than numerosities. Finally, we present outstanding questions. Our conclusion is that there is not enough convincing evidence to support the number sense theory anymore. Therefore, we encourage researchers not to assume that number sense is simply innate, but to put this hypothesis to the test, and to consider if such an assumption is even testable in light of the correlation of numerosity and continuous magnitudes.

  4. Peak load management: Potential options

    SciTech Connect

    Englin, J.E.; De Steese, J.G.; Schultz, R.W.; Kellogg, M.A.

    1989-10-01

    This report reviews options that may be alternatives to transmission construction (ATT) applicable both generally and at specific locations in the service area of the Bonneville Power Administration (BPA). Some of these options have potential as specific alternatives to the Shelton-Fairmount 230-kV Reinforcement Project, which is the focus of this study. A listing of 31 peak load management (PLM) options is included. Estimated costs and normalized hourly load shapes, corresponding to the respective base load and controlled load cases, are considered for 15 of the above options. A summary page is presented for each of these options, grouped with respect to its applicability in the residential, commercial, industrial, and agricultural sectors. The report contains comments on PLM measures for which load shape management characteristics are not yet available. These comments address the potential relevance of the options and the possible difficulty that may be encountered in characterizing their value, should they be of interest in this investigation. The report also identifies options that could improve the efficiency of the three customer utility distribution systems supplied by the Shelton-Fairmount Reinforcement Project. Potential cogeneration options in the Olympic Peninsula are also discussed. These discussions focus on the options that appear to be most promising on the Olympic Peninsula. Finally, a short list of options is recommended for investigation in the next phase of this study. 9 refs., 24 tabs.

  5. Techniques for estimating magnitude and frequency of floods on streams in Indiana

    USGS Publications Warehouse

    Glatfelter, D.R.

    1984-01-01

    A rainfall-runoff model was used to synthesize long-term peak data at 11 gaged locations on small streams. Flood-frequency curves developed from the long-term synthetic data were combined with curves based on short-term observed data to provide weighted estimates of flood magnitude and frequency at the rainfall-runoff stations.

  6. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and the close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include the station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate the new scale to suffer from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation made is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal components. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
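
    The abstract does not quote the calibrated coefficients, so the sketch below simply evaluates a local-magnitude scale of the generic Hutton and Boore (1987) form with a station correction; the distance coefficients shown are the California values, used here only as placeholders for whatever the Turkish calibration produced. Following the abstract's recommendation, an amplitude measured on the vertical channel would additionally receive the logarithm of the horizontal-to-vertical factor (log10 1.8, about 0.26).

        import math

        def local_magnitude(amp_mm, hypo_dist_km, station_corr=0.0, n=1.11, k=0.00189):
            """Generic Ml scale of the Hutton & Boore (1987) form:
               Ml = log10(A) + n*log10(R/100) + k*(R - 100) + 3.0 + station_correction
            A is the simulated Wood-Anderson amplitude in mm, R the hypocentral distance in km.
            The coefficients n, k are placeholders, not the values calibrated for Turkey."""
            return (math.log10(amp_mm)
                    + n * math.log10(hypo_dist_km / 100.0)
                    + k * (hypo_dist_km - 100.0)
                    + 3.0
                    + station_corr)

        # Hypothetical reading: 2.5 mm at 150 km with a +0.1 station correction.
        print(f"Ml = {local_magnitude(2.5, 150.0, station_corr=0.1):.2f}")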

  7. From Hubble's NGSL to Absolute Fluxes

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don

    2012-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.

  8. Consistent thermostatistics forbids negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Dunkel, Jörn; Hilbert, Stefan

    2014-01-01

    Over the past 60 years, a considerable number of theories and experiments have claimed the existence of negative absolute temperature in spin systems and ultracold quantum gases. This has led to speculation that ultracold gases may be dark-energy analogues and also suggests the feasibility of heat engines with efficiencies larger than one. Here, we prove that all previous negative temperature claims and their implications are invalid as they arise from the use of an entropy definition that is inconsistent both mathematically and thermodynamically. We show that the underlying conceptual deficiencies can be overcome if one adopts a microcanonical entropy functional originally derived by Gibbs. The resulting thermodynamic framework is self-consistent and implies that absolute temperature remains positive even for systems with a bounded spectrum. In addition, we propose a minimal quantum thermometer that can be implemented with available experimental techniques.

  9. Absolute measurement of length with nanometric resolution

    NASA Astrophysics Data System (ADS)

    Apostol, D.; Garoi, F.; Timcu, A.; Damian, V.; Logofatu, P. C.; Nascov, V.

    2005-08-01

    Laser interferometer displacement measuring transducers have a well-defined traceability route to the definition of the meter. The laser interferometer is the de facto length scale for applications in micro and nano technologies. However, its physical unit, half the laser wavelength, is too large for nanometric resolution. The lack of reproducibility of fringe interpolation, the usual technique for improving resolution, can be avoided by using the principles of absolute distance measurement. Absolute distance refers to the use of interferometric techniques for determining the position of an object without the necessity of measuring continuous displacements between points. The interference pattern produced by two point-like coherent sources is fitted to a geometric model so as to determine the longitudinal location of the target by minimizing least-squares error. The longitudinal coordinate of the target was measured with an accuracy better than 1 nm over a target position range of 0.4 μm.

  10. Computer processing of spectrograms for absolute intensities.

    PubMed

    Guttman, A; Golden, J; Galbraith, H J

    1967-09-01

    A computer program was developed to process photographically recorded spectra for absolute intensity. Test and calibration films are subjected to densitometric scans that provide digitally recorded densities on magnetic tapes. The nonlinear calibration data are fitted by least-squares cubic polynomials to yield a good approximation to the monochromatic H&D curves for commonly used emulsions (2475 recording film, Royal-X, Tri-X, 4-X). Several test cases were made. Results of these cases show that the machine-processed absolute intensities are accurate to within 15%. Arbitrarily raising the sensitivity threshold by 0.1 density units above gross fog yields cubic polynomial fits to the H&D curves that are radiometrically accurate within 10%. In addition, curves of gamma vs wavelength for 2475, Tri-X, and 4-X emulsions were made. These data show slight evidence of the photographic Purkinje effect in the 2475 emulsion.

  11. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

    It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of the periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and disrupted financial markets, and has even increased relative to that of the euro and the Japanese yen.

  12. Probing absolute spin polarization at the nanoscale.

    PubMed

    Eltschka, Matthias; Jäck, Berthold; Assig, Maximilian; Kondrashov, Oleg V; Skvortsov, Mikhail A; Etzkorn, Markus; Ast, Christian R; Kern, Klaus

    2014-12-10

    Probing absolute values of spin polarization at the nanoscale offers insight into the fundamental mechanisms of spin-dependent transport. Employing the Zeeman splitting in superconducting tips (Meservey-Tedrow-Fulde effect), we introduce a novel spin-polarized scanning tunneling microscopy technique that combines the capability of probing absolute values of spin polarization with precise control at the atomic scale. We utilize our novel approach to measure the locally resolved spin polarization of magnetic Co nanoislands on Cu(111). We find that the spin polarization is enhanced by 65% when increasing the width of the tunnel barrier by only 2.3 Å due to the different decay of the electron orbitals into vacuum.

  13. Absolute and relative dosimetry for ELIMED

    NASA Astrophysics Data System (ADS)

    Cirrone, G. A. P.; Cuttone, G.; Candiano, G.; Carpinelli, M.; Leonora, E.; Lo Presti, D.; Musumarra, A.; Pisciotta, P.; Raffaele, L.; Randazzo, N.; Romano, F.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Cirio, R.; Marchetto, F.; Sacchi, R.; Giordanengo, S.; Monaco, V.

    2013-07-01

    The definition of detectors, methods, and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to the one required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, Faraday Cup, Secondary Emission Monitor (SEM) and transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  14. A novel absolute measurement for the low-frequency figure correction of aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Cheng; Chang, Shenq-Tsong; Ho, Cheng-Fang; Kuo, Ching-Hsiang; Chung, Chien-Kai; Hsu, Wei-Yao; Tseng, Shih-Feng; Sung, Cheng-Kuo

    2015-07-01

    This study proposes an absolute measurement method with a computer-generated hologram (CGH) to assist in identifying the manufacturing form error, as well as the distortions resulting from gravity and mounting, for a 300 mm aspherical mirror. This method uses the frequency of the peaks and valleys of each Zernike coefficient obtained from measurements at various orientations of the mirror in a horizontal optical-axis configuration. In addition, the rotationally symmetric aberration (spherical aberration) is calibrated with the random ball test method. Based on the measured absolute surface figure, a high-accuracy aspherical surface with a peak-to-valley (P-V) value of 1/8 wave at 632.8 nm was fabricated after surface figure correction with the reconstructed error map.

  15. Evolution and magnitudes of candidate Planet Nine

    NASA Astrophysics Data System (ADS)

    Linder, Esther F.; Mordasini, Christoph

    2016-05-01

    Context. The recently renewed interest in a possible additional major body in the outer solar system prompted us to study the thermodynamic evolution of such an object. We assumed that it is a smaller version of Uranus and Neptune. Aims: We modeled the temporal evolution of the radius, temperature, intrinsic luminosity, and the blackbody spectrum of distant ice giant planets. The aim is also to provide estimates of the magnitudes in different bands to assess whether the object might be detectable. Methods: Simulations of the cooling and contraction were conducted for ice giants with masses of 5, 10, 20, and 50 M⊕ that are located at 280, 700, and 1120 AU from the Sun. The core composition, the fraction of H/He, the efficiency of energy transport, and the initial luminosity were varied. The atmospheric opacity was set to 1, 50, and 100 times solar metallicity. Results: We find for a nominal 10 M⊕ planet at 700 AU at the current age of the solar system an effective temperature of 47 K, much higher than the equilibrium temperature of about 10 K, a radius of 3.7 R⊕, and an intrinsic luminosity of 0.006 L♃. It has estimated apparent magnitudes of Johnson V, R, I, L, N, Q of 21.7, 21.4, 21.0, 20.1, 19.9, and 10.7, and WISE W1-W4 magnitudes of 20.1, 20.1, 18.6, and 10.2. The Q and W4 band and other observations longward of about 13 μm pick up the intrinsic flux. Conclusions: If candidate Planet 9 has a significant H/He layer and an efficient energy transport in the interior, then its luminosity is dominated by the intrinsic contribution, making it a self-luminous planet. At a likely position on its orbit near aphelion, we estimate for a mass of 5, 10, 20, and 50 M⊕ a V magnitude from the reflected light of 24.3, 23.7, 23.3, and 22.6 and a Q magnitude from the intrinsic radiation of 14.6, 11.7, 9.2, and 5.8. The latter would probably have been detected by past surveys.
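
    As a rough cross-check of the reflected-light estimates, the standard small-body relation between diameter, geometric albedo, and absolute magnitude, D(km) = 1329 p_V^(-1/2) 10^(-H/5), combined with m = H + 5 log10(r Delta), gives a V magnitude near 21.5 for a 3.7 Earth-radius body at about 700 AU. The albedo used below is an assumed value and the phase term is neglected, so this is only an order-of-magnitude sketch, not the model calculation of the paper.

        # Back-of-the-envelope reflected-light V magnitude for a distant planet.
        # The geometric albedo is an assumed value; the phase integral is ignored
        # because the phase angle is tiny at several hundred AU.
        import math

        R_EARTH_KM = 6371.0

        def reflected_v_magnitude(radius_earth_radii, geometric_albedo, r_au, delta_au):
            diameter_km = 2.0 * radius_earth_radii * R_EARTH_KM
            h = 5.0 * math.log10(1329.0 / (diameter_km * math.sqrt(geometric_albedo)))
            return h + 5.0 * math.log10(r_au * delta_au)

        # 3.7 Earth radii at ~700 AU with an assumed albedo of 0.5:
        print(round(reflected_v_magnitude(3.7, 0.5, 700.0, 699.0), 1))   # ~21.5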

  16. Silicon Absolute X-Ray Detectors

    SciTech Connect

    Seely, John F.; Korde, Raj; Sprunck, Jacob; Medjoubi, Kadda; Hustache, Stephanie

    2010-06-23

    The responsivity of silicon photodiodes having no loss in the entrance window, measured using synchrotron radiation in the 1.75 to 60 keV range, was compared to the responsivity calculated using the silicon thickness measured using near-infrared light. The measured and calculated responsivities agree with an average difference of 1.3%. This enables their use as absolute x-ray detectors.

  17. Measurement of absolute gravity acceleration in Firenze

    NASA Astrophysics Data System (ADS)

    de Angelis, M.; Greco, F.; Pistorio, A.; Poli, N.; Prevedelli, M.; Saccorotti, G.; Sorrentino, F.; Tino, G. M.

    2011-01-01

    This paper reports the results from the accurate measurement of the acceleration of gravity g taken at two separate premises in the Polo Scientifico of the University of Firenze (Italy). In these laboratories, two separate experiments aiming at measuring the Newtonian constant and testing the Newtonian law at short distances are in progress. Both experiments require independent knowledge of the local value of g. The only available datum, pertaining to the Italian zero-order gravity network, was taken more than 20 years ago at a distance of more than 60 km from the study site. Gravity measurements were conducted using an FG5 absolute gravimeter and were accompanied by seismic recordings for evaluating the noise condition at the site. The absolute accelerations of gravity at the two laboratories are (980 492 160.6 ± 4.0) μGal and (980 492 048.3 ± 3.0) μGal for the European Laboratory for Non-Linear Spectroscopy (LENS) and Dipartimento di Fisica e Astronomia, respectively. Beyond the two referenced experiments, the data presented here will serve as a benchmark for any future study requiring an accurate knowledge of the absolute value of the acceleration of gravity in the study region.

  18. Chemical composition of French mimosa absolute oil.

    PubMed

    Perriot, Rodolphe; Breme, Katharina; Meierhenrich, Uwe J; Carenini, Elise; Ferrando, Georges; Baldovini, Nicolas

    2010-02-10

    For decades, mimosa (Acacia dealbata) absolute oil has been used in the flavor and perfume industry. Today, it finds an application in over 80 perfumes, and its worldwide industrial production is estimated at five tons per year. Here we report on the chemical composition of French mimosa absolute oil. Straight-chain analogues from C6 to C26 with different functional groups (hydrocarbons, esters, aldehydes, diethyl acetals, alcohols, and ketones) were identified in the volatile fraction. Most of them are long-chain molecules: (Z)-heptadec-8-ene, heptadecane, nonadecane, and palmitic acid are the most abundant, and constituents such as 2-phenethyl alcohol, methyl anisate, and ethyl palmitate are present in smaller amounts. The heavier constituents were mainly triterpenoids such as lupenone and lupeol, which were identified as two of the main components. (Z)-Heptadec-8-ene, lupenone, and lupeol were quantified by GC-MS in SIM mode using external standards and represent 6%, 20%, and 7.8% (w/w) of the absolute oil, respectively. Moreover, odorant compounds were extracted by SPME and analyzed by GC-sniffing, leading to the perception of 57 odorant zones, of which 37 compounds were identified by their odorant description, mass spectrum, retention index, and injection of the reference compound.

  19. Constrained Least Absolute Deviation Neural Networks

    PubMed Central

    Wang, Zhishun; Peterson, Bradley S.

    2008-01-01

    It is well known that least absolute deviation (LAD) criterion or L1-norm used for estimation of parameters is characterized by robustness, i.e., the estimated parameters are totally resistant (insensitive) to large changes in the sampled data. This is an extremely useful feature, especially, when the sampled data are known to be contaminated by occasionally occurring outliers or by spiky noise. In our previous works, we have proposed the least absolute deviation neural network (LADNN) to solve unconstrained LAD problems. The theoretical proofs and numerical simulations have shown that the LADNN is Lyapunov-stable and it can globally converge to the exact solution to a given unconstrained LAD problem. We have also demonstrated its excellent application value in time-delay estimation. More generally, a practical LAD application problem may contain some linear constraints, such as a set of equalities and/or inequalities, which is called constrained LAD problem, whereas the unconstrained LAD can be considered as a special form of the constrained LAD. In this paper, we present a new neural network called constrained least absolute deviation neural network (CLADNN) to solve general constrained LAD problems. Theoretical proofs and numerical simulations demonstrate that the proposed CLADNN is Lyapunov stable and globally converges to the exact solution to a given constrained LAD problem, independent of initial values. The numerical simulations have also illustrated that the proposed CLADNN can be used to robustly estimate parameters for nonlinear curve fitting, which is extensively used in signal and image processing. PMID:18269958
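
    The LAD criterion itself, independent of the neural-network solvers described above, can be posed as a small linear program. The sketch below is a generic L1 regression fit (not the authors' LADNN or CLADNN); linear equality or inequality constraints on the parameters could be appended to the same program.

        # Generic least-absolute-deviation (L1) regression as a linear program,
        # illustrating the robustness to spiky outliers discussed in the record.
        # This is not the LADNN/CLADNN of the authors.
        import numpy as np
        from scipy.optimize import linprog

        def lad_fit(X, y):
            n, p = X.shape
            # Variables [beta (p), t (n)]: minimize sum(t) subject to |y - X beta| <= t.
            c = np.concatenate([np.zeros(p), np.ones(n)])
            A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
            b_ub = np.concatenate([y, -y])
            bounds = [(None, None)] * p + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[:p]

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(50), rng.normal(size=50)])
        y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=50)
        y[::10] += 20.0              # occasional large outliers
        print(lad_fit(X, y))         # close to [2, 3] despite the outliers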

  20. Absolute standardization of the impurity (121)Te associated to the production of the radiopharmaceutical (123)I.

    PubMed

    Araújo, M T F; Poledna, R; Delgado, J U; Silva, R L; Iwahara, A; da Silva, C J; Tauhata, L; Oliveira, A E; de Almeida, M C M; Lopes, R T

    2016-03-01

    (123)I is widely used for radiodiagnostic procedures. It is produced by the reaction (124)Xe (p,2n) (123)Cs → (123)Xe → (123)I in cyclotrons. (121)Te and (125)I appear in the photon energy spectrum as impurities. The activity of (121)Te was calibrated absolutely by the sum-peak method, and its photon emission probability was estimated; the results were consistent with published values.

  1. Understanding the magnitude dependence of PGA and PGV in NGA-West 2 data

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.

    2014-01-01

    The Next Generation Attenuation-West 2 (NGA-West 2) 2014 ground-motion prediction equations (GMPEs) model ground motions as a function of magnitude and distance, using empirically derived coefficients (e.g., Bozorgnia et al., 2014); as such, these GMPEs do not clearly employ earthquake source parameters beyond moment magnitude (M) and focal mechanism. To better understand the magnitude-dependent trends in the GMPEs, we build a comprehensive earthquake source-based model to explain the magnitude dependence of peak ground acceleration and peak ground velocity in the NGA-West 2 ground-motion databases and GMPEs. Our model employs existing models (Hanks and McGuire, 1981; Boore, 1983, 1986; Anderson and Hough, 1984) that incorporate a point-source Brune model, including a constant stress drop and the high-frequency attenuation parameter κ0, random vibration theory, and a finite-fault assumption at the large magnitudes to describe the data from magnitudes 3 to 8. We partition this range into four different magnitude regions, each of which has different functional dependences on M. Use of the four magnitude partitions separately allows greater understanding of what happens in any one subrange, as well as the limiting conditions between the subranges. This model provides a remarkably good fit to the NGA data across this magnitude range, except for the smallest-magnitude data, for which the corner frequency is masked by the attenuation of high frequencies. That this simple, source-based model matches the NGA-West 2 GMPEs and data so well suggests that considerable simplicity underlies the parametrically complex NGA GMPEs.
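
    Two of the point-source ingredients named above are easy to sketch: seismic moment from moment magnitude (Hanks and Kanamori) and the Brune corner frequency for a constant stress drop. The stress drop and shear-wave velocity below are illustrative assumptions, not the values fitted to the NGA-West 2 data.

        # Brune point-source ingredients: Mw -> seismic moment (dyne-cm) and the
        # corner frequency for an assumed constant stress drop (bars) and shear
        # wave velocity (km/s).  Values chosen here are illustrative only.
        import math

        def seismic_moment_dyne_cm(mw):
            return 10.0 ** (1.5 * mw + 16.05)

        def brune_corner_frequency(mw, stress_drop_bars=100.0, beta_km_s=3.5):
            m0 = seismic_moment_dyne_cm(mw)
            return 4.9e6 * beta_km_s * (stress_drop_bars / m0) ** (1.0 / 3.0)

        for mw in (3.0, 5.0, 7.0):
            print(f"Mw {mw}: fc = {brune_corner_frequency(mw):.2f} Hz")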

  2. Equivalent comfort contours for vertical vibration of steering wheels: effect of vibration magnitude, grip force, and hand position.

    PubMed

    Morioka, Miyuki; Griffin, Michael J

    2009-09-01

    Vehicle drivers receive tactile feedback from steering-wheel vibration that depends on the frequency and magnitude of the vibration. From an experiment with 12 subjects, equivalent comfort contours were determined for vertical vibration of the hands at two positions with three grip forces. The perceived intensity of the vibration was determined using the method of magnitude estimation over a range of frequencies (4-250 Hz) and magnitudes (0.1-1.58 m s^-2 r.m.s.). Absolute thresholds for vibration perception were also determined for the two hand positions over the same frequency range. The shapes of the comfort contours were strongly dependent on vibration magnitude and also influenced by grip force, indicating that the appropriate frequency weighting depends on vibration magnitude and grip force. There was only a small effect of hand position. The findings are explained by characteristics of the Pacinian and non-Pacinian tactile channels in the glabrous skin of the hand.

  3. Apparent magnitude of earthshine: a simple calculation

    NASA Astrophysics Data System (ADS)

    Agrawal, Dulli Chandra

    2016-05-01

    The Sun illuminates both the Moon and the Earth with practically the same luminous fluxes, which are in turn reflected by them. The Moon provides a dim light to the Earth, whereas the Earth illuminates the Moon with somewhat brighter light, which can be seen from the Earth and is called earthshine. As the amount of light reflected from the Earth depends on the part of the Earth involved and on the cloud cover, the strength of earthshine varies throughout the year. The measure of the earthshine light is luminance, which is defined in photometry as the total luminous flux of light hitting or passing through a surface. The expression for the earthshine light in terms of the apparent magnitude has been derived for the first time and evaluated for two extreme cases: first, when the Sun's rays are reflected by the water of the oceans, and second, when the reflector is either thick clouds or snow. The corresponding values are -1.30 and -3.69, respectively. The earthshine value -3.22 reported by Jackson lies within these apparent magnitudes. This paper will motivate the students and teachers of physics to look for the Moon illuminated by earthlight during the waning or waxing crescent phase of the Moon and to reproduce the expressions derived here by making use of the inverse-square law of radiation, Planck's expression for the power in electromagnetic radiation, the photopic spectral luminous efficiency function, and the expression for the apparent magnitude of a body in terms of luminous fluxes.
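
    The apparent-magnitude bookkeeping behind such estimates is the Pogson relation m2 - m1 = -2.5 log10(F2/F1) between two luminous fluxes. The reference magnitude and the flux ratio in the sketch below are illustrative assumptions, not values derived in the paper.

        # Pogson relation: magnitude difference from a ratio of (luminous) fluxes.
        # The full-Moon reference magnitude and the 1e-4 flux ratio are assumed
        # numbers chosen only to illustrate the scale of the result.
        import math

        def magnitude_from_flux_ratio(reference_magnitude, flux_ratio):
            return reference_magnitude - 2.5 * math.log10(flux_ratio)

        full_moon_mag = -12.7
        print(round(magnitude_from_flux_ratio(full_moon_mag, 1e-4), 1))   # about -2.7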

  4. Resurgence and alternative-reinforcer magnitude.

    PubMed

    Craig, Andrew R; Browning, Kaitlyn O; Nall, Rusty W; Marshall, Ciara M; Shahan, Timothy A

    2017-03-01

    Resurgence is defined as an increase in the frequency of a previously reinforced target response when an alternative source of reinforcement is suspended. Despite an extensive body of research examining factors that affect resurgence, the effects of alternative-reinforcer magnitude have not been examined. Thus, the present experiments aimed to fill this gap in the literature. In Experiment 1, rats pressed levers for single-pellet reinforcers during Phase 1. In Phase 2, target-lever pressing was extinguished, and alternative-lever pressing produced either five-pellet, one-pellet, or no alternative reinforcement. In Phase 3, alternative reinforcement was suspended to test for resurgence. Five-pellet alternative reinforcement produced faster elimination and greater resurgence of target-lever pressing than one-pellet alternative reinforcement. In Experiment 2, effects of decreasing alternative-reinforcer magnitude on resurgence were examined. Rats pressed levers and pulled chains for six-pellet reinforcers during Phases 1 and 2, respectively. In Phase 3, alternative reinforcement was decreased to three pellets for one group, one pellet for a second group, and suspended altogether for a third group. Shifting from six-pellet to one-pellet alternative reinforcement produced as much resurgence as suspending alternative reinforcement altogether, while shifting from six pellets to three pellets did not produce resurgence. These results suggest that alternative-reinforcer magnitude has effects on elimination and resurgence of target behavior that are similar to those of alternative-reinforcer rate. Thus, both suppression of target behavior during alternative reinforcement and resurgence when conditions of alternative reinforcement are altered may be related to variables that affect the value of the alternative-reinforcement source.

  5. Orientation and Magnitude of Mars' Magnetic Field

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image shows the orientation and magnitude of the magnetic field measured by the MGS magnetometer as it sped over the surface of Mars during an early aerobraking pass (Day of the year, 264; 'P6' periapsis pass). At each point along the spacecraft trajectory we've drawn vectors in the direction of the magnetic field measured at that instant; the length of the line is scaled to show the relative magnitude of the field. Imagine traveling along with the MGS spacecraft, holding a string with a magnetized needle on one end: this is essentially a compass with a needle that is free to spin in all directions. As you pass over the surface the needle would swing rapidly, first pointing towards the planet and then rotating quickly towards 'up' and back down again, all in a relatively short span of time, say a minute or two, during which the spacecraft has traveled a couple of hundred miles. You've just passed over one of many 'magnetic anomalies' thus far detected near the surface of Mars. A second major anomaly appears a little later along the spacecraft track, about 1/4 the magnitude of the first - can you find it? The short scale length of the magnetic field signature locates the source near the surface of Mars, perhaps in the crust, a 10 to 75 kilometer thick outer shell of the planet (radius 3397 km).

    The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO. JPL is an operating division of California Institute of Technology (Caltech).

  6. The intensities and magnitudes of volcanic eruptions

    USGS Publications Warehouse

    Sigurdsson, H.

    1991-01-01

    Ever since 1935, when C. F. Richter devised the earthquake magnitude scale that bears his name, seismologists have been able to view energy release from earthquakes in a systematic and quantitative manner. The benefits have been obvious in terms of assessing seismic gaps and the spatial and temporal trends of earthquake energy release. A similar quantitative treatment of volcanic activity is of course equally desirable, both for gaining a further understanding of the physical principles of volcanic eruptions and for volcanic-hazard assessment. A systematic volcanologic database would be of great value in evaluating such features as volcanic gaps, and regional and temporal trends in energy release.

  7. Precise Relative Earthquake Magnitudes from Cross Correlation

    SciTech Connect

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
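
    The core idea can be sketched for a single pair of events: align the two waveforms by cross-correlation, estimate the least-squares amplitude ratio, and take the relative magnitude as the base-10 logarithm of that ratio. The method of the record inverts all pairings in a group jointly; the snippet below, with synthetic waveforms, shows only the pairwise step.

        # Pairwise relative magnitude from correlated waveforms (a simplified
        # single-pair version of the intercorrelation approach in the record).
        import numpy as np

        def relative_magnitude(trace_a, trace_b):
            lag = np.argmax(np.correlate(trace_a, trace_b, mode="full")) - (len(trace_b) - 1)
            b_aligned = np.roll(trace_b, lag)    # crude alignment; real data would be trimmed
            scale = np.dot(trace_a, b_aligned) / np.dot(b_aligned, b_aligned)
            return np.log10(abs(scale))

        t = np.linspace(0.0, 10.0, 1000)
        wave = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)
        print(round(relative_magnitude(3.16 * wave, wave), 2))   # ~0.5 magnitude units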

  8. The Question of Absolute Space and Time Directions in Relation to Molecular Chirality, Parity Violation, and Biomolecular Homochirality

    SciTech Connect

    Quack, Martin

    2001-03-21

    The questions of the absolute directions of space and time or the “observability” of absolute time direction as well as absolute handedness (left or right) are related to the fundamental symmetries of physics C, P, T as well as their combinations, in particular CPT, and their violations, such as parity violation. At the same time there is a relation to certain still open questions in chemistry concerning the fundamental physical-chemical principles of molecular chirality and in biochemistry concerning the selection of homochirality in evolution. In the lecture we shall introduce the concepts and then report new theoretical results from our work on parity violation in chiral molecules, showing order of magnitude increases with respect to previously accepted values. We discuss as well our current experimental efforts. We shall briefly mention the construction of an absolute molecular clock.

  10. The Phenomenology of Aesthetic Peak Experiences.

    ERIC Educational Resources Information Center

    Panzarella, Robert

    1980-01-01

    Descriptions of music and visual art peak experiences obtained from persons were content analyzed and factor analyzed. The peak-experience accounts mirrored conflicts in aesthetic norms and suggest a greater role for individual differences in aesthetic theories. (Author)

  11. 27 CFR 9.140 - Atlas Peak.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...). (c) Boundaries. The Atlas Peak viticultural area is located in Napa County, California. It lies entirely within the Napa Valley viticultural area. The beginning point is Haystack (peak) found in...

  12. Strong motion duration and earthquake magnitude relationships

    SciTech Connect

    Salmon, M.W.; Short, S.A.; Kennedy, R.P.

    1992-06-01

    Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy-based strong motion duration definitions.
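
    One common, energy-based duration definition of the kind surveyed here is the significant duration between fixed fractions of the cumulative squared acceleration. The sketch below uses the 5-95% bounds as an example; it is a generic illustration, not the specific definition recommended by the report.

        # Significant duration: time between 5% and 95% of the cumulative
        # squared-acceleration (Arias-type) integral of a record.
        import numpy as np

        def significant_duration(accel, dt, lo=0.05, hi=0.95):
            energy = np.cumsum(np.asarray(accel) ** 2) * dt
            energy /= energy[-1]
            t_lo = np.searchsorted(energy, lo) * dt
            t_hi = np.searchsorted(energy, hi) * dt
            return t_hi - t_lo

        dt = 0.01
        t = np.arange(0.0, 40.0, dt)
        accel = np.exp(-((t - 12.0) / 4.0) ** 2) * np.sin(2 * np.pi * 1.5 * t)   # synthetic record
        print(round(significant_duration(accel, dt), 2), "s")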

  13. Peak-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
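
    A bare-bones version of the Log Pearson Type III fit named above: fit the moments of log10 of the annual peaks and read quantiles at the desired annual exceedance probabilities. This sketch omits the regional-skew weighting, low-outlier screening, and the weighting with regional regression estimates described in the report.

        # Minimal Log-Pearson Type III flood-frequency fit (method of moments on
        # log10 of annual peaks).  Synthetic data stand in for a station record.
        import numpy as np
        from scipy import stats

        def lp3_quantiles(annual_peaks, exceedance_probs):
            logq = np.log10(np.asarray(annual_peaks, dtype=float))
            skew = stats.skew(logq, bias=False)
            dist = stats.pearson3(skew, loc=logq.mean(), scale=logq.std(ddof=1))
            return {p: 10.0 ** dist.ppf(1.0 - p) for p in exceedance_probs}

        rng = np.random.default_rng(1)
        peaks = 10.0 ** rng.normal(3.5, 0.25, size=60)      # synthetic 60-year record
        print(lp3_quantiles(peaks, [0.5, 0.1, 0.01]))       # median, 10%, and 1% AEP flows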

  14. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation for physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time; it can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz Transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting product, i.e., the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz Transformation. The Lorentz invariance of the displayed times of clocks further proves, within the framework of STR, that our Earth-based standard physical time is absolute, universal, and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites with clocks on the Earth and by the theoretical existence of an absolute and universal Galilean time in STR, which shows that time dilation and space contraction are pure illusions of STR. The existence of this absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.

  15. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford-Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations which relied on combining measurements of individual optical components, and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  16. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  17. Brownian motion: Absolute negative particle mobility

    NASA Astrophysics Data System (ADS)

    Ros, Alexandra; Eichhorn, Ralf; Regtmeier, Jan; Duong, Thanh Tu; Reimann, Peter; Anselmetti, Dario

    2005-08-01

    Noise effects in technological applications, far from being a nuisance, can be exploited with advantage - for example, unavoidable thermal fluctuations have found application in the transport and sorting of colloidal particles and biomolecules. Here we use a microfluidic system to demonstrate a paradoxical migration mechanism in which particles always move in a direction opposite to the net acting force (`absolute negative mobility') as a result of an interplay between thermal noise, a periodic and symmetric microstructure, and a biased alternating-current electric field. This counterintuitive phenomenon could be used for bioanalytical purposes, for example in the separation and fractionation of colloids, biological molecules and cells.

  18. Arbitrary segments of absolute negative mobility

    NASA Astrophysics Data System (ADS)

    Chen, Ruyin; Nie, Linru; Chen, Chongyang; Wang, Chaojie

    2017-01-01

    In previous research, investigators have reported only one or two segments of absolute negative mobility (ANM) in a periodic potential. In fact, many segments of ANM also occur in the system considered here. We investigate the transport of an inertial particle in a gating ratchet periodic potential subjected to a constant bias force. Our numerical results show that its mean velocity can decrease as the bias force increases, i.e., the ANM phenomenon. Furthermore, the ANM can occur in an arbitrary number of segments, even more than thirty. The intrinsic physical mechanism and the conditions for arbitrary segments of ANM to occur are discussed in detail.

  19. Absolute quantification of myocardial blood flow.

    PubMed

    Yoshinaga, Keiichiro; Manabe, Osamu; Tamaki, Nagara

    2016-07-21

    With the increasing availability of positron emission tomography (PET) myocardial perfusion imaging, the absolute quantification of myocardial blood flow (MBF) has become popular in clinical settings. Quantitative MBF provides an important additional diagnostic or prognostic information over conventional visual assessment. The success of MBF quantification using PET/computed tomography (CT) has increased the demand for this quantitative diagnostic approach to be more accessible. In this regard, MBF quantification approaches have been developed using several other diagnostic imaging modalities including single-photon emission computed tomography, CT, and cardiac magnetic resonance. This review will address the clinical aspects of PET MBF quantification and the new approaches to MBF quantification.

  20. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring supports the conclusion that the revised absolute radii are in error by no more than 2 km.

  1. Absolute Rate Theories of Epigenetic Stability

    NASA Astrophysics Data System (ADS)

    Walczak, Aleksandra M.; Onuchic, Jose N.; Wolynes, Peter G.

    2006-03-01

    Spontaneous switching events in most characterized genetic switches are rare, resulting in extremely stable epigenetic properties. We show how simple arguments lead to theories of the rate of such events much like the absolute rate theory of chemical reactions corrected by a transmission factor. Both the probability of the rare cellular states that allow epigenetic escape, and the transmission factor, depend on the rates of DNA binding and unbinding events and on the rates of protein synthesis and degradation. Different mechanisms of escape from the stable attractors occur in the nonadiabatic, weakly adiabatic and strictly adiabatic regimes, characterized by the relative values of those input rates.

  2. Absolute rate theories of epigenetic stability

    NASA Astrophysics Data System (ADS)

    Walczak, Aleksandra M.; Onuchic, José N.; Wolynes, Peter G.

    2005-12-01

    Spontaneous switching events in most characterized genetic switches are rare, resulting in extremely stable epigenetic properties. We show how simple arguments lead to theories of the rate of such events much like the absolute rate theory of chemical reactions corrected by a transmission factor. Both the probability of the rare cellular states that allow epigenetic escape and the transmission factor depend on the rates of DNA binding and unbinding events and on the rates of protein synthesis and degradation. Different mechanisms of escape from the stable attractors occur in the nonadiabatic, weakly adiabatic, and strictly adiabatic regimes, characterized by the relative values of those input rates.

  3. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  4. Absolute Priority for a Vehicle in VANET

    NASA Astrophysics Data System (ADS)

    Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad

    In today's world, traffic jams waste hundreds of hours of our lives. This has led many researchers to try to resolve the problem with the idea of Intelligent Transportation Systems. For some applications, such as a travelling ambulance, it is important to reduce delay even by a second. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. Then, we simulate our proposed protocol and compare it with a centrally controlled traffic light system.

  5. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
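
    The command step itself is simple once gradient and Hessian estimates are available: a Newton-type update x <- x - H^{-1} g drives the system toward the local extremum. In the sketch below, finite differences stand in for the time-varying Kalman filter estimator of the record, on an assumed two-input, one-output performance function.

        # Peak seeking with estimated gradient and Hessian.  Finite differences
        # replace the Kalman-filter estimator described in the record.
        import numpy as np

        def gradient_and_hessian(f, x, h=1e-3):
            n = len(x)
            g, H = np.zeros(n), np.zeros((n, n))
            for i in range(n):
                ei = np.eye(n)[i] * h
                g[i] = (f(x + ei) - f(x - ei)) / (2 * h)
                for j in range(n):
                    ej = np.eye(n)[j] * h
                    H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                               - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
            return g, H

        def peak_seek(f, x0, steps=10):
            x = np.array(x0, dtype=float)
            for _ in range(steps):
                g, H = gradient_and_hessian(f, x)
                x = x - np.linalg.solve(H, g)        # Newton step toward the extremum
            return x

        f = lambda x: -(x[0] - 1.0) ** 2 - 2.0 * (x[1] + 0.5) ** 2   # peak at (1, -0.5)
        print(peak_seek(f, [3.0, 3.0]))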

  6. Trends in peak flows of selected streams in Kansas

    USGS Publications Warehouse

    Rasmussen, T.J.; Perry, C.A.

    2001-01-01

    The possibility of a systematic change in flood potential led to an investigation of trends in the magnitude of annual peak flows in Kansas. Efficient design of highway bridges and other flood-plain structures depends on accurate understanding of flood characteristics. The Kendall's tau test was used to identify trends at 40 stream-gaging stations during the 40-year period 1958-97. Records from 13 (32 percent) of the stations showed significant trends at the 95-percent confidence level. Only three of the records (8 percent) analyzed had increasing trends, whereas 10 records (25 percent) had decreasing trends, all of which were for stations located in the western one-half of the State. An analysis of flow volume using mean annual discharge at 29 stations in Kansas resulted in 6 stations (21 percent) with significant trends in flow volumes. All six trends were decreasing and occurred in the western one-half of the State. The Kendall's tau test also was used to identify peak-flow trends over the entire period of record for 54 stream-gaging stations in Kansas. Of the 23 records (43 percent) showing significant trends, 16 (30 percent) were decreasing, and 7 (13 percent) were increasing. The trend test then was applied to 30-year periods moving in 5-year increments to identify time periods within each station record when trends were occurring. Systematic changes in precipitation patterns and long-term declines in ground-water levels in some stream basins may be contributing to peak-flow trends. To help explain the cause of the streamflow trends, the Kendall's tau test was applied to total annual precipitation and ground-water levels in Kansas. In western Kansas, the lack of precipitation and presence of decreasing trends in ground-water levels indicated that declining water tables are contributing to decreasing trends in peak streamflow. Declining water tables are caused by ground-water withdrawals and other factors such as construction of ponds and terraces. Peak
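
    The trend test used throughout the record is straightforward to reproduce: Kendall's tau between the year and the annual peak flow, with the two-sided p-value read against the chosen confidence level. The series below is synthetic; a station's observed annual peaks would be passed in instead.

        # Kendall's tau trend test on an annual peak-flow series (synthetic data).
        import numpy as np
        from scipy import stats

        def peak_flow_trend(years, annual_peaks, alpha=0.05):
            tau, p_value = stats.kendalltau(years, annual_peaks)
            direction = "increasing" if tau > 0 else "decreasing"
            verdict = direction if p_value < alpha else "no significant trend"
            return tau, p_value, verdict

        years = np.arange(1958, 1998)
        rng = np.random.default_rng(2)
        peaks = 5000.0 - 40.0 * (years - 1958) + rng.normal(0.0, 400.0, size=years.size)
        print(peak_flow_trend(years, peaks))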

  7. Asteroid taxonomy and the H,G_{12} magnitude system

    NASA Astrophysics Data System (ADS)

    Oszkiewicz, D.; Bowell, E.; Wasserman, L.; Muinonen, K.; Penttilä, A.

    2014-07-01

    We review the asteroid magnitude systems. The conventionally used H,G system (approved by the IAU in 1985) was recently replaced by the H,G_{12} and H,G_1,G_2 systems (approved by the IAU in 2012). The new phase curves were already applied to a large quantity of photometric data (Oszkiewicz et al, 2011). In particular, absolute magnitudes and slope parameters were computed for about half a million asteroids and are publicly available through the Planetary Research Group (University of Helsinki) websites. Several correlations of the shape of the phase curves with asteroid physical parameters were also explored. In general, the steepness of a phase curve relates to the physical properties of an asteroid's surface such as for example composition, porosity, packing density, roughness, and grain size distribution. However, most of those cannot be studied with the currently available data. Some conclusions regarding links to albedo and taxonomy can still be made. First, the G_1 and G_2 parameters correlate with albedo. Generally, the higher the albedo the lower and higher are the G_1 and G_2 parameters, respectively. Second, the G_{12} parameter distributions for the different asteroid taxonomic complexes are statistically different. For example, the C-complex asteroids tend to have high G_{12}'s, S-complex asteroids low G_{12}'s, and objects from the X-complex lean towards average values (Oszkiewicz et al. 2012). Additionally, asteroid families with a few exceptions show homogeneity of the G_{12} parameter (Figure). This is yet another confirmation of homogeneity of asteroid families and therefore the overall tendency to retain the same physical properties across family members. We study the usability of the G_{12} parameter in topics such as breaking the X-complex degeneracy and taxonomical classification. In particular, we combine the G_{12}'s with the Sloan Digital Sky Survey (SDSS) and the Wide-Field Infrared Survey Explorer (WISE) data (Oszkiewicz et al. 2014) to
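
    For reference, the classic two-parameter H,G phase law that the newer H,G_{12} and H,G_1,G_2 systems refine can be written down directly; the apparent V magnitude follows from H, G, the phase angle, and the helio- and geocentric distances. The newer systems replace the two exponential basis functions with tabulated ones and are not reproduced here; the example inputs below are arbitrary illustration values.

        # Classic IAU H,G phase law: apparent V magnitude of an asteroid from H, G,
        # phase angle, and distances.  Example inputs are illustrative only.
        import math

        def hg_apparent_magnitude(H, G, phase_angle_deg, r_au, delta_au):
            a = math.radians(phase_angle_deg)
            phi1 = math.exp(-3.33 * math.tan(a / 2.0) ** 0.63)
            phi2 = math.exp(-1.87 * math.tan(a / 2.0) ** 1.22)
            reduced = H - 2.5 * math.log10((1.0 - G) * phi1 + G * phi2)
            return reduced + 5.0 * math.log10(r_au * delta_au)

        # An object with H = 15 and G = 0.15, observed at 20 deg phase:
        print(round(hg_apparent_magnitude(15.0, 0.15, 20.0, 2.5, 1.8), 2))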

  8. Absolute Spectrophotometry of 237 Open Cluster Stars

    NASA Astrophysics Data System (ADS)

    Clampitt, L.; Burstein, D.

    1994-12-01

    We present absolute spectrophotometry of 237 stars in 7 nearby open clusters: Hyades, Pleiades, Alpha Persei, Praesepe, Coma Berenices, IC 4665, and M 39. The observations were taken using the Wampler single-channel scanner (Wampler 1966) on the Crossley 0.9 m telescope at Lick Observatory from July 1973 through December 1974. Twenty-one bandpasses spanning the spectral range 3500 Angstroms to 7780 Angstroms were observed for each star, with bandwidths ranging from 32 Angstroms to 64 Angstroms. Data are standardized to the Hayes-Latham (1975) system. Our measurements are compared to filter colors on the Johnson BV, Stromgren ubvy, and Geneva U V B_1 B_2 V_1 G systems, as well as to spectrophotometry of a few stars published by Gunn, Stryker & Tinsley and in the Spectrophotometric Standards Catalog (Adelman; as distributed by the NSSDC). Both internal and external comparisons to the filter systems indicate a formal statistical accuracy per bandpass of 0.01 to 0.02 mag, with apparently larger (~0.03 mag) differences in absolute calibration between this data set and existing spectrophotometry. These data will comprise part of the spectrophotometry that will be used to calibrate the Beijing-Arizona-Taipei-Connecticut Color Survey of the Sky (see separate paper by Burstein et al. at this meeting).

  9. Linear ultrasonic motor for absolute gravimeter.

    PubMed

    Jian, Yue; Yao, Zhiyuan; Silberschmidt, Vadim V

    2017-02-01

    Thanks to their compactness and suitability for vacuum applications, linear ultrasonic motors are considered substitutes for classical electromagnetic motors as driving elements in absolute gravimeters. Still, their application is prevented by relatively low power output. To overcome this limitation and provide better stability, a V-type linear ultrasonic motor with a new clamping method is proposed for a gravimeter. In this paper, a mechanical model of stators with flexible clamping components is suggested, according to a design criterion for clamps of linear ultrasonic motors. After that, the effect of the tangential and normal rigidity of the clamping components on mechanical output is studied. This is followed by a discussion of a new clamping method with sufficient tangential rigidity and a capability to facilitate pre-load. Additionally, a prototype of the motor with the proposed clamping method was fabricated, and performance tests in the vertical direction were carried out. Experimental results show that the suggested motor has structural stability and high dynamic performance, such as a no-load speed of 1.4 m/s and a maximal thrust of 43 N, meeting the requirements for absolute gravimeters.

  10. Why to compare absolute numbers of mitochondria.

    PubMed

    Schmitt, Sabine; Schulz, Sabine; Schropp, Eva-Maria; Eberhagen, Carola; Simmons, Alisha; Beisker, Wolfgang; Aichler, Michaela; Zischka, Hans

    2014-11-01

    Prompted by pronounced structural differences between rat liver and rat hepatocellular carcinoma mitochondria, we suspected these mitochondrial populations to differ massively in their molecular composition. Aiming to reveal these mitochondrial differences, we came across the issue on how to normalize such comparisons and decided to focus on the absolute number of mitochondria. To this end, fluorescently stained mitochondria were quantified by flow cytometry. For rat liver mitochondria, this approach resulted in mitochondrial protein contents comparable to earlier reports using alternative methods. We determined similar protein contents for rat liver, heart and kidney mitochondria. In contrast, however, lower protein contents were determined for rat brain mitochondria and for mitochondria from the rat hepatocellular carcinoma cell line McA 7777. This result challenges mitochondrial comparisons that rely on equal protein amounts as a typical normalization method. Exemplarily, we therefore compared the activity and susceptibility toward inhibition of complex II of rat liver and hepatocellular carcinoma mitochondria and obtained significant discrepancies by either normalizing to protein amount or to absolute mitochondrial number. Importantly, the latter normalization, in contrast to the former, demonstrated a lower complex II activity and higher susceptibility toward inhibition in hepatocellular carcinoma mitochondria compared to liver mitochondria. These findings demonstrate that solely normalizing to protein amount may obscure essential molecular differences between mitochondrial populations.

  11. The absolute threshold of cone vision

    PubMed Central

    Koeing, Darran; Hofer, Heidi

    2013-01-01

    We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115

  12. [Estimation of absolute risk for fracture].

    PubMed

    Fujiwara, Saeko

    2009-03-01

    Osteoporosis treatment aims to prevent fractures and maintain the QOL of the elderly. However, persons at high risk of future fracture cannot be effectively identified on the basis of bone density (BMD) alone, although BMD is used as a diagnostic criterion. Therefore, the WHO recommended that the absolute risk for fracture (10-year probability of fracture) for each individual be evaluated and used as an index for the intervention threshold. The 10-year probability of fracture is calculated based on age, sex, BMD at the femoral neck (body mass index if BMD is not available), history of previous fractures, parental hip fracture history, smoking, steroid use, rheumatoid arthritis, secondary osteoporosis and alcohol consumption. The WHO has just announced the development of a calculation tool (FRAX: WHO Fracture Risk Assessment Tool) in February this year. Fractures could be prevented more effectively if, based on each country's medical circumstances, an absolute risk value for fracture to determine when to start medical treatment is established and persons at high risk of fracture are identified and treated accordingly.

  13. Absolute stereochemistry of altersolanol A and alterporriols.

    PubMed

    Kanamaru, Saki; Honma, Miho; Murakami, Takanori; Tsushima, Taro; Kudo, Shinji; Tanaka, Kazuaki; Nihei, Ken-Ichi; Nehira, Tatsuo; Hashimoto, Masaru

    2012-02-01

    The absolute stereochemistry of altersolanol A (1) was established by observing a positive exciton couplet in the circular dichroism (CD) spectrum of the C3,C4-O-bis(2-naphthoyl) derivative 10 and by chemical correlations with the known compound 8. Before the discussion, the relative stereochemistry of 1 was confirmed by X-ray crystallographic analysis. The shielding effect at the C7'-OMe group by C1-O-benzoylation established the relative stereochemical relationship between the C8-C8' axial bonding and the C1-C4/C1'-C4' polyol moieties of alterporriol E (3), an atropisomer of the C8-C8' dimer of 1. As 3 could be obtained by dimerization of 1 in vitro, the absolute configuration of its central chirality elements (C1-C4) must be identical to those of 1. Spectral comparison between the experimental and theoretical CD spectra supported the above conclusion. The axial stereochemistry of the novel C4-O-deoxy dimeric derivatives, alterporriols F (4) and G (5), was also revealed by comparison of their CD spectra to those of 2 and 3.

  14. Absolute Electron Extraction Efficiency of Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter

    2016-03-01

    Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.

  15. Standardization of the cumulative absolute velocity

    SciTech Connect

    O'Hara, T.F.; Jacobson, J.P. )

    1991-12-01

    EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window its calculation on a second-by-second basis for a given time history: only those one-second intervals in which the absolute acceleration exceeds 0.025g at any time are included in the calculation. The earthquake records used in EPRI NP-5930 have been reanalyzed, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
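
    The windowing rule described above translates directly into a short computation. The following is a hedged sketch (synthetic accelerogram, implementation details assumed rather than taken from EPRI NP-5930) of a standardized, second-by-second CAV:

```python
import numpy as np

def cav_standardized(acc_g, dt, window=1.0, threshold=0.025):
    """Windowed cumulative absolute velocity in g-sec (acc_g given in units of g)."""
    n_per_win = int(round(window / dt))
    cav = 0.0
    for start in range(0, len(acc_g), n_per_win):
        seg = acc_g[start:start + n_per_win]
        if np.max(np.abs(seg)) > threshold:          # only "damaging" seconds count
            cav += np.sum(np.abs(seg)) * dt
    return cav

# Example: 20 s synthetic record sampled at 100 Hz (illustrative only).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = 0.05 * np.sin(2 * np.pi * 2 * t) * np.exp(-0.2 * t)   # decaying 2 Hz motion (g)
print(f"standardized CAV = {cav_standardized(acc, dt):.3f} g-sec")
```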

  16. Swarm's Absolute Scalar Magnetometers Burst Mode Results

    NASA Astrophysics Data System (ADS)

    Coisson, P.; Vigneron, P.; Hulot, G.; Crespo Grau, R.; Brocco, L.; Lalanne, X.; Sirol, O.; Leger, J. M.; Jager, T.; Bertrand, F.; Boness, A.; Fratter, I.

    2014-12-01

    Each of the three Swarm satellites embarks an Absolute Scalar Magnetometer (ASM) to provide absolute scalar measurements of the magnetic field with high accuracy and stability. Nominal data acquisition of these ASMs is 1 Hz. But they can also run in a so-called "burst mode" and provide data at 250 Hz. During the commissioning phase of the mission, seven burst mode acquisition campaigns have been run simultaneously for all satellites, obtaining a total of ten days of burst-mode data. These campaigns allowed the identification of issues related to the operations of the piezo-electric motor and the heaters connected to the ASM, that do not impact the nominal 1 Hz scalar data. We analyze the burst mode data to identify high frequency geomagnetic signals, focusing the analysis in two regions: the low latitudes, where we seek signatures of ionospheric irregularities, and the high latitudes, to identify high frequency signals related to polar region currents. Since these campaigns were conducted during the initial months of the mission, the three satellites were still close to each other, allowing us to analyze the spatial coherency of the signals. Wavelet analyses have revealed 31 Hz signals appearing on the night side in the equatorial region.

  17. On the trail of double peak hydrographs

    NASA Astrophysics Data System (ADS)

    Martínez-Carreras, Núria; Hissler, Christophe; Gourdol, Laurent; Klaus, Julian; Juilleret, Jérôme; François Iffly, Jean; McDonnell, Jeffrey J.; Pfister, Laurent

    2016-04-01

    A double peak hydrograph features two peaks as a response to a unique rainfall pulse. The first peak occurs at the same time or shortly after the precipitation has started and it corresponds to a fast catchment response to precipitation. The delayed peak normally starts during the recession of the first peak, when the precipitation has already ceased. Double peak hydrographs may occur for various reasons. They can occur (i) in large catchments when lag times in tributary responses are large, (ii) in urban catchments where the first peak is often caused by direct surface runoff on impervious land cover and the delayed peak by slower subsurface flow, and (iii) in non-urban catchments, where the first and the delayed discharge peaks are explained by different runoff mechanisms (e.g. overland flow, subsurface flow and/or deep groundwater flow) that have different response times. Here we focus on the third case, as a formal description of the different hydrological mechanisms explaining these complex hydrological dynamics across catchments with diverse physiographic characteristics is still needed. Based on a review of studies documenting double peak events we have established a formal classification of catchments presenting double peak events based on their regolith structure (geological substratum and/or its weathered products). We describe the different hydrological mechanisms that trigger these complex hydrological dynamics across each catchment type. We then use hydrometric time series of precipitation, runoff, soil moisture and groundwater levels collected in the Weierbach (0.46 km2) headwater catchment (Luxembourg) to better understand double peak hydrograph generation. Specifically, we aim to find out (1) if the generation of a double peak hydrograph is a threshold process, (2) if the hysteretic relationships between storage and discharge are consistent during single and double peak hydrographs, and (3) if different functional landscape units (the hillslopes

  18. Extracting infrared absolute reflectance from relative reflectance measurements.

    PubMed

    Berets, Susan L; Milosevic, Milan

    2012-06-01

    Absolute reflectance measurements are valuable to the optics industry for development of new materials and optical coatings. Yet, absolute reflectance measurements are notoriously difficult to make. In this paper, we investigate the feasibility of extracting the absolute reflectance from a relative reflectance measurement using a reference material with known refractive index.

  19. A Conceptual Approach to Absolute Value Equations and Inequalities

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Bryson, Janet L.

    2011-01-01

    The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…

  20. Surface Characterization of pNIPAM Under Varying Absolute Humidity

    NASA Astrophysics Data System (ADS)

    Chhabra, Arnav; Kanapuram, Ravitej; Leva, Harrison; Trejo, Juan; Kim, Tae Jin; Hidrovo, Carlos

    2012-11-01

    Poly(N-isopropylacrylamide) has become ubiquitously known as a "smart" polymer, showing many promising applications in tissue engineering and drug delivery systems. These applications are particularly reliant on its sharp, thermally induced hydrophilic-hydrophobic transition that occurs at the lower critical solution temperature (LCST). This feature gives pNIPAM programmable adsorption and release capabilities, thus eliminating the need for additional enzymes when removing cells from pNIPAM-coated surfaces and leaving the extracellular matrix proteins of the cells largely untouched. The dependence of the LCST on molecular weight, solvent systems, and various salts has been studied extensively. However, what has not been explored is the effect of humidity on the characteristic properties of the polymer, specifically the LCST and the magnitude of the hydrophilic-hydrophobic transition. We studied the surface energy variation of pNIPAM as a function of humidity by altering the absolute humidity and keeping the ambient temperature constant. Our experiments were conducted inside a cuboidal environmental chamber with control over the temperature and humidity inside the chamber. A controlled needle was employed to dispense size-regulated droplets. Throughout this process, a CCD camera was used to image the droplet and the static contact angle was determined using image processing techniques. The behavior of pNIPAM as a function of humidity is presented and discussed.

  1. Absolute properties of the eclipsing binary star AP Andromedae

    SciTech Connect

    Sandberg Lacy, Claud H.; Torres, Guillermo; Fekel, Francis C.; Muterspaugh, Matthew W. E-mail: gtorres@cfa.harvard.edu E-mail: matthew1@coe.tsuniv.edu

    2014-06-01

    AP And is a well-detached F5 eclipsing binary star for which only a very limited amount of information was available before this publication. We have obtained very extensive measurements of the light curve (19,097 differential V magnitude observations) and a radial velocity curve (83 spectroscopic observations) which allow us to fit orbits and determine the absolute properties of the components very accurately: masses of 1.277 ± 0.004 and 1.251 ± 0.004 M☉, radii of 1.233 ± 0.006 and 1.195 ± 0.005 R☉, and temperatures of 6565 ± 150 K and 6495 ± 150 K. The distance to the system is about 400 ± 30 pc. Comparison with the theoretical properties of the stellar evolutionary models of the Yonsei-Yale series of Yi et al. shows good agreement between the observations and the theory at an age of about 500 Myr and a slightly sub-solar metallicity.

  2. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
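
    As a hedged illustration of the fitting problem described above, the sketch below adjusts x0, v0, g0 and a vertical-gradient term to synthetic drop data using the standard first-order trajectory for free fall in a linearly varying gravity field. It is a single-drop toy example with made-up numbers, not the paper's combined multi-drop, system-response solution; a single short drop constrains the gradient only weakly, which is exactly why the paper pools all drops of a site occupation.

```python
import numpy as np
from scipy.optimize import curve_fit

def trajectory(t, x0, v0, g0, gamma):
    # Free fall with d2z/dt2 = g0 + gamma*z, expanded to first order in gamma.
    return (x0 * (1.0 + gamma * t**2 / 2.0)
            + v0 * (t + gamma * t**3 / 6.0)
            + g0 * (t**2 / 2.0 + gamma * t**4 / 24.0))

t = np.linspace(0.0, 0.2, 200)                      # ~0.2 s free-fall drop
true = dict(x0=0.0, v0=0.05, g0=9.8123456, gamma=3.086e-6)
z = trajectory(t, **true) + np.random.normal(0.0, 1e-9, t.size)   # ~1 nm noise

popt, pcov = curve_fit(trajectory, t, z, p0=[0.0, 0.0, 9.81, 0.0])
x0, v0, g0, gamma = popt
# The gamma estimate from one short drop has a large covariance entry; combining
# many drops (as the paper does) is what makes the gradient recoverable.
print(f"g0 = {g0:.7f} m/s^2, gamma = {gamma:.2e} s^-2")
```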

  3. Magnitude 8.1 Earthquake off the Solomon Islands

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On April 1, 2007, a magnitude 8.1 earthquake rattled the Solomon Islands, 2,145 kilometers (1,330 miles) northeast of Brisbane, Australia. Centered less than ten kilometers beneath the Earth's surface, the earthquake displaced enough water in the ocean above to trigger a small tsunami. Though officials were still assessing damage to remote island communities on April 3, Reuters reported that the earthquake and the tsunami killed an estimated 22 people and left as many as 5,409 homeless. The most serious damage occurred on the island of Gizo, northwest of the earthquake epicenter, where the tsunami damaged the hospital, schools, and hundreds of houses, said Reuters. This image, captured by the Landsat-7 satellite, shows the location of the earthquake epicenter in relation to the nearest islands in the Solomon Island group. Gizo is beyond the left edge of the image, but its triangular fringing coral reefs are shown in the upper left corner. Though dense rain forest hides volcanic features from view, the very shape of the islands testifies to the geologic activity of the region. The circular Kolombangara Island is the tip of a dormant volcano, and other circular volcanic peaks are visible in the image. The image also shows that the Solomon Islands run on a northwest-southeast axis parallel to the edge of the Pacific plate, the section of the Earth's crust that carries the Pacific Ocean and its islands. The earthquake occurred along the plate boundary, where the Australia/Woodlark/Solomon Sea plates slide beneath the denser Pacific plate. Friction between the sinking (subducting) plates and the overriding Pacific plate led to the large earthquake on April 1, said the United States Geological Survey (USGS) summary of the earthquake. Large earthquakes are common in the region, though the section of the plate that produced the April 1 earthquake had not caused any quakes of magnitude 7 or larger since the early 20th century, said the USGS.

  4. Use of Absolute and Comparative Performance Feedback in Absolute and Comparative Judgments and Decisions

    ERIC Educational Resources Information Center

    Moore, Don A.; Klein, William M. P.

    2008-01-01

    Which matters more--beliefs about absolute ability or ability relative to others? This study set out to compare the effects of such beliefs on satisfaction with performance, self-evaluations, and bets on future performance. In Experiment 1, undergraduate participants were told they had answered 20% correct, 80% correct, or were not given their…

  5. Annual peak discharges from small drainage areas in Montana through September 1976

    USGS Publications Warehouse

    Johnson, M.V.; Omang, R.J.; Hull, J.A.

    1977-01-01

    Annual peak discharge from small drainage areas is tabulated for 336 sites in Montana. The 1976 additions included data collected at 206 sites. The program, which investigates the magnitude and frequency of floods from small drainage areas in Montana, was begun July 1, 1955. Originally 45 crest-stage gaging stations were established. The purpose of the program is to collect sufficient peak-flow data, which through analysis could provide methods for estimating the magnitude and frequency of floods at any point in Montana. The ultimate objective is to provide methods for estimating the 100-year flood with the reliability needed for road design. (Woodard-USGS)

  6. ABSOLUTE PROPERTIES OF THE ECLIPSING BINARY STAR BF DRACONIS

    SciTech Connect

    Sandberg Lacy, Claud H.; Torres, Guillermo; Fekel, Francis C.; Sabby, Jeffrey A.; Claret, Antonio E-mail: gtorres@cfa.harvard.edu E-mail: jsabby@siue.edu

    2012-06-15

    BF Dra is now known to be an eccentric double-lined F6+F6 binary star with relatively deep (0.7 mag) partial eclipses. Previous studies of the system are improved with 7494 differential photometric observations from the URSA WebScope and 9700 from the NFO WebScope, 106 high-resolution spectroscopic observations from the Tennessee State University 2 m automatic spectroscopic telescope and the 1 m coude-feed spectrometer at Kitt Peak National Observatory, and 31 accurate radial velocities from the CfA. Very accurate (better than 0.6%) masses and radii are determined from analysis of the two new light curves and four radial velocity curves. Theoretical models match the absolute properties of the stars at an age of about 2.72 Gyr and [Fe/H] = -0.17, and tidal theory correctly confirms that the orbit should still be eccentric. Our observations of BF Dra constrain the convective core overshooting parameter to be larger than about 0.13 Hp. We find, however, that standard tidal theory is unable to match the observed slow rotation rates of the components' surface layers.

  7. Absolute pitch exhibits phenotypic and genetic overlap with synesthesia.

    PubMed

    Gregersen, Peter K; Kowalsky, Elena; Lee, Annette; Baron-Cohen, Simon; Fisher, Simon E; Asher, Julian E; Ballard, David; Freudenberg, Jan; Li, Wentian

    2013-05-15

    Absolute pitch (AP) and synesthesia are two uncommon cognitive traits that reflect increased neuronal connectivity and have been anecdotally reported to occur together in an individual. Here we systematically evaluate the occurrence of synesthesia in a population of 768 subjects with documented AP. Out of these 768 subjects, 151 (20.1%) reported synesthesia, most commonly with color. These self-reports of synesthesia were validated in a subset of 21 study subjects, using an established methodology. We further carried out combined linkage analysis of 53 multiplex families with AP and 36 multiplex families with synesthesia. We observed a peak NPL LOD = 4.68 on chromosome 6q, as well as evidence of linkage on chromosome 2, using a dominant model. These data establish the close phenotypic and genetic relationship between AP and synesthesia. The chromosome 6 linkage region contains 73 genes; several leading candidate genes involved in neurodevelopment were investigated by exon resequencing. However, further studies will be required to definitively establish the identity of the causative gene(s) in the region.

  8. Experimental absolute cross section for photoionization of Xe^7+

    NASA Astrophysics Data System (ADS)

    Schippers, S.; Müller, A.; Esteves, D.; Habibi, M.; Aguilar, A.; Kilcoyne, A. L. D.

    2010-03-01

    Collision processes with highly charged xenon ions are of interest for UV-radiation generation in plasma discharges, for fusion research and for spacecraft propulsion. Here we report results for the photoionization of Xe^7+ ions [S. Schippers et al., J. Phys.: Conf. Ser. (in print)] which were measured at the photon-ion end station of ALS beamline 10.0.1. As compared with the only previous experimental study [J. M. Bizau et al., Phys. Rev. Lett. 84, 435 (2000)] of this reaction, the present cross sections were obtained at higher energy resolution (50-80 meV vs. 200-500 meV) and on an absolute cross section scale. In the experimental photon energy range of 95-145 eV the cross section is dominated by resonances associated with 4d->5f excitation and subsequent autoionization. The most prominent feature in the measured spectrum is the 4d^9 5s 5f resonance at 121.14±0.02 eV which reaches a peak cross section of 1.2 Gb at 50 meV photon energy spread. The experimental resonance strength of 160 Mb eV (corresponding to an absorption oscillator strength of 1.46) is in fair agreement with the theoretical result.

  9. Difference in peak weight transfer and timing based on golf handicap.

    PubMed

    Queen, Robin M; Butler, Robert J; Dai, Boyi; Barnes, C Lowry

    2013-09-01

    Weight shift during the golf swing has been a topic of discussion among golf professionals; however, it is still unclear how weight shift varies in golfers of different performance levels. The main purpose of this study was to examine the following: (a) the changes in the peak ground reaction forces (GRF) and the timing of these events between high (HHCP) and low handicap (LHCP) golfers and (b) the differences between the leading and trailing legs. Twenty-eight male golfers were recruited and divided based on having an LHCP < 9 or HHCP > 9. Three-dimensional GRF peaks and the timing of the peaks were recorded bilaterally during a golf swing. The golf swing was divided into different phases: (a) address to the top of the backswing, (b) top of the backswing to ball contact, and (c) ball contact to the end of follow through. Repeated measures analyses of variance (α = 0.05) were completed for each study variable: the magnitude and the timing of peak vertical GRF, peak lateral GRF, and peak medial GRF. The LHCP group had a greater transfer of vertical force from the trailing foot to the leading foot in phase 2 than the HHCP group. The LHCP group also demonstrated earlier timing of peak vertical force throughout the golf swing than the HHCP group. The LHCP and HHCP groups demonstrated different magnitudes of peak lateral force. The LHCP group had an earlier timing of peak lateral GRF in phase 2 and earlier timing of peak medial GRF in phases 1 and 2 than the HHCP group. In general, LHCP golfers demonstrated greater and earlier force generation than HHCP golfers. It may be relevant to consider both the magnitude of the forces and the timing of these events during golf-specific training to improve performance. These data reveal weight shifting differences that can be addressed by teaching professionals to help their students better understand weight transfer during the golf swing to optimize performance.

  10. Magnitude-based scaling of tsunami propagation

    NASA Astrophysics Data System (ADS)

    Simanjuntak, M. Arthur; Greenslade, Diana J. M.

    2011-07-01

    Most current operational tsunami prediction systems are based upon databases of precomputed tsunami scenarios, where some form of linear scaling is applied to the precomputed model runs in order to represent specific earthquake magnitudes. This can introduce errors due to assumptions made about the rupture width and possible effects on dispersion. In this paper, we perform a series of numerical experiments on uniform depth domains, using the Method of Splitting Tsunamis (MOST) model, and develop estimates of the maximum error that an assumed discrepancy in the width of a rupture will produce in the resulting field of maximum tsunami amplitude. This estimate was produced from fitting the decay of maximum amplitude with normalized distance for various resolutions of the source widths to the grid size, resulting in a simple power law whose coefficients effectively vary with wavelength resolution. This provides a quantification of the effect that linear scaling of precomputed scenarios will have on forecasts of tsunami amplitude. This estimate of scaling bias is investigated in relation to the scenario database that is currently in use within the Joint Australian Tsunami Warning Centre.

  11. Estimating magnitude and duration of incident delays

    SciTech Connect

    Garib, A.; Radwan, A.E.; Al-Deek, H.

    1997-11-01

    Traffic congestion is a major operational problem on urban freeways. In the case of recurring congestion, travelers can plan their trips according to the expected occurrence and severity of recurring congestion. However, nonrecurring congestion cannot be managed without real-time prediction. Evaluating the efficiency of intelligent transportation systems (ITS) technologies in reducing incident effects requires developing models that can accurately predict incident duration along with the magnitude of nonrecurring congestion. This paper provides two statistical models for estimating incident delay and a model for predicting incident duration. The incident delay models showed that up to 85% of variation in incident delay can be explained by incident duration, number of lanes affected, number of vehicles involved, and traffic demand before the incident. The incident duration prediction model showed that 81% of variation in incident duration can be predicted by number of lanes affected, number of vehicles involved, truck involvement, time of day, police response time, and weather condition. These findings have implications for on-line applications within the context of advanced traveler information systems (ATIS).
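
    As a hedged sketch of the kind of model the paper describes (an ordinary multiple linear regression of incident duration on the listed predictors), the example below fits hypothetical data with made-up coefficients; it is not the authors' fitted model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
lanes       = rng.integers(1, 4, n)        # lanes affected
vehicles    = rng.integers(1, 5, n)        # vehicles involved
truck       = rng.integers(0, 2, n)        # truck involvement (0/1)
peak_hour   = rng.integers(0, 2, n)        # time-of-day indicator
police_rt   = rng.uniform(2, 20, n)        # police response time (min)
bad_weather = rng.integers(0, 2, n)

# Synthetic durations (min) from an assumed linear relation plus noise.
duration = (10 + 8*lanes + 5*vehicles + 15*truck + 6*peak_hour
            + 1.5*police_rt + 10*bad_weather + rng.normal(0, 5, n))

X = np.column_stack([np.ones(n), lanes, vehicles, truck,
                     peak_hour, police_rt, bad_weather])
beta, _, _, _ = np.linalg.lstsq(X, duration, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((duration - pred)**2) / np.sum((duration - duration.mean())**2)
print("coefficients:", np.round(beta, 2), " R^2 =", round(r2, 3))
```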

  12. Passive radio frequency peak power multiplier

    DOEpatents

    Farkas, Zoltan D.; Wilson, Perry B.

    1977-01-01

    Peak power multiplication of a radio frequency source is achieved by simultaneously charging two high-Q resonant microwave cavities, applying the source output through a directional coupler to the cavities and then reversing the phase of the source power to the coupler, thereby permitting the power in the cavities to discharge simultaneously through the coupler to the load in combination with power from the source, so that the peak power applied to the load is a multiple of the source peak power.

  13. Symbolic magnitude modulates perceptual strength in binocular rivalry.

    PubMed

    Paffen, Chris L E; Plukaard, Sarah; Kanai, Ryota

    2011-06-01

    Basic aspects of magnitude (such as luminance contrast) are directly represented by sensory representations in early visual areas. However, it is unclear how symbolic magnitudes (such as Arabic numerals) are represented in the brain. Here we show that symbolic magnitude affects binocular rivalry: perceptual dominance of numbers and objects of known size increases with their magnitude. Importantly, variations in symbolic magnitude acted like variations in luminance contrast: we found that an increase in numerical magnitude of one led to an increase in perceptual dominance equivalent to a contrast increment of 0.32%. Our results support the claim that magnitude is extracted automatically, since the increase in perceptual dominance came about in the absence of a magnitude-related task. Our findings show that symbolic, acculturated knowledge about magnitude interacts with visual perception and affects perception in a manner similar to lower-level aspects of magnitude such as luminance contrast.

  14. Analysis of earthquake body wave spectra for potency and magnitude values: implications for magnitude scaling relations

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda; White, Malcolm C.; Vernon, Frank L.

    2016-11-01

    We develop a simple methodology for reliable automated estimation of the low-frequency asymptote in seismic body wave spectra of small to moderate local earthquakes. The procedure corrects individual P- and S-wave spectra for propagation and site effects and estimates the seismic potency from a stacked spectrum. The method is applied to >11 000 earthquakes with local magnitudes 0 < ML < 4 that occurred in the Southern California plate-boundary region around the San Jacinto fault zone during 2013. Moment magnitude Mw values, derived from the spectra and the scaling relation of Hanks & Kanamori, follow a Gutenberg-Richter distribution with a larger b-value (1.22) than that associated with the ML values (0.93) for the same earthquakes. The completeness magnitude for the Mw values is 1.6 while for ML it is 1.0. The quantity (Mw - ML) linearly increases in the analysed magnitude range as ML decreases. An average earthquake with ML = 0 in the study area has an Mw of about 0.9. The developed methodology and results have important implications for earthquake source studies and statistical seismology.
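
    The magnitude conversions involved are standard and can be stated compactly. The sketch below uses the Hanks & Kanamori (1979) moment-magnitude relation and Aki's maximum-likelihood b-value estimator as generic tools of the kind used in such studies; it is illustrative and not the authors' code.

```python
import numpy as np

def moment_magnitude(M0_Nm):
    """Mw from seismic moment in N*m (Hanks & Kanamori 1979)."""
    return (np.log10(M0_Nm) - 9.1) / 1.5

def b_value_ml(mags, m_complete, dm=0.0):
    """Aki (1965) maximum-likelihood b-value above a completeness magnitude."""
    m = np.asarray(mags)
    m = m[m >= m_complete]
    return np.log10(np.e) / (m.mean() - (m_complete - dm / 2.0))

print(round(moment_magnitude(1.0e13), 2))            # ~Mw 2.6 for M0 = 1e13 N*m

rng = np.random.default_rng(0)                        # synthetic Gutenberg-Richter sample
mags = 1.6 + rng.exponential(1.0 / (1.2 * np.log(10.0)), 20000)
print(round(b_value_ml(mags, 1.6), 2))                # recovers b close to 1.2
```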

  15. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover we show that, for a majority of functions, access to general nonsignaling resources boosts the success probability two times in comparison to classical ones for a large enough number of outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  16. In vivo absorption spectroscopy for absolute measurement.

    PubMed

    Furukawa, Hiromitsu; Fukuda, Takashi

    2012-10-01

    In in vivo spectroscopy, there are differences between individual subjects in parameters such as tissue scattering and sample concentration. We propose a method that can provide the absolute value of a particular substance concentration, independent of these individual differences. Thus, it is not necessary to use the typical statistical calibration curve, which assumes an average level of scattering and an averaged concentration over individual subjects. This method is expected to greatly reduce the difficulties encountered during in vivo measurements. As an example, for in vivo absorption spectroscopy, the method was applied to the reflectance measurement in retinal vessels to monitor their oxygen saturation levels. This method was then validated by applying it to the tissue phantom under a variety of absorbance values and scattering efficiencies.

  17. Determining Absolute Zero Using a Tuning Fork

    NASA Astrophysics Data System (ADS)

    Goldader, Jeffrey D.

    2008-04-01

    The Celsius and Kelvin temperature scales, we tell our students, are related. We explain that a change in temperature of 1°C corresponds to a change of 1 Kelvin and that atoms and molecules have zero kinetic energy at zero Kelvin, -273°C. In this paper, we will show how students can derive the relationship between the Celsius and Kelvin temperature scales using a simple, well-known physics experiment. By making multiple measurements of the speed of sound at different temperatures, using the classic physics experiment of determining the speed of sound with a tuning fork and variable-length tube, they can determine the temperature at which the speed of sound is zero—absolute zero.
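
    The extrapolation the article describes can be written down in a few lines: since v^2 = (γR/M)·T with T in kelvin, a straight-line fit of v^2 against Celsius temperature crosses zero at absolute zero. The data below are synthetic, not the author's classroom measurements.

```python
import numpy as np

gammaRM = 1.4 * 8.314 / 0.029                          # gamma*R/M for air (m^2 s^-2 K^-1)
T_c = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])    # Celsius temperatures
v = np.sqrt(gammaRM * (T_c + 273.15))                  # "measured" speeds of sound (m/s)
v = v + np.random.normal(0.0, 0.2, v.size)             # measurement noise

slope, intercept = np.polyfit(T_c, v**2, 1)            # v^2 is linear in temperature
absolute_zero_C = -intercept / slope                   # Celsius temperature where v^2 -> 0
print(f"Estimated absolute zero: {absolute_zero_C:.0f} deg C")   # close to -273
```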

  18. MAGSAT: Vector magnetometer absolute sensor alignment determination

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1981-01-01

    A procedure is described for accurately determining the absolute alignment of the magnetic axes of a triaxial magnetometer sensor with respect to an external, fixed, reference coordinate system. The method does not require that the magnetic field vector orientation, as generated by a triaxial calibration coil system, be known to better than a few degrees from its true position, and minimizes the number of positions through which a sensor assembly must be rotated to obtain a solution. Computer simulations show that accuracies of better than 0.4 seconds of arc can be achieved under typical test conditions associated with existing magnetic test facilities. The basic approach is similar in nature to that presented by McPherron and Snare (1978) except that only three sensor positions are required and the system of equations to be solved is considerably simplified. Applications of the method to the case of the MAGSAT Vector Magnetometer are presented, and the problems encountered are discussed.

  19. An estimate of global absolute dynamic topography

    NASA Technical Reports Server (NTRS)

    Tai, C.-K.; Wunsch, C.

    1984-01-01

    The absolute dynamic topography of the world ocean is estimated from the largest scales to a short-wavelength cutoff of about 6700 km for the period July through September, 1978. The data base consisted of the time-averaged sea-surface topography determined by Seasat and geoid estimates made at the Goddard Space Flight Center. The issues are those of accuracy and resolution. Use of the altimetric surface as a geoid estimate beyond the short-wavelength cutoff reduces the spectral leakage in the estimated dynamic topography from erroneous small-scale geoid estimates without contaminating the low wavenumbers. Comparison of the result with a similarly filtered version of Levitus' (1982) historical average dynamic topography shows good qualitative agreement. There is quantitative disagreement, but it is within the estimated errors of both methods of calculation.

  20. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomena of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomena and the basis for the use of Fresnel diffraction in distance metrology. The apparatus used in the recent investigations, the experimental procedures used, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of the Fresnel diffraction systems are also described.

  1. Absolute measurements of fast neutrons using yttrium.

    PubMed

    Roshan, M V; Springham, S V; Rawat, R S; Lee, P; Krishnan, M

    2010-08-01

    Yttrium is presented as an absolute neutron detector for pulsed neutron sources. It has high sensitivity for detecting fast neutrons. Yttrium has the property of generating a monoenergetic secondary radiation in the form of a 909 keV gamma-ray caused by inelastic neutron interaction. It was calibrated numerically using MCNPX and does not need periodic recalibration. The total yttrium efficiency for detecting 2.45 MeV neutrons was determined to be f(n) ≈ 4.1 × 10^-4 with an uncertainty of about 0.27%. The yttrium detector was employed in the NX2 plasma focus experiments and showed a neutron yield of the order of 10^8 neutrons per discharge.

  2. The geomorphic structure of the runoff peak

    NASA Astrophysics Data System (ADS)

    Rigon, R.; D'Odorico, P.; Bertoldi, G.

    2011-06-01

    This paper develops a theoretical framework to investigate the core dependence of peak flows on the geomorphic properties of river basins. Based on the theory of transport by travel times, and simple hydrodynamic characterization of floods, this new framework invokes the linearity and invariance of the hydrologic response to provide analytical and semi-analytical expressions for peak flow, time to peak, and area contributing to the peak runoff. These results are obtained for the case of constant-intensity hyetograph using the Intensity-Duration-Frequency (IDF) curves to estimate extreme flow values as a function of the rainfall return period. Results show that, with constant-intensity hyetographs, the time-to-peak is greater than rainfall duration and usually shorter than the basin concentration time. Moreover, the critical storm duration is shown to be independent of the rainfall return period, as is the area contributing to the flow peak. The same results are found when the effects of hydrodynamic dispersion are accounted for. Further, it is shown that, when the effects of hydrodynamic dispersion are negligible, the basin area contributing to the peak discharge does not depend on the channel velocity, but is a geomorphic property of the basin. As an example this framework is applied to three watersheds. In particular, the runoff peak, the critical rainfall durations and the time to peak are calculated for all links within a network to assess how they increase with basin area.

  3. Origin of weak lensing convergence peaks

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Haiman, Zoltán

    2016-08-01

    Weak lensing convergence peaks are a promising tool to probe nonlinear structure evolution at late times, providing additional cosmological information beyond second-order statistics. Previous theoretical and observational studies have shown that the cosmological constraints on Ωm and σ8 are improved by a factor of up to ≈2 when peak counts and second-order statistics are combined, compared to using the latter alone. We study the origin of lensing peaks using observational data from the 154 deg^2 Canada-France-Hawaii Telescope Lensing Survey. We found that while high peaks (with height κ > 3.5σκ, where σκ is the rms of the convergence κ) are typically due to one single massive halo of ≈10^15 M⊙, low peaks (κ ≲ σκ) are associated with constellations of 2-8 smaller halos (≲10^13 M⊙). In addition, halos responsible for forming low peaks are found to be significantly offset from the line of sight towards the peak center (impact parameter ≳ their virial radii), compared with ≈0.25 virial radii for halos linked with high peaks, hinting that low peaks are more immune to baryonic processes whose impact is confined to the inner regions of the dark matter halos. Our findings are in good agreement with results from the simulation work by Yang et al. [Phys. Rev. D 84, 043529 (2011)].

  4. Measured and modelled absolute gravity in Greenland

    NASA Astrophysics Data System (ADS)

    Nielsen, E.; Forsberg, R.; Strykowski, G.

    2012-12-01

    Present-day changes in the ice volume in glaciated areas like Greenland will change the load on the Earth, and to this change the lithosphere will respond elastically. The Earth also responds to changes in the ice volume over a millennial time scale. This response is due to the viscous properties of the mantle and is known as Glacial Isostatic Adjustment (GIA). Both signals are present in GPS and absolute gravity (AG) measurements and they will give an uncertainty in mass balance estimates calculated from these data types. It is possible to separate the two signals if both gravity and Global Positioning System (GPS) time series are available. DTU Space acquired an A10 absolute gravimeter in 2008. One purpose of this instrument is to establish AG time series in Greenland, and the first measurements were conducted in 2009. Since then, 18 different Greenland GPS Network (GNET) stations have been visited, and six of these have been visited more than once. The gravity signal consists of three signals: the elastic signal, the viscous signal and the direct attraction from the ice masses. All of these signals can be modelled using various techniques. The viscous signal is modelled by solving the Sea Level Equation with an appropriate ice history and Earth model. The free code SELEN is used for this. The elastic signal is modelled as a convolution of the elastic Green's function for gravity and a model of present day ice mass changes. The direct attraction is the same as the Newtonian attraction and is calculated as such. Here we will present the preliminary results of the AG measurements in Greenland. We will also present modelled estimates of the direct attraction, the elastic and the viscous signals.

  5. Absolute bioavailability of quinine formulations in Nigeria.

    PubMed

    Babalola, C P; Bolaji, O O; Ogunbona, F A; Ezeomah, E

    2004-09-01

    This study compared the absolute bioavailability of quinine sulphate as capsule and as tablet against the intravenous (i.v.) infusion of the drug in twelve male volunteers. Six of the volunteers received intravenous infusion over 4 h as well as the capsule formulation of the drug in a cross-over manner, while the other six received the tablet formulation. Blood samples were taken at predetermined time intervals and plasma analysed for quinine (QN) using a reversed-phase HPLC method. QN was rapidly absorbed after the two oral formulations, with an average t(max) of 2.67 h for both capsule and tablet. The mean elimination half-life of QN from the i.v. and oral dosage forms varied between 10 and 13.5 h, and the values were not statistically different (P > 0.05). On the contrary, the maximum plasma concentration (C(max)) and area under the curve (AUC) from the capsule were comparable to those from i.v. (P > 0.05), while these values were markedly higher than values from the tablet formulation (P < 0.05). Therapeutic QN plasma levels were not achieved with the tablet formulation. The absolute bioavailability (F) was 73% (C.I., 53.3-92.4%) and 39% (C.I., 21.7-56.6%) for the capsule and tablet, respectively, and the difference was significant (P < 0.05). The subtherapeutic levels obtained from the tablet form used in this study may cause treatment failure during malaria and caution should be taken when predictions are made from results obtained from different formulations of QN.
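
    The bioavailability figure itself follows from the standard cross-over definition F = (AUC_oral/AUC_iv) · (Dose_iv/Dose_oral). A minimal sketch with illustrative numbers (not the study's doses or AUC values):

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Absolute bioavailability F from oral and i.v. exposure and doses."""
    return (auc_oral / auc_iv) * (dose_iv / dose_oral)

# Example: equal oral and i.v. doses, oral AUC equal to 73% of the i.v. AUC -> F = 0.73.
print(absolute_bioavailability(auc_oral=73.0, dose_oral=600,
                               auc_iv=100.0, dose_iv=600))
```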

  6. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-square scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10^-4 m^2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise were made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
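
    A hedged, much-simplified sketch of such a GA is given below: it minimizes the sum of squared pseudo-range residuals for a toy four-satellite geometry, ignores the receiver clock bias, and uses truncation selection with arithmetic crossover and a shrinking mutation step. The satellite positions, noise levels and GA operators are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
sats = np.array([[15e6, 10e6, 20e6],        # hypothetical satellite positions (m, ECEF)
                 [-12e6, 18e6, 15e6],
                 [5e6, -20e6, 18e6],
                 [-8e6, -5e6, 24e6]])
x_true = np.array([4.2e6, 1.0e6, 4.7e6])    # "unknown" ground station
rho = np.linalg.norm(sats - x_true, axis=1) + rng.normal(0, 5.0, len(sats))

def cost(pop):                               # sum of squared pseudo-range residuals
    d = np.linalg.norm(sats[None, :, :] - pop[:, None, :], axis=2)
    return np.sum((d - rho) ** 2, axis=1)

N, n_gen, pc, pm = 1000, 300, 0.7, 0.3       # GA parameters close to those quoted
pop = rng.uniform(-7e6, 7e6, (N, 3))         # start anywhere in an Earth-sized box
for gen in range(n_gen):
    elite = pop[np.argsort(cost(pop))[:N // 2]]          # keep the best half
    mates = elite[rng.permutation(len(elite))]
    a = rng.random((len(elite), 1))
    children = np.where(rng.random((len(elite), 1)) < pc,
                        a * elite + (1.0 - a) * mates,   # arithmetic crossover
                        elite)
    sigma = 1e6 * 0.96 ** gen                            # shrinking mutation step (m)
    children = children + (rng.random(children.shape) < pm) * rng.normal(0, sigma, children.shape)
    pop = np.vstack([elite, children])

best = pop[np.argmin(cost(pop))]
print("position error:", round(float(np.linalg.norm(best - x_true)), 1), "m")
```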

  7. Total Galaxy Magnitudes and Effective Radii from Petrosian Magnitudes and Radii

    NASA Astrophysics Data System (ADS)

    Graham, Alister W.; Driver, Simon P.; Petrosian, Vahé; Conselice, Christopher J.; Bershady, Matthew A.; Crawford, Steven M.; Goto, Tomotsugu

    2005-10-01

    Petrosian magnitudes were designed to help with the difficult task of determining a galaxy's total light. Although these magnitudes [taken here as the flux within 2RP, with the inverted Petrosian index 1/η(RP)=0.2] can represent most of an object's flux, they do of course miss the light outside the Petrosian aperture (2RP). The size of this flux deficit varies monotonically with the shape of a galaxy's light profile, i.e., its concentration. In the case of a de Vaucouleurs R1/4 profile, the deficit is 0.20 mag; for an R1/8 profile this figure rises to 0.50 mag. Here we provide a simple method for recovering total (Sérsic) magnitudes from Petrosian magnitudes using only the galaxy concentration (R90/R50 or R80/R20) within the Petrosian aperture. The corrections hold to the extent that Sérsic's model provides a good description of a galaxy's luminosity profile. We show how the concentration can also be used to convert Petrosian radii into effective half-light radii, enabling a robust measure of the mean effective surface brightness. Our technique is applied to the Sloan Digital Sky Survey Data Release 2 (SDSS DR2) Petrosian parameters, yielding good agreement with the total magnitudes, effective radii, and mean effective surface brightnesses obtained from the New York University Value-Added Galaxy Catalog Sérsic R1/n fits by Blanton and coworkers. Although the corrective procedure described here is specifically applicable to the SDSS DR2 and DR3, it is generally applicable to all imaging data where any Petrosian index and concentration can be constructed.
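
    The quoted flux deficits can be checked numerically from the Sérsic model alone. The sketch below (standard Sérsic relations with the Ciotti & Bertin approximation for b_n; not the authors' code) finds the Petrosian radius where 1/η = 0.2 and the light missed outside 2R_P for n = 4 and n = 8; it should reproduce deficits of roughly 0.2 and 0.5 mag.

```python
import numpy as np
from scipy.special import gammainc, gamma
from scipy.optimize import brentq

def b_n(n):                                  # Ciotti & Bertin (1999) approximation
    return 2.0*n - 1.0/3.0 + 4.0/(405.0*n)

def frac_light(x, n):                        # fraction of total flux inside R = x*Re
    return gammainc(2*n, b_n(n) * x**(1.0/n))

def inv_eta(x, n):                           # inverted Petrosian index I(R)/<I(<R)>
    b = b_n(n)
    z = b * x**(1.0/n)
    return x**2 * b**(2*n) * np.exp(-z) / (2*n * gammainc(2*n, z) * gamma(2*n))

for n in (4.0, 8.0):
    x_p = brentq(lambda x: inv_eta(x, n) - 0.2, 0.1, 100.0)   # Petrosian radius / Re
    deficit = -2.5 * np.log10(frac_light(2.0 * x_p, n))
    print(f"n = {n:.0f}: R_P = {x_p:.2f} Re, flux deficit = {deficit:.2f} mag")
```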

  8. The functional significance of absolute power with respect to event-related desynchronization.

    PubMed

    Doppelmayr, M M; Klimesch, W; Pachinger, T; Ripper, B

    1998-01-01

    The question is examined whether the extent of changes in relative band power as measured by event-related desynchronization (ERD) depends on absolute band power. The results for target stimuli of a simple oddball task indicate that the prestimulus (reference) level of absolute band power has indeed a strong influence on ERD. Whereas for the alpha band large band power in the reference interval is related to a strong degree of alpha suppression as measured by ERD, the opposite holds true for the theta band. Here, a low level of band power during the reference interval is related to a pronounced increase in band power during the processing of the target stimulus. In contrast to alpha and theta, ERD in the delta band is not influenced by the magnitude of band power in the reference interval.

  9. Absolute surface metrology with a phase-shifting interferometer for incommensurate transverse spatial shifts.

    PubMed

    Bloemhof, E E

    2014-02-10

    We consider the detailed implementation and practical utility of a novel absolute optical metrology scheme recently proposed for use with a phase-shifting interferometer (PSI). This scheme extracts absolute phase differences between points on the surface of the optic under test by differencing phase maps made with slightly different transverse spatial shifts of that optic. These absolute phase (or height) differences, which for single-pixel shifts are automatically obtained in the well-known Hudgin geometry, yield the underlying absolute surface map by standard wavefront reconstruction techniques. The PSI by itself maps surface height only relative to that of a separate reference optic known or assumed to be flat. In practice, even relatively high-quality (and expensive) transmission flats or spheres used to reference a PSI are flat or spherical only to a few dozen nanometers peak to valley (P-V) over typical 4 in. apertures. The new technique for removing the effects of the reference surface is in principle accurate as well as simple, and may represent a significant advance in optical metrology. Here it is shown that transverse shifts need not match the pixel size; somewhat counterintuitively, the single-pixel spatial resolution of the PSI is retained even when transverse shifts are much coarser. Practical considerations for shifts not necessarily commensurate with pixel size, and broader applications, are discussed.
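
    The reconstruction step this scheme relies on is standard. Below is a hedged sketch, assuming a synthetic surface and single-pixel shifts, of recovering a height map from Hudgin-geometry difference maps by sparse least squares; it is not the authors' implementation.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

ny, nx = 32, 32
yy, xx = np.mgrid[0:ny, 0:nx]
h_true = 50e-9 * np.sin(2*np.pi*xx/nx) * np.cos(2*np.pi*yy/ny)   # 50 nm ripple

dx = h_true[:, 1:] - h_true[:, :-1]        # "measured" x-shift differences
dy = h_true[1:, :] - h_true[:-1, :]        # "measured" y-shift differences

# Build the Hudgin difference operator A so that A @ h.ravel() = [dx; dy].
A = lil_matrix((dx.size + dy.size, ny * nx))
b = np.concatenate([dx.ravel(), dy.ravel()])
r = 0
for i in range(ny):
    for j in range(nx - 1):
        A[r, i*nx + j + 1] = 1.0
        A[r, i*nx + j] = -1.0
        r += 1
for i in range(ny - 1):
    for j in range(nx):
        A[r, (i + 1)*nx + j] = 1.0
        A[r, i*nx + j] = -1.0
        r += 1

h_rec = lsqr(A.tocsr(), b)[0].reshape(ny, nx)
h_rec -= h_rec.mean() - h_true.mean()      # differences fix h only up to a piston term
print("rms reconstruction error (m):", np.std(h_rec - h_true))
```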

  10. The dynamic control ratio at the equilibrium point (DCRe): introducing relative and absolute reliability scores.

    PubMed

    Alt, Tobias; Knicker, Axel J; Strüder, Heiko K

    2017-04-01

    Analytical methods to assess thigh muscle balance need to provide reliable data to allow meaningful interpretation. However, the reproducibility of the dynamic control ratio at the equilibrium point has not been evaluated yet. Therefore, the aim of this study was to compare relative and absolute reliability indices of its angle and moment values with conventional and functional hamstring-quadriceps ratios. Furthermore, effects of familiarisation and angular velocity on reproducibility were analysed. A total of 33 male volunteers participated in 3 identical test sessions. Peak moments (PMs) were determined unilaterally during maximum concentric and eccentric knee flexion (prone) and extension (supine position) at 0.53, 1.57 and 2.62 rad · s(-1). A repeated-measures ANOVA confirmed systematic bias. Intra-class correlation coefficients and standard errors of measurement indicated relative and absolute reliability. Correlation coefficients were averaged over respective factors and tested for significant differences. All balance scores showed comparable low-to-moderate relative (<0.8-0.9) and good absolute reliability (<10%). Relative reproducibility of dynamic control equilibrium parameters increased with increasing angular velocity, but not with familiarisation. At 2.62 rad · s(-1), high (moment: 0.906) to moderate (angle: 0.833) relative reliability scores with accordingly high absolute indices (4.9% and 6.4%) became apparent. Thus, the dynamic control equilibrium is an equivalent method for the reliable assessment of thigh muscle balance.
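
    The two reliability indices named above are conventional. A minimal sketch with synthetic data (not the study's measurements) shows how ICC(3,1) and the standard error of measurement would typically be computed from a subjects-by-sessions matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 33, 3                                    # 33 volunteers, 3 test sessions
subject = rng.normal(1.0, 0.15, (n, 1))         # stable between-subject differences
X = subject + rng.normal(0, 0.05, (n, k))       # session scores with measurement noise

grand = X.mean()
ss_total = np.sum((X - grand) ** 2)
ss_subj = k * np.sum((X.mean(axis=1) - grand) ** 2)
ss_sess = n * np.sum((X.mean(axis=0) - grand) ** 2)
ss_err = ss_total - ss_subj - ss_sess            # two-way ANOVA residual

ms_subj = ss_subj / (n - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc_3_1 = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)   # relative reliability
sem = np.sqrt(ms_err)                                         # absolute reliability
print(f"ICC(3,1) = {icc_3_1:.3f}, SEM = {sem:.3f} ({100*sem/grand:.1f}% of the mean)")
```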

  11. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS with f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  12. Peak-locking reduction for particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Michaelis, Dirk; Neal, Douglas R.; Wieneke, Bernhard

    2016-10-01

    A parametric study of the factors contributing to peak-locking, a known bias error source in particle image velocimetry (PIV), is conducted using synthetic data that are processed with a state-of-the-art PIV algorithm. The investigated parameters include: particle image diameter, image interpolation techniques, the effect of asymmetric versus symmetric window deformation, number of passes and the interrogation window size. Some of these parameters are found to have a profound effect on the magnitude of the peak-locking error. The effects for specific PIV cameras are also studied experimentally using a precision turntable to generate a known rotating velocity field. Image time series recorded using this experiment show a linear range of pixel and sub-pixel shifts ranging from 0 to ±4 pixels. Deviations in the constant vorticity field (ωz) reveal how peak-locking can be affected systematically both by varying parameters of the detection system such as the focal distance and f-number, and also by varying the settings of the PIV analysis. A new a priori technique for reducing the bias errors associated with peak-locking in PIV is introduced using an optical diffuser to avoid undersampled particle images during the recording of the raw images. This technique is evaluated against other a priori approaches using experimental data and is shown to perform favorably. Finally, a new a posteriori anti peak-locking filter (APLF) is developed and investigated, which shows promising results for both synthetic data and real measurements for very small particle image sizes.
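
    One mechanism behind peak-locking, the undersampling of small particle images, can be illustrated in one dimension. The hedged sketch below (synthetic pixel-integrated particle images and a three-point Gaussian sub-pixel estimator; not the authors' processing chain) shows a systematic sub-pixel bias for a 1-pixel particle image that largely disappears at 3 pixels.

```python
import numpy as np
from scipy.special import erf

def particle_image(x0, sigma, n=64):
    """1-D particle image, intensity integrated over unit pixels (up to a constant factor)."""
    k = np.arange(n)
    a = (k + 0.5 - x0) / (sigma * np.sqrt(2.0))
    b = (k - 0.5 - x0) / (sigma * np.sqrt(2.0))
    return 0.5 * (erf(a) - erf(b))

def estimate_shift(I1, I2):
    """Integer correlation peak plus 3-point Gaussian sub-pixel interpolation."""
    c = np.correlate(I2, I1, mode='full')
    m = int(np.argmax(c))
    lag = m - (len(I1) - 1)
    lm, l0, lp = np.log(c[m-1]), np.log(c[m]), np.log(c[m+1])
    return lag + (lm - lp) / (2.0 * (lm + lp - 2.0 * l0))

for diameter in (1.0, 3.0):                  # particle image diameter in pixels
    sigma = diameter / 4.0                   # e^-2 intensity diameter ~ 4 sigma
    errs = []
    for s in np.linspace(0.0, 1.0, 21):      # sweep the true sub-pixel displacement
        I1 = particle_image(31.3, sigma)
        I2 = particle_image(31.3 + s, sigma)
        errs.append(estimate_shift(I1, I2) - s)
    print(f"d = {diameter} px: max |bias| = {np.max(np.abs(errs)):.3f} px")
```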

  13. Estimation of flood peaks from channel characteristics in Ohio

    USGS Publications Warehouse

    Roth, D.K.

    1985-01-01

    Regression equations were developed to estimate flood peaks with selected recurrence intervals of 2 to 100 years for Ohio streams with alluvial and bedrock channels. Channel-geometry characteristics, rather than basin characteristics, were used as independent variables. Width of active channel was the only channel-geometry characteristic significant at the 5-percent level in the estimating equations for alluvial channels. Standard errors of estimate for those equations range for the 100-year flood peak. The equations were developed from data collected at 142 gaging stations that have active-channel widths ranging from 2 to 495 feet. For streams with bedrock or firm channels, depth of the bankfull channel and active-channel width were statistically significant characteristics at the 5-percent level for all but the 2-year recurrence interval flood-peak equation, for which only active-channel width was statistically significant. Standard errors of estimate range from 33 percent for the 5-year flood peak to 40 percent for the 100-year flood peak when both significant variables are included in the equations. Standard errors of estimate range from 36 percent to 46 percent when only the active-channel width independent variable is used. These equations are based on channel-geometry data collected at 20 gaging stations that have active-channel widths ranging from 14 to 240 feet and average bankfull-channel depths ranging from 2.5 to 9.2 feet. Channel-geometry characteristics also were measured at 168 ungaged sites to provide information that can be used to better define the geographic-area boundaries in three areas of Ohio where the boundaries were previously defined in a flood magnitude and frequency report.

  14. Nonlinear Susceptibility Magnitude Imaging of Magnetic Nanoparticles.

    PubMed

    Ficko, Bradley W; Giacometti, Paolo; Diamond, Solomon G

    2015-03-15

    This study demonstrates a method for improving the resolution of susceptibility magnitude imaging (SMI) using spatial information that arises from the nonlinear magnetization characteristics of magnetic nanoparticles (mNPs). In this proof-of-concept study of nonlinear SMI, a pair of drive coils and several permanent magnets generate applied magnetic fields and a coil is used as a magnetic field sensor. Sinusoidal alternating current (AC) in the drive coils results in linear mNP magnetization responses at primary frequencies, and nonlinear responses at harmonic frequencies and intermodulation frequencies. The spatial information content of the nonlinear responses is evaluated by reconstructing tomographic images with sequentially increasing voxel counts using the combined linear and nonlinear data. Using the linear data alone it is not possible to accurately reconstruct more than 2 voxels with a pair of drive coils and a single sensor. However, nonlinear SMI is found to accurately reconstruct 12 voxels (R(2) = 0.99, CNR = 84.9) using the same physical configuration. Several time-multiplexing methods are then explored to determine if additional spatial information can be obtained by varying the amplitude, phase and frequency of the applied magnetic fields from the two drive coils. Asynchronous phase modulation, amplitude modulation, intermodulation phase modulation, and frequency modulation all resulted in accurate reconstruction of 6 voxels (R(2) > 0.9) indicating that time multiplexing is a valid approach to further increase the resolution of nonlinear SMI. The spatial information content of nonlinear mNP responses and the potential for resolution enhancement with time multiplexing demonstrate the concept and advantages of nonlinear SMI.

  15. Magnitude and frequency of summer floods in western New Mexico and eastern Arizona

    USGS Publications Warehouse

    Kennon, F.W.

    1955-01-01

    Numerous small reservoirs and occasional water-spreading structures are being built on the ephemeral streams draining the public and Indian lands of the Southwest as part of the Soil and Moisture Conservation Program of the Bureau of Land Management and Bureau of Indian Affairs.  Economic design of these structures requires some knowledge of the flood rates and volumes.  Information concerning flood frequencies on areas less than 100 square miles is deficient throughout the country, particularly on intermittent streams of the Southwest.  Design engineers require a knowledge of the frequency and magnitude of flood volumes for the planning of adequate reservoir capacities and a knowledge of frequency and magnitude of flood peaks for spillway design.  Hence, this study deals with both flood volumes and peaks, the same statistical methods being used to develop frequency curves for each.

  16. The moment magnitude Mw and the energy magnitude Me: common roots and differences

    NASA Astrophysics Data System (ADS)

    Bormann, Peter; di Giacomo, Domenico

    2011-04-01

    Starting from the classical empirical magnitude-energy relationships, in this article, the derivation of the modern scales for moment magnitude Mw and energy magnitude Me is outlined and critically discussed. The formulas for Mw and Me calculation are presented in a way that reveals, besides the contributions of the physically defined measurement parameters seismic moment M0 and radiated seismic energy ES, the role of the constants in the classical Gutenberg-Richter magnitude-energy relationship. Further, it is shown that Mw and Me are linked via the parameter Θ = log(ES/M0), and the formula for Me can be written as Me = Mw + (Θ + 4.7)/1.5. This relationship directly links Me with Mw via their common scaling to classical magnitudes and, at the same time, highlights the reason why Mw and Me can significantly differ. In fact, Θ is assumed to be constant when calculating Mw. However, variations over three to four orders of magnitude in stress drop Δσ (as well as related variations in rupture velocity VR and seismic wave radiation efficiency ηR) are responsible for the large variability of actual Θ values of earthquakes. As a result, for the same earthquake, Me may sometimes differ by more than one magnitude unit from Mw. Such a difference is highly relevant when assessing the actual damage potential associated with a given earthquake, because it expresses rather different static and dynamic source properties. While Mw is most appropriate for estimating the earthquake size (i.e., the product of rupture area times average displacement) and thus the potential tsunami hazard posed by strong and great earthquakes in marine environs, Me is more suitable than Mw for assessing the potential hazard of damage due to strong ground shaking, i.e., the earthquake strength. Therefore, whenever possible, these two magnitudes should be both independently determined and jointly considered. Usually, only Mw is taken as a unified magnitude in many
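
    A small worked example of the relations quoted above, Θ = log10(ES/M0) and Me = Mw + (Θ + 4.7)/1.5. The conversion Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m, is the standard moment-magnitude definition and is assumed here; the moment and Θ values are illustrative only.

      import math

      def mw_from_m0(m0_newton_meters):
          # standard moment-magnitude definition, M0 in N·m
          return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

      def me_from_mw(mw, theta):
          # relation quoted in the abstract, theta = log10(ES / M0)
          return mw + (theta + 4.7) / 1.5

      m0 = 1.0e20                        # N·m, hypothetical earthquake
      mw = mw_from_m0(m0)
      for theta in (-5.7, -4.7, -3.7):   # Θ varies over orders of magnitude in ES/M0
          print(f"Mw = {mw:.2f}, Θ = {theta:+.1f}  ->  Me = {me_from_mw(mw, theta):.2f}")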

  17. Series Distance - a metric for the quantification of hydrograph errors and forecast uncertainty, simultaneously for timing and magnitude

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Seibert, S.

    2013-12-01

    Applying metrics to quantify the similarity or dissimilarity of hydrographs is a central task in hydrological modeling, used both in model calibration and the evaluation of simulations or forecasts. Motivated by the shortcomings of standard objective metrics such as the Root Mean Square Error (RMSE) or the Mean Absolute Peak Time Error (MAPTE) and the advantages of visual inspection as a powerful tool for simultaneous, case-specific and multi-criteria (yet subjective) evaluation, we will present an objective metric termed Series Distance (Ehret and Zehe, 2011), which is in close accordance with visual evaluation. The Series Distance quantifies the similarity of two hydrographs not as the sum of amplitude differences at similar points in time (as, e.g., RMSE or the Nash-Sutcliffe efficiency do), but as the sum of space-time distances between hydrologically similar points of hydrograph pairs (e.g. observation and simulation) which indicate the same underlying hydrological process (e.g. event start, first half of the first rising limb, first peak etc.), which is in close concordance with visual inspection. The challenge is to automatically identify hydrologically similar points in pairs of hydrographs, which includes identification of events in the hydrographs and distinction of relevant and non-relevant rise/fall segments within events. With Series Distance, amplitude and timing errors are calculated simultaneously but separately, i.e. it returns bivariate distributions of timing and amplitude errors. These bivariate error distributions can be applied to determine time-amplitude 'uncertainty clouds' around predictions or forecasts instead of solely magnitude-error based 'uncertainty ranges' based on e.g. RMSE error distributions. This has the potential to reduce, at equal levels of exceedance probability, the size of the uncertainty range around a prediction or forecast, as timing uncertainty is not falsely represented as amplitude uncertainty. We will present the theory of Series Distance as
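
    A minimal sketch of the underlying idea, not the published Series Distance algorithm (which also detects events and distinguishes relevant segments): pair "hydrologically similar" points of an observed and a simulated single-peak event by their fractional position on the rising and falling limbs, then keep timing and amplitude errors as two separate populations. All hydrograph values are synthetic.

      import numpy as np

      t = np.arange(0.0, 48.0, 1.0)                          # hours
      obs = 5 + 40 * np.exp(-0.5 * ((t - 20) / 4.0) ** 2)    # synthetic observation
      sim = 5 + 34 * np.exp(-0.5 * ((t - 23) / 5.0) ** 2)    # simulation: later, lower peak

      def limb_points(t, q, n=10):
          """(time, flow) pairs at n flow fractions on the rising and falling limbs."""
          ipk = int(np.argmax(q))
          fracs = np.linspace(0.1, 1.0, n)
          pts_t, pts_q = [], []
          for idx in (np.arange(0, ipk + 1), np.arange(ipk, len(q))):
              limb = q[idx]
              qmin, qmax = limb.min(), limb.max()
              for f in fracs:
                  j = idx[int(np.argmin(np.abs(limb - (qmin + f * (qmax - qmin)))))]
                  pts_t.append(t[j])
                  pts_q.append(q[j])
          return np.array(pts_t), np.array(pts_q)

      t_obs, q_obs = limb_points(t, obs)
      t_sim, q_sim = limb_points(t, sim)
      print("mean timing error   :", round((t_sim - t_obs).mean(), 2), "h")
      print("mean amplitude error:", round((q_sim - q_obs).mean(), 2), "m^3/s")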

  18. Do dark matter halos explain lensing peaks?

    NASA Astrophysics Data System (ADS)

    Zorrilla Matilla, José Manuel; Haiman, Zoltán; Hsu, Daniel; Gupta, Arushi; Petri, Andrea

    2016-10-01

    We have investigated a recently proposed halo-based model, Camelus, for predicting weak-lensing peak counts, and compared its results over a collection of 162 cosmologies with those from N-body simulations. While counts from both models agree for peaks with S/N > 1 (where S/N is the ratio of the peak height to the r.m.s. shape noise), we find ≈50% fewer counts for peaks near S/N = 0 and significantly higher counts in the negative S/N tail. Adding shape noise reduces the differences to within 20% for all cosmologies. We also found larger covariances that are more sensitive to cosmological parameters. As a result, credibility regions in the {Ωm, σ8} plane are ≈30% larger. Even though the credible contours are commensurate, each model draws its predictive power from different types of peaks, with low peaks (S/N < 2) and high peaks (S/N > 3) contributing differently in the two models. Our results confirm the importance of using a cosmology-dependent covariance, with at least a 14% improvement in parameter constraints. We identified the covariance estimation as the main driver behind differences in inference, and suggest possible ways to make Camelus even more useful as a highly accurate peak count emulator.

  19. Training Lessons Learned from Peak Performance Episodes.

    ERIC Educational Resources Information Center

    Fobes, James L.

    A major challenge confronting the United States Army is to obtain optimal performance from both its human and machine resources. This study examines episodes of peak performance in soldiers and athletes. Three cognitive components were found to enable episodes of peak performance: psychological readiness (activating optimal arousal and emotion…

  20. The geomorphic structure of the runoff peak

    NASA Astrophysics Data System (ADS)

    Rigon, R.; D'Odorico, P.; Bertoldi, G.

    2011-01-01

    This paper develops a theoretical framework to investigate the core dependence of peak flows on the geomorphic properties of river basins. Based on the theory of transport by travel times, and simple hydrodynamic characterization of floods, this new framework invokes the linearity and invariance of the hydrologic response to provide analytical and semi-analytical expressions for peak flow, time to peak, and area contributing to the peak runoff. These results are obtained for the case of a constant-intensity hyetograph using the Intensity-Duration-Frequency (IDF) curves to estimate extreme flow values as a function of the rainfall return period. Results show that, with constant-intensity hyetographs, the time-to-peak is greater than rainfall duration and usually shorter than the basin concentration time. Moreover, the critical storm duration is shown to be independent of rainfall return period. Further, it is shown that the basin area contributing to the peak discharge does not depend on the channel velocity, but is a geomorphic property of the basin. The same results are found when the effects of hydrodynamic dispersion are accounted for. As an example, this framework is applied to three watersheds. In particular, the runoff peak, the critical rainfall durations and the time to peak are calculated for all links within a network to assess how they increase with basin area.

  1. Changes in muscular activity and lumbosacral kinematics in response to handling objects of unknown mass magnitude.

    PubMed

    Elsayed, Walaa; Farrag, Ahmed; El-Sayyad, Mohsen; Marras, William

    2015-04-01

    The aim of this study was to evaluate the main and interaction effects of mass knowledge and mass magnitude on trunk muscular activity and lumbosacral kinematics. Eighteen participants performed symmetric box lifts of three different mass magnitudes (1.1 kg, 5 kg, 15 kg) under known and unknown mass knowledge conditions. Outcome measures were normalized peak electromyography of four trunk muscles in addition to three-dimensional lumbosacral angles and acceleration. The results indicated that three out of four muscles exhibited significantly greater activity when handling unknown masses (p<.05). Meanwhile, only sagittal angular acceleration was significantly higher when handling unknown masses (115.6 ± 42.7°/s²) compared to known masses (109.3 ± 31.5°/s²). Similarly, the mass magnitude and mass knowledge interaction significantly impacted the same muscles along with the sagittal lumbosacral angle and angular acceleration (p<.05), with the greatest difference between knowledge conditions consistently occurring under the 1.1 kg mass magnitude condition. Thus, under these conditions, it was concluded that mass magnitude has more impact than mass knowledge. However, handling objects of unknown mass magnitude could be hazardous, particularly when lifting light masses, in that it can increase mechanical burden on the lumbosacral spine due to increased muscular exertion and acceleration.

  2. The Boson peak in supercooled water

    PubMed Central

    Kumar, Pradeep; Wikfeldt, K. Thor; Schlesinger, Daniel; Pettersson, Lars G. M.; Stanley, H. Eugene

    2013-01-01

    We perform extensive molecular dynamics simulations of the TIP4P/2005 model of water to investigate the origin of the Boson peak reported in experiments on supercooled water in nanoconfined pores, and in hydration water around proteins. We find that the onset of the Boson peak in supercooled bulk water coincides with the crossover to a predominantly low-density-like liquid below the Widom line TW. The frequency and onset temperature of the Boson peak in our simulations of bulk water agree well with the results from experiments on nanoconfined water. Our results suggest that the Boson peak in water is not an exclusive effect of confinement. We further find that, similar to other glass-forming liquids, the vibrational modes corresponding to the Boson peak are spatially extended and are related to transverse phonons found in the parent crystal, here ice Ih. PMID:23771033

  3. Absolute identification of muramic acid, at trace levels, in human septic synovial fluids in vivo and absence in aseptic fluids.

    PubMed

    Fox, A; Fox, K; Christensson, B; Harrelson, D; Krahmer, M

    1996-09-01

    This is the first report of a study employing the state-of-the-art technique of gas chromatography-tandem mass spectrometry for absolute identification of muramic acid (a marker for peptidoglycan) at trace levels in a human or animal body fluid or tissue. Daughter mass spectra of synovial fluid muramic acid peaks (≥30 ng/ml) were identical to those of pure muramic acid. Absolute chemical identification at this level represents a 1,000-fold increase in sensitivity over previous gas chromatography-mass spectrometry identifications. Muramic acid was positively identified in synovial fluids during infection and was eliminated over time but was absent from aseptic fluids.

  4. Predictors of the peak width for networks with exponential links

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    1989-01-01

    We investigate optimal predictors of the peak (S) and distance to peak (T) of the width function of drainage networks under the assumption that the networks are topologically random with independent and exponentially distributed link lengths. Analytical results are derived using the fact that, under these assumptions, the width function is a homogeneous Markov birth-death process. In particular, exact expressions are derived for the asymptotic conditional expectations of S and T given network magnitude N and given mainstream length H. In addition, a simulation study is performed to examine various predictors of S and T, including N, H, and basin morphometric properties; non-asymptotic conditional expectations and variances are estimated. The best single predictor of S is N, of T is H, and of the scaled peak (S divided by the area under the width function) is H. Finally, expressions tested on a set of drainage basins from the state of Wyoming perform reasonably well in predicting S and T despite probable violations of the original assumptions. © 1989 Springer-Verlag.

  5. Deconvolution of mixed gamma emitters using peak parameters

    SciTech Connect

    Gadd, Milan S; Garcia, Francisco; Magadalena, Vigil M

    2011-01-14

    When evaluating samples containing mixtures of nuclides using gamma spectroscopy, the situation sometimes arises where the nuclides present have photon emissions that cannot be resolved by the detector. An example of this is mixtures of ²⁴¹Am and plutonium that have L x-ray emissions with slightly different energies which cannot be resolved using a high-purity germanium detector. It is possible to deconvolute the americium L x-rays from those of plutonium based on the ²⁴¹Am 59.54 keV photon. However, this requires accurate knowledge of the relative emission yields. Also, it often results in high uncertainties in the plutonium activity estimate due to the americium yields being approximately an order of magnitude greater than those for plutonium. In this work, an alternative method of determining the relative fraction of plutonium in mixtures of ²⁴¹Am and ²³⁹Pu based on L x-ray peak location and shape parameters is investigated. The sensitivity and accuracy of the peak parameter method is compared to that for conventional peak deconvolution.
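
    For context, here is a generic sketch of the conventional approach the work compares against: resolving two overlapping photopeaks by fitting a sum of two Gaussians whose centroids and widths serve as peak parameters. The spectrum, energies, and intensities below are synthetic, not actual Am/Pu L x-ray data, and this is not the specific peak-parameter method proposed in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss(x, amp, mu, sigma):
          return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      def two_peaks(x, a1, mu1, s1, a2, mu2, s2):
          return gauss(x, a1, mu1, s1) + gauss(x, a2, mu2, s2)

      rng = np.random.default_rng(1)
      energy = np.linspace(12.0, 22.0, 400)                      # keV
      truth = two_peaks(energy, 900, 16.8, 0.45, 300, 17.8, 0.45)
      counts = rng.poisson(truth + 20).astype(float)             # flat 20-count background

      p0 = [800, 16.5, 0.5, 200, 18.0, 0.5]                      # rough initial guesses
      popt, _ = curve_fit(two_peaks, energy, counts - 20.0, p0=p0)

      area1 = popt[0] * popt[2] * np.sqrt(2 * np.pi)
      area2 = popt[3] * popt[5] * np.sqrt(2 * np.pi)
      print(f"fitted centroids: {popt[1]:.2f} keV and {popt[4]:.2f} keV")
      print(f"fraction of counts in peak 2: {area2 / (area1 + area2):.1%}")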

  6. Absolute positioning by multi-wavelength interferometry referenced to the frequency comb of a femtosecond laser.

    PubMed

    Wang, Guochao; Jang, Yoon-Soo; Hyun, Sangwon; Chun, Byung Jae; Kang, Hyun Jay; Yan, Shuhua; Kim, Seung-Woo; Kim, Young-Jin

    2015-04-06

    A multi-wavelength interferometer utilizing the frequency comb of a femtosecond laser as the wavelength ruler is tested for its capability of ultra-precision positioning for machine axis control. The interferometer uses four different wavelengths phase-locked to the frequency comb and then determines the absolute position through a multi-channel scheme of detecting interference phases in parallel so as to enable fast, precise and stable measurements continuously over a few meters of axis-travel. Test results show that the proposed interferometer proves itself to be a potential candidate for the absolute-type position transducer needed for next-generation ultra-precision machine axis control, demonstrating linear errors of less than 61.9 nm in peak-to-valley over a 1-meter travel with an update rate of 100 Hz when compared to an incremental-type He-Ne laser interferometer.
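
    A hedged sketch of the general multi-wavelength principle: two wavelengths define a much longer synthetic wavelength whose phase extends the non-ambiguous range of an absolute distance measurement. The wavelengths and distance below are illustrative, not the comb-referenced values of the instrument described above.

      import math

      lam1, lam2 = 1.530e-6, 1.531e-6                  # m
      lam_synth = lam1 * lam2 / abs(lam2 - lam1)       # synthetic wavelength, ~2.34 mm

      def wrapped_phase(distance_m, lam):
          """Round-trip interferometric phase, wrapped to [0, 2*pi)."""
          return (4.0 * math.pi * distance_m / lam) % (2.0 * math.pi)

      L_true = 0.700123                                # m, "unknown" absolute position
      phi_synth = (wrapped_phase(L_true, lam1) - wrapped_phase(L_true, lam2)) % (2.0 * math.pi)

      # The synthetic phase recovers L modulo lam_synth / 2; chaining several synthetic
      # wavelengths (the multi-channel scheme) resolves the remaining integer ambiguity.
      L_frac = phi_synth * lam_synth / (4.0 * math.pi)
      print(f"synthetic wavelength: {lam_synth * 1e3:.3f} mm")
      print(f"recovered fractional distance: {L_frac * 1e3:.4f} mm")
      print(f"true fractional distance     : {(L_true % (lam_synth / 2)) * 1e3:.4f} mm")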

  7. Moment Magnitude (MW) and Local Magnitude (ML) Relationship for Earthquakes in Northeast India

    NASA Astrophysics Data System (ADS)

    Baruah, Santanu; Baruah, Saurabh; Bora, P. K.; Duarah, R.; Kalita, Aditya; Biswas, Rajib; Gogoi, N.; Kayal, J. R.

    2012-11-01

    An attempt has been made to examine an empirical relationship between moment magnitude (MW) and local magnitude (ML) for the earthquakes in the northeast Indian region. Some 364 earthquakes that were recorded during 1950-2009 are used in this study. Focal mechanism solutions of these earthquakes include 189 Harvard-CMT solutions (MW ≥ 4.0) for the period 1976-2009, 61 published solutions and 114 solutions obtained for the local earthquakes (2.0 ≤ ML ≤ 5.0) recorded by a 27-station permanent broadband network during 2001-2009 in the region. The MW-ML relationships in seven selected zones of the region are determined by linear regression analysis. A significant variation in the MW-ML relationship and its zone specific dependence are reported here. It is found that MW is equivalent to ML with an average uncertainty of about 0.13 magnitude units. A single relationship is, however, not adequate to scale the entire northeast Indian region because of heterogeneous geologic and geotectonic environments where earthquakes occur due to collisions, subduction and complex intra-plate tectonics.
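
    A hedged sketch of the kind of zone-wise MW-ML fit described above, using ordinary least squares; the magnitude pairs are synthetic, not the 364-event catalog analyzed in the study.

      import numpy as np

      rng = np.random.default_rng(42)
      ml = rng.uniform(2.0, 6.0, size=120)                       # local magnitudes
      mw = 0.95 * ml + 0.25 + rng.normal(0.0, 0.13, ml.size)     # synthetic Mw values

      slope, intercept = np.polyfit(ml, mw, 1)
      resid = mw - (slope * ml + intercept)
      print(f"Mw = {slope:.2f} * ML + {intercept:.2f}")
      print(f"residual scatter: {resid.std(ddof=2):.2f} magnitude units")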

  8. Sources and magnitude of bias associated with determination of polychlorinated biphenyls in environmental samples

    USGS Publications Warehouse

    Eganhouse, R.P.; Gossett, R.W.

    1991-01-01

    Recently compiled data on the composition of commercial Aroclor mixtures and ECD (electron capture detector) response factors for all 209 PCB congeners are used to develop estimates of the bias associated with determination of polychlorinated biphenyls. During quantitation of multicomponent peaks by congener-specific procedures, error is introduced because of variable ECD response to isomeric PCBs. Under worst case conditions, the magnitude of this bias can range from less than 2% to as much as 600%. Multicomponent peaks containing the more highly and the lower chlorinated congeners experience the most bias. For this reason, quantitation of ΣPCB in Aroclor mixtures dominated by these species (e.g. 1016) is potentially subject to the greatest error. Comparison of response factor data for ECDs from two laboratories shows that the sign and magnitude of calibration bias for a given multicomponent peak is variable and depends, in part, on the response characteristics of individual detectors. By using the most abundant congener (of each multicomponent peak) for purposes of calibration, one can reduce the maximum bias to less than 55%. Moreover, due to cancellation of errors, the bias resulting from summation of all peak concentrations (i.e. ΣPCB) becomes vanishingly small. By contrast, bias under the traditional Aroclor method can be large (>200%) and highly variable in sign and magnitude. In this case, bias originates not only from the incomplete chromatographic resolution of PCB congeners but also the overlapping patterns of the Aroclor mixtures. Together these results illustrate the advantages of the congener-specific method of PCB quantitation over the traditional Aroclor method and the extreme difficulty of estimating bias incurred by the latter procedure on a post hoc basis.

  9. RSAT peak-motifs: motif analysis in full-size ChIP-seq datasets.

    PubMed

    Thomas-Chollier, Morgane; Herrmann, Carl; Defrance, Matthieu; Sand, Olivier; Thieffry, Denis; van Helden, Jacques

    2012-02-01

    ChIP-seq is increasingly used to characterize transcription factor binding and chromatin marks at a genomic scale. Various tools are now available to extract binding motifs from peak data sets. However, most approaches are only available as command-line programs, or via a website but with size restrictions. We present peak-motifs, a computational pipeline that discovers motifs in peak sequences, compares them with databases, exports putative binding sites for visualization in the UCSC genome browser and generates an extensive report suited for both naive and expert users. It relies on time- and memory-efficient algorithms enabling the treatment of several thousand peaks within minutes. Regarding time efficiency, peak-motifs outperforms all comparable tools by several orders of magnitude. We demonstrate its accuracy by analyzing data sets ranging from 4000 to 128,000 peaks for 12 embryonic stem cell-specific transcription factors. In all cases, the program finds the expected motifs and returns additional motifs potentially bound by cofactors. We further apply peak-motifs to discover tissue-specific motifs in peak collections for the p300 transcriptional co-activator. To our knowledge, peak-motifs is the only tool that performs a complete motif analysis and offers a user-friendly web interface without any restriction on sequence size or number of peaks.

  10. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
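
    A toy model of the locking concept described above: a Lorentzian absorption peak, the derivative of absorption with respect to frequency as the error signal (zero at the peak top), an automatic sweep to get near the peak, and a small proportional-integral loop to hold the lock. This is conceptual only; the gains, units, and line shape are arbitrary assumptions, not the actual hardware values.

      import numpy as np

      F0, WIDTH = 0.0, 1.0

      def absorption(f):
          """Lorentzian absorption peak centered at F0."""
          return 1.0 / (1.0 + ((f - F0) / WIDTH) ** 2)

      def error_signal(f, df=1e-3):
          """Derivative of absorption with respect to frequency (zero at the peak top)."""
          return (absorption(f + df) - absorption(f - df)) / (2.0 * df)

      # 1) automatic sweep: start near the absorption maximum, then offset slightly
      sweep = np.linspace(-5.0, 5.0, 2001)
      f = sweep[np.argmax(absorption(sweep))] + 0.3

      # 2) lock: drive the derivative error signal back to zero
      kp, ki, integral = 0.4, 0.02, 0.0
      for _ in range(200):
          e = error_signal(f)
          integral += e
          f += kp * e + ki * integral
      print(f"frequency offset from peak after locking: {f - F0:.4f}")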

  11. Quantifying Surface Processes and Stratigraphic Characteristics Resulting from Large Magnitude High Frequency and Small Magnitude Low Frequency Relative Sea Level Cycles: An Experimental Study

    NASA Astrophysics Data System (ADS)

    Yu, L.; Li, Q.; Esposito, C. R.; Straub, K. M.

    2015-12-01

    Relative Sea-Level (RSL) change, which is a primary control on sequence stratigraphic architecture, has a close relationship with climate change. In order to explore the influence of RSL change on the stratigraphic record, we conducted three physical experiments which shared identical boundary conditions but differed in their RSL characteristics. Specifically, the three experiments differed with respect to two non-dimensional numbers that compare the magnitude and periodicity of RSL cycles to the spatial and temporal scales of autogenic processes, respectively. The magnitude of RSL change is quantified with H*, defined as the peak to trough difference in RSL during a cycle divided by a system's maximum autogenic channel depth. The periodicity of RSL change is quantified with T*, defined as the period of RSL cycles divided by the time required to deposit one channel depth of sediment, on average, everywhere in the basin. Experiments performed included: 1) a control experiment lacking RSL cycles, used to define a system's autogenics, 2) a high magnitude, high frequency RSL cycles experiment, and 3) a low magnitude, low frequency cycles experiment. We observe that the high magnitude, high frequency experiment resulted in the thickest channel bodies with the lowest width-to-depth ratios, while the low magnitude, long period experiment preserved a record of gradual shoreline transgression and regression producing facies that are the most continuous in space. We plan to integrate our experimental results with Delft3D numerical models that sample similar non-dimensional characteristics of RSL cycles. Quantifying the influence of RSL change, normalized as a function of the spatial and temporal scales of autogenic processes, will strengthen our ability to predict stratigraphic architecture and invert stratigraphy for paleo-environmental conditions.
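
    A simple worked calculation of the two non-dimensional numbers defined above, using hypothetical values rather than those of the actual experiments.

      rsl_peak_to_trough_m = 0.06                  # peak-to-trough RSL change in one cycle
      max_autogenic_channel_depth_m = 0.03
      rsl_cycle_period_hr = 20.0
      time_to_deposit_one_channel_depth_hr = 40.0  # basin-averaged compensation time

      H_star = rsl_peak_to_trough_m / max_autogenic_channel_depth_m
      T_star = rsl_cycle_period_hr / time_to_deposit_one_channel_depth_hr
      print(f"H* = {H_star:.2f} (RSL magnitude vs. channel depth)")
      print(f"T* = {T_star:.2f} (RSL period vs. autogenic timescale)")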

  12. Magnitude of flood flows for selected annual exceedance probabilities in Rhode Island through 2010

    USGS Publications Warehouse

    Zarriello, Phillip J.; Ahearn, Elizabeth A.; Levin, Sara B.

    2012-01-01

    Heavy persistent rains from late February through March 2010 caused severe widespread flooding in Rhode Island that set or nearly set record flows and water levels at many long-term streamgages in the State. In response, the U.S. Geological Survey, in partnership with the Federal Emergency Management Agency, conducted a study to update estimates of flood magnitudes at streamgages and regional equations for estimating flood flows at ungaged locations. This report provides information needed for flood plain management, transportation infrastructure design, flood insurance studies, and other purposes that can help minimize future flood damages and risks. The magnitudes of floods were determined from the annual peak flows at 43 streamgages in Rhode Island (20 sites), Connecticut (14 sites), and Massachusetts (9 sites) using the standard Bulletin 17B log-Pearson type III method and a modification of this method called the expected moments algorithm (EMA) for 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability (AEP) floods. Annual-peak flows were analyzed for the period of record through the 2010 water year; however, records were extended at 23 streamgages using the maintenance of variance extension (MOVE) procedure to best represent the longest period possible for determining the generalized skew and flood magnitudes. Generalized least square regression equations were developed from the flood quantiles computed at 41 streamgages (2 streamgages in Rhode Island with reported flood quantiles were not used in the regional regression because of regulation or redundancy) and their respective basin characteristics to estimate magnitude of floods at ungaged sites. Of 55 basin characteristics evaluated as potential explanatory variables, 3 were statistically significant—drainage area, stream density, and basin storage. The pseudo-coefficient of determination (pseudo-R2) indicates these three explanatory variables explain 95 to 96 percent of the variance
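
    The following is a minimal sketch of a log-Pearson Type III quantile estimate by the method of moments on log10-transformed annual peaks, ending with the 1-percent AEP (100-year) value. The record is synthetic, and the sketch does not reproduce the Bulletin 17B / EMA procedures, record extension, or regional skew weighting used in the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      annual_peaks_cfs = 10 ** rng.normal(loc=3.2, scale=0.25, size=60)   # synthetic record

      logq = np.log10(annual_peaks_cfs)
      mean, std = logq.mean(), logq.std(ddof=1)
      skew = stats.skew(logq, bias=False)                 # station skew of the logs

      log_q100 = stats.pearson3.ppf(0.99, skew, loc=mean, scale=std)
      print(f"station skew = {skew:.2f}")
      print(f"estimated 100-year peak = {10 ** log_q100:,.0f} ft^3/s")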

  13. Electrocapillary instability of magnetic fluid peak.

    PubMed

    Mkrtchyan, Levon; Zakinyan, Arthur; Dikansky, Yuri

    2013-07-23

    This article presents an experimental study of the capillary electrostatic instability occurring under the effect of a constant electric field on an individual magnetic fluid peak. The peaks under study occur at disintegration of a magnetic fluid layer applied on a flat electrode surface under the effect of a perpendicular magnetic field. The electrocapillary instability manifests itself as the emission of jets of charged drops from the peak tip toward the opposing electrode. The emission of charged drops repeats periodically and results in pulsations of the peak shape. It is shown that a magnetic field affects the regularities of the electrocapillary instability occurrence and can stimulate its development. The critical electric and magnetic field strengths at which the instability occurs have been measured; their dependence on the peak size is shown. Hysteresis in the system has also been studied: the emission of charged drops ceases at a lower electric (or magnetic) field strength than that at which it first occurs. The dependence of the peak pulsation frequency on the magnetic and electric field strengths and on the peak size has been measured.

  14. Numerical magnitude affects temporal memories but not time encoding.

    PubMed

    Cai, Zhenguang G; Wang, Ruiming

    2014-01-01

    Previous research has suggested that the perception of time is influenced by concurrent magnitude information (e.g., numerical magnitude in digits, spatial distance), but the locus of the effect is unclear, with some findings suggesting that concurrent magnitudes such as space affect temporal memories and others suggesting that numerical magnitudes in digits affect the clock speed during time encoding. The current paper reports 6 experiments in which participants perceived a stimulus duration and then reproduced it. We showed that though a digit of a large magnitude (e.g., 9), relative to a digit of a small magnitude (e.g., 2), led to a longer reproduced duration when the digits were presented during the perception of the stimulus duration, such a magnitude effect disappeared when the digits were presented during the reproduction of the stimulus duration. These findings disconfirm the account that large numerical magnitudes accelerate the speed of an internal clock during time encoding, as such an account incorrectly predicts that a large numerical magnitude should lead to a shorter reproduced duration when presented during reproduction. Instead, the findings suggest that numerical magnitudes, like other magnitudes such as space, affect temporal memories when numerical magnitudes and temporal durations are concurrently held in memory. Under this account, concurrent numerical magnitudes have the chance to influence the memory of the perceived duration when they are presented during perception but not when they are presented at the reproduction stage.

  15. The modulation of implicit magnitude on time estimates.

    PubMed

    Ma, Qingxia; Yang, Zhen; Zhang, Zhijie

    2012-01-01

    Studies in time and quantity have shown that explicit magnitude (e.g. Arabic numerals, luminance, or size) modulates time estimates, with smaller magnitude biasing the judgment of time towards underestimation and larger magnitude towards overestimation. However, few studies have examined the effect of implicit magnitude on time estimates. The current study used a duration estimation task to investigate the effects of implicit magnitude on time estimation in three experiments. During the duration estimation task, target words naming objects of various lengths (Experiment 1), weights (Experiment 2) and volumes (Experiment 3) were presented on the screen and participants were asked to reproduce the amount of time the words remained on the screen via button presses. Results indicated that the time estimates were modulated by the implicit magnitude of the word's referent, with words naming objects of smaller magnitude (shorter, lighter, or smaller) being judged to last a shorter time, and words naming objects of greater magnitude (longer, heavier, or bigger) being judged to last a longer time. These findings were consistent with previous studies examining the effect of implicit spatial length on time estimates. More importantly, current results extended the implicit magnitude of length to the implicit magnitude of weight and volume and demonstrated a functional interaction between time and implicit magnitude in all three aspects of quantity, suggesting a common generalized magnitude system. These results provided new evidence to support a theory of magnitude (ATOM).

  16. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient along the interface tube: the pressure at the bottom of the tube is higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
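
    A minimal sketch of the hydrostatic elevation correction described above: the air column in the interface tube contributes rho * g * dh to the reading, and the correction grows with line pressure because the air in the tube is denser. Ideal-gas air density is assumed and the example values are illustrative, not from the report.

      G = 9.80665           # m/s^2
      R_AIR = 287.05        # J/(kg*K), specific gas constant of dry air

      def elevation_correction_pa(p_measured_pa, temp_k, dh_m):
          """Correction (Pa) when the sensor sits dh_m below the pressure tap."""
          rho = p_measured_pa / (R_AIR * temp_k)   # air density in the tube
          return rho * G * dh_m

      p_sensor = 300_000.0     # Pa absolute, read at the transducer
      corr = elevation_correction_pa(p_sensor, 295.0, dh_m=4.0)
      print(f"correction: {corr:.1f} Pa ({corr / p_sensor * 100:.3f}% of reading)")
      print(f"pressure at the tap: {p_sensor - corr:.1f} Pa")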

  17. What is Needed for Absolute Paleointensity?

    NASA Astrophysics Data System (ADS)

    Valet, J. P.

    2015-12-01

    Many alternative approaches to the Thellier and Thellier technique for absolute paleointensity have been proposed during the past twenty years. One reason is the time-consuming aspect of the experiments. Another reason is to avoid uncertainties in determinations of the paleofield which are mostly linked to the presence of multidomain grains. Despite great care taken by these new techniques, there is no indication that they always provide the right answer and in fact sometimes fail. We are convinced that the most valid approach remains the original double heating Thellier protocol provided that natural remanence is controlled by pure magnetite with a narrow distribution of small grain sizes, mostly single domains. The presence of titanium, even in small amounts, generates biases which yield incorrect field values. Single domain grains frequently dominate the magnetization of glass samples, which explains the success of this selective approach. They are also present in volcanic lava flows but much less frequently, and therefore contribute to the low success rate of most experiments. However, the loss of at least 70% of the magnetization at very high temperatures prior to the Curie point appears to be an essential prerequisite that increases the success rate to almost 100% and has been validated from historical flows and from recent studies. This requirement can easily be tested by thermal demagnetization, while low temperature experiments can document the detection of single domain magnetite using the δFC/δZFC parameter, as suggested by Moskowitz et al. (1993) for biogenic magnetite.

  18. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    SciTech Connect

    Jian-Zhou Zhu and Gregory W. Hammett

    2011-01-10

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  19. Absolute flux measurements for swift atoms

    NASA Technical Reports Server (NTRS)

    Fink, M.; Kohl, D. A.; Keto, J. W.; Antoniewicz, P.

    1987-01-01

    While a torsion balance in vacuum can easily measure the momentum transfer from a gas beam impinging on a surface attached to the balance, this measurement depends on the accommodation coefficients of the atoms with the surface and the distribution of the recoil. A torsion balance is described for making absolute flux measurements independent of recoil effects. The torsion balance is a conventional taut suspension wire design and the Young modulus of the wire determines the relationship between the displacement and the applied torque. A compensating magnetic field is applied to maintain zero displacement and provide critical damping. The unique feature is to couple the impinging gas beam to the torsion balance via a Wood's horn, i.e., a thin wall tube with a gradual 90 deg bend. Just as light is trapped in a Wood's horn by specular reflection from the curved surfaces, the gas beam diffuses through the tube. Instead of trapping the beam, the end of the tube is open so that the atoms exit the tube at 90 deg to their original direction. Therefore, all of the forward momentum of the gas beam is transferred to the torsion balance independent of the angle of reflection from the surfaces inside the tube.

  20. Estimation of peak winds from hourly observations

    NASA Technical Reports Server (NTRS)

    Graves, M. E.

    1973-01-01

    Two closely related methods to obtain estimates of the hourly peak wind at Cape Kennedy were compared by statistical tests. The methods evaluated the Monin-Obukhov stability length and the standard deviation of the hourly observed wind speed, so as to augment the latter quantity by F standard deviations. F is an optimized factor. A third method utilizing an optimized gust factor was also applied to the hourly wind. The latter procedure estimated 2952 peak winds with an rms error of 2.81 knots, an accuracy which was not surpassed by the other methods. Peak ground wind speed data were developed for use in space shuttle design operation analyses.
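
    One plausible reading of the estimators compared above, shown as a short sketch: a sigma-augmentation form and a gust-factor form. The values of F and the gust factor here are assumed placeholders, not the optimized factors from the study.

      def peak_from_sigma(mean_kt, sigma_kt, f=2.5):
          """Peak wind as the mean hourly wind plus F standard deviations."""
          return mean_kt + f * sigma_kt

      def peak_from_gust_factor(mean_kt, gust_factor=1.6):
          """Peak wind as a gust factor applied to the mean hourly wind."""
          return gust_factor * mean_kt

      mean_kt, sigma_kt = 14.0, 3.2     # example hourly observation
      print(f"sigma method:       {peak_from_sigma(mean_kt, sigma_kt):.1f} kt")
      print(f"gust-factor method: {peak_from_gust_factor(mean_kt):.1f} kt")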

  1. Estimation of magnitude and frequency of floods for streams in Puerto Rico : new empirical models

    USGS Publications Warehouse

    Ramos-Gines, Orlando

    1999-01-01

    Flood-peak discharges and frequencies are presented for 57 gaged sites in Puerto Rico for recurrence intervals ranging from 2 to 500 years. The log-Pearson Type III distribution, the methodology recommended by the United States Interagency Committee on Water Data, was used to determine the magnitude and frequency of floods at the gaged sites having 10 to 43 years of record. A technique is presented for estimating flood-peak discharges at recurrence intervals ranging from 2 to 500 years for unregulated streams in Puerto Rico with contributing drainage areas ranging from 0.83 to 208 square miles. Loglinear multiple regression analyses, using climatic and basin characteristics and peak-discharge data from the 57 gaged sites, were used to construct regression equations to transfer the magnitude and frequency information from gaged to ungaged sites. The equations use contributing drainage area, depth to rock, and mean annual rainfall as the basin and climatic characteristics for estimating flood-peak discharges. Examples are given to show a step-by-step procedure in calculating a 100-year flood at a gaged site, an ungaged site, a site near a gaged location, and a site between two gaged sites.

  2. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  3. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  4. Positioning, alignment and absolute pointing of the ANTARES neutrino telescope

    NASA Astrophysics Data System (ADS)

    Fehr, F.; Distefano, C.; Antares Collaboration

    2010-01-01

    A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.

  5. Exploring the relationship between the magnitudes of seismic events

    NASA Astrophysics Data System (ADS)

    Spassiani, Ilaria; Sebastiani, Giovanni

    2016-02-01

    The distribution of the magnitudes of seismic events is generally assumed to be independent of past seismicity. However, by considering events in causal relation, for example, mother-daughter, it seems natural to assume that the magnitude of a daughter event is conditionally dependent on that of the corresponding mother event. In order to find experimental evidence supporting this hypothesis, we analyze different catalogs, both real and simulated, in two different ways. From each catalog, we obtain the law of the magnitude of the triggered events by kernel density estimation. The results obtained show that the distribution density of the magnitude of the triggered events varies with the magnitude of their corresponding mother events. As intuition suggests, an increase in the magnitude of the mother events induces an increase in the probability of having "high" values of the magnitude of the triggered events. In addition, we see a statistically significant increasing linear dependence of the magnitude means.
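
    A minimal sketch of the analysis idea: kernel density estimates of triggered ("daughter") magnitudes, computed separately for small and large "mother" magnitudes. The synthetic catalog below simply builds in the hypothesized dependence being tested, so it only illustrates the procedure, not the result of the study.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(3)
      mother = rng.uniform(3.0, 7.0, size=5000)
      daughter = 2.0 + rng.exponential(scale=0.45, size=5000) + 0.05 * (mother - 3.0)

      grid = np.linspace(2.0, 5.0, 300)
      dens_small = gaussian_kde(daughter[mother < 4.0])(grid)
      dens_large = gaussian_kde(daughter[mother > 6.0])(grid)

      print("mean daughter magnitude, small mothers:", round(daughter[mother < 4.0].mean(), 3))
      print("mean daughter magnitude, large mothers:", round(daughter[mother > 6.0].mean(), 3))
      print("density mode, small mothers:", round(grid[np.argmax(dens_small)], 2))
      print("density mode, large mothers:", round(grid[np.argmax(dens_large)], 2))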

  6. Atmospheric acoustic propagation: Characterization of magnitude and phase variability

    NASA Astrophysics Data System (ADS)

    Norris, David Earl

    This thesis explores the effects of atmospheric turbulence on the variability of propagated acoustic signals. Spatially distributed acoustic and turbulence measurements were made at sixteen frequencies under 600 Hz for both upwind and downwind propagation at ranges of 150 and 200 m, respectively. Observations were collected in convectively neutral and strong wind conditions. From the distributed measurements, ray angles of arrival were calculated. The arrival angles were consistent with direct, upward refracted rays for upwind propagation and direct/ground-reflected, downward refracted rays for downwind propagation. In the downwind case, the arrival angles displayed significant variability at the lower frequencies, possibly due to the presence of a ground wave. Predictions from eigenrays traced through mean wind and temperature profiles agreed well with downwind observations at the higher frequencies. The received complex acoustic signal at each source frequency was recovered by applying a standard Hilbert transform technique. Magnitude and phase fluctuations were calculated and compared to predictions from a scattering model restricted to the inertial subrange of atmospheric turbulence. The measured log-amplitude variances were in excellent agreement with predictions, suggesting that atmospheric length scales of order 1 m most influenced the variability of the signal's magnitude. Phase fluctuations that exhibited strong correlation across frequency were transformed into travel-time fluctuations. The travel-time fluctuations were found to be insensitive to minor path differences and strongly correlated with turbulent velocity fluctuations. The dominant length scales were interpreted to be of order 100 m. These correspond to the large-scale turbulent eddies in the convective boundary layer. A theoretical model based upon the two-dimensional turbulent energy spectrum was derived to predict the cross-correlation between travel-time fluctuations and velocity

  7. Absolute and Convective Instability of a Liquid Jet

    NASA Technical Reports Server (NTRS)

    Lin, S. P.; Hudman, M.; Chen, J. N.

    1999-01-01

    The existence of absolute instability in a liquid jet has been predicted for some time. The disturbance grows in time and propagates both upstream and downstream in an absolutely unstable liquid jet. The image of absolute instability is captured in the NASA 2.2 sec drop tower and reported here. The transition from convective to absolute instability is observed experimentally. The experimental results are compared with the theoretical predictions on the transition Weber number as functions of the Reynolds number. The role of interfacial shear relative to all other relevant forces which cause the onset of jet breakup is explained.

  8. Amplification of postwildfire peak flow by debris

    USGS Publications Warehouse

    Kean, Jason W.; Mcguire, Luke; Rengers, Francis; Smith, Joel B.; Staley, Dennis M.

    2016-01-01

    In burned steeplands, the peak depth and discharge of postwildfire runoff can substantially increase from the addition of debris. Yet methods to estimate the increase over water flow are lacking. We quantified the potential amplification of peak stage and discharge using video observations of postwildfire runoff, compiled data on postwildfire peak flow (Qp), and a physically based model. Comparison of flood and debris flow data with similar distributions in drainage area (A) and rainfall intensity (I) showed that the median runoff coefficient (C = Qp/AI) of debris flows is 50 times greater than that of floods. The striking increase in Qp can be explained using a fully predictive model that describes the additional flow resistance caused by the emergence of coarse-grained surge fronts. The model provides estimates of the amplification of peak depth, discharge, and shear stress needed for assessing postwildfire hazards and constraining models of bedrock incision.

  9. Amplification of postwildfire peak flow by debris

    NASA Astrophysics Data System (ADS)

    Kean, J. W.; McGuire, L. A.; Rengers, F. K.; Smith, J. B.; Staley, D. M.

    2016-08-01

    In burned steeplands, the peak depth and discharge of postwildfire runoff can substantially increase from the addition of debris. Yet methods to estimate the increase over water flow are lacking. We quantified the potential amplification of peak stage and discharge using video observations of postwildfire runoff, compiled data on postwildfire peak flow (Qp), and a physically based model. Comparison of flood and debris flow data with similar distributions in drainage area (A) and rainfall intensity (I) showed that the median runoff coefficient (C = Qp/AI) of debris flows is 50 times greater than that of floods. The striking increase in Qp can be explained using a fully predictive model that describes the additional flow resistance caused by the emergence of coarse-grained surge fronts. The model provides estimates of the amplification of peak depth, discharge, and shear stress needed for assessing postwildfire hazards and constraining models of bedrock incision.
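
    A worked example of the runoff-coefficient comparison described above, C = Qp / (A * I), with purely illustrative numbers: a modest water-flood peak and a much larger debris-flow peak for the same basin and rainfall.

      def runoff_coefficient(qp_m3s, area_km2, intensity_mm_per_hr):
          """Dimensionless runoff coefficient C = Qp / (A * I)."""
          area_m2 = area_km2 * 1.0e6
          intensity_m_per_s = intensity_mm_per_hr / 1000.0 / 3600.0
          return qp_m3s / (area_m2 * intensity_m_per_s)

      c_flood = runoff_coefficient(qp_m3s=1.5, area_km2=1.0, intensity_mm_per_hr=20.0)
      c_debris = runoff_coefficient(qp_m3s=75.0, area_km2=1.0, intensity_mm_per_hr=20.0)
      print(f"flood C       = {c_flood:.2f}")
      print(f"debris-flow C = {c_debris:.2f} ({c_debris / c_flood:.0f}x larger)")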

  10. Observing at Kitt Peak National Observatory.

    ERIC Educational Resources Information Center

    Cohen, Martin

    1981-01-01

    Presents an abridged version of a chapter from the author's book "In Quest of Telescopes." Includes personal experiences at Kitt Peak National Observatory, and comments on telescopes, photographs, and making observations. (SK)

  11. Tectonics, Climate and Earth's highest peaks

    NASA Astrophysics Data System (ADS)

    Robl, Jörg; Prasicek, Günther; Hergarten, Stefan

    2016-04-01

    Prominent peaks characterized by high relief and steep slopes are among the most spectacular morphological features on Earth. In collisional orogens they result from the interplay of tectonically driven crustal thickening and climatically induced destruction of overthickened crust by erosional surface processes. The glacial buzz-saw hypothesis proposes a superior status of climate in limiting mountain relief and peak altitude due to glacial erosion. It implies that peak altitude declines with duration of glacial occupation, i.e., towards high latitudes. This is in strong contrast with high peaks existing in high latitude mountain ranges (e.g. Mt. St. Elias range) and the idea of peak uplift due to isostatic compensation of spatially variable erosional unloading of an over-thickened orogenic crust. In this study we investigate landscape dissection, crustal thickness and vertical strain rates in tectonically active mountain ranges to evaluate the influence of erosion on (latitudinal) variations in peak altitude. We analyze the spatial distribution of several thousand prominent peaks on Earth extracted from the global ETOPO1 digital elevation model with a novel numerical tool. We compare this dataset to crustal thickness, thickening rate (vertical strain rate) and mean elevation. We use the ratios of mean elevation to peak elevation (landscape dissection) and peak elevation to crustal thickness (long-term impact of erosion on crustal thickness) as indicators for the influence of erosional surface processes on peak uplift and the vertical strain rate as a proxy for the mechanical state of the orogen. Our analysis reveals that crustal thickness and peak elevation correlate well in orogens that have reached a mechanically limited state (vertical strain rate near zero) where plate convergence is already balanced by lateral extrusion and gravitational collapse and plateaus are formed. On the Tibetan Plateau crustal thickness serves to predict peak elevation up to an altitude

  12. LNG production for peak shaving operations

    SciTech Connect

    Price, B.C.

    1999-07-01

    LNG production facilities are being developed as an alternative or in addition to underground storage throughout the US to provide gas supply during peak gas demand periods. These facilities typically involve a small liquefaction unit with a large LNG storage tank and gas sendout facilities capable of responding to peak loads during the winter. Black and Veatch is active in the development of LNG peak shaving projects for clients using a patented mixed refrigerant technology for efficient production of LNG at a low installed cost. The mixed refrigerant technology has been applied in a range of project sizes both with gas turbine and electric motor driven compression systems. This paper will cover peak shaving concepts as well as specific designs and projects which have been completed to meet this market need.

  13. Helping System Engineers Bridge the Peaks

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen

    2014-01-01

    In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.

  14. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  15. Numerical Magnitude Processing in Children with Mild Intellectual Disabilities

    ERIC Educational Resources Information Center

    Brankaer, Carmen; Ghesquiere, Pol; De Smedt, Bert

    2011-01-01

    The present study investigated numerical magnitude processing in children with mild intellectual disabilities (MID) and examined whether these children have difficulties in the ability to represent numerical magnitudes and/or difficulties in the ability to access numerical magnitudes from formal symbols. We compared the performance of 26 children…

  16. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  17. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  18. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  19. 48 CFR 1852.236-74 - Magnitude of requirement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Magnitude of requirement... 1852.236-74 Magnitude of requirement. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...

  20. Sign-And-Magnitude Up/Down Counter

    NASA Technical Reports Server (NTRS)

    Cole, Steven W.

    1991-01-01

    Magnitude-and-sign counter includes conventional up/down counter for magnitude part and special additional circuitry for sign part. Negative numbers indicated more directly. Counter implemented by programming erasable programmable logic device (EPLD) or programmable logic array (PLA). Used in place of conventional up/down counter to provide sign and magnitude values directly to other circuits.
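
    A behavioral sketch of the counting rule only; the actual device is implemented in an EPLD or PLA, so this Python model merely illustrates how the magnitude counts toward or away from zero depending on the stored sign.

      class SignMagnitudeCounter:
          def __init__(self, width=4):
              self.max_mag = (1 << width) - 1
              self.sign = 0          # 0 = non-negative, 1 = negative
              self.mag = 0

          def count(self, up=True):
              if up:
                  if self.sign and self.mag:       # negative value: move toward zero
                      self.mag -= 1
                      if self.mag == 0:
                          self.sign = 0
                  else:                            # zero or positive: move away from zero
                      self.sign = 0
                      self.mag = min(self.mag + 1, self.max_mag)
              else:
                  if self.sign == 0 and self.mag:  # positive value: move toward zero
                      self.mag -= 1
                  else:                            # zero or negative: move away from zero
                      self.sign = 1
                      self.mag = min(self.mag + 1, self.max_mag)

          @property
          def value(self):
              return -self.mag if self.sign else self.mag

      c = SignMagnitudeCounter()
      for _ in range(3):
          c.count(up=False)          # -1, -2, -3
      for _ in range(5):
          c.count(up=True)           # -2, -1, 0, +1, +2
      print(c.value)                 # 2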

  1. Symbolic Magnitude Modulates Perceptual Strength in Binocular Rivalry

    ERIC Educational Resources Information Center

    Paffen, Chris L. E.; Plukaard, Sarah; Kanai, Ryota

    2011-01-01

    Basic aspects of magnitude (such as luminance contrast) are directly represented by sensory representations in early visual areas. However, it is unclear how symbolic magnitudes (such as Arabic numerals) are represented in the brain. Here we show that symbolic magnitude affects binocular rivalry: perceptual dominance of numbers and objects of…

  2. Absolute Plate Velocities from Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Zheng, Lin; Gordon, Richard

    2015-04-01

    The orientation of seismic anisotropy inferred beneath plate interiors may provide a means to estimate the motions of the plate relative to the sub-asthenospheric mantle. Here we analyze two global sets of shear-wave splitting data, that of Kreemer [2009] and an updated and expanded data set, to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. We also explore the effect of using geologically current plate velocities (i.e., the MORVEL set of angular velocities [DeMets et al. 2010]) compared with geodetically current plate velocities (i.e., the GSRM v1.2 angular velocities [Kreemer et al. 2014]). We demonstrate that the errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. The SKS-MORVEL absolute plate angular velocities (based on the Kreemer [2009] data set) are determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2° ) differs insignificantly from that for continental lithosphere (σ=21.6° ). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4° ) than for continental

  3. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.

  4. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20% of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.

  5. Absolute determination of local tropospheric OH concentrations

    NASA Technical Reports Server (NTRS)

    Armerding, Wolfgang; Comes, Franz-Josef

    1994-01-01

    Long path absorption (LPA) according to the Lambert-Beer law is a method to determine absolute concentrations of trace gases such as tropospheric OH. We have developed an LPA instrument which is based on rapid tuning of the light source, a frequency-doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm(exp -1)/microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity, the laser output power is additionally made constant and stabilized by an electro-optical modulator. The present sensitivity is of the order of a few times 10(exp 5) OH per cm(exp 3) for an acquisition time of a minute and an absorption path length of only 1200 meters, so that a folding of the optical path in a multireflection cell was possible, leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made. Tropospheric measurements were carried out in 1991, resulting in the determination of the OH diurnal variation on specific days in late summer. Comparisons with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self-generation in the multireflection cell is of minor extent. This could be shown by using different experimental methods. The minimum-maximum signal-to-noise ratio is about 8 x 10(exp -4) for a single scan. Due to the small size of the absorption cell, the realization of an open-air laboratory is possible in which, by use of an additional UV light source or by additional fluxes of trace gases, the chemistry can be changed under controlled conditions, allowing kinetic studies of tropospheric photochemistry to be made in open air.
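
    The retrieval step at the heart of long-path absorption is the Lambert-Beer relation, N = -ln(I/I0) / (sigma * L). A minimal sketch of that step follows; the cross section and path length used are illustrative placeholder values, not numbers taken from the record.

        import numpy as np

        def oh_number_density(I, I0, sigma_cm2=1.5e-16, path_cm=1200e2):
            """Retrieve a path-averaged absorber number density (cm^-3) from
            transmitted (I) and incident (I0) intensities via Lambert-Beer.
            sigma_cm2 and path_cm are illustrative placeholders, not values
            quoted in the record."""
            optical_depth = -np.log(I / I0)
            return optical_depth / (sigma_cm2 * path_cm)

        # Example: a 2e-5 fractional absorption over a 1200 m folded path
        print(oh_number_density(I=0.99998, I0=1.0))   # ~1.1e6 cm^-3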

  6. Absolute Radiometric Calibration of KOMPSAT-3A

    NASA Astrophysics Data System (ADS)

    Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.

    2016-06-01

    This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to determine radiometrically calibrated target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation with a comparison of KOMPSAT-3A and Landsat-8 TOA reflectance using one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for multispectral bands. The average difference in TOA reflectance between KOMPSAT-3A and Landsat-8 images over the Libya 4, Libya site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band, the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that can be strongly absorbed by water vapor and therefore displayed low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.
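
    The conversion chain described above (calibrated DN to at-sensor radiance, then to top-of-atmosphere reflectance) is commonly written as L = gain*DN + offset and rho = pi*L*d^2 / (ESUN*cos(theta_s)). The sketch below uses assumed, illustrative coefficients; the actual KOMPSAT-3A gains, offsets, and solar irradiance values are not given in the record.

        import math

        def dn_to_radiance(dn, gain, offset):
            """Convert a calibrated digital number to at-sensor radiance
            (W m^-2 sr^-1 um^-1); gain and offset are sensor- and band-specific."""
            return gain * dn + offset

        def toa_reflectance(radiance, esun, sun_elev_deg, earth_sun_dist_au=1.0):
            """Standard TOA reflectance formula; esun is the band-averaged
            exoatmospheric solar irradiance (W m^-2 um^-1)."""
            theta_s = math.radians(90.0 - sun_elev_deg)   # solar zenith angle
            return (math.pi * radiance * earth_sun_dist_au ** 2) / (esun * math.cos(theta_s))

        # Illustrative numbers only; these are not KOMPSAT-3A calibration values.
        L = dn_to_radiance(dn=850, gain=0.05, offset=1.2)
        print(toa_reflectance(L, esun=1850.0, sun_elev_deg=55.0))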

  7. Double peak sensory responses: effects of capsaicin.

    PubMed

    Aprile, I; Tonali, P; Stalberg, E; Di Stasio, E; Caliandro, P; Foschini, M; Vergili, G; Padua, L

    2007-10-01

    The aim of this study is to verify whether degeneration of skin receptors or intradermal nerve endings by topical application of capsaicin modifies the double peak response obtained by submaximal anodal stimulation. Five healthy volunteers topically applied capsaicin to the finger-tip of digit III (on the distal phalanx) four times daily for 4-5 weeks. Before and after local capsaicin applications, we studied the following electrophysiological findings: compound sensory action potential (CSAP), double peak response, sensory threshold and double peak stimulus intensity. Local capsaicin application causes disappearance or decrease of the second component of the double peak, which gradually increases after the suspension of capsaicin. Conversely, no significant differences were observed for CSAP, sensory threshold and double peak stimulus intensity. This study suggests that the second component of the double peak may be a diagnostic tool suitable to show an impairment of the extreme segments of sensory nerve fibres in distal sensory axonopathy in the early stages of damage, when receptors or skin nerve endings are impaired but undetectable by standard nerve conduction studies.

  8. Near-Infrared Absolute Photometry of the Uranian Satellites

    NASA Astrophysics Data System (ADS)

    Momary, T. W.; Baines, K. H.; Yanamandra-Fisher, P.; Lebofsky, L. A.; Golisch, W.

    1996-09-01

    We report the first absolutely-calibrated photometry of the Uranian satellites Miranda, Ariel, and Titania, in canonical near-infrared filters. These satellites were observed in July, August, and September of 1995, with the NSFCAM instrument at the NASA/IRTF. Results are reported for J, H, and K filters near 1.26, 1.62, and 2.21 mu m, and two special ~ 0.15-mu m-wide filters placed at 1.73 and 2.27 mu m. We measure an opposition surge for Miranda in the near-infrared of at least 0.48 mag/deg between phase angles of 1.0deg and 0.6deg , compared to a much shallower 0.015 +/- 0.006 mag/deg surge reported by Buratti et al. (Icarus 84, 203-214, 1990) for the visible. Miranda, which is brighter than Titania throughout the visible (Karkoschka et al., Icarus submitted), becomes the darker of the two satellites in the near- infrared, being some 20.5% dimmer than Titania in H, 8.8% dimmer in 1.73 mu m, and 9.1% dimmer in K. All three satellites are brightest at 1.73 mu m, with Ariel being fully 1/3 brighter than Miranda or Titania, whereas the three satellites are evenly spaced in albedo at 0.7 mu m, in the visible (Ariel being 15% brighter than Miranda, which is in turn 15% brighter than Titania). Specifically, Ariel reaches a peak full disk albedo of 0.4161 +/- 0.0125 for 1.0deg phase at 1.73 mu m. By comparison, the peak albedos of Miranda and Titania are only 0.2730 +/- 0.0082 and 0.2969 +/- 0.0089, respectively, at this wavelength (though these latter observations were at 2.4deg phase). Continuing the trend seen in the visible, Ariel is the brightest of the Uranian satellites throughout the near-infrared. Finally, all three satellites show a distinct increase in full-disk albedo between H and 1.73 mu m filters, on the order of 20%, which is the expected signature of water ice, in agreement with spectra taken by Brown and Cruikshank (Icarus 55, 83-92, 1983).

  9. Near-Infrared Absolute Photometry of the Saturnian Satellites

    NASA Astrophysics Data System (ADS)

    Momary, T. W.; Baines, K. H.; Yanamandra-Fisher, P. A.; Lebofsky, L. A.; Golisch, W.; Kaminski, C.

    1998-09-01

    We report absolutely-calibrated photometry of the Saturnian satellites in canonical near-infrared filters, including the first such spectrum of the leading side of Enceladus. The satellites were observed during Ring Plane Crossing in August and September of 1995 with the NSFCAM instrument at the NASA/IRTF. These observations were also simultaneous with those of the Uranian system, taken with the same instrument and filters, and analyzed by Baines et al. (Icarus 132, 266-284, 1998). Results are reported for J, H, and K filters near 1.27, 1.62, and 2.20 mu m, and two 0.1 mu m-wide filters centered at 1.73 and 2.27 mu m. We find that Enceladus has a peak brightness at 1.27 mu m with a geometric albedo of 0.898 +/- 0.063, in contrast to the Uranian satellites Miranda, Ariel, and Titania, which are relatively dim at this wavelength (albedos of roughly 0.3). The J-H band depth of Enceladus is about 30%, characteristic of spectra of Rhea, Tethys, and the trailing side of Iapetus, taken from Clark et al. (Icarus 58, 265-281, 1984) and convolved with our filters. By contrast, the darker Uranian satellites display a J-H band depth of less than 10%. From H to 1.73 mu m, the full-disk albedo of Enceladus increases by 27%, similar to the Uranian satellites. The dip in the Enceladus spectrum from J to H, as well as the subsequent rise from H to 1.73 mu m, are an expected signature of water ice. Finally, preliminary results for the albedos of Tethys, Dione, Rhea, and Mimas, as well as Enceladus, at 2.27 mu m compare favorably with the visible albedos of Buratti and Veverka (Icarus, 58, 254-264, 1984).

  10. Hurricane Mitch: Peak Discharge for Selected River Reachesin Honduras

    USGS Publications Warehouse

    Smith, Mark E.; Phillips, Jeffrey V.; Spahr, Norman E.

    2002-01-01

    peak discharge are based on post-flood surveys of the river channel (observed high-water marks, cross sections, and hydraulic properties) and model computation of peak discharge. Determination of the flood peaks associated with Hurricane Mitch will help scientists understand the magnitude of this devastating hurricane. Peak-discharge information also is critical for the proper design of hydraulic structures (such as bridges and levees), delineation of theoretical flood boundaries, and development of stage-discharge relations at streamflow-monitoring sites.

  11. Development of an Empirical Local Magnitude Formula for Northern Oklahoma

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Karimi, S.; Moores, A. O.

    2015-12-01

    In this paper we focus on determining a local magnitude formula for northern Oklahoma that is unbiased with distance by empirically constraining the attenuation properties within the region of interest based on the amplitude of observed seismograms. For regional networks detecting events over several hundred kilometres, distance correction terms play an important role in determining the magnitude of an event. Standard distance correction terms such as Hutton and Boore (1987) may have a significant bias with distance if applied in a region with different attenuation properties, resulting in an incorrect magnitude. We present data from a regional network of broadband seismometers installed in bedrock in northern Oklahoma. Events with magnitudes in the range of 2.0 to 4.5, distributed evenly across this network, are considered. We find that existing models show a bias with respect to hypocentral distance. Observed amplitude measurements demonstrate that there is a significant Moho bounce effect that mandates the use of a trilinear attenuation model in order to avoid bias in the distance correction terms. We present two different approaches to local magnitude calibration. The first maintains the classic definition of local magnitude as proposed by Richter. The second method calibrates local magnitude so that it agrees with moment magnitude where a regional moment tensor can be computed. To this end, regional moment tensor solutions and moment magnitudes are computed for events with magnitude larger than 3.5 to allow calibration of local magnitude to moment magnitude. For both methods the new formula results in magnitudes systematically lower than previous values computed with Eaton's (1992) model. We compare the resulting magnitudes and discuss the benefits and drawbacks of each method. Our results highlight the importance of correct calibration of the distance correction terms for accurate local magnitude assessment in regional networks.
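
    For orientation, the "standard distance correction" the authors test against is the Hutton and Boore (1987) form, in which local magnitude is the logarithm of the Wood-Anderson amplitude plus a distance-dependent term. The sketch below reproduces that baseline with the published Southern California coefficients quoted from memory; it is not the trilinear Oklahoma calibration derived in the record.

        import math

        def ml_hutton_boore(amp_mm, hypo_dist_km):
            """Local magnitude with the Hutton & Boore (1987) distance correction
            (Southern California calibration, coefficients quoted from memory as
            an illustration, not the regional model of the record).
            amp_mm: Wood-Anderson trace amplitude in mm; hypo_dist_km: hypocentral
            distance in km."""
            r = hypo_dist_km
            return (math.log10(amp_mm)
                    + 1.110 * math.log10(r / 100.0)
                    + 0.00189 * (r - 100.0)
                    + 3.0)

        print(ml_hutton_boore(amp_mm=0.5, hypo_dist_km=120.0))   # ~ M 2.8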

  12. Historical changes in annual peak flows in Maine and implications for flood-frequency analyses

    USGS Publications Warehouse

    Hodgkins, Glenn A.

    2010-01-01

    To safely and economically design bridges, culverts, and other structures that are in or near streams (fig. 1 for example), it is necessary to determine the magnitude of peak streamflows such as the 100-year flow. Flood-frequency analyses use statistical methods to compute peak flows for selected recurrence intervals (100 years, for example). The recurrence interval is the average number of years between peak flows that are equal to or greater than a specified peak flow. Flood-frequency analyses are based on annual peak flows at a stream. It has long been assumed that annual peak streamflows are stationary over very long periods of time, except in river basins subject to urbanization, regulation, and other direct human activities. Stationarity is the concept that natural systems fluctuate within an envelope of variability that does not change over time (Milly and others, 2008). Because of the potential effects of global warming on peak flows, the assumption of peak-flow stationarity has recently been questioned (Milly and others, 2008). Maine has many streamgaging stations with 50 to 105 years of recorded annual peak streamflows. This long-term record has been tested for historical flood-frequency stationarity, to provide some insight into future flood frequency (Hodgkins, 2010). This fact sheet, prepared by the U.S. Geological Survey (USGS) in cooperation with the Maine Department of Transportation (MaineDOT), provides a partial summary of the results of the study by Hodgkins (2010).
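
    The recurrence-interval concept in this fact sheet can be illustrated with the textbook Weibull plotting position, T = (n + 1) / m, where m is the rank of an annual peak in a record of n years. The sketch below is that generic estimator only; formal USGS flood-frequency analyses instead fit a log-Pearson Type III distribution to the annual peaks.

        def empirical_recurrence_intervals(annual_peaks):
            """Weibull plotting-position estimate of recurrence intervals (years).
            Returns (peak, T) pairs, largest peak first. Illustrative estimator,
            not the log-Pearson III procedure used in formal flood-frequency work."""
            n = len(annual_peaks)
            ranked = sorted(annual_peaks, reverse=True)
            return [(q, (n + 1) / rank) for rank, q in enumerate(ranked, start=1)]

        # Hypothetical annual peak flows (cubic feet per second)
        peaks_cfs = [4200, 8900, 3100, 12750, 5600, 7300, 2800, 6100]
        for q, t in empirical_recurrence_intervals(peaks_cfs):
            print(f"{q:6d} cfs  ~ {t:4.1f}-yr recurrence")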

  13. A framework for accurate determination of the T₂ distribution from multiple echo magnitude MRI images.

    PubMed

    Bai, Ruiliang; Koay, Cheng Guan; Hutchinson, Elizabeth; Basser, Peter J

    2014-07-01

    Measurement of the T2 distribution in tissues provides biologically relevant information about normal and abnormal microstructure and organization. Typically, the T2 distribution is obtained by fitting the magnitude MR images acquired by a multi-echo MRI pulse sequence using an inverse Laplace transform (ILT) algorithm. It is well known that the ideal magnitude MR signal follows a Rician distribution. Unfortunately, studies attempting to establish the validity and efficacy of the ILT algorithm assume that these input signals are Gaussian distributed. Violation of the normality (or Gaussian) assumption introduces unexpected artifacts, including spurious cerebrospinal fluid (CSF)-like long T2 components; bias of the true geometric mean T2 values and in the relative fractions of various components; and blurring of nearby T2 peaks in the T2 distribution. Here we apply and extend our previously proposed magnitude signal transformation framework to map noisy Rician-distributed magnitude multi-echo MRI signals into Gaussian-distributed signals with high accuracy and precision. We then perform an ILT on the transformed data to obtain an accurate T2 distribution. Additionally, we demonstrate, by simulations and experiments, that this approach corrects the aforementioned artifacts in magnitude multi-echo MR images over a large range of signal-to-noise ratios.
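
    A common way to realize the inverse Laplace transform step described above is non-negative least squares over a fixed grid of T2 values. The sketch below shows that basic multi-exponential inversion on synthetic data with Gaussian noise; it deliberately omits the Rician-to-Gaussian signal transformation that is the paper's actual contribution.

        import numpy as np
        from scipy.optimize import nnls

        # Synthetic multi-echo decay: two T2 pools (50 ms and 200 ms)
        te = np.arange(10, 330, 10.0)                         # echo times, ms
        signal = 0.6 * np.exp(-te / 50.0) + 0.4 * np.exp(-te / 200.0)
        signal += np.random.normal(0, 0.005, te.size)         # idealized Gaussian noise

        # Dictionary of decay curves on a log-spaced T2 grid
        t2_grid = np.logspace(np.log10(10), np.log10(1000), 120)   # ms
        A = np.exp(-te[:, None] / t2_grid[None, :])

        # Non-negative least squares inversion (unregularized sketch)
        weights, _ = nnls(A, signal)
        print("recovered T2 components near:", t2_grid[weights > 0.05 * weights.max()])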

  14. Mapping numerical magnitudes along the right lines: differentiating between scale and bias.

    PubMed

    Karolis, Vyacheslav; Iuculano, Teresa; Butterworth, Brian

    2011-11-01

    Previous investigations on the subjective scale of numerical representations assumed that the scale type can be inferred directly from stimulus-response mapping. This is not a valid assumption, as mapping from the subjective scale into behavior may be nonlinear and/or distorted by response bias. Here we present a method for differentiating between logarithmic and linear hypotheses robust to the effect of distorting processes. The method exploits the idea that a scale is defined by transformational rules and that combinatorial operations with stimulus magnitudes should be closed under admissible transformations on the subjective scale. The method was implemented with novel variants of the number line task. In the line-marking task, participants marked the position of an Arabic numeral within an interval defined by various starting numbers and lengths. In the line construction task, participants constructed an interval given its part. Two alternative approaches to the data analysis, numerical and analytical, were used to evaluate the linear and log components. Our results are consistent with the linear hypothesis about the subjective scale with responses affected by a bias to overestimate small magnitudes and underestimate large magnitudes. We also observed that in the line-marking task, participants tended to overestimate as the interval start increased, and in the line construction task, they tended to overconstruct as the interval length increased. This finding suggests that magnitudes were encoded differently in the 2 tasks: in terms of their absolute magnitudes in the line-marking task and in terms of numerical differences in the line construction task.

  15. Predicting Peak Flows following Forest Fires

    NASA Astrophysics Data System (ADS)

    Elliot, William J.; Miller, Mary Ellen; Dobre, Mariana

    2016-04-01

    Following forest fires, peak flows in perennial and ephemeral streams often increase by a factor of 10 or more. This increase in peak flow rate may overwhelm existing downstream structures, such as road culverts, causing serious damage to road fills at stream crossings. In order to predict peak flow rates following wildfires, we have applied two different tools. One is based on the USDA Natural Resources Conservation Service Curve Number (CN) method, and the other applies the Water Erosion Prediction Project (WEPP) model to the watershed. In our presentation, we will describe the science behind the two methods, and present the main variables for each model. We will then provide an example of a comparison of the two methods to a fire-prone watershed upstream of the City of Flagstaff, Arizona, USA, where a fire spread model was applied for current fuel loads, and for likely fuel loads following a fuel reduction treatment. When applying the curve number method, determining the time to peak flow can be problematic for low severity fires because the runoff flow paths are both surface and through shallow lateral flow. The WEPP watershed version incorporates shallow lateral flow into stream channels. However, the version of the WEPP model that was used for this study did not have channel routing capabilities, but rather relied on regression relationships to estimate peak flows from individual hillslope polygon peak runoff rates. We found that the two methods gave similar results if applied correctly, with the WEPP predictions somewhat greater than the CN predictions. Later releases of the WEPP model have incorporated alternative methods for routing peak flows that need to be evaluated.
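
    For reference, the CN method mentioned above computes direct storm runoff as Q = (P - 0.2S)^2 / (P + 0.8S) with S = 1000/CN - 10 (inches), and post-fire analyses typically raise the curve number over the burned area. The sketch below is the generic SCS/NRCS relation with invented curve numbers, not the authors' comparison against WEPP.

        def scs_runoff_inches(precip_in, curve_number):
            """SCS/NRCS curve-number direct runoff (inches) for a storm depth
            in inches, using the standard initial abstraction Ia = 0.2*S."""
            s = 1000.0 / curve_number - 10.0
            ia = 0.2 * s
            if precip_in <= ia:
                return 0.0
            return (precip_in - ia) ** 2 / (precip_in + 0.8 * s)

        # Same 3-inch storm, unburned vs. severely burned hillslope (illustrative CNs)
        print(scs_runoff_inches(3.0, 70))   # ~0.71 in
        print(scs_runoff_inches(3.0, 90))   # ~1.98 in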

  16. Absolute peptide quantification by lutetium labeling and nanoHPLC-ICPMS with isotope dilution analysis.

    PubMed

    Rappel, Christina; Schaumlöffel, Dirk

    2009-01-01

    The need for analytical methods for absolute quantitative protein analysis spurred research on new developments in recent years. In this work, a novel approach was developed for accurate absolute peptide quantification based on metal labeling with lutetium diethylenetriamine pentaacetic acid (Lu-DTPA) and nanoflow high-performance liquid chromatography-inductively coupled plasma isotope dilution mass spectrometry (nanoHPLC-ICP-IDMS). In a two-step procedure, peptides were derivatized at amino groups with diethylenetriamine pentaacetic anhydride (DTPAA) followed by chelation of lutetium. Electrospray ionization mass spectrometry (ESI MS) of the reaction product demonstrated highly specific peptide labeling. Under optimized nanoHPLC conditions the labeled peptides were baseline-separated, and the excess labeling reagent did not interfere. A 176Lu-labeled spike was continuously added to the column effluent for quantification by ICP-IDMS. The recovery of a Lu-DTPA-labeled standard peptide was close to 100%, indicating high labeling efficiency and accurate absolute quantification. The precision of the entire method was 4.9%. The detection limit for Lu-DTPA-tagged peptides was 179 amol, demonstrating that lutetium-specific peptide quantification was 4 orders of magnitude more sensitive than detection via the natural sulfur atoms present in cysteine or methionine residues. Furthermore, the application to peptides in an insulin tryptic digest allowed the identification of interfering reagents decreasing the labeling efficiency. An additional advantage of this novel approach is the analysis of peptides that do not naturally feature ICPMS-detectable elements.

  17. Robust control design with real parameter uncertainty using absolute stability theory. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    How, Jonathan P.; Hall, Steven R.

    1993-01-01

    The purpose of this thesis is to investigate an extension of mu theory for robust control design by considering systems with linear and nonlinear real parameter uncertainties. In the process, explicit connections are made between mixed mu and absolute stability theory. In particular, it is shown that the upper bounds for mixed mu are a generalization of results from absolute stability theory. Both state space and frequency domain criteria are developed for several nonlinearities and stability multipliers using the wealth of literature on absolute stability theory and the concepts of supply rates and storage functions. The state space conditions are expressed in terms of Riccati equations and parameter-dependent Lyapunov functions. For controller synthesis, these stability conditions are used to form an overbound of the H2 performance objective. A geometric interpretation of the equivalent frequency domain criteria in terms of off-axis circles clarifies the important role of the multiplier and shows that both the magnitude and phase of the uncertainty are considered. A numerical algorithm is developed to design robust controllers that minimize the bound on an H2 cost functional and satisfy an analysis test based on the Popov stability multiplier. The controller and multiplier coefficients are optimized simultaneously, which avoids the iteration and curve-fitting procedures required by the D-K procedure of mu synthesis. Several benchmark problems and experiments on the Middeck Active Control Experiment at M.I.T. demonstrate that these controllers achieve good robust performance and guaranteed stability bounds.

  18. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    SciTech Connect

    Ogawa, H.S.; McMullin, D.; Judge, D.L. ); Canfield, L.R. )

    1990-04-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme ultraviolet photon flux in the spectral region between 50 and 800 Å. The detector was flown aboard a solar-pointing sounding rocket launched from White Sands Missile Range in New Mexico on October 24, 1988. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 x 10^10 photons cm^-2 s^-1. Based on a nominal probable error of 7% for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-Å region (5% on longer wavelength measurements between 500 and 1216 Å), and based on experimental errors associated with their rocket instrumentation and analysis, a conservative total error estimate of ~14% is assigned to the absolute integral solar flux obtained.

  19. Determination of Absolute Zero Using a Computer-Based Laboratory

    ERIC Educational Resources Information Center

    Amrani, D.

    2007-01-01

    We present a simple computer-based laboratory experiment for evaluating absolute zero in degrees Celsius, which can be performed in college and undergraduate physical sciences laboratory courses. With a computer, absolute zero apparatus can help demonstrators or students to observe the relationship between temperature and pressure and use…

  20. A Global Forecast of Absolute Poverty and Employment.

    ERIC Educational Resources Information Center

    Hopkins, M. J. D.

    1980-01-01

    Estimates are made of absolute poverty and employment under the hypothesis that existing trends continue. Concludes that while the number of people in absolute poverty is not likely to decline by 2000, the proportion will fall. Jobs will have to grow 3.9% per year in developing countries to achieve full employment. (JOW)

  1. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.
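
    One way to see the mechanism described above is a transmission model in which the basic reproductive number declines exponentially with specific humidity, the general form used in humidity-forced SIRS models. The parameter values below are illustrative placeholders, not the fitted values from the study.

        import numpy as np

        def r0_from_humidity(q, a=-180.0, r0_max=2.2, r0_min=1.2):
            """Humidity-forced basic reproductive number, following the exponential
            form used in humidity-driven SIRS models:
            R0(t) = exp(a*q + log(R0_max - R0_min)) + R0_min.
            q is specific humidity (kg/kg); all parameter values are illustrative."""
            return np.exp(a * q + np.log(r0_max - r0_min)) + r0_min

        # Dry winter air vs. humid summer air (typical mid-latitude values)
        print(r0_from_humidity(0.003))   # ~1.8, above the epidemic threshold
        print(r0_from_humidity(0.015))   # ~1.3, transmission suppressed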

  2. Novalis' Poetic Uncertainty: A "Bildung" with the Absolute

    ERIC Educational Resources Information Center

    Mika, Carl

    2016-01-01

    Novalis, the Early German Romantic poet and philosopher, had at the core of his work a mysterious depiction of the "absolute." The absolute is Novalis' name for a substance that defies precise knowledge yet calls for a tentative and sensitive speculation. How one asserts a truth, represents an object, and sets about encountering things…

  3. The differing magnitude distributions of the two Jupiter Trojan color populations

    SciTech Connect

    Wong, Ian; Brown, Michael E.; Emery, Joshua P.

    2014-12-01

    The Jupiter Trojans are a significant population of minor bodies in the middle solar system that have garnered substantial interest in recent years. Several spectroscopic studies of these objects have revealed notable bimodalities with respect to near-infrared spectra, infrared albedo, and color, which suggest the existence of two distinct groups among the Trojan population. In this paper, we analyze the magnitude distributions of these two groups, which we refer to as the red and less red color populations. By compiling spectral and photometric data from several previous works, we show that the observed bimodalities are self-consistent and categorize 221 of the 842 Trojans with absolute magnitudes in the range H<12.3 into the two color populations. We demonstrate that the magnitude distributions of the two color populations are distinct to a high confidence level (>95%) and fit them individually to a broken power law, with special attention given to evaluating and correcting for incompleteness in the Trojan catalog as well as incompleteness in our categorization of objects. A comparison of the best-fit curves shows that the faint-end power-law slopes are markedly different for the two color populations, which indicates that the red and less red Trojans likely formed in different locations. We propose a few hypotheses for the origin and evolution of the Trojan population based on the analyzed data.
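
    The broken power law used for such magnitude distributions is usually written for the cumulative count as N(<H) proportional to 10^(alpha1*H) brightward of a break magnitude and 10^(alpha2*H) faintward of it, matched at the break. The sketch below evaluates that generic form; the slopes and break are arbitrary illustrations, not the fitted Trojan values.

        import numpy as np

        def broken_power_law(h, norm, alpha1, alpha2, h_break):
            """Cumulative absolute-magnitude distribution N(<H) as a broken power
            law: slope alpha1 brightward of h_break, alpha2 faintward of it,
            continuous at the break. Parameter values here are purely illustrative."""
            h = np.asarray(h, dtype=float)
            bright = norm * 10.0 ** (alpha1 * h)
            faint = norm * 10.0 ** (alpha1 * h_break + alpha2 * (h - h_break))
            return np.where(h <= h_break, bright, faint)

        H = np.linspace(8.0, 12.3, 6)
        print(broken_power_law(H, norm=1e-6, alpha1=0.9, alpha2=0.45, h_break=10.0))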

  4. The Color–Magnitude Distribution of Hilda Asteroids: Comparison with Jupiter Trojans

    NASA Astrophysics Data System (ADS)

    Wong, Ian; Brown, Michael E.

    2017-02-01

    Current models of solar system evolution posit that the asteroid populations in resonance with Jupiter are comprised of objects scattered inward from the outer solar system during a period of dynamical instability. In this paper, we present a new analysis of the absolute magnitude and optical color distribution of Hilda asteroids, which lie in 3:2 mean-motion resonance with Jupiter, with the goal of comparing the bulk properties with previously published results from an analogous study of Jupiter Trojans. We report an updated power-law fit of the Hilda magnitude distribution through H = 14. Using photometric data listed in the Sloan Moving Object Catalog, we confirm the previously reported strong bimodality in visible spectral slope distribution, indicative of two subpopulations with differing surface compositions. When considering collisional families separately, we find that collisional fragments follow a unimodal color distribution with spectral slope values consistent with the bluer of the two subpopulations. The color distributions of Hildas and Trojans are comparable and consistent with a scenario in which the color bimodality in both populations developed prior to emplacement into their present-day locations. We propose that the shallower magnitude distribution of the Hildas is a result of an initially much larger Hilda population, which was subsequently depleted as smaller bodies were preferentially ejected from the narrow 3:2 resonance via collisions. Altogether, these observations provide a strong case supporting a common origin for Hildas and Trojans as predicted by current dynamical instability theories of solar system evolution.

  5. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  6. A developmental study of latent absolute pitch memory.

    PubMed

    Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren

    2017-03-01

    The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.

  7. Variation of the Tully-Fisher relation as a function of the magnitude interval of a sample of galaxies

    NASA Astrophysics Data System (ADS)

    Ruelas-Mayorga, A.; Sánchez, L. J.; Trujillo-Lara, M.; Nigoche-Netro, A.; Echevarría, J.; García, A. M.; Ramírez-Vélez, J.

    2016-10-01

    In this paper we carry out a preliminary study of the dependence of the Tully-Fisher Relation (TFR) on the width and intensity level of the absolute magnitude interval of a limited sample of 2411 galaxies taken from Mathewson and Ford (Astrophys. J. Suppl. Ser. 107:97, 1996). The galaxies in this sample do not differ significantly in morphological type, and are distributed over an ~11-magnitude interval (-24.4 < I < -13.0). We take as guides the papers by Nigoche-Netro et al. (Astron. Astrophys. 491:731, 2008; Mon. Not. R. Astron. Soc. 392:1060, 2009; Astron. Astrophys. 516:96, 2010) in which they study the dependence of the Kormendy (KR), the Fundamental Plane (FPR) and the Faber-Jackson Relations (FJR) on the magnitude interval within which the observed galaxies used to derive these relations are contained. We were able to characterise the behaviour of the TFR coefficients (α, β) with respect to the width of the magnitude interval as well as with the brightness of the galaxies within this magnitude interval. We concluded that the TFR for this specific sample of galaxies depends on observational biases caused by arbitrary magnitude cuts, which in turn depend on the width and intensity of the chosen brightness levels.
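
    The quantity tracked in such a study is how the fitted TFR coefficients (alpha, beta) in M = alpha*log W + beta drift as the absolute-magnitude interval of the sample is widened or brightened. The sketch below runs that experiment on synthetic data; the generating values are invented for illustration and are not the Mathewson and Ford measurements.

        import numpy as np

        rng = np.random.default_rng(0)
        log_w = rng.uniform(1.8, 2.8, 2411)                               # synthetic log rotation widths
        mag_i = -7.5 * log_w - 2.0 + rng.normal(0, 0.4, log_w.size)       # synthetic TFR with scatter

        def tfr_fit(mag_cut_bright, mag_cut_faint):
            """Fit M = alpha*logW + beta for galaxies inside a magnitude interval."""
            sel = (mag_i > mag_cut_bright) & (mag_i < mag_cut_faint)
            alpha, beta = np.polyfit(log_w[sel], mag_i[sel], 1)
            return alpha, beta, sel.sum()

        # Widen the faint limit and watch the fitted coefficients drift
        for faint in (-20.0, -18.0, -16.0, -13.0):
            print(faint, tfr_fit(-24.4, faint))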

  8. SPANISH PEAKS WILDERNESS STUDY AREA, COLORADO.

    USGS Publications Warehouse

    Budding, Karin E.; Kluender, Steven E.

    1984-01-01

    A geologic and geochemical investigation and a survey of mines and prospects were conducted to evaluate the mineral-resource potential of the Spanish Peaks Wilderness Study Area, Huerfano and Las Animas Counties, in south-central Colorado. Anomalous gold, silver, copper, lead, and zinc concentrations in rocks and in stream sediments from drainage basins in the vicinity of the old mines and prospects on West Spanish Peak indicate a substantiated mineral-resource potential for base and precious metals in the area surrounding this peak; however, the mineralized veins are sparse, small in size, and generally low in grade. There is a possibility that coal may underlie the study area, but it would be at great depth and it is unlikely that it would have survived the intense igneous activity in the area. There is little likelihood for the occurrence of oil and gas because of the lack of structural traps and the igneous activity.

  9. The PEAK experience in South Carolina

    SciTech Connect

    1998-11-01

    The PEAK Institute was developed to provide a linkage for formal educators (schoolteachers) and nonformal educators (extension agents) with agricultural scientists of Clemson University's South Carolina Agricultural Experiment Station System. The goal of the Institute was to enable teams of educators and researchers to develop and provide PEAK science and math learning experiences related to relevant agricultural and environmental issues of local communities for both classroom and 4-H Club experiences. The PEAK Institute was conducted as a twenty-day residential institute held in June for middle school and high school teachers who were teamed with an Extension agent from their community. These educators participated in hands-on, minds-on sessions conducted by agricultural researchers and Clemson University Cooperative Extension specialists. Participants were given the opportunity to see frontier science being conducted by scientists from a variety of agricultural laboratories.

  10. Comparison enhances size sensitivity: neural correlates of outcome magnitude processing.

    PubMed

    Luo, Qiuling; Qu, Chen

    2013-01-01

    Magnitude is a critical feature of outcomes. In the present study, two event-related potential (ERP) experiments were implemented to explore the neural substrates of outcome magnitude processing. In Experiment 1, we used an adapted gambling paradigm where physical area symbols were set to represent potential relative outcome magnitudes in order to exclude the possibility that the participants would be ignorant of the magnitudes. The context was manipulated as total monetary amount: ¥4 and ¥40. In these two contexts, the relative outcome magnitudes were ¥1 versus ¥3, and ¥10 versus ¥30, respectively. Experiment 2, which provided two area symbols with similar outcome magnitudes, was conducted to exclude the possible interpretation of physical area symbol for magnitude effect of feedback-related negativity (FRN) in Experiment 1. Our results showed that FRN responded to the relative outcome magnitude but not to the context or area symbol, with larger amplitudes for relatively small outcomes. A larger FRN effect (the difference between losses and wins) was found for relatively large outcomes than relatively small outcomes. Relatively large outcomes evoked greater positive ERP waves (P300) than relatively small outcomes. Furthermore, relatively large outcomes in a high amount context elicited a larger P300 than those in a low amount context. The current study indicated that FRN is sensitive to variations in magnitude. Moreover, relative magnitude was integrated in both the early and late stages of feedback processing, while the monetary amount context was processed only in the late stage of feedback processing.

  11. Absolute cross sections for binary-encounter electron ejection by 95-MeV/u ³⁶Ar¹⁸⁺ penetrating carbon foils

    SciTech Connect

    De Filippo, E.; Lanzano, G.; Aiello, S.; Arena, N.; Geraci, M.; Pagano, A.; Rothard, H.; Volant, C.; Anzalone, A.; Giustolisi, F.

    2003-08-01

    Doubly differential electron velocity spectra induced by 95-MeV/u ³⁶Ar¹⁸⁺ from thin carbon foils were measured at GANIL (Caen, France) by means of the ARGOS multidetector and the time-of-flight technique. The spectra allow us to determine absolute singly differential cross sections as a function of the emission angle. Absolute doubly differential cross sections for binary encounter electron ejection from C targets are compared to a transport theory, which is based on the relativistic electron impact approximation for electron production and which accounts for angular deflection, energy loss, and also energy straggling of the transmitted electrons. For the thinnest targets, the measured peak width is in good agreement with experimental data obtained with a different detection technique. The theory underestimates the peak width but provides (within a factor of 2) the correct peak intensity. For the thickest target, even the peak shape is well reproduced by theory.

  12. Separating Peaks in X-Ray Spectra

    NASA Technical Reports Server (NTRS)

    Nicolas, David; Taylor, Clayborne; Wade, Thomas

    1987-01-01

    Deconvolution algorithm assists in analysis of x-ray spectra from scanning electron microscopes, electron microprobe analyzers, x-ray fluorescence spectrometers, and like. New algorithm automatically deconvolves x-ray spectrum, identifies locations of spectral peaks, and selects chemical elements most likely producing peaks. Technique based on similarities between zero- and second-order terms of Taylor-series expansions of Gaussian distribution and of damped sinusoid. Principal advantage of algorithm: no requirement to adjust weighting factors or other parameters when analyzing general x-ray spectra.

  13. Quantification of Human Brain Metabolites from in Vivo 1H NMR Magnitude Spectra Using Automated Artificial Neural Network Analysis

    NASA Astrophysics Data System (ADS)

    Hiltunen, Yrjö; Kaartinen, Jouni; Pulkkinen, Juhani; Häkkinen, Anna-Maija; Lundbom, Nina; Kauppinen, Risto A.

    2002-01-01

    Long echo time (TE=270 ms) in vivo proton NMR spectra resembling human brain metabolite patterns were simulated for lineshape fitting (LF) and quantitative artificial neural network (ANN) analyses. A set of experimental in vivo 1H NMR spectra were first analyzed by the LF method to match the signal-to-noise ratios and linewidths of simulated spectra to those in the experimental data. The performance of constructed ANNs was compared for the peak area determinations of choline-containing compounds (Cho), total creatine (Cr), and N-acetyl aspartate (NAA) signals using both manually phase-corrected and magnitude spectra as inputs. The peak area data from ANN and LF analyses for simulated spectra yielded high correlation coefficients demonstrating that the peak areas quantified with ANN gave similar results as LF analysis. Thus, a fully automated ANN method based on magnitude spectra has demonstrated potential for quantification of in vivo metabolites from long echo time spectroscopic imaging.

  14. Relationships between peak ground acceleration, peak ground velocity, and modified mercalli intensity in California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.

    1999-01-01

    We have developed regression relationships between Modified Mercalli Intensity (Imm) and peak ground acceleration (PGA) and velocity (PGV) by comparing horizontal peak ground motions to observed intensities for eight significant California earthquakes. For the limited range of Modified Mercalli intensities (Imm), we find that for peak acceleration with V ≤ Imm ≤ VIII, Imm = 3.66 log(PGA) - 1.66, and for peak velocity with V ≤ Imm ≤ IX, Imm = 3.47 log(PGV) + 2.35. From comparison with observed intensity maps, we find that a combined regression based on peak velocity for intensity > VII and on peak acceleration for intensity < VII is most suitable for reproducing observed Imm patterns, consistent with high intensities being related to damage (proportional to ground velocity) and with lower intensities determined by felt accounts (most sensitive to higher-frequency ground acceleration). These new Imm relationships are significantly different from the Trifunac and Brady (1975) correlations, which have been used extensively in loss estimation.
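
    The regressions quoted above translate directly into a small instrumental-intensity calculator, with the combined rule (velocity-based at high intensity, acceleration-based below) applied as the abstract recommends. Units are assumed to follow the usual convention of PGA in cm/s^2 and PGV in cm/s, since the abstract does not restate them.

        import math

        def mmi_from_pga(pga_cgs):
            """Acceleration regression from the abstract; PGA in cm/s^2 (assumed units)."""
            return 3.66 * math.log10(pga_cgs) - 1.66

        def mmi_from_pgv(pgv_cgs):
            """Velocity regression from the abstract; PGV in cm/s (assumed units)."""
            return 3.47 * math.log10(pgv_cgs) + 2.35

        def combined_mmi(pga_cgs, pgv_cgs):
            """Combined rule: use the velocity regression when it indicates intensity
            of roughly VII or greater, otherwise the acceleration regression."""
            i_pgv = mmi_from_pgv(pgv_cgs)
            return i_pgv if i_pgv >= 7.0 else mmi_from_pga(pga_cgs)

        print(combined_mmi(pga_cgs=120.0, pgv_cgs=8.0))    # moderate shaking, PGA-based
        print(combined_mmi(pga_cgs=600.0, pgv_cgs=45.0))   # strong shaking, PGV-based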

  15. Expression of VO2peak in Children and Youth, with Special Reference to Allometric Scaling.

    PubMed

    Loftin, Mark; Sothern, Melinda; Abe, Takashi; Bonis, Marc

    2016-10-01

    The aim of this review was to highlight research that has focused on examining expressions of peak oxygen uptake (VO2peak) in children and youth, with special reference to allometric scaling. VO2peak is considered the highest VO2 during an increasing workload treadmill or bicycle ergometer test until volitional termination. We have reviewed scholarly works identified from PubMed, One Search, EBSCOhost and Google Scholar that examined VO2peak in absolute units (L·min(-1)), relative units [body mass, fat-free mass (FFM)], and allometric expressions [mass, height, lean body mass (LBM) or LBM of the legs raised to a power function] through July 2015. Often, the objective of measuring VO2peak is to evaluate cardiorespiratory function and fitness level. Since body size (body mass and height) frequently vary greatly in children and youth, expressing VO2peak in dimensionless units is often inappropriate for comparative or explanatory purposes. Consequently, expressing VO2peak in allometric units has gained increased research attention over the past 2 decades. In our review, scaling mass was the most frequent variable employed, with coefficients ranging from approximately 0.30 to over 1.0. The wide variance is probably due to several factors, including mass, height, LBM, sex, age, physical training, and small sample size. In summary, we recommend that since skeletal muscle is paramount for human locomotion, an allometric expression of VO2peak relative to LBM is the best expression of VO2peak in children and youth.
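
    Allometric scaling of the kind reviewed here is typically obtained by regressing log(VO2peak) on log(body mass or LBM); the fitted slope is the exponent b in VO2peak = a * mass^b. The sketch below uses made-up data, so the recovered exponent is only whatever those synthetic numbers imply, not a value from the review.

        import numpy as np

        # Synthetic children/youth data: body mass (kg) and VO2peak (L/min)
        mass = np.array([28, 33, 39, 45, 52, 60, 68, 75], dtype=float)
        vo2_peak = 0.11 * mass ** 0.85 * np.exp(np.random.default_rng(1).normal(0, 0.05, mass.size))

        # Fit log(VO2peak) = log(a) + b*log(mass); b is the allometric exponent
        b, log_a = np.polyfit(np.log(mass), np.log(vo2_peak), 1)
        print(f"allometric exponent b ~ {b:.2f}, a ~ {np.exp(log_a):.3f}")

        # The scaled expression removes the mass dependence left by simple ratio standards
        print(vo2_peak / mass ** b)   # roughly constant across body sizes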

  16. Absolute cross-sections for DNA strand breaks and crosslinks induced by low energy electrons.

    PubMed

    Chen, Wenzhuang; Chen, Shiliang; Dong, Yanfang; Cloutier, Pierre; Zheng, Yi; Sanche, Léon

    2016-12-07

    Absolute cross sections (CSs) for the interaction of low energy electrons with condensed macromolecules are essential parameters to accurately model ionizing radiation induced reactions. To determine CSs for various conformational DNA damage induced by 2-20 eV electrons, we investigated the influence of the attenuation length (AL) and penetration factor (f) using a mathematical model. Solid films of supercoiled plasmid DNA with thicknesses of 10, 15 and 20 nm were irradiated with 4.6, 5.6, 9.6 and 14.6 eV electrons. DNA conformational changes were quantified by gel electrophoresis, and the respective yields were extrapolated from exposure-response curves. The absolute CS, AL and f values were generated by applying the model developed by Rezaee et al. The values of AL were found to lie between 11 and 16 nm with the maximum at 14.6 eV. The absolute CSs for the loss of the supercoiled (LS) configuration and production of crosslinks (CL), single strand breaks (SSB) and double strand breaks (DSB) induced by 4.6, 5.6, 9.6 and 14.6 eV electrons are obtained. The CSs for SSB are smaller, but similar to those for LS, indicating that SSB are the main conformational damage. The CSs for DSB and CL are about one order of magnitude smaller than those of LS and SSB. The value of f is found to be independent of electron energy, which allows extending the absolute CSs for these types of damage within the range 2-20 eV, from previous measurements of effective CSs. When comparison is possible, the absolute CSs are found to be in good agreement with those obtained from previous similar studies with double-stranded DNA. The high values of the absolute CSs of 4.6 and 9.6 eV provide quantitative evidence for the high efficiency of low energy electrons to induce DNA damage via the formation of transient anions.

  17. Interpretation of the peak areas in gamma-ray spectra that have a large relative uncertainty.

    PubMed

    Korun, M; Maver Modec, P; Vodenik, B

    2012-06-01

    Empirical evidence is provided that the areas of peaks having a relative uncertainty in excess of 30% are overestimated. This systematic influence is of a statistical nature and originates in the way the peak-analyzing routine recognizes the small peaks. It is not easy to detect this influence since it is smaller than the peak-area uncertainty. However, the systematic influence can be revealed in repeated measurements under the same experimental conditions, e.g., in background measurements. To evaluate the systematic influence, background measurements were analyzed with the peak-analyzing procedure described by Korun et al. (2008). The magnitude of the influence depends on the relative uncertainty of the peak area and may amount, in the conditions used in the peak analysis, to a factor of 5 at relative uncertainties exceeding 60%. From the measurements, the probability for type-II errors, as a function of the relative uncertainty of the peak area, was extracted. This probability is near zero below an uncertainty of 30% and rises to 90% at uncertainties exceeding 50%.

  18. OccuPeak: ChIP-Seq Peak Calling Based on Internal Background Modelling

    PubMed Central

    van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the GC-content of reads in Input-seq datasets deviates significantly from that in ChIP-seq datasets. Moreover, we observed that a commonly used peak calling program performed equally well when the use of a simulated uniform background set was compared to an Input-seq dataset. This contradicts the assumption that input control datasets are necessary to faithfully reflect the background read distribution. Because the GC-content of the abundant single reads in ChIP-seq datasets is similar to that of randomly sampled regions, we designed a peak-calling algorithm with a background model based on overlapping single reads. The application, OccuPeak, uses the abundant low frequency tags present in each ChIP-seq dataset to model the background, thereby avoiding the need for additional datasets. Analysis of the performance of OccuPeak showed robust model parameters. Its measure of peak significance, the excess ratio, is only dependent on the tag density of a peak and the global noise levels. Compared to the commonly used peak-calling applications MACS and CisGenome, OccuPeak had the highest sensitivity in an enhancer identification benchmark test, and performed similarly in overlap tests of transcription factor occupation with DNase I hypersensitive sites and H3K27ac sites. Moreover, peaks called by OccuPeak were significantly enriched with cardiac disease-associated SNPs. OccuPeak runs as a standalone application and does not require extensive tweaking of parameters, making its use straightforward and user friendly. Availability: http://occupeak.hfrc.nl PMID:24936875

  19. Accurate determination of pyridine-poly(amidoamine) dendrimer absolute binding constants with the OPLS-AA force field and direct integration of radial distribution functions.

    PubMed

    Peng, Yong; Kaminski, George A

    2005-08-11

    The OPLS-AA force field and direct integration of intermolecular radial distribution functions (RDF) were employed to calculate absolute binding constants of pyridine molecules to amino group (NH2) and amide group hydrogen atoms in zeroth- and first-generation poly(amidoamine) dendrimers in chloroform. The average errors in the absolute and relative association constants, as predicted with the calculations, are 14.1% and 10.8%, respectively, which translate into ca. 0.08 and 0.06 kcal/mol errors in the absolute and relative binding free energies. We believe that this level of accuracy proves the applicability of the OPLS-AA force field, in combination with the direct RDF integration, to reproducing and predicting absolute intermolecular association constants of low magnitudes (ca. 0.2-2.0 range).
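
    A hedged sketch of the direct-integration step: integrate 4*pi*r^2*g(r) over the bound region of a site-ligand radial distribution function and convert the resulting volume per molecule into an association constant in L/mol (one common convention). The g(r) shape, cutoff radius and resulting numbers below are hypothetical.

        import numpy as np

        def trapezoid(y, x):
            """Simple trapezoidal integration, kept local for portability."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        # Hypothetical site-ligand radial distribution function g(r), r in nm
        # (e.g. pyridine nitrogen around an amine hydrogen), with a first-shell peak.
        r = np.linspace(0.05, 1.2, 400)
        g = 1.0 + 2.5 * np.exp(-((r - 0.30) / 0.05) ** 2)      # illustrative shape only

        # Integrate 4*pi*r^2*g(r) over the bound region (out to an assumed cutoff).
        r_cut = 0.45                                            # nm, first-minimum guess
        mask = r <= r_cut
        bound_volume_nm3 = trapezoid(4.0 * np.pi * r[mask] ** 2 * g[mask], r[mask])

        # Volume per molecule (nm^3) -> association constant in L/mol:
        # 1 nm^3 * N_A = 6.022e23 * 1e-24 L/mol = 0.6022 L/mol.
        K_assoc = bound_volume_nm3 * 0.6022
        dG = -1.987e-3 * 298.15 * np.log(K_assoc)               # kcal/mol, from dG = -RT ln K
        print(f"estimated K_assoc ~ {K_assoc:.2f} L/mol  (dG ~ {dG:+.2f} kcal/mol)")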

  20. Accurate Determination of Pyridine -- Poly (Amidoamine) Dendrimer Absolute Binding Constants with the OPLS-AA Force Field and Direct Integration of Radial Distribution Functions

    NASA Astrophysics Data System (ADS)

    Peng, Yong; Kaminski, George

    2006-03-01

    The OPLS-AA force field and direct integration of intermolecular radial distribution functions (RDF) were employed to calculate absolute binding constants of pyridine molecules to NH2 and amide group hydrogen atoms in 0th and 1st generation poly(amidoamine) dendrimers in chloroform. The average errors in the absolute and relative association constants, as predicted with the calculations, are 14.1% and 10.8%, respectively, which translate into ca. 0.08 kcal/mol and 0.06 kcal/mol errors in the absolute and relative binding free energies. We believe that this level of accuracy proves the applicability of the OPLS-AA force field, in combination with the direct RDF integration, to reproducing and predicting absolute intermolecular association constants of low magnitudes (ca. 0.2-2.0 range).

  1. Updating the Magnitudes of the Planets in The Astronomical Almanac

    DTIC Science & Technology

    2003-01-01

    USNO/AA Technical Note 2003-04, Updating the Magnitudes of the Planets in The Astronomical Almanac, by James L. Hilton. The content of this Tech...the magnitudes of Mercury and Venus used in the AsA 2005 and 2006.

  2. Correlated peak relative light intensity and peak current in triggered lightning subsequent return strokes

    NASA Technical Reports Server (NTRS)

    Idone, V. P.; Orville, R. E.

    1985-01-01

    The correlation between peak relative light intensity L(R) and stroke peak current I(R) is examined for 39 subsequent return strokes in two triggered lightning flashes. One flash contained 19 strokes and the other 20 strokes for which direct measurements were available of the return stroke peak current at ground. Peak currents ranged from 1.6 to 21 kA. The measurements of peak relative light intensity were obtained from photographic streak recordings using calibrated film and microsecond resolution. Correlations, significant at better than the 0.1 percent level, were found for several functional relationships. Although a relation between L(R) and I(R) is evident in these data, none of the analytical relations considered is clearly favored. The correlation between L(R) and the maximum rate of current rise is also examined, but less correlation than between L(R) and I(R) is found. In addition, the peak relative intensity near ground is evaluated for 22 dart leaders, and a mean ratio of peak dart leader to peak return stroke relative light intensity was found to be 0.1 with a range of 0.02-0.23. Using two different methods, the peak current near ground in these dart leaders is estimated to range from 0.1 to 6 kA.
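
    One of the functional relationships commonly tried for such data is a power law, L = a*I^b, fitted by linear regression in log-log space; the sketch below performs that fit on invented peak-current and light-intensity pairs, not the actual triggered-lightning measurements.

        import numpy as np

        # Hypothetical subsequent-return-stroke data: peak current (kA) and
        # peak relative light intensity (arbitrary units).
        I_peak = np.array([1.6, 2.4, 3.5, 5.0, 7.2, 9.8, 12.5, 16.0, 21.0])
        L_peak = np.array([0.9, 1.6, 2.8, 4.3, 7.1, 10.5, 14.0, 19.5, 27.0])

        # Fit L = a * I^b by least squares on log10(L) = log10(a) + b*log10(I).
        b, log_a = np.polyfit(np.log10(I_peak), np.log10(L_peak), 1)
        a = 10.0 ** log_a

        # Linear correlation coefficient of the log-log data.
        rho = np.corrcoef(np.log10(I_peak), np.log10(L_peak))[0, 1]
        print(f"L ~ {a:.2f} * I^{b:.2f}   (log-log correlation r = {rho:.3f})")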

  3. Peak Wind Tool for General Forecasting

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Short, David

    2008-01-01

    This report describes work done by the Applied Meteorology Unit (AMU) in predicting peak winds at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron requested the AMU develop a tool to help them forecast the speed and timing of the daily peak and average wind, from the surface to 300 ft on KSC/CCAFS during the cool season. Based on observations from the KSC/CCAFS wind tower network, Shuttle Landing Facility (SLF) surface observations, and CCAFS soundings from the cool season months of October 2002 to February 2007, the AMU created multiple linear regression equations to predict the timing and speed of the daily peak wind speed, as well as the background average wind speed. Several possible predictors were evaluated, including persistence, the temperature inversion depth and strength, wind speed at the top of the inversion, wind gust factor (ratio of peak wind speed to average wind speed), synoptic weather pattern, occurrence of precipitation at the SLF, and strongest wind in the lowest 3000 ft, 4000 ft, or 5000 ft.
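
    A hedged sketch of the kind of multiple linear regression described above, with invented predictor values (persistence, inversion strength and low-level wind are among the candidate predictors listed) and coefficients obtained by ordinary least squares rather than the AMU's actual equations.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200  # hypothetical cool-season days

        # Invented predictors: previous day's peak speed (persistence, kt),
        # inversion strength (deg C), and strongest wind in the lowest 3000 ft (kt).
        persistence = rng.uniform(10, 35, n)
        inversion_strength = rng.uniform(0, 8, n)
        low_level_wind = rng.uniform(10, 45, n)

        # Synthetic "observed" daily peak wind speed (kt) with noise.
        peak_speed = (5.0 + 0.6 * persistence + 0.8 * inversion_strength
                      + 0.5 * low_level_wind + rng.normal(0, 3, n))

        # Ordinary least-squares fit of the multiple linear regression.
        X = np.column_stack([np.ones(n), persistence, inversion_strength, low_level_wind])
        coef, *_ = np.linalg.lstsq(X, peak_speed, rcond=None)
        print("intercept and coefficients:", np.round(coef, 2))

        # Predict the peak wind speed for one hypothetical forecast day.
        x_new = np.array([1.0, 22.0, 4.5, 30.0])
        print(f"predicted peak wind speed: {x_new @ coef:.1f} kt")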

  4. Absorption, Creativity, Peak Experiences, Empathy, and Psychoticism.

    ERIC Educational Resources Information Center

    Mathes, Eugene W.; And Others

    Tellegen and Atkinson suggested that the trait of absorption may play a part in meditative skill, creativity, capacity for peak experiences, and empathy. Although the absorption-meditative skill relationship has been confirmed, other predictions have not been tested. Tellegen and Atkinson's Absorption Scale was completed by undergraduates in four…

  5. Some Phenomenological Aspects of the Peak Experience

    ERIC Educational Resources Information Center

    Rosenblatt, Howard S.; Bartlett, Iris

    1976-01-01

    This article relates the psychological dynamics of "peak experiences" to two concepts, intentionality and paradoxical intention, within the philosophical orientation of phenomenology. A review of early philosophical theories of self (Kant and Hume) is presented and compared with the experiential emphasis found in the phenomenology of Husserl.…

  6. Avoiding the False Peaks in Correlation Discrimination

    SciTech Connect

    Awwal, A S

    2009-07-31

    Fiducials imprinted on laser beams are used to perform video image based alignment of the 192 laser beams in the National Ignition Facility (NIF) of Lawrence Livermore National Laboratory. In many video images, matched filtering is used to detect the location of these fiducials. Generally, the highest correlation peak is used to determine the position of the fiducials. However, when the signal to be detected is very weak compared to the noise, this approach breaks down. The highest peaks act as traps for false detection. The active target images used for automatic alignment in the National Ignition Facility are examples of such images. In these images, the fiducials of interest exhibit extremely low intensity and contrast, surrounded by high intensity reflection from metallic objects. Consequently, the highest correlation peaks are caused by these bright objects. In this work, we show how the shape of the correlation peak is exploited to isolate the valid matches from hundreds of invalid correlation peaks, and therefore identify extremely faint fiducials under very challenging imaging conditions.
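
    A minimal sketch of the general idea: locate candidate correlation maxima with a matched filter, then re-rank them by how closely the local correlation surface resembles the template's autocorrelation rather than by raw peak height. The image, template and shape criterion below are assumptions for illustration, not the NIF algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical alignment image: noise, a bright metallic reflection, and a
        # faint Gaussian fiducial centered near (row, col) = (80, 30).
        img = rng.normal(0.0, 1.0, (128, 128))
        img[25:45, 90:115] += 8.0                                # bright clutter
        yy, xx = np.mgrid[0:128, 0:128]
        img += 3.0 * np.exp(-(((yy - 80) ** 2 + (xx - 30) ** 2) / (2.0 * 2.0 ** 2)))

        # Matched filter: a 13x13 Gaussian template correlated with the image by FFT.
        d = np.arange(-6, 7)
        t = np.exp(-((d[:, None] ** 2 + d[None, :] ** 2) / (2.0 * 2.0 ** 2)))
        T = np.zeros_like(img)
        T[:13, :13] = t
        T = np.roll(T, (-6, -6), axis=(0, 1))                    # template center at (0, 0)
        corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(T))))

        # Expected shape of a valid match: the template's autocorrelation (for a
        # Gaussian of width sigma this is another Gaussian of width sigma*sqrt(2)).
        expected = np.exp(-((d[:, None] ** 2 + d[None, :] ** 2)
                            / (2.0 * (2.0 * np.sqrt(2.0)) ** 2)))

        def shape_score(c, y, x):
            """Pearson correlation between the local correlation patch and the
            expected peak shape; compact, template-like peaks score near 1."""
            patch = c[(y + d[:, None]) % c.shape[0], (x + d[None, :]) % c.shape[1]]
            return np.corrcoef(patch.ravel(), expected.ravel())[0, 1]

        # Candidates = the highest raw correlation pixels; re-rank them by shape.
        cand = np.argsort(corr, axis=None)[-1000:]
        ys, xs = np.unravel_index(cand, corr.shape)
        scores = np.array([shape_score(corr, y, x) for y, x in zip(ys, xs)])

        print("highest raw correlation at", (ys[-1], xs[-1]), "-> inside the bright clutter")
        print("best shape score at       ", (ys[np.argmax(scores)], xs[np.argmax(scores)]),
              "-> expected at the faint fiducial near (80, 30)")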

  7. Spanish Peaks, Sangre de Cristo Range, Colorado

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Spanish Peaks, on the eastern flank of the Sangre de Cristo range, abruptly rise 7,000 feet above the western Great Plains. Settlers, treasure hunters, trappers, gold and silver miners have long sighted on these prominent landmarks along the Taos branch of the Santa Fe trail. Well before the westward migration, the mountains figured in the legends and history of the Ute, Apache, Comanche, and earlier tribes. 'Las Cumbres Espanolas' are also mentioned in chronicles of exploration by Spaniards including Ulibarri in 1706 and later by de Anza, who eventually founded San Francisco (California). This exceptional view (STS108-720-32), captured by the crew of Space Shuttle mission STS108, portrays the Spanish Peaks in the context of the southern Rocky Mountains. Uplift of the Sangre de Cristo began about 75 million years ago and produced the long north-trending ridges of faulted and folded rock to the west of the paired peaks. After uplift had ceased (26 to 22 million years ago), the large masses of igneous rock (granite, granodiorite, syenodiorite) that form the Peaks were emplaced (Penn, 1995-2001). East and West Spanish Peaks are 'stocks'-bodies of molten rock that intruded sedimentary layers, cooled and solidified, and were later exposed by erosion. East Peak (E), at 12,708 ft is almost circular and is about 5 1/2 miles long by 3 miles wide, while West Peak (W), at 13,623 ft is roughly 2 3/4 miles long by 1 3/4 miles wide. Great dikes-long stone walls-radiate outward from the mountains like spokes of a wheel, a prominent one forms a broad arc northeast of East Spanish Peak. As the molten rock rose, it forced its way into vertical cracks and joints in the sedimentary strata; the less resistant material was then eroded away, leaving walls of hard rock from 1 foot to 100 feet wide, up to 100 feet high, and as long as 14 miles. Dikes trending almost east-west are also common in the region. For more information visit: Sangres.com: The Spanish Peaks (accessed January 16

  8. Inversion of Multi-Station Schumann Resonance Background Records for Global Lightning Activity in Absolute Units

    NASA Astrophysics Data System (ADS)

    Williams, E. R.; Mushtak, V. C.; Guha, A.; Boldi, R. A.; Bor, J.; Nagy, T.; Satori, G.; Sinha, A. K.; Rawat, R.; Hobara, Y.; Sato, M.; Takahashi, Y.; Price, C. G.; Neska, M.; Alexander, K.; Yampolski, Y.; Moore, R. C.; Mitchell, M. F.; Fraser-Smith, A. C.

    2014-12-01

    Every lightning flash contributes energy to the TEM mode of the natural global waveguide that contains the Earth's Schumann resonances. The modest attenuation at ELF (0.1 dB/Mm) allows for the continuous monitoring of the global lightning with a small number of receiving stations worldwide. In this study, nine ELF receiving sites (in Antarctica (3 sites), Hungary, India, Japan, Poland, Spitsbergen and USA) are used to provide power spectra at 12-minute intervals in two absolutely calibrated magnetic fields and occasionally, one electric field, with up to five resonance modes each. The observables are the extracted modal parameters (peak intensity, peak frequency and Q-factor) for each spectrum. The unknown quantities are the geographical locations of three continental lightning 'chimneys' and their lightning source strengths in absolute units (C2 km2/sec). The unknowns are calculated from the observables by the iterative inversion of an evolving 'sensitivity matrix' whose elements are the partial derivatives of each observable for all receiving sites with respect to each unknown quantity. The propagation model includes the important day-night asymmetry of the natural waveguide. To overcome the problem of multiple minima (common in inversion problems of this kind), location information from the World Wide Lightning Location Network has been used to make initial guess solutions based on centroids of stroke locations in each chimney. Results for five consecutive days in 2009 (Jan 7-11) show UT variations with the African chimney dominating on four of five days, and America dominating on the fifth day. The amplitude variations in absolute source strength exceed that of the 'Carnegie curve' of the DC global circuit by roughly twofold. Day-to-day variations in chimney source strength are of the order of tens of percent. Examination of forward calculations performed with the global inversion solution often show good agreement with the observed diurnal variations at
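
    The iterative inversion can be sketched generically as a Gauss-Newton loop in which the sensitivity matrix of partial derivatives is built numerically and used to update the unknowns; the toy forward model, station longitudes and starting values below are placeholders, not the day-night-asymmetric ELF propagation model.

        import numpy as np

        STATION_LON = np.array([-150.0, -90.0, -30.0, 30.0, 90.0, 150.0])  # hypothetical

        def forward(params):
            """Toy forward model mapping unknowns (two chimney longitudes and source
            strengths) to observables (one intensity per station). A smooth stand-in
            only; the real model is the day-night-asymmetric ELF waveguide."""
            lon1, lon2, s1, s2 = params
            return (s1 * np.exp(-((STATION_LON - lon1) / 50.0) ** 2)
                    + s2 * np.exp(-((STATION_LON - lon2) / 50.0) ** 2))

        # Synthetic "measurements" from known true unknowns, plus noise.
        rng = np.random.default_rng(3)
        true = np.array([20.0, -60.0, 5.0, 3.0])
        data = forward(true) + rng.normal(0.0, 0.05, STATION_LON.size)

        # Iterative inversion: Gauss-Newton with a numerically built sensitivity matrix.
        x = np.array([0.0, -40.0, 4.0, 4.0])    # initial guess (e.g. from WWLLN centroids)
        for _ in range(25):
            residual = data - forward(x)
            J = np.empty((data.size, x.size))   # sensitivity matrix d(observable)/d(unknown)
            for j in range(x.size):
                dx = np.zeros_like(x)
                dx[j] = 1e-4 * max(1.0, abs(x[j]))
                J[:, j] = (forward(x + dx) - forward(x)) / dx[j]
            step, *_ = np.linalg.lstsq(J, residual, rcond=None)
            x = x + step

        print("true unknowns:     ", true)
        print("recovered unknowns:", np.round(x, 2))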

  9. Absolute reliability of isokinetic knee flexion and extension measurements adopting a prone position.

    PubMed

    Ayala, F; De Ste Croix, M; Sainz de Baranda, P; Santonja, F

    2013-01-01

    The main purpose of this study was to determine the absolute and relative reliability of isokinetic peak torque (PT), angle of peak torque (APT), average power (PW) and total work (TW) for knee flexion and extension during concentric and eccentric actions measured in a prone position at 60, 180 and 240° s(-1). A total of 50 recreational athletes completed the study. PT, APT, PW and TW for concentric and eccentric knee extension and flexion were recorded at three different angular velocities (60, 180 and 240° s(-1)) on three different occasions with a 72- to 96-h rest interval between consecutive testing sessions. Absolute reliability was examined through typical percentage error (CV(TE)), percentage change in the mean (ChM) and relative reliability with intraclass correlations (ICC(3,1)). For both the knee extensor and flexor muscle groups, all strength data (except APT during knee flexion movements) demonstrated moderate absolute reliability (ChM < 3%; ICCs > 0·70; and CV(TE) < 20%) independent of the knee movement (flexion and extension), type of muscle action (concentric and eccentric) and angular velocity (60, 180 and 240° s(-1)). Therefore, the current study suggests that the CV(TE) values reported for PT (8-20%), APT (8-18%) (only during knee extension movements), PW (14-20%) and TW (12-28%) may be acceptable to detect the large changes usually observed after rehabilitation programmes, but not acceptable to examine the effect of preventative training programmes in healthy individuals.
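
    For reference, the reliability statistics named above can be computed from two testing sessions as sketched below; the torque values are invented, the typical error follows the usual SD-of-differences/sqrt(2) definition, and ICC(3,1) is taken from a two-way ANOVA decomposition.

        import numpy as np

        # Hypothetical peak torque (N*m) for 10 athletes on two testing sessions.
        s1 = np.array([182.0, 150.0, 210.0, 175.0, 198.0, 160.0, 225.0, 190.0, 170.0, 205.0])
        s2 = np.array([188.0, 148.0, 205.0, 180.0, 204.0, 158.0, 230.0, 186.0, 175.0, 200.0])

        # Change in the mean (ChM), as a percentage of the session-1 mean.
        chm = 100.0 * (s2.mean() - s1.mean()) / s1.mean()

        # Typical error = SD of between-session differences / sqrt(2); as a
        # percentage of the grand mean this gives CV_TE.
        typical_error = (s2 - s1).std(ddof=1) / np.sqrt(2.0)
        cv_te = 100.0 * typical_error / np.concatenate([s1, s2]).mean()

        # ICC(3,1) from a two-way (subjects x sessions) decomposition, k = 2 trials.
        data = np.column_stack([s1, s2])
        n, k = data.shape
        grand = data.mean()
        ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
        ss_sessions = n * ((data.mean(axis=0) - grand) ** 2).sum()
        ss_total = ((data - grand) ** 2).sum()
        ms_subjects = ss_subjects / (n - 1)
        ms_error = (ss_total - ss_subjects - ss_sessions) / ((n - 1) * (k - 1))
        icc_31 = (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

        print(f"ChM = {chm:+.2f} %   CV_TE = {cv_te:.2f} %   ICC(3,1) = {icc_31:.3f}")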

  10. Mini-implants and miniplates generate sub-absolute and absolute anchorage.

    PubMed

    Consolaro, Alberto

    2014-01-01

    The functional demand imposed on bone promotes changes in the spatial properties of osteocytes as well as in their extensions uniformly distributed throughout the mineralized surface. Once spatial deformation is established, osteocytes create the need for structural adaptations that result in bone formation and resorption that happen to meet the functional demands. The endosteum and the periosteum are the effectors responsible for stimulating adaptive osteocytes in the inner and outer surfaces. Changes in shape, volume and position of the jaws as a result of skeletal correction of the maxilla and mandible require anchorage to allow bone remodeling to redefine morphology, esthetics and function as a result of spatial deformation conducted by orthodontic appliances. Examining the degree of changes in shape, volume and structural relationship of areas where mini-implants and miniplates are placed allows us to classify mini-implants as devices of subabsolute anchorage and miniplates as devices of absolute anchorage.

  11. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s2 and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s show higher estimates than corresponding values for intensity I = VII and acceleration a = 300 cm/s2. It is also observed that differences in perceptible magnitudes calculated by the Kijko-Sellevoll method and GIII statistics show significantly high values, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from Mw 5.1 to 7.7 in the entire zone of the study area. Results of perceptible magnitudes are also represented in the form of spatial maps in 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from two recurrence models. The spatial maps show that the Quetta of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes (M

  12. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
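
    A heavily simplified sketch of the departure-time idea: compare the absolute P-wave displacement against a common reference growth curve and take Tdp as the first time the record deviates by more than a tolerance. The reference curve, noise level and tolerance are assumptions; the band-pass filtering and magnitude-distance binning of the actual analysis are omitted.

        import numpy as np

        fs = 100.0                                   # sampling rate (Hz), assumed
        t = np.arange(0.0, 6.0, 1.0 / fs)

        # Reference "similar growth" curve of the absolute P-wave displacement at
        # onset (an illustrative shape only).
        reference = 2e-5 * t ** 2

        # Synthetic record that follows the common growth and then departs, as a
        # smaller event stops growing while a larger one would keep growing.
        record = np.where(t < 1.5, reference, 2e-5 * 1.5 ** 2 + 1e-6 * (t - 1.5))
        record = record + np.random.default_rng(4).normal(0.0, 2e-7, t.size)

        def departure_time(x, ref, time, tol=0.25, floor=5e-6):
            """First time the record deviates from the reference growth curve by
            more than a fractional tolerance (an assumed criterion)."""
            valid = ref > floor                      # skip the near-zero first samples
            dev = np.abs(x - ref) / np.maximum(ref, floor)
            hit = np.nonzero(valid & (dev > tol))[0]
            return time[hit[0]] if hit.size else np.nan

        print(f"departure time Tdp ~ {departure_time(record, reference, t):.2f} s after onset")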

  13. Absolute dose verifications in small photon fields using BANG(TM) gel

    NASA Astrophysics Data System (ADS)

    Scheib, S. G.; Schenkel, Y.; Gianolini, S.

    2004-01-01

    Polymer gel dosimeters change their magnetic resonance (MR) and optical properties with the absorbed dose when irradiated and are suitable for narrow photon beam dosimetry in radiosurgery. Such dosimeters enable relative and absolute 3D dose verifications in order to check the entire treatment chain from imaging to dose application during commissioning and quality assurance. For absolute 3D dose verifications in radiosurgery using Gamma Knife B, commercially available BANG(TM) gels (BANG 25 Gy and BANG 3 Gy) together with dedicated phantoms were chosen in order to determine the potential of absolute gel dosimetry in radiosurgery.

  14. Measuring the absolute magnetic field using high-Tc SQUID

    NASA Astrophysics Data System (ADS)

    He, D. F.; Itozaki, H.

    2006-06-01

    A SQUID can normally measure only changes in the magnetic field, not its absolute value. Using a compensation method, a mobile SQUID, which could stay locked while moving in the Earth's magnetic field, was developed. Using the mobile SQUID, it was possible to measure the absolute magnetic field. The absolute value of the magnetic field could be calculated from the change in the compensation output when the direction of the SQUID in the field was changed. Using this method and the mobile SQUID, we successfully measured the Earth's magnetic field in our laboratory.

  15. Absolute Antenna Calibration at the US National Geodetic Survey

    NASA Astrophysics Data System (ADS)

    Mader, G. L.; Bilich, A. L.

    2012-12-01

    Geodetic GNSS applications routinely demand millimeter precision and extremely high levels of accuracy. To achieve these accuracies, measurement and instrument biases at the centimeter to millimeter level must be understood. One of these biases is the antenna phase center, the apparent point of signal reception for a GNSS antenna. It has been well established that phase center patterns differ between antenna models and manufacturers; additional research suggests that the addition of a radome or the choice of antenna mount can significantly alter those a priori phase center patterns. For the more demanding GNSS positioning applications and especially in cases of mixed-antenna networks, it is all the more important to know antenna phase center variations as a function of both elevation and azimuth in the antenna reference frame and incorporate these models into analysis software. Determination of antenna phase center behavior is known as "antenna calibration". Since 1994, NGS has computed relative antenna calibrations for more than 350 antennas. In recent years, the geodetic community has moved to absolute calibrations - the IGS adopted absolute antenna phase center calibrations in 2006 for use in their orbit and clock products, and NGS's CORS group began using absolute antenna calibration upon the release of the new CORS coordinates in IGS08 epoch 2005.00 and NAD 83(2011,MA11,PA11) epoch 2010.00. Although NGS relative calibrations can be and have been converted to absolute, it is considered best practice to independently measure phase center characteristics in an absolute sense. Consequently, NGS has developed and operates an absolute calibration system. These absolute antenna calibrations accommodate the demand for greater accuracy and for 2-dimensional (elevation and azimuth) parameterization. NGS will continue to provide calibration values via the NGS web site www.ngs.noaa.gov/ANTCAL, and will publish calibrations in the ANTEX format as well as the legacy ANTINFO

  16. On the Absolute Age of the Metal-rich Globular M71 (NGC 6838). I. Optical Photometry

    NASA Astrophysics Data System (ADS)

    Di Cecco, A.; Bono, G.; Prada Moroni, P. G.; Tognelli, E.; Allard, F.; Stetson, P. B.; Buonanno, R.; Ferraro, I.; Iannicola, G.; Monelli, M.; Nonino, M.; Pulone, L.

    2015-08-01

    We investigated the absolute age of the Galactic globular cluster M71 (NGC 6838) using optical ground-based images (u′, g′, r′, i′, z′) collected with the MegaCam camera at the Canada-France-Hawaii Telescope (CFHT). We performed a robust selection of field and cluster stars by applying a new method based on the 3D (r′, u′ - g′, g′ - r′) color-color-magnitude diagram. A comparison between the color-magnitude diagram (CMD) of the candidate cluster stars and a new set of isochrones at the locus of the main sequence turn-off (MSTO) suggests an absolute age of 12 ± 2 Gyr. The absolute age was also estimated using the difference in magnitude between the MSTO and the so-called main sequence knee (MSK), a well-defined bending occurring in the lower main sequence. This feature was originally detected in the near-infrared bands and explained as a consequence of an opacity mechanism (collisionally induced absorption of molecular hydrogen) in the atmosphere of cool low-mass stars. The same feature was also detected in the r′, u′ - g′ and in the r′, g′ - r′ CMDs, thus supporting previous theoretical predictions by Borysow et al. The key advantage in using the Δ_TO^Knee as an age diagnostic is that it is independent of uncertainties affecting the distance, the reddening, and the photometric zero point. We found an absolute age of 12 ± 1 Gyr that agrees, within the errors, with similar age estimates, but the uncertainty is on average a factor of two smaller. We also found that the Δ_TO^Knee is more sensitive to the metallicity than the MSTO, but the dependence vanishes when using the difference in color between the MSK and the MSTO.

  17. On the impact of the magnitude of Interstellar pressure on physical properties of Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Anathpindika, S.; Burkert, A.; Kuiper, R.

    2017-01-01

    Recently reported variations in the typical physical properties of Galactic and extra-Galactic molecular clouds (MCs), and in their star-forming ability, have been attributed to local variations in the magnitude of interstellar pressure. Inferences from these surveys have called into question two long-standing beliefs: (1) that MCs are Virialised, and (2) that they obey Larson's third law. Here we invoked the framework of cloud-formation via collision between warm gas-flows to examine if these latest observational inferences can be reconciled. To this end we traced the temporal evolution of the gas surface density, the fraction of dense gas, the distribution of gas column density (N-PDF), and the Virial nature of the assembled clouds. We conclude that these physical properties exhibit temporal variation and that their respective peak magnitudes also increase in proportion to the magnitude of external pressure, Pext. The velocity dispersion in assembled clouds appears to follow the power law σ_gas ∝ P_ext^0.23. The power-law tail at higher densities becomes shallower with increasing magnitude of external pressure for Pext/kB ≲ 10^7 K cm^-3; at higher magnitudes, such as those typically found in the Galactic CMZ (Pext/kB > 10^7 K cm^-3), the power law shows significant steepening. While our results are broadly consistent with inferences from various recent observational surveys, it appears that MCs do not exhibit a unique set of properties, but rather a wide variety that can be reconciled with a range of pressure magnitudes between 10^4 K cm^-3 and 10^8 K cm^-3.

  18. Comparison of local magnitude scales in Central Europe

    NASA Astrophysics Data System (ADS)

    Kysel, Robert; Kristek, Jozef; Moczo, Peter; Cipciar, Andrej; Csicsay, Kristian; Srbecky, Miroslav; Kristekova, Miriam

    2015-04-01

    Efficient monitoring of earthquakes and determination of their magnitudes are necessary for developing earthquake catalogues at regional and national levels. Unification and homogenization of the catalogues in terms of magnitudes has great importance for seismic hazard assessment. Calibrated local earthquake magnitude scales are commonly used for determining magnitudes of regional earthquakes by all national seismological services in Central Europe. However, at the local scale, each seismological service uses its own magnitude determination procedure. There is no systematic comparison of the approaches and there is no unified procedure. We present a comparison of the local magnitude scales used by the national seismological services of Slovakia (Geophysical Institute, Slovak Academy of Sciences), Czech Republic (Institute of Geophysics, Academy of Sciences of the Czech Republic), Austria (ZAMG), Hungary (Geodetic and Geophysical Institute, Hungarian Academy of Sciences) and Poland (Institute of Geophysics, Polish Academy of Sciences), and by the local network of seismic stations located around the Nuclear Power Plant Jaslovske Bohunice, Slovakia. The comparison is based on the national earthquake catalogues and annually published earthquake bulletins for the period from 1985 to 2011. A data set of earthquakes has been compiled based on identification of common events in the national earthquake catalogues and bulletins. For each pair of seismic networks, magnitude differences have been determined and investigated as a function of time. The mean and standard deviations of the magnitude differences as well as regression coefficients between local magnitudes from the national seismological networks have been computed. Results show a relatively large scatter between the different national local magnitudes, as well as considerable variation of this scatter with time. A 1:1 conversion between different national local magnitudes seems inappropriate, especially for the compilation of the
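
    A small sketch of the comparison step for one pair of networks: match common events, compute the mean and standard deviation of the magnitude differences, and fit a regression between the two local magnitude scales (all values invented).

        import numpy as np

        # Hypothetical local magnitudes of the same events reported by two networks.
        ml_net_a = np.array([1.8, 2.4, 2.9, 3.3, 3.7, 4.1, 2.1, 2.6, 3.0, 3.5])
        ml_net_b = np.array([1.6, 2.3, 3.1, 3.2, 3.9, 4.4, 1.9, 2.8, 2.9, 3.8])

        diff = ml_net_b - ml_net_a
        print(f"mean difference: {diff.mean():+.2f}  std: {diff.std(ddof=1):.2f}")

        # Ordinary least-squares regression ML_b = c0 + c1 * ML_a.
        c1, c0 = np.polyfit(ml_net_a, ml_net_b, 1)
        print(f"ML_b ~ {c0:+.2f} + {c1:.2f} * ML_a")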

  19. Magnitude and Frequency of Floods in New York

    USGS Publications Warehouse

    Lumia, Richard; Freehafer, Douglas A.; Smith, Martyn J.

    2006-01-01

    Techniques are presented for estimating the magnitude and frequency of flood discharges on rural, unregulated streams in New York, excluding Long Island. Peak-discharge-frequency data and basin characteristics from 388 streamflow-gaging stations in New York and adjacent states were used to develop multiple linear regression equations for flood discharges with recurrence intervals ranging from 1.25 to 500 years. A generalized least-squares (GLS) procedure was used to develop the regression equations. Separate sets of equations were developed for each of six hydrologic regions of New York; standard errors of prediction range from 14 to 43 percent. Statistically significant explanatory variables in the regression equations include drainage area, main-channel slope, percent basin storage, mean annual precipitation, percent forested area, a basin lag factor, a ratio of main-channel slope to basin slope, mean annual runoff, maximum snow depth, and percentage of basin above 1,200 feet. Drainage areas for the 388 sites used in the analyses ranged from 0.41 to 4,773 square miles. Methods of computing flood discharges from the regression equations differ, depending on whether the estimate is for a gaged or ungaged basin, and whether the basin crosses hydrologic-region or state boundaries. Examples of computations are included. Discharge-frequency estimates for an additional 122 streamflow-gaging stations with significant regulation or urbanization (including Long Island) are also included as at-site estimates. Basin characteristics, log-Pearson Type III statistics, and regression and weighted estimates of the discharge-frequency relations are tabulated for the streamflow-gaging stations used in the regression analyses. Sensitivity analyses showed that mean-annual precipitation, drainage area, mean annual runoff, and maximum snow depth are the variables to which computed discharges are most sensitive in the regression equations. Included with the report is a DVD that provides

  20. Estimating Seismic Hazards from the Catalog of Taiwan Earthquakes from 1900 to 2014 in Terms of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Chen, Kuei-Pao; Chang, Wen-Yen

    2017-02-01

    Maximum expected earthquake magnitude is an important parameter when designing mitigation measures for seismic hazards. This study calculated the maximum magnitude of potential earthquakes for each cell in a 0.1° × 0.1° grid of Taiwan. Two zones vulnerable to maximum magnitudes of Mw ≥ 6.0, which will cause extensive building damage, were identified: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan; the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b values, which are consistent with high peak ground shaking. We also employed an innovative method to calculate (at intervals of Mw 0.5) the bounds and median of recurrence time for earthquakes of magnitude Mw 6.0-8.0 in Taiwan.

  1. Dosimetric response of radiochromic films to protons of low energies in the Bragg peak region

    NASA Astrophysics Data System (ADS)

    Battaglia, M. C.; Schardt, D.; Espino, J. M.; Gallardo, M. I.; Cortés-Giraldo, M. A.; Quesada, J. M.; Lallena, A. M.; Miras, H.; Guirado, D.

    2016-06-01

    One of the major advantages of proton or ion beams, applied in cancer treatment, is their excellent depth-dose profile exhibiting a low dose in the entrance channel and a distinct dose maximum (Bragg peak) near the end of range in tissue. In the region of the Bragg peak, where the protons or ions are almost stopped, experimental studies with low-energy particle beams and thin biological samples may contribute valuable information on the biological effectiveness in the stopping region. Such experiments, however, require beam optimization and special dosimetry techniques for determining the absolute dose and dose homogeneity for very thin biological samples. At the National Centre of Accelerators in Seville, one of the beam lines at the 3 MV Tandem Accelerator was equipped with a scattering device, a special parallel-plate ionization chamber with very thin electrode foils and target holders for cell cultures. In this work, we present the calibration in absolute dose of EBT3 films [Gafchromic radiotherapy films, http://www.ashland.com/products/gafchromic-radiotherapy-films] for proton energies in the region of the Bragg peak, where the linear energy transfer increases and becomes more significant for radiobiology studies, as well as the response of the EBT3 films for different proton energy values. To irradiate the films in the Bragg peak region, the energy of the beam was degraded passively, by interposing Mylar foils of variable thickness to place the Bragg peak inside the active layer of the film. The results obtained for the beam degraded in Mylar foils are compared with the dose calculated by means of the measurement of the beam fluence with an ionization chamber and the energy loss predicted by srim2008 code.

  2. Comparison of a kayaking ergometer protocol with an arm crank protocol for evaluating peak oxygen consumption.

    PubMed

    Forbes, Scott C; Chilibeck, Philip D

    2007-11-01

    The purpose of this study was to compare a kayak ergometer protocol with an arm crank protocol for determining peak oxygen consumption (VO2). On separate days in random order, 10 men and 5 women (16-24 years old) with kayaking experience completed the kayak ergometer protocol and a standardized arm crank protocol. The kayak protocol began at 70 strokes per minute and increased by 10 strokes per minute every 2 minutes until volitional fatigue. The arm crank protocol consisted of a crank rate of 70 revolutions per minute, initial loading of 35 W and subsequent increases of 35 W every 2 minutes until volitional fatigue. The results showed a significant difference (p < 0.01) between the kayak ergometer and the arm crank protocols for relative peak VO2 (47.5 +/- 3.9 ml x kg(-1) x min(-1) vs. 44.2 +/- 6.2 ml x kg(-1) x min(-1)) and absolute peak VO2 (3.38 +/- 0.53 L x min(-1) vs. 3.14 +/- 0.64 L x min(-1)). The correlation between kayak and arm crank protocol was 0.79 and 0.90, for relative and absolute VO2 peak, respectively (both p < 0.01). The higher peak VO2 on the kayak ergometer may be due to the greater muscle mass involved compared to the arm crank ergometer. The kayak ergometer protocol may therefore be more specific to the sport of kayaking than an arm crank protocol.

  3. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…

  4. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  5. Reinforcement Magnitude: An Evaluation of Preference and Reinforcer Efficacy

    ERIC Educational Resources Information Center

    Trosclair-Lasserre, Nicole M.; Lerman, Dorothea C.; Call, Nathan A.; Addison, Laura R.; Kodak, Tiffany

    2008-01-01

    Consideration of reinforcer magnitude may be important for maximizing the efficacy of treatment for problem behavior. Nonetheless, relatively little is known about children's preferences for different magnitudes of social reinforcement or the extent to which preference is related to differences in reinforcer efficacy. The purpose of the current…

  6. Some Effects of Magnitude of Reinforcement on Persistence of Responding

    ERIC Educational Resources Information Center

    McComas, Jennifer J.; Hartman, Ellie C.; Jimenez, Angel

    2008-01-01

    The influence of magnitude of reinforcement was examined on both response rate and behavioral persistence. During Phase 1, a multiple schedule of concurrent reinforcement was implemented in which reinforcement for one response option was held constant at VI 30 s across both components, while magnitude of reinforcement for the other response option…

  7. Congruency Effects between Number Magnitude and Response Force

    ERIC Educational Resources Information Center

    Vierck, Esther; Kiesel, Andrea

    2010-01-01

    Numbers are thought to be represented in space along a mental left-right oriented number line. Number magnitude has also been associated with the size of grip aperture, which might suggest a connection between number magnitude and intensity. The present experiment aimed to confirm this possibility more directly by using force as a response…

  8. The Weight of Time: Affordances for an Integrated Magnitude System

    ERIC Educational Resources Information Center

    Lu, Aitao; Mo, Lei; Hodges, Bert H.

    2011-01-01

    In five experiments we explored the effects of weight on time in different action contexts to test the hypothesis that an integrated magnitude system is tuned to affordances. Larger magnitudes generally seem longer; however, Lu and colleagues (2009) found that if numbers were presented as weights in a range heavy enough to affect lifting, the…

  9. Gibbs Paradox Revisited from the Fluctuation Theorem with Absolute Irreversibility

    NASA Astrophysics Data System (ADS)

    Murashita, Yûto; Ueda, Masahito

    2017-02-01

    The inclusion of the factor ln(1/N!) in the thermodynamic entropy proposed by Gibbs is shown to be equivalent to the validity of the fluctuation theorem with absolute irreversibility for gas mixing.

  10. Absolute Value Boundedness, Operator Decomposition, and Stochastic Media and Equations

    NASA Technical Reports Server (NTRS)

    Adomian, G.; Miao, C. C.

    1973-01-01

    The research accomplished during this period is reported. Published abstracts and technical reports are listed. Articles presented include: boundedness of absolute values of generalized Fourier coefficients, propagation in stochastic media, and stationary conditions for stochastic differential equations.

  11. Effect of gear ratio on peak power and time to peak power in BMX cyclists.

    PubMed

    Rylands, Lee P; Roberts, Simon J; Hurst, Howard T

    2017-03-01

    The aim of this study was to ascertain if gear ratio selection would have an effect on peak power and time to peak power production in elite Bicycle Motocross (BMX) cyclists. Eight male elite BMX riders volunteered for the study. Each rider performed three 10-s maximal sprints on an Olympic standard indoor BMX track. The riders' bicycles were fitted with a portable SRM power meter. Each rider performed the three sprints using gear ratios of 41/16, 43/16 and 45/16 tooth. The results from the 41/16 and 45/16 gear ratios were compared to the current standard 43/16 gear ratio. Statistically significant differences were found between the gear ratios for peak power (F(2,14) = 6.448; p = .010) and peak torque (F(2,14) = 4.777; p = .026), but no significant difference was found for time to peak power (F(2,14) = 0.200; p = .821). When comparing gear ratios, the results showed a 45/16 gear ratio elicited the highest peak power, 1658 ± 221 W, compared to 1436 ± 129 W and 1380 ± 56 W, for the 43/16 and 41/16 ratios, respectively. Relative to the 43/16 ratio, the 41/16 tooth gear ratio attained peak power 0.01 s earlier and the 45/16 ratio 0.22 s later. The findings of this study suggest that gear ratio choice has a significant effect on peak power production, though time to peak power output is not significantly affected. Therefore, selecting a higher gear ratio results in riders attaining higher power outputs without reducing their start time.

  12. Enhancement of Raman scattering by two orders of magnitude using photonic nanojet of a microsphere

    NASA Astrophysics Data System (ADS)

    Dantham, V. R.; Bisht, P. B.; Namboodiri, C. K. R.

    2011-05-01

    Enhancement of the Raman signal is observed on excitation through a single microsphere. The dependence of the enhancement ratio (ER) on various parameters, viz. the numerical aperture (NA) of the microscopic objective lens, pump wavelength, and the size and refractive index of the microsphere, has been studied. The enhancement is explained by the increased field of the photonic nanojet emerging from the single microsphere. The photonic-nanojet-induced ER of Raman peaks of silicon wafer and cadmium ditelluride is reported here. It is observed for the first time that by suitable selection of the experimental parameters, it is possible to enhance the Raman signal by approximately two orders of magnitude.

  13. Status of the Frisco Peak Observatory

    NASA Astrophysics Data System (ADS)

    Ricketts, Paul; Springer, Wayne; Dawson, Kyle; Kieda, Dave; Gondolo, Paolo; Bolton, Adam

    2009-10-01

    The University of Utah has constructed an astronomical observatory located at an elevation of approximately 9600 feet on Frisco Peak, west of Milford, Utah. This site was chosen after performing a survey of potential observatory sites throughout Southern Utah. At the time of writing this abstract, the dome and control buildings have been completed. Installation of a 32'' telescope manufactured by DFM Engineering is scheduled to start October 5, 2009. Commissioning of the telescope will take place this fall. A study of the photometric quality of the observatory site will be performed as well. A description of the observatory site survey and the construction and commissioning of the Frisco Peak Observatory will be presented.

  14. Peak oil, food systems, and public health.

    PubMed

    Neff, Roni A; Parker, Cindy L; Kirschenmann, Frederick L; Tinch, Jennifer; Lawrence, Robert S

    2011-09-01

    Peak oil is the phenomenon whereby global oil supplies will peak, then decline, with extraction growing increasingly costly. Today's globalized industrial food system depends on oil for fueling farm machinery, producing pesticides, and transporting goods. Biofuels production links oil prices to food prices. We examined food system vulnerability to rising oil prices and the public health consequences. In the short term, high food prices harm food security and equity. Over time, high prices will force the entire food system to adapt. Strong preparation and advance investment may mitigate the extent of dislocation and hunger. Certain social and policy changes could smooth adaptation; public health has an essential role in promoting a proactive, smart, and equitable transition that increases resilience and enables adequate food for all.

  15. Peak Oil, Food Systems, and Public Health

    PubMed Central

    Parker, Cindy L.; Kirschenmann, Frederick L.; Tinch, Jennifer; Lawrence, Robert S.

    2011-01-01

    Peak oil is the phenomenon whereby global oil supplies will peak, then decline, with extraction growing increasingly costly. Today's globalized industrial food system depends on oil for fueling farm machinery, producing pesticides, and transporting goods. Biofuels production links oil prices to food prices. We examined food system vulnerability to rising oil prices and the public health consequences. In the short term, high food prices harm food security and equity. Over time, high prices will force the entire food system to adapt. Strong preparation and advance investment may mitigate the extent of dislocation and hunger. Certain social and policy changes could smooth adaptation; public health has an essential role in promoting a proactive, smart, and equitable transition that increases resilience and enables adequate food for all. PMID:21778492

  16. The Effects Of Reinforcement Magnitude On Functional Analysis Outcomes

    PubMed Central

    2005-01-01

    The duration or magnitude of reinforcement has varied and often appears to have been selected arbitrarily in functional analysis research. Few studies have evaluated the effects of reinforcement magnitude on problem behavior, even though basic findings indicate that this parameter may affect response rates during functional analyses. In the current study, 6 children with autism or developmental disabilities who engaged in severe problem behavior were exposed to three separate functional analyses, each of which varied in reinforcement magnitude. Results of these functional analyses were compared to determine if a particular reinforcement magnitude was associated with the most conclusive outcomes. In most cases, the same conclusion about the functions of problem behavior was drawn regardless of the reinforcement magnitude. PMID:16033163

  17. Reinforcement magnitude: an evaluation of preference and reinforcer efficacy.

    PubMed

    Trosclair-Lasserre, Nicole M; Lerman, Dorothea C; Call, Nathan A; Addison, Laura R; Kodak, Tiffany

    2008-01-01

    Consideration of reinforcer magnitude may be important for maximizing the efficacy of treatment for problem behavior. Nonetheless, relatively little is known about children's preferences for different magnitudes of social reinforcement or the extent to which preference is related to differences in reinforcer efficacy. The purpose of the current study was to evaluate the relations among reinforcer magnitude, preference, and efficacy by drawing on the procedures and results of basic experimentation in this area. Three children who engaged in problem behavior that was maintained by social positive reinforcement (attention, access to tangible items) participated. Results indicated that preference for different magnitudes of social reinforcement may predict reinforcer efficacy and that magnitude effects may be mediated by the schedule requirement.

  18. Reducing Peak Power in Automated Weapon Laying

    DTIC Science & Technology

    2016-02-01

    aiming a weapon is referred to as gun laying. This report describes a method to calculate motion profiles that reach a given lay within the least...amount of time while reducing the amount of peak power required and, therefore, minimizing the forces caused by acceleration. The report's figures include a trapezoidal motion profile.
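
    Consistent with the trapezoidal motion profile listed among the figures, the sketch below compares, for an assumed slew angle, move time and load inertia, how the choice of acceleration-phase length trades peak acceleration against peak power; the numbers are hypothetical and the load is treated as purely inertial.

        import numpy as np

        # Hypothetical move: slew the gun through 30 degrees in 2.0 seconds using a
        # symmetric trapezoidal profile (accelerate, cruise, decelerate).
        theta = np.radians(30.0)      # rad
        T = 2.0                       # total move time, s
        J = 1200.0                    # load moment of inertia, kg*m^2 (assumed)

        def profile(t_acc):
            """Peak acceleration, velocity and (inertia-only) peak power for a
            trapezoid with acceleration/deceleration phases of length t_acc."""
            a = theta / (t_acc * (T - t_acc))   # from theta = a * t_acc * (T - t_acc)
            v = a * t_acc
            p = J * a * v                       # torque * angular velocity at end of accel
            return a, v, p

        for frac in (0.20, 1.0 / 3.0, 0.50):
            a, v, p = profile(frac * T)
            print(f"t_acc = {frac:.2f} T : accel {a:6.3f} rad/s^2, "
                  f"peak vel {v:5.3f} rad/s, peak power {p:8.1f} W")

        # For a purely inertial load, the 1/3-1/3-1/3 split minimizes peak power,
        # while t_acc = T/2 (a triangular profile) minimizes peak acceleration.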

  19. Absolute flux calibration of optical spectrophotometric standard stars

    NASA Technical Reports Server (NTRS)

    Colina, Luis; Bohlin, Ralph C.

    1994-01-01

    A method based on Landolt photometry in B and V is developed to correct for a wavelength independent offset of the absolute flux level of optical spectrophotometric standards. The method is based on synthetic photometry techniques in B and V and is accurate to approximately 1%. The correction method is verified by Hubble Space Telescope Faint Object Spectrograph absolute fluxes for five calibration stars, which agree with Landolt photometry to 0.5% in B and V.
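
    A hedged sketch of the synthetic-photometry step: integrate a spectrum through approximate B and V passbands, form synthetic magnitudes, and take the mean offset against the Landolt magnitudes as a wavelength-independent (grey) correction of the flux scale. The spectrum, Gaussian passbands, zero-point fluxes and Landolt values below are all schematic assumptions.

        import numpy as np

        def trapezoid(y, x):
            """Simple trapezoidal integration, kept local for portability."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        # Wavelength grid (Angstrom) and a schematic standard-star spectrum in
        # f_lambda (erg s^-1 cm^-2 A^-1); illustrative only, not a real standard.
        wl = np.linspace(3000.0, 7000.0, 2000)
        flux = 4.0e-13 * (wl / 5500.0) ** -1.5

        def passband(center, width):
            """Rough Gaussian stand-in for a Johnson filter response curve."""
            return np.exp(-0.5 * ((wl - center) / width) ** 2)

        def synth_mag(fl, band, zero_point_flux):
            """Synthetic magnitude from the band-averaged flux (simplified)."""
            mean_flux = trapezoid(fl * band * wl, wl) / trapezoid(band * wl, wl)
            return -2.5 * np.log10(mean_flux / zero_point_flux)

        B, V = passband(4400.0, 500.0), passband(5500.0, 450.0)
        # Assumed (approximate) Vega-like zero-point fluxes in the same units.
        mB = synth_mag(flux, B, 6.3e-9)
        mV = synth_mag(flux, V, 3.6e-9)

        # Hypothetical Landolt photometry of the same star.
        landolt_B, landolt_V = 10.48, 10.05
        offset = np.mean([landolt_B - mB, landolt_V - mV])
        print(f"synthetic B, V = {mB:.3f}, {mV:.3f}; grey offset = {offset:+.3f} mag")
        print("apply as: corrected_flux = flux * 10**(-0.4 * offset)")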

  20. Hawaii StreamStats; a web application for defining drainage-basin characteristics and estimating peak-streamflow statistics

    USGS Publications Warehouse

    Rosa, Sarah N.; Oki, Delwyn S.

    2010-01-01

    Reliable estimates of the magnitude and frequency of floods are necessary for the safe and efficient design of roads, bridges, water-conveyance structures, and flood-control projects and for the management of flood plains and flood-prone areas. StreamStats provides a simple, fast, and reproducible method to define drainage-basin characteristics and estimate the frequency and magnitude of peak discharges in Hawaii's streams using recently developed regional regression equations. StreamStats allows the user to estimate the magnitude of floods for streams where data from stream-gaging stations do not exist. Existing estimates of the magnitude and frequency of peak discharges in Hawaii can be improved with continued operation of existing stream-gaging stations and installation of additional gaging stations for areas where limited stream-gaging data are available.

  1. Structural performance of the DOE's Idaho National Engineering Laboratory during the 1983 Borah Peak earthquake

    SciTech Connect

    Guenzler, R.C.; Gorman, V.W.

    1985-01-01

    The 1983 Borah Peak Earthquake (7.3 Richter magnitude) was the largest earthquake ever experienced by the DOE's Idaho National Engineering Laboratory (INEL). Reactor and plant facilities are generally located about 90 to 110 km (60 miles) from the epicenter. Several reactors were operating normally at the time of the earthquake. Based on detailed inspections, comparisons of measured accelerations with design levels, and instrumental seismograph information, it was concluded that the 1983 Borah Peak Earthquake created no safety problems for INEL reactors or other facilities. 10 refs., 16 figs., 2 tabs.

  2. Timing and magnitude of systolic stretch affect myofilament activation and mechanical work

    PubMed Central

    Tangney, Jared R.; Campbell, Stuart G.; McCulloch, Andrew D.

    2014-01-01

    Dyssynchronous activation of the heart leads to abnormal regional systolic stretch. In vivo studies have suggested that the timing of systolic stretch can affect regional tension and external work development. In the present study, we measured the direct effects of systolic stretch timing on the magnitude of tension and external work development in isolated murine right ventricular papillary muscles. A servomotor was used to impose precisely timed stretches relative to electrical activation while a force transducer measured force output and strain was monitored using a charge-coupled device camera and topical markers. Stretches taking place during peak intracellular Ca2+ significantly increased peak tension by up to 270%, whereas external work due to stretches in this interval reached values of 500 J/m. An experimental analysis showed that time-varying elastance overestimated peak tension by 100% for stretches occurring after peak isometric tension. The addition of the force-velocity relation explained some effects of stretches occurring before the peak of the Ca2+ transient but had no effect in later stretches. An estimate of transient deactivation was measured by performing quick stretches to dissociate cross-bridges. The timing of transient deactivation explained the remaining differences between the model and experiment. These results suggest that stretch near the start of cardiac tension development substantially increases twitch tension and mechanical work production, whereas late stretches decrease external work. While the increased work can mostly be explained by the time-varying elastance of cardiac muscle, the decreased work in muscles stretched after the peak of the Ca2+ transient is largely due to myofilament deactivation. PMID:24878774

  3. Using Google Earth to Teach the Magnitude of Deep Time

    ERIC Educational Resources Information Center

    Parker, Joel D.

    2011-01-01

    Most timeline analogies of geologic and evolutionary time are fundamentally flawed. They trade off the problem of grasping very long times for the problem of grasping very short distances. The result is an understanding of relative time with little comprehension of absolute time. Earlier work has shown that the distances most easily understood by…

  4. Peak height velocity as a maturity indicator for males with idiopathic scoliosis.

    PubMed

    Song, K M; Little, D G

    2000-01-01

    We retrospectively studied 43 adolescent boys treated with orthoses for idiopathic scoliosis to assess the usefulness of the timing of peak height velocity for predicting growth remaining and the likelihood of curve progression when compared with Risser sign, closure of the triradiate cartilage, and chronologic age. We compared the peak height velocity data in boys to our previous work for girls with adolescent idiopathic scoliosis. We found the median height velocity plots showed a similar high peak and sharp decline as is found in girls. All 13 patients with a curve magnitude > 30 degrees at the time of peak height velocity had progression of their scoliosis to > 45 degrees despite bracing. Four of 29 patients (14%) with curves < or = 30 degrees at peak height velocity progressed to 45 degrees. These values generate a sensitivity of 76%, specificity of 100% and accuracy of 91% in predicting progression to 45 degrees. Similar values have been found in female patients. The use of peak height velocity to predict the length of time for remaining growth was superior to Risser sign and chronologic age for boys with idiopathic scoliosis. Closure of the triradiate cartilage approximated the timing of peak height velocity in boys.

  5. Estimation of instantaneous peak flow from simulated maximum daily flow using the HBV model

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Haberlandt, Uwe

    2014-05-01

    Instantaneous peak flow (IPF) data are the foundation of the design of hydraulic structures and flood frequency analysis. However, the long discharge records published by hydrological agencies usually contain only average daily flows, which are of little value for design in small catchments. In former research, statistical analysis using observed peak and daily flow data was carried out to explore the link between IPF and maximum daily flow (MDF), where a multiple regression model proved to have the best performance. The objective of this study is to further investigate the acceptability of the multiple regression model for post-processing simulated daily flows from hydrological modeling. Model-based flood frequency analysis makes it possible to consider changes in catchment conditions and in climate for design. Here, the HBV model is calibrated on peak flow distributions and flow duration curves using two approaches. In a two-step approach, the simulated MDF are corrected with a priori established regressions. In a one-step procedure, the regression coefficients are calibrated together with the parameters of the model. For the analysis, data from 18 mesoscale catchments in the Aller-Leine river basin in Northern Germany are used. The results show that: (1) the multiple regression model is capable of predicting the peak flows from the simulated MDF data; (2) the calibrated hydrological model reproduces well the magnitude and frequency distribution of peak flows; (3) the one-step procedure outperforms the two-step procedure regarding the estimation of peak flows.
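
    The statistical link used for post-processing can be sketched as a multiple regression of instantaneous peak flow on maximum daily flow and catchment descriptors in log space; the predictors, data and fitted coefficients below are invented for illustration and are not the Aller-Leine relations.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 60  # hypothetical flood events across several catchments

        # Invented predictors: simulated maximum daily flow (m^3/s), catchment
        # area (km^2) and a time-of-concentration proxy (h).
        mdf = rng.uniform(5.0, 200.0, n)
        area = rng.uniform(50.0, 1500.0, n)
        tc = rng.uniform(3.0, 30.0, n)

        # Synthetic "observed" instantaneous peak flows: peaks exceed daily means
        # more strongly in small, flashy catchments.
        ipf = mdf * (1.0 + 8.0 / tc + 20.0 / np.sqrt(area)) + rng.normal(0, 3.0, n)

        # Multiple linear regression in log space: log(IPF) on log(MDF), log(A), log(tc).
        X = np.column_stack([np.ones(n), np.log(mdf), np.log(area), np.log(tc)])
        coef, *_ = np.linalg.lstsq(X, np.log(ipf), rcond=None)
        print("regression coefficients:", np.round(coef, 3))

        # Correct one simulated daily peak (two-step use of the regression).
        x_new = np.array([1.0, np.log(80.0), np.log(300.0), np.log(10.0)])
        print(f"MDF 80 m^3/s -> estimated IPF {np.exp(x_new @ coef):.1f} m^3/s")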

  6. Quantifying peak discharges for historical floods

    USGS Publications Warehouse

    Cook, J.L.

    1987-01-01

    It is usually advantageous to use information regarding historical floods, if available, to define the flood-frequency relation for a stream. Peak stages can sometimes be determined for outstanding floods that occurred many years ago before systematic gaging of streams began. In the United States, this information is usually not available for more than 100-200 years, but in countries with long cultural histories, such as China, historical flood data are available at some sites as far back as 2,000 years or more. It is important in flood studies to be able to assign a maximum discharge rate and an associated error range to the historical flood. This paper describes the significant characteristics and uncertainties of four commonly used methods for estimating the peak discharge of a flood. These methods are: (1) rating curve (stage-discharge relation) extension; (2) slope conveyance; (3) slope area; and (4) step backwater. Logarithmic extensions of rating curves are based on theoretical plotting techniques that result in straight-line extensions provided that channel shape and roughness do not change significantly. The slope-conveyance and slope-area methods are based on the Manning equation, which requires specific data on channel size, shape and roughness, as well as the water-surface slope for one or more cross-sections in a relatively straight reach of channel. The slope-conveyance method is used primarily for shaping and extending rating curves, whereas the slope-area method is used for specific floods. The step-backwater method, also based on the Manning equation, requires more cross-section data than the slope-area method, but has a water-surface profile convergence characteristic that negates the need for known or estimated water-surface slope. Uncertainties in calculating peak discharge for historical floods may be quite large. Various investigations have shown that errors in calculating peak discharges by the slope-area method under ideal conditions for
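
    The slope-conveyance and slope-area methods rest on the Manning equation; a minimal single-cross-section sketch in SI units is given below, with invented geometry, roughness and slope.

        import numpy as np

        # Invented surveyed cross-section for a historical flood high-water mark.
        area = 185.0              # flow area, m^2
        wetted_perimeter = 62.0   # m
        slope = 0.0021            # water-surface slope (dimensionless)
        n_manning = 0.035         # Manning roughness coefficient (assumed)

        # Manning equation (SI): Q = (1/n) * A * R^(2/3) * S^(1/2)
        hydraulic_radius = area / wetted_perimeter
        Q = (1.0 / n_manning) * area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)
        print(f"hydraulic radius R = {hydraulic_radius:.2f} m")
        print(f"estimated peak discharge Q = {Q:.0f} m^3/s")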

  7. Dissociated time course recovery between rate of force development and peak torque after eccentric exercise.

    PubMed

    Molina, Renato; Denadai, Benedito S

    2012-05-01

    This study investigated the association between isokinetic peak torque (PT) of the quadriceps and the corresponding peak rate of force development (peak RFD) during recovery from eccentric exercise. Twelve untrained men (aged 21.7 ± 2.3 years) performed 100 maximal eccentric contractions of the knee extensors (10 sets of 10 repetitions with a 2-min rest between sets) on an isokinetic dynamometer. PT and peak RFD, assessed by maximal isokinetic concentric knee contractions at 60° s(-1), were obtained before (baseline) and at 24 and 48 h after eccentric exercise. Indirect markers of muscle damage included delayed-onset muscle soreness (DOMS) and plasma creatine kinase (CK) activity. The eccentric exercise resulted in elevated DOMS and CK compared with baseline values. At 24 h, PT (-15.3%, P = 0.002) and peak RFD (-13.1%, P = 0.03) decreased significantly. At 48 h, PT (-7.9%, P = 0.002) was still decreased, but peak RFD had returned to baseline values. Positive correlations were found between PT and peak RFD at baseline (r = 0.62, P = 0.02), 24 h (r = 0.99, P = 0.0001) and 48 h (r = 0.68, P = 0.01) after eccentric exercise. The magnitudes of change (%) in PT and peak RFD from baseline to 24 h (r = 0.68, P = 0.01) and from 24 to 48 h (r = 0.68, P = 0.01) were significantly correlated. It can be concluded that the muscle damage induced by the eccentric exercise affects the time courses of PT and peak RFD recovery differently during isokinetic concentric contraction at 60° s(-1). During recovery from exercise-induced muscle damage, PT and peak RFD are governed, but not fully defined, by shared putative physiological mechanisms.

  8. Flood Frequency Estimates and Documented and Potential Extreme Peak Discharges in Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.; McCabe, Lan P.

    2001-01-01

    Knowledge of the magnitude and frequency of floods is required for the safe and economical design of highway bridges, culverts, dams, levees, and other structures on or near streams, and for flood-plain management programs. Flood frequency estimates for gaged streamflow sites were updated, documented extreme peak discharges for gaged and miscellaneous measurement sites were tabulated, and potential extreme peak discharges for Oklahoma streamflow sites were estimated. Potential extreme peak discharges, derived from the relation between documented extreme peak discharges and contributing drainage areas, can provide valuable information concerning the maximum peak discharge that could be expected at a stream site. Potential extreme peak discharge is useful in conjunction with flood frequency analysis to give the best evaluation of flood risk at a site. Peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years were estimated for 352 gaged streamflow sites. Data through the 1999 water year were used from streamflow-gaging stations with at least 8 years of record within Oklahoma or about 25 kilometers into the bordering states of Arkansas, Kansas, Missouri, New Mexico, and Texas. These sites were in unregulated basins and in basins affected by regulation, urbanization, and irrigation. Documented extreme peak discharges and associated data were compiled for 514 sites in and near Oklahoma, 352 with streamflow-gaging stations and 162 at miscellaneous measurement sites or streamflow-gaging stations with short records, for a total of 671 measurements. The sites are fairly well distributed statewide; however, many streams, large and small, have never been monitored. Potential extreme peak-discharge curves were developed for streamflow sites in hydrologic regions of the state based on documented extreme peak discharges and the contributing drainage areas. Two hydrologic regions, east and west, were defined using 98 degrees 15 minutes longitude as the
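    One simple way to relate documented extreme peak discharges to contributing drainage area is a log-log envelope curve, sketched below with made-up data. The data, the straight-line form, and the envelope construction are illustrative assumptions and do not reproduce the report's actual regional curves.

```python
import numpy as np

# Illustrative documented extreme peaks: contributing drainage area (km^2)
# and peak discharge (m^3/s). Values are made up for demonstration.
area  = np.array([12., 55., 130., 480., 900., 2600., 7800.])
qpeak = np.array([95., 310., 520., 1400., 2100., 4800., 9500.])

# Fit log10(Q) = a + b*log10(A), then raise the intercept so the line
# bounds every documented peak: a simple "potential extreme" envelope.
b, a = np.polyfit(np.log10(area), np.log10(qpeak), 1)
a_env = a + np.max(np.log10(qpeak) - (a + b * np.log10(area)))

def potential_extreme_peak(area_km2):
    return 10 ** (a_env + b * np.log10(area_km2))

print(potential_extreme_peak(1000.0))
```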

  9. The Instructional Dependency of SNARC Effects Reveals Flexibility of the Space-Magnitude Association of Nonsymbolic and Symbolic Magnitudes.

    PubMed

    Lee, Dasom; Chun, Joohyung; Cho, Soohyun

    2016-05-01

    The Spatial-Numerical Association of Response Codes (SNARC) effect refers to the phenomenon that small versus large numbers are responded to faster on the left versus right side of space, respectively. Using a pairwise comparison task, Shaki et al. found that task instruction influences the pattern of SNARC effects for certain types of magnitudes that are less rigid in their space-magnitude association. The present study examined the generalizability of this instruction effect using pairwise comparison of nonsymbolic and symbolic stimuli across a wide range of magnitudes. We contrasted performance between trials in which subjects were instructed to select the stimulus representing the smaller versus the larger magnitude within each pair. We found an instruction-dependent pattern of SNARC effects for both nonsymbolic and symbolic magnitudes. Specifically, we observed a SNARC effect under the "Select Smaller" instruction, but a reverse SNARC effect under the "Select Larger" instruction. Considered together with previous studies, our findings suggest that nonsymbolic magnitudes and relatively large symbolic magnitudes have greater flexibility in their space-magnitude association.

  10. Peak horizontal acceleration and velocity from strong-motion records including records from the 1979 Imperial Valley, California, earthquake

    USGS Publications Warehouse

    Joyner, William B.; Boore, David M.

    1981-01-01

    We have taken advantage of the recent increase in strong-motion data at close distances to derive new attenuation relations for peak horizontal acceleration and velocity. This new analysis uses a magnitude-independent shape, based on geometrical spreading and anelastic attenuation, for the attenuation curve. An innovation in technique is introduced that decouples the determination of the distance dependence of the data from the magnitude dependence.
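    A magnitude-independent attenuation shape built from geometrical spreading and anelastic attenuation can be written as log10 A = α + βM − log10 r − γr with r = sqrt(d² + h²). The sketch below implements that generic functional form; the coefficient values are illustrative placeholders, not the regression results published in the paper.

```python
import numpy as np

def log10_peak_accel(magnitude, dist_km,
                     alpha=-1.0, beta=0.25, gamma=0.0026, h_km=7.0):
    """Generic attenuation form: geometrical spreading (the -log10(r) term)
    plus anelastic attenuation (the -gamma*r term), with a magnitude term
    that shifts the curve without changing its shape. All coefficients here
    are illustrative placeholders, not fitted values."""
    r = np.sqrt(dist_km ** 2 + h_km ** 2)
    return alpha + beta * magnitude - np.log10(r) - gamma * r

# Peak acceleration (as 10**value, arbitrary units) at 20 km for an M 6.5 event:
print(10 ** log10_peak_accel(6.5, 20.0))
```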

  11. Adolescents with Developmental Dyscalculia Do Not Have a Generalized Magnitude Deficit – Processing of Discrete and Continuous Magnitudes

    PubMed Central

    McCaskey, Ursina; von Aster, Michael; O’Gorman Tuura, Ruth; Kucian, Karin

    2017-01-01

    The link between number and space has been discussed in the literature for some time, resulting in the theory that number, space and time might be part of a generalized magnitude system. To date, several behavioral and neuroimaging findings support the notion of a generalized magnitude system, although contradictory results showing a partial overlap or separate magnitude systems are also found. The possible existence of a generalized magnitude processing area raises the question of how individuals with developmental dyscalculia (DD), known for deficits in numerical-arithmetical abilities, process magnitudes. By means of neuropsychological tests and functional magnetic resonance imaging (fMRI) we aimed to examine the relationship between number and space in typical and atypical development. Participants were 16 adolescents with DD (14.1 years) and 14 typically developing (TD) peers (13.8 years). In the fMRI paradigm participants had to perform discrete (arrays of dots) and continuous (angles) magnitude comparisons as well as a mental rotation task. In the neuropsychological tests, adolescents with dyscalculia performed significantly worse on numerical and complex visuo-spatial tasks. However, they showed results similar to those of TD peers when making discrete and continuous magnitude decisions during the neuropsychological tests and the fMRI paradigm. A conjunction analysis of the fMRI data revealed commonly activated higher-order visual (inferior and middle occipital gyrus) and parietal (inferior and superior parietal lobe) magnitude areas for the discrete and continuous magnitude tasks. Moreover, no differences were found when contrasting the two magnitude processing conditions, favoring the possibility of a generalized magnitude system. Group comparisons further revealed that dyscalculic subjects showed increased activation in domain-general regions, whereas TD peers activated domain-specific areas to a greater extent. In conclusion, our results point to the existence of a

  12. Echo 2 - Observations at Fort Churchill of a 4-keV peak in low-level electron precipitation

    NASA Technical Reports Server (NTRS)

    Arnoldy, R. L.; Hendrickson, R. A.; Winckler, J. R.

    1975-01-01

    The Echo 2 rocket flight launched from Fort Churchill, Manitoba, offered the opportunity to observe high-latitude low-level electron precipitation during quiet magnetic conditions. Although no visual aurora was evident at the time of the flight, an auroral spectrum sharply peaked at a few keV was observed to have intensities from 1 to 2 orders of magnitude lower than peaked spectra typically associated with bright auroral forms. There is a growing body of evidence that relates peaked electron spectra to discrete aurora. The Echo 2 observations show that whatever the mechanism for peaking the electron spectrum in and above discrete forms, it operates over a range of precipitation intensities covering nearly 3 orders of magnitude down to subvisual or near subvisual events.

  13. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
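    Test (1) above rests on a standard sampling result: for an unbounded Gutenberg-Richter distribution with b-value b, the expected largest of N events grows roughly as m_min + log10(N)/b. The sketch below draws magnitudes by inverse-CDF sampling to illustrate that scaling; it is a generic illustration, not the paper's analysis or data.

```python
import numpy as np

def sample_gr_magnitudes(n, b=1.0, m_min=0.0, rng=None):
    """Draw n magnitudes from an unbounded Gutenberg-Richter distribution,
    P(M > m) = 10**(-b*(m - m_min)), via inverse-CDF sampling."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(n)
    return m_min - np.log10(1.0 - u) / b

rng = np.random.default_rng(42)
for n in (100, 1_000, 10_000):
    m = sample_gr_magnitudes(n, b=1.0, m_min=0.0, rng=rng)
    # The observed maximum grows roughly as m_min + log10(n)/b.
    print(n, round(m.max(), 2), round(np.log10(n), 2))
```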

  14. Quantifying Heartbeat Dynamics by Magnitude and Sign Correlations

    NASA Astrophysics Data System (ADS)

    Ivanov, Plamen Ch.; Ashkenazy, Yosef; Kantelhardt, Jan W.; Stanley, H. Eugene

    2003-05-01

    We review a recently developed approach for analyzing time series with long-range correlations by decomposing the signal increment series into magnitude and sign series and analyzing their scaling properties. We show that time series with identical long-range correlations can exhibit different time organization for the magnitude and sign. We apply our approach to series of time intervals between consecutive heartbeats. Using the detrended fluctuation analysis method we find that the magnitude series is long-range correlated, while the sign series is anticorrelated and that both magnitude and sign series may have clinical applications. Further, we study the heartbeat magnitude and sign series during different sleep stages — light sleep, deep sleep, and REM sleep. For the heartbeat sign time series we find short-range anticorrelations, which are strong during deep sleep, weaker during light sleep and even weaker during REM sleep. In contrast, for the heartbeat magnitude time series we find long-range positive correlations, which are strong during REM sleep and weaker during light sleep. Thus, the sign and the magnitude series provide information which is also useful for distinguishing between different sleep stages.
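    A minimal sketch of the decomposition: take increments of the interbeat-interval series, split them into magnitude and sign series, and estimate a scaling exponent with a simple first-order DFA. The DFA implementation and the surrogate data below are simplifying assumptions, not the authors' exact pipeline, which may use higher-order detrending and real RR-interval records.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Minimal first-order detrended fluctuation analysis (DFA-1):
    integrate the demeaned series, detrend linearly in non-overlapping
    windows of each scale, and fit the slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))
    flucts = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# A random-walk surrogate stands in for a real RR-interval series here.
rng = np.random.default_rng(1)
rr = np.cumsum(rng.standard_normal(4096)) * 0.001 + 0.8

inc = np.diff(rr)            # increment series
magnitude = np.abs(inc)      # magnitude series
sign = np.sign(inc)          # sign series
print(dfa_exponent(magnitude), dfa_exponent(sign))
```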

  15. Does residual force enhancement increase with increasing stretch magnitudes?

    PubMed

    Hisey, Brandon; Leonard, Tim R; Herzog, Walter

    2009-07-22

    It is generally accepted that force enhancement in skeletal muscles increases with increasing stretch magnitudes. However, this property has not been tested across supra-physiological stretch magnitudes and different muscle lengths; thus, it is not known whether this is a generic property of skeletal muscle or merely a property that holds for small stretch magnitudes within the physiological range. Six cat soleus muscles were actively stretched with magnitudes varying from 3 to 24 mm at three different parts of the force-length relationship to test the hypothesis that force enhancement increases with increasing stretch magnitude, independent of muscle length. Residual force enhancement increased consistently with stretch amplitude on the descending limb of the force-length relationship up to a threshold value, after which it reached a plateau. Force enhancement did not increase with stretch amplitude on the ascending limb of the force-length relationship. Passive force enhancement was observed for all test conditions and paralleled the behavior of the residual force enhancement. Force enhancement increased with stretch magnitude when stretching occurred at lengths where there was natural passive force within the muscle. These results suggest that force enhancement does not increase unconditionally with increasing stretch magnitude, as is generally accepted, and that increasing force enhancement with stretch appears to be tightly linked to the part of the force-length relationship where there is naturally occurring passive force.

  16. Derivation of Johnson-Cousins Magnitudes from DSLR Camera Observations

    NASA Astrophysics Data System (ADS)

    Park, Woojin; Pak, Soojong; Shim, Hyunjin; Le, Huynh Anh N.; Im, Myungshin; Chang, Seunghyuk; Yu, Joonkyu

    2016-01-01

    The RGB Bayer filter system consists of a mosaic of R, G, and B filters on the grid of the photo sensors with which typical commercial DSLR (Digital Single Lens Reflex) cameras and CCD cameras are equipped. A large amount of unique astronomical data obtained with RGB Bayer filter systems is available, including observations of transient objects, e.g. supernovae, variable stars, and solar system bodies. The utilization of such data in scientific research requires reliable photometric transformation methods between the systems. In this work, we develop a series of equations to convert the observed magnitudes in the RGB Bayer filter system (RB, GB, and BB) into the Johnson-Cousins BVR filter system (BJ, VJ, and RC). The new transformation equations derive the calculated magnitudes in the Johnson-Cousins filters (BJcal, VJcal, and RCcal) as functions of the RGB magnitudes and colors. The mean differences between the transformed magnitudes and the original magnitudes, i.e. the residuals, are (BJ - BJcal) = 0.064 mag, (VJ - VJcal) = 0.041 mag, and (RC - RCcal) = 0.039 mag. The calculated Johnson-Cousins magnitudes from the transformation equations show a good linear correlation with the observed Johnson-Cousins magnitudes.
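    Transformations of this kind typically take the form of a zero point plus a colour term fitted to standard stars. The sketch below fits one such relation for V; the functional form, the star list, and the resulting coefficients are illustrative assumptions, not the equations published in the paper.

```python
import numpy as np

# Illustrative standard-star photometry: instrumental Bayer G magnitude,
# Bayer colour (B_B - G_B), and catalogue Johnson V. All values are made up.
g_b   = np.array([12.10, 13.45, 11.02, 14.20, 12.88, 13.90])
col_b = np.array([0.35, 0.62, 0.18, 0.80, 0.50, 0.71])   # B_B - G_B
v_j   = np.array([12.02, 13.30, 10.99, 13.98, 12.76, 13.72])

# Fit V_Jcal = G_B + c0 + c1*(B_B - G_B), a typical zero-point + colour-term form.
A = np.column_stack([np.ones_like(col_b), col_b])
(c0, c1), *_ = np.linalg.lstsq(A, v_j - g_b, rcond=None)

def v_johnson(g_bayer, colour_bayer):
    return g_bayer + c0 + c1 * colour_bayer

residuals = v_j - v_johnson(g_b, col_b)
print(c0, c1, residuals.std())
```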

  17. A scheme to set preferred magnitudes in the ISC Bulletin

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Storchak, Dmitry A.

    2016-04-01

    One of the main purposes of the International Seismological Centre (ISC) is to collect, integrate and reprocess seismic bulletins provided by agencies around the world in order to produce the ISC Bulletin. This is regarded as the most comprehensive bulletin of the Earth's seismicity, and its production is based on a unique cooperation within the seismological community that allows the ISC to complement the work of seismological agencies operating at global and/or local-regional scales. In addition, by using the seismic wave measurements provided by reporting agencies, the ISC computes, where possible, its own event locations and magnitudes, such as the short-period body-wave magnitude mb and the surface-wave magnitude MS. Therefore, the ISC Bulletin contains the results of the reporting agencies as well as the ISC's own solutions. Among the most used seismic event parameters listed in seismological bulletins, the event magnitude is of particular importance for characterizing a seismic event. The selection of a magnitude value (or multiple values) for various research purposes or practical applications is not always a straightforward task for users of the ISC Bulletin and related products, since a multitude of magnitude types is currently computed by seismological agencies (sometimes using different standards for the same magnitude type). Here, we describe a scheme that we intend to implement in routine ISC operations to mark the preferred magnitudes in order to help ISC users select events with magnitudes of their interest.
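    The abstract does not spell out the scheme's rules, so the sketch below only illustrates one plausible mechanism for marking a preferred magnitude: walk a priority-ordered list of (magnitude type, author) pairs and return the first match. The priority list and the record format are entirely hypothetical, not the ISC's actual scheme.

```python
# Hypothetical priority list; the actual ISC selection rules are not given here.
PRIORITY = [("MW", "GCMT"), ("MW", None), ("MS", "ISC"), ("mb", "ISC")]

def preferred_magnitude(magnitudes):
    """Pick the first reported magnitude that matches the priority list.
    `magnitudes` is a list of dicts like
    {"type": "mb", "author": "ISC", "value": 5.1}."""
    for mag_type, author in PRIORITY:
        for m in magnitudes:
            if m["type"] == mag_type and (author is None or m["author"] == author):
                return m
    return None  # no preferred magnitude could be set

event = [
    {"type": "mb", "author": "ISC", "value": 5.1},
    {"type": "MS", "author": "ISC", "value": 5.4},
    {"type": "MW", "author": "GCMT", "value": 5.5},
]
print(preferred_magnitude(event))
```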

  18. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, in a test subset of 100 predicted magnitudes, the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
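    For homoscedastic errors with a known error-variance ratio delta = var(err_Y)/var(err_X), the classical errors-in-variables (Deming-type) slope has a closed form, sketched below. This is the standard textbook solution and is offered only as an illustration; it is not necessarily the exact estimator derived in the paper, and the magnitude pairs are made up.

```python
import numpy as np

def eiv_fit(x_obs, y_obs, var_ratio):
    """Errors-in-variables (Deming-type) fit of y = a*x + b when both
    magnitudes carry error and delta = var(err_y)/var(err_x) is known."""
    xm, ym = x_obs.mean(), y_obs.mean()
    sxx = np.mean((x_obs - xm) ** 2)
    syy = np.mean((y_obs - ym) ** 2)
    sxy = np.mean((x_obs - xm) * (y_obs - ym))
    a = (syy - var_ratio * sxx +
         np.sqrt((syy - var_ratio * sxx) ** 2 + 4 * var_ratio * sxy ** 2)) / (2 * sxy)
    b = ym - a * xm
    return a, b

# Illustrative mb vs Mw pairs (made-up values), assuming equal error variances.
mb = np.array([4.8, 5.1, 5.5, 5.9, 6.2, 6.6])
mw = np.array([4.9, 5.3, 5.8, 6.2, 6.6, 7.1])
print(eiv_fit(mb, mw, var_ratio=1.0))
```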

  19. Comparison of magnetic probe calibration at nano and millitesla magnitudes.

    PubMed

    Pahl, Ryan A; Rovey, Joshua L; Pommerenke, David J

    2014-01-01

    Magnetic field probes are invaluable diagnostics for pulsed inductive plasma devices where field magnitudes on the order of tenths of tesla or larger are common. Typical methods of providing a broadband calibration of Ḃ probes involve either a Helmholtz coil driven by a function generator or a network analyzer. Both calibration methods typically produce field magnitudes of tens of microtesla or less, at least three and as many as six orders of magnitude lower than their intended use. This calibration factor is then assumed constant regardless of magnetic field magnitude, and the effects of the experimental setup are ignored. This work quantifies the variation in calibration factor observed when calibrating magnetic field probes in low field magnitudes. Calibrations of two Ḃ probe designs as functions of frequency and field magnitude are presented. The first Ḃ probe design is the most commonly used design and is constructed from two hand-wound inductors in a differential configuration. The second probe uses surface-mounted inductors in a differential configuration with balanced shielding to further reduce common-mode noise. Calibration factors are determined experimentally using an 80.4 mm radius Helmholtz coil in two separate configurations over a frequency range of 100-1000 kHz. A conventional low-magnitude calibration using a vector network analyzer produced a field magnitude of 158 nT and yielded calibration factors of 15 663 ± 1.7% and 4920 ± 0.6% T/(V s) at 457 kHz for the surface-mounted and hand-wound probes, respectively. A relevant-magnitude calibration using a pulsed-power setup with field magnitudes of 8.7-354 mT yielded calibration factors of 14 615 ± 0.3% and 4507 ± 0.4% T/(V s) at 457 kHz for the surface-mounted inductor and hand-wound probe, respectively. Low-magnitude calibration resulted in a larger calibration factor, with an average difference of 9.7% for the surface
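    A rough sketch of how a calibration factor in T/(V s) can be obtained: compute the known sinusoidal Helmholtz-coil field from the drive current, measure the probe's output amplitude, and take C = B0*omega/V0 so that the field is recovered from the time-integrated probe voltage. The coil radius and frequency below are taken from the abstract, but the turn count, drive current, and probe voltage are illustrative assumptions, not the paper's measured values.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def helmholtz_field(current_a, radius_m, turns_per_coil):
    """On-axis centre field of a Helmholtz pair: B = (4/5)**1.5 * mu0 * N * I / R."""
    return (4.0 / 5.0) ** 1.5 * MU0 * turns_per_coil * current_a / radius_m

def calibration_factor(b_amplitude_t, v_amplitude, freq_hz):
    """For B(t) = B0*sin(w*t) the probe output is proportional to dB/dt, so its
    amplitude is V0 ~ B0*w. The factor mapping the time-integrated probe
    voltage back to field is therefore C = B0*w / V0, in T/(V*s)."""
    omega = 2.0 * np.pi * freq_hz
    return b_amplitude_t * omega / v_amplitude

# Illustrative numbers: 80.4 mm coil, 5 turns per coil, 1 A drive at 457 kHz,
# and a 20 mV measured probe amplitude.
b0 = helmholtz_field(current_a=1.0, radius_m=0.0804, turns_per_coil=5)
print(calibration_factor(b0, v_amplitude=0.020, freq_hz=457e3))
```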

  20. Comparison of magnetic probe calibration at nano and millitesla magnitudes

    NASA Astrophysics Data System (ADS)

    Pahl, Ryan A.; Rovey, Joshua L.; Pommerenke, David J.

    2014-01-01

    Magnetic field probes are invaluable diagnostics for pulsed inductive plasma devices where field magnitudes on the order of tenths of tesla or larger are common. Typical methods of providing a broadband calibration of Ḃ probes involve either a Helmholtz coil driven by a function generator or a network analyzer. Both calibration methods typically produce field magnitudes of tens of microtesla or less, at least three and as many as six orders of magnitude lower than their intended use. This calibration factor is then assumed constant regardless of magnetic field magnitude, and the effects of the experimental setup are ignored. This work quantifies the variation in calibration factor observed when calibrating magnetic field probes in low field magnitudes. Calibrations of two Ḃ probe designs as functions of frequency and field magnitude are presented. The first Ḃ probe design is the most commonly used design and is constructed from two hand-wound inductors in a differential configuration. The second probe uses surface-mounted inductors in a differential configuration with balanced shielding to further reduce common-mode noise. Calibration factors are determined experimentally using an 80.4 mm radius Helmholtz coil in two separate configurations over a frequency range of 100-1000 kHz. A conventional low-magnitude calibration using a vector network analyzer produced a field magnitude of 158 nT and yielded calibration factors of 15 663 ± 1.7% and 4920 ± 0.6% T/(V s) at 457 kHz for the surface-mounted and hand-wound probes, respectively. A relevant-magnitude calibration using a pulsed-power setup with field magnitudes of 8.7-354 mT yielded calibration factors of 14 615 ± 0.3% and 4507 ± 0.4% T/(V s) at 457 kHz for the surface-mounted inductor and hand-wound probe, respectively. Low-magnitude calibration resulted in a larger calibration factor, with an average difference of 9.7% for the surface-mounted probe and 12.0% for the hand-wound probe. The