Sample records for station correction factor

  1. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

    ...dB(A) octave band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, overall in-helmet dB(A) level, and ...

  2. Determination of velocity correction factors for real-time air velocity monitoring in underground mines.

    PubMed

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-12-01

    When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer ® . The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.

  3. Determination of velocity correction factors for real-time air velocity monitoring in underground mines

    PubMed Central

    Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-01-01

    When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed. PMID:29201495
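
    The conversion described in these two records is simple enough to express directly. The sketch below is illustrative only (the function names, variable names, and sample numbers are not from the paper): a correction factor derived from a moving-traverse average converts a fixed single-point sensor reading into an average velocity and a volume flow rate.

    ```python
    # Illustrative sketch (not the paper's code): derive a velocity correction
    # factor from a moving-traverse average and apply it to a fixed sensor reading.

    def correction_factor(average_velocity_m_s: float, point_velocity_m_s: float) -> float:
        """Ratio that converts a single-point (centerline or sensor) reading
        to the cross-sectional average velocity."""
        return average_velocity_m_s / point_velocity_m_s

    def volume_flow_rate(point_velocity_m_s: float, factor: float, area_m2: float) -> float:
        """Average velocity times entry cross-sectional area (m^3/s)."""
        return point_velocity_m_s * factor * area_m2

    if __name__ == "__main__":
        # Hypothetical numbers for one measuring station:
        avg_from_traverse = 1.52   # m/s, from a moving traverse
        centerline_reading = 1.90  # m/s, fixed single-point sensor
        entry_area = 12.0          # m^2

        k = correction_factor(avg_from_traverse, centerline_reading)
        q = volume_flow_rate(centerline_reading, k, entry_area)
        print(f"correction factor = {k:.2f}, volume flow = {q:.1f} m^3/s")
    ```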

  4. On Aethalometer measurement uncertainties and an instrument correction factor for the Arctic

    NASA Astrophysics Data System (ADS)

    Backman, John; Schmeisser, Lauren; Virkkula, Aki; Ogren, John A.; Asmi, Eija; Starkweather, Sandra; Sharma, Sangeeta; Eleftheriadis, Konstantinos; Uttal, Taneil; Jefferson, Anne; Bergin, Michael; Makshtas, Alexander; Tunved, Peter; Fiebig, Markus

    2017-12-01

    Several types of filter-based instruments are used to estimate aerosol light absorption coefficients. Two significant results are presented based on Aethalometer measurements at six Arctic stations from 2012 to 2014. First, an alternative method of post-processing the Aethalometer data is presented, which reduces measurement noise and lowers the detection limit of the instrument more effectively than boxcar averaging. The biggest benefit of this approach can be achieved if instrument drift is minimised. Moreover, by using an attenuation threshold criterion for data post-processing, the relative uncertainty from the electronic noise of the instrument is kept constant. This approach results in a time series with a variable collection time (Δt) but with a constant relative uncertainty with regard to electronic noise in the instrument. An additional advantage of this method is that the detection limit of the instrument will be lowered at small aerosol concentrations at the expense of temporal resolution, whereas there is little to no loss in temporal resolution at high aerosol concentrations (> 2.1-6.7 Mm⁻¹ as measured by the Aethalometers). At high aerosol concentrations, minimising the detection limit of the instrument is less critical. Additionally, utilising co-located filter-based absorption photometers, a correction factor is presented for the Arctic that can be used in Aethalometer corrections available in the literature. The correction factor of 3.45 was calculated for low-elevation Arctic stations. This correction factor harmonises Aethalometer attenuation coefficients with light absorption coefficients as measured by the co-located light absorption photometers. Using one correction factor for Arctic Aethalometers has the advantage that measurements between stations become more inter-comparable.
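
    As a rough illustration of how such a factor enters the data processing, the sketch below divides an Aethalometer attenuation coefficient by the reported Arctic factor of 3.45 to approximate an absorption coefficient. This is a simplification of the correction schemes referred to in the abstract, which typically include additional filter-loading and scattering terms; the function name and example value are hypothetical.

    ```python
    ARCTIC_C_FACTOR = 3.45  # correction factor for low-elevation Arctic stations (from the abstract)

    def absorption_from_attenuation(b_atn_inv_Mm: float, c_factor: float = ARCTIC_C_FACTOR) -> float:
        """Convert an Aethalometer attenuation coefficient (Mm^-1) to an aerosol
        light-absorption coefficient by dividing out the multiple-scattering factor C.
        Full correction schemes in the literature add loading/scattering terms as well."""
        return b_atn_inv_Mm / c_factor

    print(absorption_from_attenuation(6.9))  # 2.0 Mm^-1 for a hypothetical attenuation coefficient
    ```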

  5. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. Static shift is detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is fully automated, adds no cost, and avoids additional field work and manual processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy. All MT data from surface stations, seabed stations, and underground stations can be used in the inversion algorithm.

  6. Correction of clock errors in seismic data using noise cross-correlations

    NASA Astrophysics Data System (ADS)

    Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline

    2017-04-01

    Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock drifts (1 ms/day) as well as large clock jumps (6 min) are identified. The same method is applied to records of five OBS stations deployed within a radius of 150 km around La Réunion. The assumption of a linear clock drift is verified by correlating OBS for which GPS-based skew corrections were available with land stations. For two OBS stations without skew estimates, we find clock drifts of 0.9 ms/day and 0.4 ms/day. This study salvages expensive seismic records from remote regions that would be otherwise lost for seismicity or tomography studies.
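
    The core measurement in this clock-correction approach, determining the shift between a daily cross-correlation function and the reference stack, can be sketched as below. This is an assumed minimal implementation, not the authors' code; the function name and the synthetic pulse example are illustrative.

    ```python
    # Minimal sketch: estimate a daily clock error as the lag that best aligns a
    # daily noise cross-correlation function (CCF) with the reference CCF (RCF),
    # which is the average of all daily CCFs.
    import numpy as np

    def clock_error_seconds(daily_ccf: np.ndarray, reference_ccf: np.ndarray, dt: float) -> float:
        """Return the time shift (s) of daily_ccf relative to reference_ccf.

        dt is the sampling interval of the CCFs in seconds. A positive value means
        the daily CCF arrives later than the reference, i.e. the station clock is slow.
        """
        xcorr = np.correlate(daily_ccf, reference_ccf, mode="full")
        lag_samples = np.argmax(xcorr) - (len(reference_ccf) - 1)
        return lag_samples * dt

    # Toy example: a reference pulse and a copy delayed by 5 samples (0.05 s at 100 Hz).
    dt = 0.01
    ref = np.exp(-0.5 * ((np.arange(400) - 200) / 10.0) ** 2)
    daily = np.roll(ref, 5)
    print(clock_error_seconds(daily, ref, dt))  # ~0.05 s
    ```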

  7. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
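
    A hedged sketch of the weighting idea described above follows: arrivals are down-weighted when the station correction derived in the inversion is poorly constrained. The way the error terms are combined here is an assumption for illustration, not the actual PMEL-based implementation.

    ```python
    # Hedged illustration: combine measurement, model, and station-correction
    # uncertainties into a per-arrival weight, so that stations with consistent
    # (low-variance) corrections get more influence in the inversion.
    import math

    def arrival_weight(sigma_measurement: float,
                       sigma_model: float,
                       sigma_station_correction: float) -> float:
        """Inverse of the combined standard deviation of independent error terms (s)."""
        total_variance = (sigma_measurement ** 2 +
                          sigma_model ** 2 +
                          sigma_station_correction ** 2)
        return 1.0 / math.sqrt(total_variance)

    # Two hypothetical stations: same pick and model errors, different correction scatter.
    print(arrival_weight(0.1, 0.5, 0.05))  # consistent correction -> larger weight
    print(arrival_weight(0.1, 0.5, 0.60))  # scattered correction  -> smaller weight
    ```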

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, T; Mayeda, K; Hofstetter, A

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves) by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging from 0.02 to 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (Mw) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend Mw estimates to significantly smaller events which could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  9. Principal facts for gravity stations in Dixie; Fairview, and Stingaree valleys, Churchill and Pershing counties, Nevada

    USGS Publications Warehouse

    Schaefer, D.H.; Thomas, J.M.; Duffrin, B.G.

    1984-01-01

    During March through July 1979, gravity measurements were made at 300 stations in Dixie Valley, Nevada. In December 1981, 45 additional stations were added--7 in Dixie Valley, 23 in Fairview Valley, and 15 in Stingaree Valley. Most altitudes were determined by using altimeters or topographic maps. The gravity observations were made with a Worden temperature-controlled gravimeter with an initial scale factor of 0.0965 milliGal/scale division. Principal facts for each of the 345 stations are tabulated; they consist of latitude, longitude, altitude, observed gravity, free-air anomaly, terrain correction, and Bouguer anomaly values at a bedrock density of 2.67 grams/cu cm. (Lantz-PTT)
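
    The tabulated quantities chain together in the standard gravity reduction; a hedged sketch follows. The constants are the commonly used ones and the station values are hypothetical; this is illustrative only, not the USGS processing code.

    ```python
    # Sketch of the standard reduction chain behind the tabulated "principal facts".
    import math

    def normal_gravity_grs67(lat_deg: float) -> float:
        """GRS 1967 theoretical gravity at sea level, in mGal."""
        s2 = math.sin(math.radians(lat_deg)) ** 2
        s4 = s2 * s2
        return 978031.846 * (1.0 + 0.005278895 * s2 + 0.000023462 * s4)

    def free_air_anomaly(obs_gravity_mgal: float, lat_deg: float, elev_m: float) -> float:
        return obs_gravity_mgal - normal_gravity_grs67(lat_deg) + 0.3086 * elev_m

    def bouguer_anomaly(obs_gravity_mgal: float, lat_deg: float, elev_m: float,
                        terrain_corr_mgal: float, density_g_cm3: float = 2.67) -> float:
        """Complete Bouguer anomaly: free-air anomaly minus the Bouguer slab effect,
        plus the terrain correction."""
        slab = 0.04193 * density_g_cm3 * elev_m
        return free_air_anomaly(obs_gravity_mgal, lat_deg, elev_m) - slab + terrain_corr_mgal

    # Hypothetical station in Dixie Valley:
    print(bouguer_anomaly(obs_gravity_mgal=979780.0, lat_deg=39.9,
                          elev_m=1050.0, terrain_corr_mgal=1.8))
    ```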

  10. Station corrections for the Katmai Region Seismic Network

    USGS Publications Warehouse

    Searcy, Cheryl K.

    2003-01-01

    Most procedures for routinely locating earthquake hypocenters within a local network are constrained to using laterally homogeneous velocity models to represent the Earth's crustal velocity structure. As a result, earthquake location errors may arise due to actual lateral variations in the Earth's velocity structure. Station corrections can be used to compensate for heterogeneous velocity structure near individual stations (Douglas, 1967; Pujol, 1988). The HYPOELLIPSE program (Lahr, 1999) used by the Alaska Volcano Observatory (AVO) to locate earthquakes in Cook Inlet and the Aleutian Islands is a robust and efficient program that uses one-dimensional velocity models to determine hypocenters of local and regional earthquakes. This program does have the capability of utilizing station corrections within its earthquake location procedure. The velocity structures of Cook Inlet and Aleutian volcanoes very likely contain laterally varying heterogeneities. For this reason, the accuracy of earthquake locations in these areas will benefit from the determination and addition of station corrections. In this study, I determine corrections for each station in the Katmai region. The Katmai region is defined to lie between latitudes 57.5 and 59.00 degrees north and longitudes -154.00 and -156.00 degrees (see Figure 1) and includes Mount Katmai, Novarupta, Mount Martin, Mount Mageik, Snowy Mountain, Mount Trident, and Mount Griggs volcanoes. Station corrections were determined using the computer program VELEST (Kissling, 1994). VELEST inverts arrival time data for one-dimensional velocity models and station corrections using a joint hypocenter determination technique. VELEST can also be used to locate single events.

  11. Experimental and casework validation of ambient temperature corrections in forensic entomology.

    PubMed

    Johnson, Aidan P; Wallman, James F; Archer, Melanie S

    2012-01-01

    This paper expands on Archer (J Forensic Sci 49, 2004, 553), examining additional factors affecting ambient temperature correction of weather station data in forensic entomology. Sixteen hypothetical body discovery sites (BDSs) in Victoria and New South Wales (Australia), both in autumn and in summer, were compared to test whether the accuracy of correlation was affected by (i) length of correlation period; (ii) distance between BDS and weather station; and (iii) periodicity of ambient temperature measurements. The accuracy of correlations in data sets from real Victorian and NSW forensic entomology cases was also examined. Correlations increased weather data accuracy in all experiments, but significant differences in accuracy were found only between periodicity treatments. We found that a >5°C difference between average values of body in situ and correlation period weather station data was predictive of correlations that decreased the accuracy of ambient temperatures estimated using correlation. Practitioners should inspect their weather data sets for such differences. © 2011 American Academy of Forensic Sciences.
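
    The correlation approach evaluated here can be sketched as a simple linear regression between weather-station and body-discovery-site temperatures over a post-discovery correlation period, which is then used to correct the station record for the period the body was in situ. The code below is an assumed minimal workflow with made-up temperatures, not the authors' procedure.

    ```python
    # Minimal sketch: fit a linear correlation between weather-station and
    # body-discovery-site temperatures, then correct the station record.
    import numpy as np

    def fit_correction(station_temps: np.ndarray, site_temps: np.ndarray):
        """Least-squares slope and intercept mapping station temps to site temps."""
        slope, intercept = np.polyfit(station_temps, site_temps, 1)
        return slope, intercept

    def correct_station_record(station_temps, slope: float, intercept: float) -> np.ndarray:
        return slope * np.asarray(station_temps) + intercept

    # Hypothetical 10-day correlation period of daily means (deg C):
    station = np.array([18.2, 20.1, 22.5, 19.8, 17.4, 21.0, 23.3, 24.1, 20.6, 18.9])
    site    = np.array([19.0, 21.2, 23.0, 20.5, 18.1, 21.9, 24.0, 24.8, 21.3, 19.5])
    m, b = fit_correction(station, site)
    print(correct_station_record([16.5, 25.0], m, b))  # corrected estimates for two station readings
    ```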

  12. Principal facts for a gravity survey of the Gerlach Extension Known Geothermal Resource Area, Pershing County, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, D.L.; Kaufmann, H.E.

    1978-01-01

    During July 1977, fifty-one gravity stations were obtained in the Gerlach Extension Known Geothermal Resource Area and vicinity, northwestern Nevada. The gravity observations were made with a Worden gravimeter having a scale factor of about 0.5 milligal per division. No terrain corrections have been applied to these data. The earth tide correction was not used in drift reduction. The Geodetic Reference System 1967 formula (International Association of Geodesy, 1967) was used to compute theoretical gravity. Observed gravity is referenced to a base station in Gerlach, Nevada, having a value based on the Potsdam System of 1930. A density of 2.67 g per cm³ was used in computing the Bouguer anomaly.

  13. Principal facts for a gravity survey of the Fly Ranch Extension Known Geothermal Resource Area, Pershing County, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, D.L.; Kaufmann, H.E.

    1978-01-01

    During July 1977, forty-four gravity stations were obtained in the Fly Ranch Extension Known Geothermal Resource Area and vicinity, northwestern Nevada. The gravity observations were made with a Worden gravimeter having a scale factor of about 0.5 milligal per division. No terrain corrections have been applied to these data. The earth tide correction was not used in drift reduction. The Geodetic Reference System 1967 formula (International Association of Geodesy, 1967) was used to compute theoretical gravity. Observed gravity is referenced to a base station in Gerlach, Nevada, having a value based on the Potsdam System of 1930 (fig. 1). A density of 2.67 g per cm³ was used in computing the Bouguer anomaly.

  14. Examination of the site amplification factor of OBS and their application to magnitude estimation and ground-motion prediction for EEW

    NASA Astrophysics Data System (ADS)

    Hayashimoto, N.; Hoshiba, M.

    2013-12-01

    1. Introduction: Ocean bottom seismographs (OBSs) are useful for making Earthquake Early Warning (EEW) earlier. However, careful handling of their data is required because the installation environment of OBSs may differ from that of land stations. The site amplification factor is important both for estimating magnitudes and for predicting ground motions (e.g. seismic intensity) in EEW. In this presentation, we discuss the site amplification factor of OBSs in the Tonankai area of Japan from these two points of view.

    2. Examination of magnitude correction of OBS: In the EEW of JMA, the magnitude is estimated in real time from the maximum displacement amplitude. To provide a fast magnitude estimate, the magnitude-estimation algorithm switches from the P to the S formula (Meew(P) to Meew(S)) depending on the expected S-phase arrival (Kamigaichi, 2004). To estimate the magnitude correction for OBSs, we determine Meew(P) and Meew(S) at the OBSs and compare them with the JMA magnitude (Mjma). We find that Meew(S) at the OBSs is generally larger than Mjma by approximately 0.6. Slight differences in the spatial distribution of the Meew(S) amplification are also found among the OBSs. From numerical simulations, Nakamura et al. (MGR, submitted) pointed out that the oceanic layer and the low-velocity sediment layers cause large amplification in the low-frequency range (0.1-0.2 Hz) at OBSs. We conclude that the site effect of OBSs, characterized by such low-velocity sediment layers, causes this amplification of the magnitude.

    3. The frequency-dependent site factor of OBS estimated from Fourier spectral ratios and its application to the prediction of seismic intensity at land stations: We compare Fourier spectra of the S-wave portion at OBSs with those at adjacent land stations. Station pairs whose separation is smaller than 50 km are analyzed, and we obtain a spectral ratio of the land station (MIEH05 of the KiK-net/NIED) to the OBS (KMA01 of the DONET/JAMSTEC) of 5-20 for frequencies of 10-20 Hz for both horizontal and vertical components, whereas it is approximately 0.2 below 2 Hz for the horizontal component; this corresponds to the relative site amplification factor in the frequency domain. In addition, we compare the accuracy of the seismic intensity expected at land stations using the average seismic intensity difference with that obtained using the spectral ratio as the empirical amplification factor. For the station pair mentioned above, the RMS difference between measured and predicted seismic intensity is improved by about 38% when the spectral ratio is used as the amplification factor. These results indicate that the frequency-dependent site factor is a crucial factor for predicting seismic intensity from OBS data, and also show that OBSs can be used as front stations in the method for prediction of ground motion based on real-time monitoring (Hoshiba, 2013). Acknowledgement: Waveform data were obtained from the JMA network, DONET of the JAMSTEC, and K-NET and KiK-net of the NIED.

  15. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  16. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.
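
    A rough sketch of the stochastic treatment described in these two records is given below: the variance assigned to an interpolated atmospheric correction combines the (interpolated) estimation variances at the reference stations with a discrepancy term, and its inverse weights the correction when it is used as a pseudo-observation. The specific variance formula and the numbers are assumptions for illustration, not the paper's model.

    ```python
    # Hedged sketch of the idea: variance of an interpolated correction = weighted
    # reference-station estimation variances + station-to-user discrepancy term;
    # the inverse variance then weights the correction as a pseudo-observation.
    import numpy as np

    def interpolated_correction_variance(weights, station_variances, discrepancy_variance: float) -> float:
        """Variance of a weighted combination of independent station estimates,
        plus a term for the atmospheric discrepancy between stations and user."""
        weights = np.asarray(weights, dtype=float)
        station_variances = np.asarray(station_variances, dtype=float)
        return float(np.sum(weights ** 2 * station_variances) + discrepancy_variance)

    # Three reference stations, interpolation weights that sum to 1 (hypothetical values):
    w = np.array([0.5, 0.3, 0.2])
    var_est = np.array([4e-4, 6e-4, 9e-4])          # m^2, estimation variances
    var_total = interpolated_correction_variance(w, var_est, discrepancy_variance=1e-3)
    pseudo_obs_weight = 1.0 / var_total              # used to weight the pseudo-observation
    print(var_total, pseudo_obs_weight)
    ```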

  17. Designing an Ergonomically Correct CNC Workstation on a Shoe String Budget.

    ERIC Educational Resources Information Center

    Lightner, Stan

    2001-01-01

    Describes research to design and construct ergonomically correct work stations for Computer Numerical Control machine tools. By designing ergonomically correct work stations, industrial technology teachers help protect students from repetitive motion injuries. (Contains 12 references.) (JOW)

  18. Simulation of relationship between river discharge and sediment yield in the semi-arid river watersheds

    NASA Astrophysics Data System (ADS)

    Khaleghi, Mohammad Reza; Varvani, Javad

    2018-02-01

    The complex and variable nature of river sediment yield causes many problems in estimating long-term sediment yield and the sediment input into reservoirs. Sediment Rating Curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Because the regression equations of the SRCs are obtained by logarithmic back-transformation and contain few independent variables, they overestimate or underestimate the true sediment load of the rivers. To evaluate bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (ratio of estimated to observed sediment yield) and the precision index of the different bias correction factors of FAO, the Quasi-Maximum Likelihood Estimator (QMLE), Smearing, and the Minimum-Variance Unbiased Estimator (MVUE) with an LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of MVUE to the linear and mean-load rating curves has no statistically meaningful effect. The QMLE and smearing factors increased the estimation error of the mean-load rating curve but had no effect on the linear rating curve estimates.
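
    Two of the bias correction factors compared above can be written down from their standard definitions, as sketched below for a log10-transformed rating curve. These are the textbook forms (Ferguson-type QMLE and Duan's smearing estimator); the paper's exact formulations may differ, and the residuals used here are synthetic.

    ```python
    # Illustrative computation of two bias correction factors for a log10
    # sediment rating curve Qs = a * Q^b (standard definitions, not necessarily
    # the exact forms used in the paper).
    import numpy as np

    def qmle_factor(log10_residuals: np.ndarray) -> float:
        """Quasi-maximum-likelihood correction, exp(2.651 * s^2), with s^2 the
        residual variance of the log10 regression (Ferguson-type correction)."""
        s2 = np.var(log10_residuals, ddof=2)   # two fitted parameters (a, b)
        return float(np.exp(2.651 * s2))

    def smearing_factor(log10_residuals: np.ndarray) -> float:
        """Duan's non-parametric smearing estimator: mean of back-transformed residuals."""
        return float(np.mean(10.0 ** log10_residuals))

    rng = np.random.default_rng(0)
    resid = rng.normal(0.0, 0.25, size=60)     # hypothetical log10 residuals
    print(qmle_factor(resid), smearing_factor(resid))
    ```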

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken Tuna, Kevin Mayeda, Abraham Hofstetter, Rengin Gok, Gonca Orgulu, Niyazi Turkelli

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves) by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, they found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (Mw) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend Mw estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  20. A European-wide 222radon and 222radon progeny comparison study

    NASA Astrophysics Data System (ADS)

    Schmithüsen, Dominik; Chambers, Scott; Fischer, Bernd; Gilge, Stefan; Hatakka, Juha; Kazan, Victor; Neubert, Rolf; Paatero, Jussi; Ramonet, Michel; Schlosser, Clemens; Schmid, Sabine; Vermeulen, Alex; Levin, Ingeborg

    2017-04-01

    Although atmospheric 222radon (222Rn) activity concentration measurements are currently performed worldwide, they are being made by many different laboratories and with fundamentally different measurement principles, so compatibility issues can limit their utility for regional-to-global applications. Consequently, we conducted a European-wide 222Rn / 222Rn progeny comparison study in order to evaluate the different measurement systems in use, determine potential systematic biases between them, and estimate correction factors that could be applied to harmonize data for their use as a tracer in atmospheric applications. Two compact portable Heidelberg radon monitors (HRM) were moved around to run for at least 1 month at each of the nine European measurement stations included in this comparison. Linear regressions between parallel data sets were calculated, yielding correction factors relative to the HRM ranging from 0.68 to 1.45. A calibration bias between ANSTO (Australian Nuclear Science and Technology Organisation) two-filter radon monitors and the HRM of ANSTO / HRM = 1.11 ± 0.05 was found. Moreover, for the continental stations using one-filter systems that derive atmospheric 222Rn activity concentrations from measured atmospheric progeny activity concentrations, preliminary 214Po / 222Rn disequilibrium values were also estimated. Mean station-specific disequilibrium values between 0.8 at mountain sites (e.g. Schauinsland) and 0.9 at non-mountain sites for sampling heights around 20 to 30 m above ground level were determined. The respective corrections for calibration biases and disequilibrium derived in this study need to be applied to obtain a compatible European atmospheric 222Rn data set for use in quantitative applications, such as regional model intercomparison and validation or trace gas flux estimates with the radon tracer method.

  1. Principal facts for a gravity survey of the Double Hot Springs Known Geothermal Resource Area, Humboldt County, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, D.L.; Kaufmann, H.E.

    1978-01-01

    During July 1977, forty-nine gravity stations were obtained in the Double Hot Springs Known Geothermal Resource Area and vicinity, northwestern Nevada. The gravity observations were made with a Worden gravimeter having a scale factor of about 0.5 milligal per division. No terrain corrections have been applied to these data. The earth tide correction was not used in drift reduction. The Geodetic Reference System 1967 formula (International Association of Geodesy, 1967) was used to compute theoretical gravity.

  2. Automated general temperature correction method for dielectric soil moisture sensors

    NASA Astrophysics Data System (ADS)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks rely extensively on highly temperature-sensitive dielectric sensors because of their low cost, ease of use, and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective in soil moisture monitoring networks with different sensor setups and in networks that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions, and soil type, and without rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms, based on time domain reflectometry (TDR) measurements, to ThetaProbe ML2X, Stevens Hydra Probe II, and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from the SWC data was automated by combining a statistical inference technique with the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it was found that the actual daily average SWC is altered by the temperature effects of dielectric sensors, with an error comparable to the manufacturers' stated accuracy of ±1%.

  3. TMPA Products 3B42RT & 3B42V6: Evaluation and Application in Qinghai-Tibet Plateau

    NASA Astrophysics Data System (ADS)

    Hao, Z.; Sun, L.; Wang, J.

    2012-04-01

    Hydrological researchers in the Qinghai-Tibet Plateau are often hampered by a lack of station-gauged precipitation data because of the sparse and uneven distribution of local meteorological stations. Fortunately, alternative data can be obtained from the TRMM (Tropical Rainfall Measuring Mission) satellite. Preliminary evaluation and, where necessary, correction of TRMM satellite rainfall products is required to ensure reliability and suitability, considering that TRMM precipitation is unconventional and natural conditions in the Qinghai-Tibet Plateau are unusually complicated. The 3B42RT and 3B42V6 products from the TRMM Multisatellite Precipitation Analysis (TMPA) are evaluated in the northeastern Qinghai-Tibet Plateau using quality-controlled gauged daily precipitation from 50 stations as the benchmark precipitation set. The RT data overestimate the actual precipitation greatly, while V6 overestimates it only slightly. The RT data show different seasonal and inter-annual accuracies: summer and autumn show better accuracy than winter and spring, and wet years show higher accuracy than dry years. Latitude is believed to be an important factor influencing the accuracy of the satellite precipitation. Both RT and V6 can reflect the general pattern of the spatial distribution of precipitation, even though RT greatly overestimates the amount. A new parameter, the accumulated precipitation weight point (APWP), was introduced to describe the evolution of the temporal-spatial pattern of precipitation. The APWPs of both RT and V6 moved from south to north during the past decade, but both lie to the west of the APWP of the station-gauged precipitation. The V6 APWP track fits the gauged precipitation well, whereas the RT APWP track has exaggerated legs, indicating that the spatial distribution of RT precipitation experienced unreasonably sharp changes. A practical and operational procedure to correct the satellite precipitation data is developed. For RT there are two steps. Step 1, downscaling: the original daily precipitation is multiplied by the ratio of the monthly station-gauged precipitation to the monthly satellite precipitation. Step 2, objective analysis: Barnes/Cressman successive correction as well as Optimal Interpolation is applied to refine the daily results. Step 1 is unnecessary for the V6 correction. The accuracy of RT can be improved significantly while retaining as much spatial detail of the satellite precipitation as possible, whereas the V6 correction shows little improvement. In addition, the successive correction should not be iterated more than twice, and the ideal influence radius for Optimal Interpolation is R = 5. The original and corrected RT and V6 data sets were used as precipitation inputs to drive a newly developed hydrological model, DHM-SP, in the headwater region of the Yellow River to assess their applicability to simulating daily runoff. The V6 simulation result is acceptable even without correction. The bias in RT is too large for RT to be used as model input directly, while quite satisfactory results can be derived from the corrected RT input. The simulation results of the corrected RT are even better than those of the station-gauged and V6 inputs.
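
    The two-step RT correction outlined above can be sketched as follows; this is an interpretation of the abstract (in particular the direction of the monthly scaling ratio and the simplified Cressman scheme), not the authors' code, and all variable names and numbers are illustrative.

    ```python
    # Hedged sketch of the two-step RT correction: (1) rescale daily RT by the
    # ratio of monthly gauged to monthly satellite totals, then (2) refine the
    # daily field with a Cressman-type successive correction (influence radius
    # R = 5 grid units, at most two iterations).
    import numpy as np

    def monthly_ratio_downscale(daily_rt, monthly_gauge: float, monthly_rt: float) -> np.ndarray:
        """Step 1: scale daily satellite totals so the month matches the gauges."""
        return np.asarray(daily_rt, dtype=float) * (monthly_gauge / monthly_rt)

    def cressman_correction(grid_xy, grid_vals, obs_xy, obs_vals, radius=5.0, iterations=2):
        """Step 2: successive correction of a gridded field toward point observations."""
        grid_xy = np.asarray(grid_xy, float)
        grid_vals = np.array(grid_vals, dtype=float)
        obs_xy = np.asarray(obs_xy, float)
        obs_vals = np.asarray(obs_vals, float)
        for _ in range(iterations):
            # background value at each observation = nearest grid point (simplification)
            d_obs = ((obs_xy[:, None, :] - grid_xy[None, :, :]) ** 2).sum(-1)   # (n_obs, n_grid)
            resid = obs_vals - grid_vals[np.argmin(d_obs, axis=1)]
            # Cressman weights and weighted correction at every grid point
            d2 = d_obs.T                                                        # (n_grid, n_obs)
            w = np.clip((radius**2 - d2) / (radius**2 + d2), 0.0, None)
            wsum = w.sum(axis=1)
            safe = np.where(wsum > 0, wsum, 1.0)
            grid_vals += np.where(wsum > 0, (w * resid).sum(axis=1) / safe, 0.0)
        return grid_vals

    daily = monthly_ratio_downscale([4.0, 0.0, 9.5], monthly_gauge=60.0, monthly_rt=90.0)
    print(daily)  # daily RT scaled down by 2/3

    grid_xy = np.array([[x, y] for x in range(4) for y in range(4)], float)
    field = np.full(16, 10.0)                   # daily RT field (mm) after step 1
    obs_xy = np.array([[1.0, 1.0], [2.5, 2.0]])
    obs = np.array([6.0, 8.0])                  # gauged daily totals
    print(cressman_correction(grid_xy, field, obs_xy, obs))
    ```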

  4. Calibration of the local magnitude scale (ML) for Peru

    NASA Astrophysics Data System (ADS)

    Condori, Cristobal; Tavera, Hernando; Marotta, Giuliano Sant'Anna; Rocha, Marcelo Peres; França, George Sand

    2017-07-01

    We propose a local magnitude scale (ML) for Peru, based on the original Richter definition, using 210 seismic events between 2011 and 2014 recorded by 35 broadband stations of the National Seismic Network operated by the Geophysical Institute of Peru. In the solution model, we considered 1057 traces of maximum amplitudes recorded on the vertical channel of simulated Wood-Anderson seismograms for shallow events (depths between 0 and 60 km) and hypocentral distances less than 600 km. The attenuation factor was evaluated in terms of geometrical spreading and anelastic attenuation coefficients. The magnitude ML was defined as ML = log10(AWA) + 1.5855 log10(R/100) + 0.0008(R - 100) + 3 ± S, where AWA is the displacement amplitude in millimeters (Wood-Anderson), R is the hypocentral distance (km), and S is the station correction. The ML results correlate well with the mb, Ms, and Mw values reported by the ISC and NEIC. The anelastic attenuation curve obtained behaves similarly to those of other highly seismic regions. Station corrections were determined for all stations during the regression analysis, resulting in values ranging between -0.97 and +0.73 and suggesting a strong influence of local site effects on amplitude.
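
    Since the abstract quotes the calibrated relation explicitly, it can be implemented directly, as in the sketch below. The station correction in the second example is hypothetical; real corrections for this network range from -0.97 to +0.73.

    ```python
    # Direct implementation of the magnitude relation quoted in the abstract.
    import math

    def local_magnitude_peru(amp_wa_mm: float, hypo_dist_km: float, station_corr: float = 0.0) -> float:
        """ML = log10(A_WA) + 1.5855*log10(R/100) + 0.0008*(R - 100) + 3 + S."""
        return (math.log10(amp_wa_mm)
                + 1.5855 * math.log10(hypo_dist_km / 100.0)
                + 0.0008 * (hypo_dist_km - 100.0)
                + 3.0
                + station_corr)

    # A 1 mm Wood-Anderson amplitude at 100 km gives ML = 3, the scale's anchor point.
    print(local_magnitude_peru(1.0, 100.0))          # 3.0
    print(local_magnitude_peru(0.35, 240.0, -0.12))  # hypothetical event and station correction
    ```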

  5. Exhaustive testing of recent oceanic and Earth tidal models using combination of tide gravity data from GGP and ICET data banks

    NASA Astrophysics Data System (ADS)

    Kopaev, A.; Ducarme, B.

    2003-04-01

    We have used the most recent oceanic tidal models, e.g. FES'99/02, GOT'00, CSR'4, NAO'99 and TPXO'5/6, for tidal gravity loading computations with the LOAD'97 software. The resulting loading vectors were compared against each other in different regions located at different distances from the sea coast. Results indicate good agreement for the majority of models at distances larger than 100-200 km, except in some regions where mostly CSR'4 and TPXO have problems. Outlying models were rejected for these regions, and mean loading vectors were calculated for more than 200 tidal gravity stations from the GGP and ICET data banks, representing the state of the art of tidal loading correction. The corresponding errors in the δ-factors and phase lags are generally smaller than 0.1% and 0.05°, respectively, which means that loading corrections are not the real problem; more attention should instead be paid to calibration values and the accuracy of phase lag determination. Corrected values agree very well with the DDW model values (within 0.2%) for the majority of GGP stations, whereas some very good ICET tidal gravity stations (mainly from the Chinese network) clearly show statistically significant anomalies (up to 0.5%) that seem to be connected neither with calibration problems nor with loading problems. Various possible reasons, both instrumental and geophysical, will be presented and discussed.

  6. Human Factors Research Under Ground-Based and Space Conditions. Part 1

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Session TP2 includes short reports concerning: (1) Human Factors Engineering of the International Space Station Human Research Facility; (2) Structured Methods for Identifying and Correcting Potential Human Errors in Space Operation; (3) An Improved Procedure for Selecting Astronauts for Extended Space Missions; (4) The NASA Performance Assessment Workstation: Cognitive Performance During Head-Down Bedrest; (5) Cognitive Performance Aboard the Life and Microgravity Spacelab; and (6) Psychophysiological Reactivity Under MIR-Simulation and Real Micro-G.

  7. Network capability estimation. Vela network evaluation and automatic processing research. Technical report. [NETWORTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, N.S.

    1976-09-24

    NETWORTH is a computer program which calculates the detection and location capability of seismic networks. A modified version of NETWORTH has been developed. This program has been used to evaluate the effect of station 'downtime', the signal amplitude variance, and the station detection threshold upon network detection capability. In this version all parameters may be changed separately for individual stations. The capability of using signal amplitude corrections has been added. The function of amplitude corrections is to remove possible bias in the magnitude estimate due to inhomogeneous signal attenuation. These corrections may be applied to individual stations, individual epicenters, or individual station/epicenter combinations. An option has been added to calculate the effect of station 'downtime' upon network capability. This study indicates that, if capability loss due to detection errors can be minimized, then station detection threshold and station reliability will be the fundamental limits to network performance. A baseline evaluation of a thirteen-station network has been performed. These stations are as follows: Alaskan Long Period Array (ALPA); Ankara (ANK); Chiang Mai (CHG); Korean Seismic Research Station (KSRS); Large Aperture Seismic Array (LASA); Mashhad (MSH); Mundaring (MUN); Norwegian Seismic Array (NORSAR); New Delhi (NWDEL); Red Knife, Ontario (RK-ON); Shillong (SHL); Taipei (TAP); and White Horse, Yukon (WH-YK).

  8. Near-station terrain corrections for gravity data by a surface-integral technique

    USGS Publications Warehouse

    Gettings, M.E.

    1982-01-01

    A new method of computing gravity terrain corrections by use of a digitizer and digital computer can result in substantial savings in the time and manual labor required to perform such corrections by conventional manual ring-chart techniques. The method is typically applied to estimate terrain effects for topography near the station, for example within 3 km of the station, although it has been used successfully to a radius of 15 km to estimate corrections in areas where topographic mapping is poor. Points (about 20) that define topographic maxima, minima, and changes in the slope gradient are picked on the topographic map, within the desired radius of correction about the station. Particular attention must be paid to the area immediately surrounding the station to ensure a good topographic representation. The horizontal and vertical coordinates of these points are entered into the computer, usually by means of a digitizer. The computer then fits a multiquadric surface to the input points to form an analytic representation of the surface. By means of the divergence theorem, the gravity effect of an interior closed solid can be expressed as a surface integral, and the terrain correction is calculated by numerical evaluation of the integral over the surfaces of a cylinder, the vertical sides of which are at the correction radius about the station, the flat bottom surface at the topographic minimum, and the upper surface given by the multiquadric equation. The method has been tested with favorable results against models for which an exact result is available and against manually computed field-station locations in areas of rugged topography. By increasing the number of points defining the topographic surface, any desired degree of accuracy can be obtained. The method is more objective than manual ring-chart techniques because no average compartment elevations need be estimated.
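
    The first stage of the method, fitting a multiquadric surface through the roughly 20 digitized topographic points, can be sketched as below. The basis function (a Hardy multiquadric with a small smoothing constant) and the synthetic points are assumptions for illustration; the surface-integral evaluation of the gravity effect over the bounding cylinder is not reproduced here.

    ```python
    # Hedged sketch: fit z(x, y) = sum_j c_j * sqrt((x - x_j)^2 + (y - y_j)^2 + eps^2)
    # through digitized topographic points (the analytic surface used by the method).
    import numpy as np

    def fit_multiquadric(xy: np.ndarray, z: np.ndarray, eps: float = 1.0) -> np.ndarray:
        """Solve for one multiquadric coefficient per data point."""
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        basis = np.sqrt(d ** 2 + eps ** 2)
        return np.linalg.solve(basis, z)

    def evaluate_multiquadric(coeffs, xy_nodes, xy_query, eps: float = 1.0) -> np.ndarray:
        d = np.linalg.norm(np.asarray(xy_query, float)[:, None, :] - xy_nodes[None, :, :], axis=-1)
        return np.sqrt(d ** 2 + eps ** 2) @ coeffs

    # ~20 hypothetical digitized points (km) with elevations (m) around a station at (0, 0):
    rng = np.random.default_rng(1)
    pts = rng.uniform(-3.0, 3.0, size=(20, 2))
    elev = 1200.0 + 80.0 * np.sin(pts[:, 0]) + 50.0 * pts[:, 1]
    c = fit_multiquadric(pts, elev)
    print(evaluate_multiquadric(c, pts, [[0.0, 0.0]]))  # interpolated elevation at the station
    ```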

  9. 76 FR 4224 - Airworthiness Directives; The Boeing Company Model 767-300 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... backup structure at the lower VHF antenna cutout at station 1197 + 99 between stringers 39 left and 39... periphery of the VHF antenna baseplate at station 1197 + 99. We are issuing this AD to detect and correct... lower VHF antenna cutout at station 1197 + 99 between stringers 39L and 39R, and corrective actions if...

  10. Propulsion requirements for communications satellites.

    NASA Technical Reports Server (NTRS)

    Isley, W. C.; Duck, K. I.

    1972-01-01

    The concept of characteristic thrust is introduced herein as a means of classifying propulsion system tasks related particularly to geosynchronous communications spacecraft. Approximate analytical models are developed to permit estimation of characteristic thrust for injection error corrections, orbit angle re-location, north-south station keeping, east-west station keeping, spin axis precession control, attitude rate damping, and orbit raising applications. Performance assessment factors are then outlined in terms of characteristic power, characteristic weight, and characteristic volume envelope, which are related to the characteristic thrust. Finally, selected performance curves are shown for power as a function of spacecraft weight, including the influence of duty cycle on north-south station keeping, a 90 degree orbit angle re-location in 14 days, and a comparison of orbit raising tasks from low and intermediate orbits to a final geosynchronous station. Power requirements range from less than 75 watts for north-south station keeping on small payloads up to greater than 15 kW for a 180 day orbit raising mission including a 28.5 degree plane change.

  11. Investigation of the ionospheric Faraday rotation for use in orbit corrections

    NASA Technical Reports Server (NTRS)

    Llewellyn, S. K.; Bent, R. B.; Nesterczuk, G.

    1974-01-01

    The possibility of mapping the Faraday factors on a worldwide basis was examined as a simple method of representing the conversion factors for any possible user. However, this does not seem feasible. The complex relationship between the true magnetic coordinates and the geographic latitude, longitude, and azimuth angles eliminates the possibility of setting up some simple tables that would yield worldwide results of sufficient accuracy. Tabular results for specific stations can easily be produced or could be represented in graphic form.

  12. Collision in space.

    PubMed

    Ellis, S R

    2000-01-01

    On June 25, 1997, the Russian supply spacecraft Progress 234 collided with the Mir space station, rupturing Mir's pressure hull, throwing it into an uncontrolled attitude drift, and nearly forcing evacuation of the station. Like many high-profile accidents, this collision was the consequence of a chain of events leading to the final piloting errors that were its immediate cause. The discussion in this article does not resolve the relative contributions of the actions and decisions in this chain. Neither does it suggest corrective measures, many of which are straightforward and have already been implemented by the National Aeronautics and Space Administration (NASA) and the Russian Space Agency. Rather, its purpose is to identify the human factors that played a pervasive role in the incident. Workplace stress, fatigue, and sleep deprivation were identified by NASA as contributory factors in the Mir-Progress collision (Culbertson, 1997; NASA, forthcoming), but other contributing factors, such as requiring crew to perform difficult tasks for which their training is not current, could potentially become important factors in future situations.

  13. Collision in space

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.

    2000-01-01

    On June 25, 1997, the Russian supply spacecraft Progress 234 collided with the Mir space station, rupturing Mir's pressure hull, throwing it into an uncontrolled attitude drift, and nearly forcing evacuation of the station. Like many high-profile accidents, this collision was the consequence of a chain of events leading to the final piloting errors that were its immediate cause. The discussion in this article does not resolve the relative contributions of the actions and decisions in this chain. Neither does it suggest corrective measures, many of which are straightforward and have already been implemented by the National Aeronautics and Space Administration (NASA) and the Russian Space Agency. Rather, its purpose is to identify the human factors that played a pervasive role in the incident. Workplace stress, fatigue, and sleep deprivation were identified by NASA as contributory factors in the Mir-Progress collision (Culbertson, 1997; NASA, forthcoming), but other contributing factors, such as requiring crew to perform difficult tasks for which their training is not current, could potentially become important factors in future situations.

  14. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363

  15. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    PubMed

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
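
    The station-based bias factor described in these two records can be sketched as a ratio of accumulated gauge to accumulated CMORPH rainfall over each sequential window, as below. The 7-day window follows the abstract; the fallback for nearly dry windows and all numbers are assumptions, not the paper's exact formulation.

    ```python
    # Hedged sketch: one multiplicative bias factor per sequential 7-day window,
    # computed as accumulated gauge rainfall over accumulated CMORPH rainfall.
    import numpy as np

    def sequential_window_bias_factors(gauge_daily, cmorph_daily,
                                       window: int = 7, min_depth_mm: float = 5.0) -> np.ndarray:
        """Windows with almost no satellite rain fall back to a neutral factor of 1."""
        gauge_daily = np.asarray(gauge_daily, float)
        cmorph_daily = np.asarray(cmorph_daily, float)
        n_windows = len(gauge_daily) // window
        factors = np.ones(n_windows)
        for i in range(n_windows):
            s = slice(i * window, (i + 1) * window)
            sat_sum = cmorph_daily[s].sum()
            if sat_sum >= min_depth_mm:
                factors[i] = gauge_daily[s].sum() / sat_sum
        return factors

    gauge  = np.array([0, 12, 3, 0, 7, 0, 5,   0, 0, 1, 0, 0, 2, 0], float)
    cmorph = np.array([1, 18, 6, 0, 9, 2, 8,   0, 1, 3, 0, 1, 4, 0], float)
    print(sequential_window_bias_factors(gauge, cmorph))  # ~[0.61, 0.33]
    ```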

  16. Tsunami Source Estimate for the 1960 Chilean Earthquake from Near- and Far-Field Observations

    NASA Astrophysics Data System (ADS)

    Ho, T.; Satake, K.; Watada, S.; Fujii, Y.

    2017-12-01

    The tsunami source of the 1960 Chilean earthquake was estimated from near- and far-field tsunami data. The 1960 Chilean earthquake is the greatest earthquake instrumentally recorded to date. It caused a large tsunami that was recorded by 13 near-field tide gauges in South America and 84 far-field stations around the Pacific Ocean on the coasts of North America, Asia, and Oceania. The near-field stations had previously been used to estimate the tsunami source [Fujii and Satake, Pageoph, 2013]. However, far-field tsunami waveforms have not been utilized because of the discrepancy between observed and simulated waveforms: the observed waveforms at far-field stations systematically arrive later than the simulated waveforms. This phenomenon has also been observed in the tsunamis of the 2004 Sumatra earthquake, the 2010 Chilean earthquake, and the 2011 Tohoku earthquake. Recently, the factors responsible for the travel time delay have been explained [Watada et al., JGR, 2014; Allgeyer and Cummins, GRL, 2014], so the far-field data are now usable for tsunami source estimation. The phase correction method [Watada et al., JGR, 2014] converts tsunami waveforms computed with the linear long-wave approximation into dispersive waveforms that account for the effects of the elasticity of the Earth and ocean, ocean density stratification, and the gravitational potential change associated with tsunami propagation. We apply this method to correct the computed waveforms. For the preliminary inversion of the initial sea surface height, we use 12 near-field stations and 63 far-field stations located in South and North America, on islands in the Pacific Ocean, and in Oceania. The tsunami source estimated from the near-field stations is compared with the result from both near- and far-field stations. The two estimated sources show a similar pattern: a large sea surface displacement concentrated south of the epicenter close to the coast and extending further south. However, the source estimated from the near-field stations alone shows larger displacement than the one estimated from the combined data set.

  17. Adaptive topographic mass correction for satellite gravity and gravity gradient data

    NASA Astrophysics Data System (ADS)

    Holzrichter, Nils; Szwillus, Wolfgang; Götze, Hans-Jürgen

    2014-05-01

    Subsurface modelling with gravity data requires a reliable topographic mass correction; this mandatory step has been a standard procedure for decades. However, the original methods were developed for local terrestrial surveys and therefore often rely on defaults such as a limited correction area of 167 km around an observation point, resampling the topography as a function of distance to the station, or disregarding the curvature of the Earth. New satellite gravity data (e.g., GOCE) can be used for large-scale lithospheric modelling, with investigation areas spanning thousands of kilometres and measurements located at the flight height of the satellite (~250 km for GOCE). The standard definition of the correction area and of the grid spacing around an observation point was not developed for stations at such heights or for areas of these dimensions, which calls for a re-evaluation of the defaults used for topographic correction. We developed an algorithm that resamples the topography with an adaptive approach: instead of resampling the topography according to its distance from the station, the grids are resampled according to their influence at the station. The only value the user has to define is the desired accuracy of the topographic correction; it is not necessary to define a grid spacing or a limited correction area. Furthermore, the algorithm calculates the topographic mass response with spherically shaped polyhedral bodies. We show examples for local and global gravity datasets, compare the results of the topographic mass correction with existing approaches, and provide suggestions for how satellite gravity and gradient data should be corrected.
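    The adaptive idea can be illustrated with a quadtree-style refinement that subdivides a topographic cell only while subdivision still changes its computed attraction by more than the requested accuracy. The sketch below uses a flat-Earth point-mass approximation purely for brevity; the actual algorithm evaluates spherically shaped polyhedral bodies, and the function names, the 100 m size floor, and the assumption that the station never coincides with a cell centre are all illustrative.

    ```python
    import numpy as np

    G, RHO = 6.674e-11, 2670.0      # SI units; standard reduction density

    def gz_point_mass(x, y, size, h, station):
        """Vertical attraction (m/s^2) of a size-by-size cell of mean height h,
        collapsed to a point mass at its centre (flat-Earth shortcut)."""
        sx, sy, sz = station
        dz = sz - 0.5 * h
        r = np.sqrt((x - sx) ** 2 + (y - sy) ** 2 + dz ** 2)
        return G * RHO * size * size * h * dz / r ** 3

    def adaptive_effect(x, y, size, h, station, tol, min_size=100.0):
        """Refine a cell until further subdivision changes its effect by < tol."""
        coarse = gz_point_mass(x, y, size, h, station)
        half, q = size / 2.0, size / 4.0
        quads = [(x - q, y - q), (x - q, y + q), (x + q, y - q), (x + q, y + q)]
        fine = sum(gz_point_mass(qx, qy, half, h, station) for qx, qy in quads)
        if abs(fine - coarse) < tol or half < min_size:
            return fine
        return sum(adaptive_effect(qx, qy, half, h, station, tol, min_size)
                   for qx, qy in quads)
    ```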

  18. 47 CFR 13.201 - Qualifying for a commercial operator license or endorsement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... satisfactory knowledge of FCC rules and must have the ability to send correctly and receive correctly spoken...) Applicant must have one year of experience in sending and receiving public correspondence by radiotelegraph at a public coast station, a ship station, or both. (2) Second Class Radiotelegraph Operator's...

  19. 47 CFR 13.201 - Qualifying for a commercial operator license or endorsement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... satisfactory knowledge of FCC rules and must have the ability to send correctly and receive correctly spoken...) Applicant must have one year of experience in sending and receiving public correspondence by radiotelegraph at a public coast station, a ship station, or both. (2) Second Class Radiotelegraph Operator's...

  20. Deep Space Station (DSS-13) automation demonstration

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Lorden, G.

    1980-01-01

    The database collected during a six-month demonstration of an automated Deep Space Station (DSS 13), run unattended and remotely controlled, is summarized. During this period, DSS 13 received spacecraft telemetry data from the Voyager, Pioneer 10 and 11, and Helios projects. Corrective and preventive maintenance are reported by subsystem, including the traditional subsystems and those added for the automation demonstration. Operations and maintenance data for a comparable manned Deep Space Station (DSS 11) are also presented for comparison. The data suggest that unattended operation may reduce maintenance manhours in addition to reducing operator manhours. Corrective maintenance for the unmanned station was about one third of that at the manned station, and preventive maintenance was about one half.

  1. Performance Trials of an Integrated Loran/GPS/IMU Navigation System, Part 1

    DTIC Science & Technology

    2005-01-27

    ...differences are used to correct the grid values in the absence of a local ASF monitor station. Performance of the receiver using different ASF grids and interpolation techniques... The United States is served by the North American Loran-C system made up of 29 stations organized into 10 chains (see Figure 1). Loran coverage is...

  2. The new Landsat 8 potential for remote sensing of colored dissolved organic matter (CDOM)

    USGS Publications Warehouse

    Slonecker, Terry; Jones, Daniel K.; Pellerin, Brian A.

    2016-01-01

    Due to a combination of factors, such as a new coastal/aerosol band and improved radiometric sensitivity of the Operational Land Imager aboard Landsat 8, the atmospherically-corrected Surface Reflectance product for Landsat data, and the growing availability of corrected fDOM data from U.S. Geological Survey gaging stations, moderate-resolution remote sensing of fDOM may now be achievable. This paper explores the background of previous efforts and shows preliminary examples of the remote sensing and data relationships between corrected fDOM and Landsat 8 reflectance values. Although preliminary results before and after Hurricane Sandy are encouraging, more research is needed to explore the full potential of Landsat 8 to continuously map fDOM in a number of water profiles.

  3. The new Landsat 8 potential for remote sensing of colored dissolved organic matter (CDOM).

    PubMed

    Slonecker, E Terrence; Jones, Daniel K; Pellerin, Brian A

    2016-06-30

    Due to a combination of factors, such as a new coastal/aerosol band and improved radiometric sensitivity of the Operational Land Imager aboard Landsat 8, the atmospherically-corrected Surface Reflectance product for Landsat data, and the growing availability of corrected fDOM data from U.S. Geological Survey gaging stations, moderate-resolution remote sensing of fDOM may now be achievable. This paper explores the background of previous efforts and shows preliminary examples of the remote sensing and data relationships between corrected fDOM and Landsat 8 reflectance values. Although preliminary results before and after Hurricane Sandy are encouraging, more research is needed to explore the full potential of Landsat 8 to continuously map fDOM in a number of water profiles. Published by Elsevier Ltd.

  4. Determining the sensitivity of the amplitude source location (ASL) method through active seismic sources: An example from Te Maari Volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Walsh, Braden; Jolly, Arthur; Procter, Jonathan

    2017-04-01

    Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. Applying a medium velocity of 1500 m/s and an attenuation value of Q = 60 for surface waves, along with amplification factors computed from regional earthquakes, the ASL method produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through sensitivity tests on the input parameters, we show that the velocity and attenuation models have moderate to strong influences on the location results but can be easily constrained; changes in location are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations both laterally and in depth. Calibrating the amplification factors using the active seismic source events reduced the location errors for the sources by up to 50%.
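    To illustrate how such a location scheme works, the sketch below grid-searches candidate source positions under an isotropic surface-wave decay model A_i = A0·exp(-B·r_i)/r_i·S_i with B = πf/(Qβ), solving for the source amplitude at each node. The function names, the grid, and the default f = 5 Hz are assumptions; only β = 1500 m/s and Q = 60 come from the abstract.

    ```python
    import numpy as np

    def asl_locate(station_xyz, amplitudes, site_factors, grid_nodes,
                   freq=5.0, beta=1500.0, q_factor=60.0):
        """Amplitude source location by grid search.

        station_xyz : (N, 3) station coordinates (m)
        amplitudes  : (N,) observed amplitudes
        site_factors: (N,) station amplification factors
        grid_nodes  : iterable of candidate (x, y, z) source positions
        Returns (best_node, source_amplitude, misfit) for the node that best
        fits the assumed decay model in a least-squares sense.
        """
        station_xyz = np.asarray(station_xyz, float)
        b = np.pi * freq / (q_factor * beta)
        y = np.log(np.asarray(amplitudes, float) / np.asarray(site_factors, float))
        best = (None, None, np.inf)
        for node in grid_nodes:
            r = np.linalg.norm(station_xyz - np.asarray(node, float), axis=1)
            decay = -b * r - np.log(r)          # log of exp(-B r) / r
            ln_a0 = np.mean(y - decay)          # closed-form source term
            misfit = np.sum((y - decay - ln_a0) ** 2)
            if misfit < best[2]:
                best = (node, float(np.exp(ln_a0)), float(misfit))
        return best
    ```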

  5. Regional ionospheric model for improvement of navigation position with EGNOS

    NASA Astrophysics Data System (ADS)

    Swiatek, Anna; Tomasik, Lukasz; Jaworski, Leszek

    The problem of insufficient accuracy of the EGNOS correction over the territory of Poland, located at the edge of the EGNOS coverage, is well known. The EEI PECS project (EGNOS EUPOS Integration) aimed to improve the EGNOS correction by using GPS observations from the Polish ASG-EUPOS stations. An ionospheric delay parameter is part of the EGNOS correction. A comparative analysis of TEC values obtained from EGNOS and from regional permanent GNSS stations showed a systematic shift: the TEC from the EGNOS correction is underestimated relative to the computed regional TEC value. New, improved corrections computed from the regional model were substituted for the EGNOS corrections in the corresponding message. Dynamic measurements carried out with the Mobile GPS Laboratory (MGL) showed an improvement of the navigation position when the regional TEC model was used.

  6. Networked differential GPS system

    NASA Technical Reports Server (NTRS)

    Sheynblat, Leonid (Inventor); Kalafus, Rudolph M. (Inventor); Loomis, Peter V. W. (Inventor); Mueller, K. Tysen (Inventor)

    1994-01-01

    An embodiment of the present invention relates to a worldwide network of differential GPS reference stations (NDGPS) that continually track the entire GPS satellite constellation and provide interpolations of reference-station corrections tailored for particular user locations between the reference stations. Each reference station takes real-time ionospheric measurements with codeless cross-correlating dual-frequency carrier GPS receivers and computes real-time orbit ephemerides independently. An absolute pseudorange correction (PRC) is defined for each satellite as a function of a particular user's location. A map of the function is constructed, with iso-PRC contours. The network measures the PRCs at a few points, the so-called reference stations, and constructs an iso-PRC map for each satellite. Corrections are interpolated for each user's site on a subscription basis. The data bandwidths are kept to a minimum by transmitting information that cannot be obtained directly by the user and by updating information by classes, according to how quickly each class of data goes stale given the realities of the GPS system. Sub-decimeter-level kinematic accuracy over a given area is accomplished by establishing a mini-fiducial network.
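    The patent builds iso-PRC maps per satellite; a much simpler stand-in for the interpolation step is inverse-distance weighting of the PRCs measured at nearby reference stations, sketched below. The function name and the distance exponent are assumptions, not the patented scheme.

    ```python
    import numpy as np

    def interpolate_prc(user_xy, ref_xy, ref_prc, power=2.0):
        """Inverse-distance-weighted pseudorange correction for one satellite.

        user_xy : (2,) user position, same planar units as ref_xy
        ref_xy  : (N, 2) reference-station positions
        ref_prc : (N,) pseudorange corrections (m) measured at those stations
        """
        ref_xy, ref_prc = np.asarray(ref_xy, float), np.asarray(ref_prc, float)
        d = np.linalg.norm(ref_xy - np.asarray(user_xy, float), axis=1)
        if d.min() < 1e-3:                    # user effectively at a station
            return float(ref_prc[d.argmin()])
        w = d ** -power
        return float(np.sum(w * ref_prc) / np.sum(w))
    ```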

  7. 47 CFR 22.371 - Disturbance of AM broadcast station antenna patterns.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Disturbance of AM broadcast station antenna....371 Disturbance of AM broadcast station antenna patterns. Public Mobile Service licensees that... necessary to correct disturbance of the AM station antenna pattern which causes operation outside of the...

  8. 47 CFR 22.371 - Disturbance of AM broadcast station antenna patterns.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Disturbance of AM broadcast station antenna....371 Disturbance of AM broadcast station antenna patterns. Public Mobile Service licensees that... necessary to correct disturbance of the AM station antenna pattern which causes operation outside of the...

  9. 47 CFR 22.371 - Disturbance of AM broadcast station antenna patterns.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Disturbance of AM broadcast station antenna....371 Disturbance of AM broadcast station antenna patterns. Public Mobile Service licensees that... necessary to correct disturbance of the AM station antenna pattern which causes operation outside of the...

  10. 47 CFR 22.371 - Disturbance of AM broadcast station antenna patterns.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Disturbance of AM broadcast station antenna....371 Disturbance of AM broadcast station antenna patterns. Public Mobile Service licensees that... necessary to correct disturbance of the AM station antenna pattern which causes operation outside of the...

  11. Surface-wave amplitude analysis for array data with non-linear waveform fitting: Toward high-resolution attenuation models of the upper mantle

    NASA Astrophysics Data System (ADS)

    Hamada, K.; Yoshizawa, K.

    2013-12-01

    Anelastic attenuation of seismic waves provides valuable information on temperature and water content in the Earth's mantle. While seismic velocity models have been investigated by many researchers, anelastic attenuation (or Q) models have yet to be investigated in detail, mainly due to the intrinsic difficulties and uncertainties in the amplitude analysis of observed seismic waveforms. To increase the horizontal resolution of surface-wave attenuation models on a regional scale, we have developed a new method of fully non-linear waveform fitting to measure inter-station phase velocities and amplitude ratios simultaneously, using the Neighborhood Algorithm (NA) as a global optimizer. The model parameter space (perturbations of phase speed and amplitude ratio) is explored to fit two observed waveforms on a common great-circle path by perturbing both the phase and the amplitude of the fundamental-mode surface waves. This method has been applied to observed waveform data of the USArray from 2007 to 2008, and a large number of inter-station amplitude and phase-speed measurements have been collected in a period range from 20 to 200 seconds. We have constructed preliminary phase-speed and attenuation models using the observed phase and amplitude data, with careful consideration of the effects of elastic focusing and of station correction factors for the amplitude data. The phase-velocity models correlate well with conventional tomographic results in North America on a large scale, e.g., a significant slow-velocity anomaly in the volcanic regions of the western United States. The preliminary surface-wave attenuation results achieve a better variance reduction when the amplitude data are inverted for attenuation models in conjunction with corrections for receiver factors. We have also taken into account the amplitude correction for elastic focusing based on geometrical ray theory, but its effect on the final model is somewhat limited, and our attenuation model shows an anti-correlation with the phase-velocity models; i.e., lower attenuation is found in slower-velocity areas, which cannot readily be explained by temperature effects alone. Some earlier global-scale studies (e.g., Dalton et al., JGR, 2006) indicated that ray-theoretical focusing corrections on amplitude data tend to eliminate such an anti-correlation between phase speed and attenuation, but this seems not to work sufficiently well for our regional-scale model, which is affected by stronger velocity gradients than global-scale models. Thus, the elastic focusing effects estimated with ray theory may be underestimated in our regional-scale study. More rigorous ways to estimate the focusing corrections, as well as data selection criteria for amplitude measurements, are required to achieve high-resolution attenuation models on regional scales in the future.

  12. Statewide analysis of the drainage-area ratio method for 34 streamflow percentile ranges in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.; Vrabel, Joseph

    2006-01-01

    The drainage-area ratio method commonly is used to estimate streamflow for sites where no streamflow data are available using data from one or more nearby streamflow-gaging stations. The method is intuitive and straightforward to implement and is in widespread use by analysts and managers of surface-water resources. The method equates the ratio of streamflow at two stream locations to the ratio of the respective drainage areas. In practice, unity often is assumed as the exponent on the drainage-area ratio, and unity also is assumed as a multiplicative bias correction. These two assumptions are evaluated in this investigation through statewide analysis of daily mean streamflow in Texas. The investigation was made by the U.S. Geological Survey in cooperation with the Texas Commission on Environmental Quality. More than 7.8 million values of daily mean streamflow for 712 U.S. Geological Survey streamflow-gaging stations in Texas were analyzed. To account for the influence of streamflow probability on the drainage-area ratio method, 34 percentile ranges were considered. The 34 ranges are the 4 quartiles (0-25, 25-50, 50-75, and 75-100 percent), the 5 intervals of the lower tail of the streamflow distribution (0-1, 1-2, 2-3, 3-4, and 4-5 percent), the 20 quintiles of the 4 quartiles (0-5, 5-10, 10-15, 15-20, 20-25, 25-30, 30-35, 35-40, 40-45, 45-50, 50-55, 55-60, 60-65, 65-70, 70-75, 75-80, 80-85, 85-90, 90-95, and 95-100 percent), and the 5 intervals of the upper tail of the streamflow distribution (95-96, 96-97, 97-98, 98-99 and 99-100 percent). For each of the 253,116 (712X711/2) unique pairings of stations and for each of the 34 percentile ranges, the concurrent daily mean streamflow values available for the two stations provided for station-pair application of the drainage-area ratio method. For each station pair, specific statistical summarization (median, mean, and standard deviation) of both the exponent and bias-correction components of the drainage-area ratio method were computed. Statewide statistics (median, mean, and standard deviation) of the station-pair specific statistics subsequently were computed and are tabulated herein. A separate analysis considered conditioning station pairs to those stations within 100 miles of each other and with the absolute value of the logarithm (base-10) of the ratio of the drainage areas greater than or equal to 0.25. Statewide statistics of the conditional station-pair specific statistics were computed and are tabulated. The conditional analysis is preferable because of the anticipation that small separation distances reflect similar hydrologic conditions and the observation of large variation in exponent estimates for similar-sized drainage areas. The conditional analysis determined that the exponent is about 0.89 for streamflow percentiles from 0 to about 50 percent, is about 0.92 for percentiles from about 50 to about 65 percent, and is about 0.93 for percentiles from about 65 to about 85 percent. The exponent decreases rapidly to about 0.70 for percentiles nearing 100 percent. The computation of the bias-correction factor is sensitive to the range analysis interval (range of streamflow percentile); however, evidence suggests that in practice the drainage-area method can be considered unbiased. Finally, for general application, suggested values of the exponent are tabulated for 54 percentiles of daily mean streamflow in Texas; when these values are used, the bias correction is unity.
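    In practice the method is a one-line transfer equation, Q_ungaged = C·(A_ungaged/A_gaged)^φ·Q_gaged. The sketch below defaults to the ~0.89 exponent reported for the lower half of the streamflow distribution and to a unit bias correction, as suggested for general application; the function name is illustrative.

    ```python
    def drainage_area_ratio(q_gaged, area_gaged, area_ungaged,
                            exponent=0.89, bias=1.0):
        """Transfer a daily mean streamflow to an ungaged site:

            Q_ungaged = bias * (A_ungaged / A_gaged) ** exponent * Q_gaged

        The exponent should match the streamflow percentile of interest
        (roughly 0.89-0.93 over most of the distribution, dropping toward
        0.70 near the 100th percentile); bias = 1.0 treats the method as
        unbiased, as the report suggests for practical use.
        """
        return bias * (area_ungaged / area_gaged) ** exponent * q_gaged

    # Example: a 250-mi^2 ungaged basin scaled from a 400-mi^2 gaged basin
    # flowing at 120 ft^3/s -> drainage_area_ratio(120.0, 400.0, 250.0)
    ```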

  13. Correction of the equilibrium temperature caused by slight evaporation of water in protein crystal growth cells during long-term space experiments at International Space Station.

    PubMed

    Fujiwara, Takahisa; Suzuki, Yoshihisa; Yoshizaki, Izumi; Tsukamoto, Katsuo; Murayama, Kenta; Fukuyama, Seijiro; Hosokawa, Kouhei; Oshi, Kentaro; Ito, Daisuke; Yamazaki, Tomoya; Tachibana, Masaru; Miura, Hitoshi

    2015-08-01

    The normal growth rates of the {110} faces of tetragonal hen egg-white lysozyme crystals, R, were measured as a function of the supersaturation σ using a reflection-type interferometer under μG at the International Space Station (NanoStep Project). Since water slightly evaporated from the in situ observation cells during the long-term space station experiment lasting several months, the equilibrium temperature T(e) changed and the actual σ increased significantly, mainly due to the increase in salt concentration C(s). To correct σ, the actual C(s) and protein concentration C(p) that correctly reproduce the measured T(e) value in space were first calculated. Second, a new solubility curve with the corrected C(s) was plotted. Finally, the revised σ was obtained from the new solubility curve. This correction method revealed that 2.8% of the water had evaporated from the solution, leading to a 2.8% increase in the C(s) and C(p) of the solution.

  14. Apparatus and method for classifying fuel pellets for nuclear reactor

    DOEpatents

    Wilks, Robert S.; Sternheim, Eliezer; Breakey, Gerald A.; Sturges, Jr., Robert H.; Taleff, Alexander; Castner, Raymond P.

    1984-01-01

    Control for the operation of a mechanical handling and gauging system for nuclear fuel pellets. The pellets are inspected for diameter, length, surface flaws, and weight in successive stations. The control includes a computer for commanding the operation of the system and its electronics and for storing and processing the complex data derived at the required high rate. In measuring the diameter, the computer enables the measurement of a calibration pellet, stores the calibration data, and computes and stores diameter-correction factors and their addresses along a pellet. A correction factor is applied to each diameter measurement at the appropriate address. The computer commands verification that all critical parts of the system and control are set for inspection and that each pellet is positioned for inspection. During each inspection cycle, the measurement operation proceeds normally whether or not a pellet is present in each station. If a pellet is not positioned in a station, a measurement is recorded, but the recorded measurement indicates maloperation. In measuring diameter and length, a light pattern consisting of successive shadows of slices (transverse for diameter, longitudinal for length) is projected onto a photodiode array. The light pattern is scanned electronically by a train of pulses, and the pulses are counted during the scan of the lighted diodes. For evaluation of diameter, the maximum diameter count and the number of slices for which the diameter exceeds a predetermined minimum are determined; for acceptance, the maximum must be less than a set level and the number of such slices must exceed a set number. For evaluation of length, the maximum length is determined; for acceptance, it must lie within maximum and minimum limits.
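    A minimal sketch of the diameter evaluation logic described above, assuming (since the text does not specify) that the stored correction factor is applied additively to the pulse count at each address; all names and thresholds are illustrative rather than taken from the patent.

    ```python
    def diameter_accepted(counts, corrections, max_count, min_count, min_slices):
        """Accept a pellet if, after per-address correction, the largest slice
        count stays below max_count and at least min_slices corrected counts
        exceed min_count (mirroring the acceptance test in the text)."""
        corrected = [c + k for c, k in zip(counts, corrections)]
        return (max(corrected) < max_count
                and sum(c > min_count for c in corrected) >= min_slices)
    ```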

  15. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction is based on 3D theory and real data: the static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The procedure is fully automatic, adds no cost, and avoids additional field work and indoor processing, with good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations, and underground stations can be used in the inversion. A verification and application example of the 3D inversion algorithm is shown in Figure 1. The comparison in Figure 1 shows that the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for inversion with topography, which makes it useful for studying the continental shelf with continuous exploration across land, marine, and underground settings. The three-dimensional electrical model of the ore zone reflects basic information on strata, rocks, and structure. Although it cannot indicate the position of the ore body directly, it provides important clues for prospecting by delineating the uplift range of the diorite pluton. The test results show that high-quality data processing and an efficient inversion method for electromagnetic data are important guarantees for porphyry-ore exploration.

  16. Performance of some DFT functionals with dispersion on modeling of the translational isomers of a solvent-switchable [2]rotaxane

    NASA Astrophysics Data System (ADS)

    Ivanov, Petko

    2016-03-01

    The balances of interactions were studied by computational methods in the translational isomers of a solvent-switchable fullerene-stoppered [2]rotaxane (1) manifesting unexpected behavior, namely that due to favorable dispersion interactions the fullerene stopper becomes the second station upon change of the solvent. For comparison, another system, a pH-switchable molecular shuttle (2), was also examined as an example of prevailing electrostatic interactions. Tested for 1 were five global hybrid Generalized Gradient Approximation functionals (B3LYP, B3LYP-D3, B3LYP-D3BJ, PBEh1PBE and APFD), one long-range corrected, range-separated functional with D2 empirical dispersion correction, ωB97XD, the Zhao-Truhlar hybrid meta-GGA functional M06 with double the amount of nonlocal exchange (2X), and a pure functional, B97, with Grimme's D3BJ dispersion (B97D3). The molecular mechanics method qualitatively correctly reproduced the behavior of the [2]rotaxanes, whereas the DFT models, except for M06-2X to some extent, failed in the case of significant dispersion interactions with participation of the fulleropyrrolidine stopper (rotaxane 1). Unexpectedly, the benzylic amide macrocycle tends to preferentially adopt a 'boat'-like conformation in most of the cases. Four hydrogen bonds interconnect the axle with the wheel for the translational isomer with the macroring at the succinamide station (station II), whereas the number of hydrogen bonds varies for the isomer with the macroring at the fulleropyrrolidine stopper (station I) depending on the computational model used. The B3LYP and the PBEh1PBE results show a strong preference for station II in the gas phase and in the model solvent DMSO. After including empirical dispersion correction, the translational isomer with the macroring at station I has the lower energy with B3LYP, both in the gas phase and in DMSO. The same result, but with a higher preference for station I, was estimated with APFD, ωB97XD and B97D3. Only M06-2X presented qualitatively correct behavior for the relative stability of the two translational isomers, namely, a slight preference for station II for the isolated molecule and a higher relative energy of the same isomer with the model solvent DMSO. The electrostatic interactions in 2 make the decisive contribution both when the macroring is positioned at the dipeptide residue for the neutral form, and at the N-benzylalanine fragment after protonation, and the observed behavior of the [2]rotaxane is correctly reproduced by the methods used.

  17. 75 FR 13318 - Virginia Electric and Power Company; Surry Power Station, Unit Nos. 1 and 2 (Surry 1 and 2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-19

    ... notice. SUMMARY: This document corrects a notice appearing in the Federal Register on March 3, 2010 (75... Power Company; Surry Power Station, Unit Nos. 1 and 2 (Surry 1 and 2); Correction to Environmental... Surry 1 and 2, respectively.'' This action is necessary to add an implementation date for Surry Unit 2...

  18. Seismic Yield Estimates of UTTR Surface Explosions

    NASA Astrophysics Data System (ADS)

    Hayward, C.; Park, J.; Stump, B. W.

    2016-12-01

    Since 2007 the Utah Test and Training Range (UTTR) has used explosive demolition as a method to destroy excess solid rocket motors ranging in size from 19 tons to less than 2 tons. From 2007 to 2014, 20 high-quality seismic stations within 180 km recorded most of the more than 200 demolitions. This provides an interesting dataset to examine seismic source scaling for surface explosions. Based upon observer records, shots were of 4 sizes, corresponding to the size of the rocket motors. Instrument corrections for the stations were quality controlled by examining the P-wave amplitudes of all magnitude 6.5-8 earthquakes from 30 to 90 degrees away. For each station recording, the instrument-corrected RMS seismic amplitude in the first 10 seconds after the P-onset was calculated. Waveforms at any given station for all the observed explosions are nearly identical. The observed RMS amplitudes were fit to a model including a term for combined distance and station correction, a term for observed RMS amplitude, and an error term for the actual demolition size. The observed seismic yield relationship is RMS = k · Weight^(2/3). Estimated yields for the largest shots vary by about 50% from the stated weights, with a nearly normal distribution.
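    A least-squares check of that scaling can be run directly on the station-corrected RMS amplitudes. The sketch below fits log10(RMS) = log10(k) + n·log10(W) and would return n ≈ 2/3 if the reported relationship holds; the variable names are illustrative.

    ```python
    import numpy as np

    def fit_yield_scaling(rms, weight_tons):
        """Fit log10(RMS) = log10(k) + n * log10(W) by linear least squares.

        rms         : (N,) station-corrected RMS amplitudes, one per shot
        weight_tons : (N,) stated demolition weights
        Returns (k, n); n near 2/3 reproduces RMS = k * W**(2/3).
        """
        w = np.log10(np.asarray(weight_tons, float))
        design = np.column_stack([np.ones_like(w), w])
        (log_k, n), *_ = np.linalg.lstsq(design,
                                         np.log10(np.asarray(rms, float)),
                                         rcond=None)
        return 10.0 ** log_k, n
    ```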

  19. 2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K M; Malagnini, L; Phillips, W S

    2005-07-13

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available it is still preferable over 2-D direct wave-based measures.

  20. Spatio-temporal environmental data tide corrections for reconnaissance operations

    NASA Astrophysics Data System (ADS)

    Barbu, Costin; Avera, Will; Harris, Mike; Malpass, Kevyn

    2005-06-01

    Dynamic, accurate, near-real-time environmental data are critical to the success of mine countermeasures (MCM) operations. Bathymetric data acquired from the AQS-20 mine-hunting sensor must be adjusted for local tide variations specific to the geographic area and time interval of the survey. This problem can be overcome with a spatio-temporal estimate of tide corrections provided for the area and time of interest by the Naval Research Laboratory tide-prediction code PCTides. For each geographic position of the AQS-20 sonar, a tide height relative to mean sea level is computed by interpolating the tidal information from the K nearest-neighbor stations at the corresponding time. The value is used to reduce the measured depth generated by the AQS-20 sonar at that location to mean sea level for fusion with other bathymetric data products. This paper thus provides a useful tool to MCM decision makers during mine warfare operations.
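    A minimal sketch of the reduction step, assuming each tide station is represented by a tuple `(x, y, tide_series)` where `tide_series(t)` returns the predicted height above mean sea level at time `t`; the inverse-distance weighting and the sign convention (tide above MSL is subtracted from the measured depth) are illustrative assumptions rather than the operational algorithm.

    ```python
    import numpy as np

    def depth_to_msl(measured_depth, sensor_xy, t, tide_stations, k=3, power=2.0):
        """Reduce an AQS-20 depth to mean sea level using the K nearest stations."""
        xy = np.array([(s[0], s[1]) for s in tide_stations], float)
        heights = np.array([s[2](t) for s in tide_stations], float)
        d = np.linalg.norm(xy - np.asarray(sensor_xy, float), axis=1)
        nearest = np.argsort(d)[:k]                       # K nearest stations
        w = 1.0 / np.maximum(d[nearest], 1.0) ** power    # inverse-distance weights
        tide = float(np.sum(w * heights[nearest]) / np.sum(w))
        return measured_depth - tide                      # depth relative to MSL
    ```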

  1. Calibration of GOES-derived solar radiation data using a distributed network of surface measurements in Florida, USA

    USGS Publications Warehouse

    Sumner, David M.; Pathak, Chandra S.; Mecikalski, John R.; Paech, Simon J.; Wu, Qinglong; Sangoyomi, Taiye; Babcock, Roger W.; Walton, Raymond

    2008-01-01

    Solar radiation data are critically important for the estimation of evapotranspiration. Analysis of visible-channel data derived from Geostationary Operational Environmental Satellites (GOES) using radiative transfer modeling has been used to produce spatially- and temporally-distributed datasets of solar radiation. An extensive network of (pyranometer) surface measurements of solar radiation in the State of Florida has allowed refined calibration of a GOES-derived daily integrated radiation data product. This refinement of radiation data allowed for corrections of satellite sensor drift, satellite generational change, and consideration of the highly-variable cloudy conditions that are typical of Florida. To aid in calibration of a GOES-derived radiation product, solar radiation data for the period 1995–2004 from 58 field stations that are located throughout the State were compiled. The GOES radiation product was calibrated by way of a three-step process: 1) comparison with ground-based pyranometer measurements on clear reference days, 2) correcting for a bias related to cloud cover, and 3) deriving month-by-month bias correction factors. Pre-calibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m–2 day–1 (13 percent). Calibration reduced errors to 1.7 MJ m–2 day–1 (10 percent) and also removed time- and cloudiness-related biases. The final dataset has been used to produce Statewide evapotranspiration estimates.
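    Step 3 of the calibration reduces to a ratio of monthly means. Below is a simplified sketch, assuming co-located daily GOES and pyranometer values and omitting the clear-day and cloud-cover steps; the function name is an assumption.

    ```python
    import numpy as np

    def monthly_bias_factors(months, goes, pyranometer):
        """Month-by-month multiplicative bias factors (station/GOES ratio).

        months      : (N,) calendar month (1-12) of each daily value
        goes        : (N,) GOES-derived daily solar radiation, MJ m-2 day-1
        pyranometer : (N,) co-located surface measurements, same units
        """
        months = np.asarray(months)
        goes, pyr = np.asarray(goes, float), np.asarray(pyranometer, float)
        return {m: pyr[months == m].mean() / goes[months == m].mean()
                for m in range(1, 13) if np.any(months == m)}

    # A corrected daily value is then goes_value * factors[month].
    ```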

  2. Identifying and Correcting Timing Errors at Seismic Stations in and around Iran

    DOE PAGES

    Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; ...

    2017-09-06

    A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.

  3. Population size of Cuban Parrots Amazona leucocephala and Sandhill Cranes Grus canadensis and community involvement in their conservation in northern Isla de la Juventud, Cuba

    USGS Publications Warehouse

    Aguilera, X.G.; Alvarez, V.B.; Wiley, J.W.; Rosales, J.R.

    1999-01-01

    The Cuban Sandhill Crane Grus canadensis nesiotes and Cuban Parrot Amazona leucocephala palmarum are considered endangered species in Cuba and on the Isla de la Juventud (formerly Isla de Pinos). Coincident with a public education campaign, a population survey for these species was conducted in the northern part of the Isla de la Juventud on 17 December 1995, from 06h00 to 10h00. Residents from throughout the island participated, manning 98 stations with 1-4 observers per station. Parrots were observed at 60 (61.2%) of the stations, with totals of 1,320 individuals at maximum (without correction for duplicate observations) and 1,100 at minimum (corrected). Sandhill cranes were sighted at 38 (38.8%) of the stations, with a total of 115 individuals. Cranes and parrots co-occurred at 20 (20.4%) of the stations.

  4. [ESTIMATION OF IONIZING RADIATION EFFECTIVE DOSES IN THE INTERNATIONAL SPACE STATION CREWS BY THE METHOD OF CALCULATION MODELING].

    PubMed

    Mitrikas, V G

    2015-01-01

    Monitoring of the radiation load on cosmonauts requires calculation of the absorbed dose dynamics with regard to the stay of cosmonauts in specific compartments of the space vehicle, which differ in shielding properties and lack radiation measurement instruments. The paper discusses different aspects of computational modeling of radiation effects on human body organs and tissues and reviews the effective dose estimates for cosmonauts working in particular compartments over the previous period of International Space Station operation. It is demonstrated that doses measured by area or personal dosimeters can be used to calculate effective dose values. Correct estimation of the accumulated effective dose can be ensured by taking into account the time course of the space radiation quality factor.

  5. 78 FR 19172 - Earth Stations Aboard Aircraft Communicating with Fixed-Satellite Service Geostationary-Orbit...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-29

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Parts 2 and 25 [IB Docket No. 12-376; FCC 12-161] Earth Stations Aboard Aircraft Communicating with Fixed-Satellite Service Geostationary-Orbit Space Stations AGENCY: Federal Communications Commission. ACTION: Proposed rule; correction. SUMMARY: The Federal...

  6. KENNEDY SPACE CENTER, FLA. - One of four rudder speed brake actuators arrives at Cape Canaveral Air Force Station. The actuators, to be installed on the orbiter Discovery, are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

    NASA Image and Video Library

    2004-03-08

    KENNEDY SPACE CENTER, FLA. - One of four rudder speed brake actuators arrives at Cape Canaveral Air Force Station. The actuators, to be installed on the orbiter Discovery, are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

  7. Improving automatic earthquake locations in subduction zones: a case study for GEOFON catalog of Tonga-Fiji region

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten

    2015-04-01

    Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially at regional stations in subduction zones because of the strongly heterogeneous velocity structure there. Although these residuals are most likely related not to measurement errors but to unmodelled velocity heterogeneity, such stations are usually removed from, or down-weighted in, the location procedure. While this is possible for large events, it may not be useful if the earthquake is weak. In this case, implementation of travel-time station corrections may significantly improve the automatic locations. Here, the shrinking-box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station that vary as a function of source position. A separate time correction was computed for each source-receiver path at the given station by smoothing the residual field over nearby events. We begin with a very large smoothing radius, essentially encompassing the whole event set, and iterate by progressively shrinking the smoothing radius. In this way we attempt to correct for the systematic errors that are introduced into the locations by inaccuracies in the assumed velocity structure, without solving for a new velocity model itself. One of the advantages of the SSST technique is that the event location part of the calculation is separate from the station term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000]. The non-linear location algorithm implemented in NonLinLoc is less sensitive to the problem of local misfit minima in the model space, and the spatial errors estimated by NonLinLoc are much more reliable than those derived by linearized algorithms. With the new corrections, the root-mean-square (RMS) residual decreased from 1.37 s for the original GEOFON catalog (using a global 1-D velocity model without station-specific corrections) to 0.90 s for our SSST catalog. Our results show a 45-70% reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations. Additionally, our locations exhibit less scatter in depth and a sharper image of the seismicity associated with the subducting slab compared to the initial locations.
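    The core of the shrinking-box calculation, stripped of the relocation step, is a residual smoothing over progressively smaller neighbourhoods of events. Below is a sketch for a single station, with an illustrative radius schedule; names and defaults are assumptions.

    ```python
    import numpy as np

    def shrinking_box_sst(event_xyz, residuals, radii=(1000.0, 300.0, 100.0, 30.0)):
        """Source-specific station terms for one station (relocation omitted).

        event_xyz : (N, 3) event coordinates, km
        residuals : (N,) travel-time residuals (s) of this station
        At each pass, every event receives the mean residual of events within
        the current radius; residuals are reduced accordingly and the radius
        shrinks, so the corrections become increasingly source-specific.
        """
        xyz = np.asarray(event_xyz, float)
        res = np.asarray(residuals, float).copy()
        total = np.zeros_like(res)
        for radius in radii:
            step = np.empty_like(res)
            for i in range(len(res)):
                near = np.linalg.norm(xyz - xyz[i], axis=1) <= radius
                step[i] = res[near].mean()
            total += step
            res -= step
        return total      # per-event time correction for this station, s
    ```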

  8. Impact of seasonal and postglacial surface displacement on global reference frames

    NASA Astrophysics Data System (ADS)

    Krásná, Hana; Böhm, Johannes; King, Matt; Memin, Anthony; Shabala, Stanislav; Watson, Christopher

    2014-05-01

    The calculation of actual station positions requires several corrections, some of which are recommended by the International Earth Rotation and Reference Systems Service (IERS) Conventions (e.g., solid Earth tides and ocean tidal loading), as well as other corrections, e.g., accounting for hydrology and atmospheric loading. To investigate the pattern of omitted non-linear seasonal motion, we estimated empirical harmonic models for selected stations within a global solution of suitable Very Long Baseline Interferometry (VLBI) sessions, as well as mean annual models by stacking yearly time series of station positions. To validate these models we compare them to displacement series obtained from the Gravity Recovery and Climate Experiment (GRACE) data and to hydrology corrections determined from global models. Furthermore, we assess the impact of the seasonal station motions on the celestial reference frame as well as on Earth orientation parameters derived from real and also artificial VLBI observations. In the second part of the presentation we apply vertical rates of the ICE-5G_VM2_2012 vertical land movement grid to vertical station velocities. We assess the impact of postglacial uplift on the variability in the scale given different sampling of the postglacial signal in time, and hence on the uncertainty in the scale rate of the estimated terrestrial reference frame.

  9. Estimation of satellite position, clock and phase bias corrections

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick; Psychas, Dimitrios; Günther, Christoph; Hugentobler, Urs

    2018-05-01

    Precise point positioning with integer ambiguity resolution requires precise knowledge of satellite position, clock and phase bias corrections. In this paper, a method for the estimation of these parameters with a global network of reference stations is presented. The method processes uncombined and undifferenced measurements of an arbitrary number of frequencies such that the obtained satellite position, clock and bias corrections can be used for any type of differenced and/or combined measurements. We perform a clustering of reference stations. The clustering enables a common satellite visibility within each cluster and an efficient fixing of the double difference ambiguities within each cluster. Additionally, the double difference ambiguities between the reference stations of different clusters are fixed. We use an integer decorrelation for ambiguity fixing in dense global networks. The performance of the proposed method is analysed with both simulated Galileo measurements on E1 and E5a and real GPS measurements of the IGS network. We defined 16 clusters and obtained satellite position, clock and phase bias corrections with a precision of better than 2 cm.

  10. Estimation of the impacts of different homogenization approaches on the variability of temperature series in Catalonia (North Eastern-Spain), Andorra and South Eastern - France. An experiment under the umbrella of the HOME-COST action.

    NASA Astrophysics Data System (ADS)

    Aguilar, E.; Prohom, M.; Mestre, O.; Esteban, P.; Kuglitsch, F. G.; Gruber, C.; Herrero, M.

    2008-12-01

    The almost unanimously accepted fact of climate change has led many scientists to investigate the seasonal and interannual variability and change in instrumental climate records. Unfortunately, these records are nearly always affected by homogeneity problems caused by changes in the station or its environment. The European Cooperation in the Field of Scientific and Technical Research (COST) is sponsoring the action COST-ES0601: Advances in homogenisation methods of climate series: an integrated approach (HOME), which aims, amongst others, to investigate the impacts of different homogenisation approaches on the observed data series. In this work, we apply different detection/correction methods (SNHT, RhTest, Caussinus-Mestre, Vincent interpolation method, HOM method) to annual, seasonal, monthly and daily data of a multi-country quality-controlled dataset (17 stations in Catalonia (NE Spain), 3 stations in Andorra and 11 stations in SE France). The different outputs are analysed and the differences in the final series studied. After this experiment, we can state that - although all the applied methods improve the homogeneity of the original series - the conclusions extracted from the analysis of the homogenised annual, seasonal and monthly data and of the extreme indices derived from daily data show important differences. As an example, some methods (SNHT) tend to detect fewer breakpoints than others (Caussinus-Mestre). Even if metadata or a pre-identified list of breakpoints is available, the correction factors calculated by the different approaches differ on annual, seasonal, monthly and daily scales. In the latter case, some methods like HOM - based on the modelling of a candidate series against a reference series - provide a richer solution than others based on the mere interpolation of monthly factors (Vincent method), although the former are not always applicable due to the lack of good reference stations. In order to identify the best performing method (or suite of methods), the COST-HOME action is conducting intensive testing of the different homogenisation methods on simulated, surrogate and real series. At the end of the action (2011), we expect to present a significant contribution to a better evaluation of seasonal and interannual variability and change.

  11. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least-squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least-squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts, and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed; these may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, the solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed; the computer then resumes processing of additional data sets. Other conditions cause warnings to be issued, and processing continues with the current data set.

  12. Objective structured clinical examination "Death Certificate" station - Computer-based versus conventional exam format.

    PubMed

    Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S

    2018-04-01

    One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced, including e-learning modules and SkillsLab stations; one way to objectify the resulting learning outcomes is the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced by a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station in both a computer-based and a conventional format, half starting with the computer-based format and the other half with the conventional format in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both the overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction for the repeated-measures effect, the results of both examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g., point deductions for assistance from the medical examiner were possible only at the conventional station). We therefore demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised examination format that yields results comparable to those from the conventional format. Moreover, the examination results also indicate the need to optimise both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  13. Isostatic gravity map of the Nevada Test Site and vicinity, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, D.A.; Harris, R.N.; Oliver, H.W.

    1988-12-31

    The isostatic gravity map of the Nevada Test Site (NTS) and vicinity is based on about 16,000 gravity stations. Principal facts of the gravity data were listed by Harris and others (1989), and their report included descriptions of base stations, high-precision and absolute gravity stations, and data accuracy. Observed gravity values were referenced to the International Gravity Standardization Net 1971 gravity datum described by Morelli (1974) and reduced using the Geodetic Reference System 1967 formula for the normal gravity on the ellipsoid (International Union of Geodesy and Geophysics, 1971). Free-air, Bouguer, curvature, and terrain corrections for a standard reduction density of 2.67 g/cm³ were made to compute complete Bouguer anomalies. Terrain corrections were made to a radial distance of 166.7 km from each station using a digital elevation model and a computer procedure by Plouff (1977) and, in general, include manually estimated inner-zone terrain corrections. Finally, isostatic corrections were made using a procedure by Simpson and others (1983) based on an Airy-Heiskanen model with local compensation (Heiskanen and Moritz, 1967), with an upper-crustal density of 2.67 g/cm³, a crustal thickness of 25 km, and a density contrast between the lower crust and upper mantle of 0.4 g/cm³. Isostatic corrections help remove the effects of long-wavelength anomalies related to topography and their compensating masses and thus enhance short- to moderate-wavelength anomalies caused by near-surface geologic features. 6 refs.
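    For reference, the reduction chain summarized above can be written compactly. The sketch below uses the standard free-air gradient (0.3086 mGal/m) and infinite-slab Bouguer term (0.04193·ρ mGal/m, with ρ in g/cm³); the curvature (Bullard B) and terrain terms are passed in pre-computed and signed per the reduction convention, and the isostatic regional would still be subtracted afterwards to obtain the mapped values. Function and argument names are illustrative.

    ```python
    def complete_bouguer_anomaly(g_obs, g_theoretical, elev_m,
                                 curvature_corr, terrain_corr, density=2.67):
        """Complete Bouguer anomaly (all gravity values in mGal, elevation in m).

        g_obs         : observed gravity
        g_theoretical : normal gravity on the ellipsoid (GRS67 here)
        curvature_corr, terrain_corr : pre-computed corrections, signed per the
        reduction convention; subtracting an isostatic regional from the result
        would give the isostatic anomaly shown on the map.
        """
        free_air = 0.3086 * elev_m                 # free-air correction
        bouguer_slab = 0.04193 * density * elev_m  # infinite-slab correction
        return (g_obs - g_theoretical + free_air - bouguer_slab
                + curvature_corr + terrain_corr)
    ```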

  14. KENNEDY SPACE CENTER, FLA. - Workers at Cape Canaveral Air Force Station place one of four rudder speed brake actuators onto a pallet for X-ray. The actuators, to be installed on the orbiter Discovery, are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

    NASA Image and Video Library

    2004-03-08

    KENNEDY SPACE CENTER, FLA. - Workers at Cape Canaveral Air Force Station place one of four rudder speed brake actuators onto a pallet for X-ray. The actuators, to be installed on the orbiter Discovery, are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

  15. Estimating unbiased magnitudes for the announced DPRK nuclear tests, 2006-2016

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Bowers, David

    2017-04-01

    The seismic disturbances generated from the five (2006-2016) announced nuclear test explosions by the Democratic People's Republic of Korea (DPRK) are of moderate magnitude (body-wave magnitude mb 4-5) by global earthquake standards. An upward bias of network mean mb of low- to moderate-magnitude events is long established, and is caused by the censoring of readings from stations where the signal was below noise level at the time of the predicted arrival. This sampling bias can be overcome by maximum-likelihood methods using station thresholds at detecting (and non-detecting) stations. Bias in the mean mb can also be introduced by differences in the network of stations recording each explosion - this bias can reduced by using station corrections. We apply a maximum-likelihood (JML) inversion that jointly estimates station corrections and unbiased network mb for the five DPRK explosions recorded by the CTBTO International Monitoring Network (IMS) of seismic stations. The thresholds can either be directly measured from the noise preceding the observed signal, or determined by statistical analysis of bulletin amplitudes. The network mb of the first and smallest explosion is reduced significantly relative to the mean mb (to < 4.0 mb) by removal of the censoring bias.
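    The censoring correction can be sketched as a maximum-likelihood estimate in which each non-detecting station contributes the probability that its (station-corrected) magnitude fell below the detection threshold. The station scatter `sigma` and the search bounds below are illustrative assumptions, and the function is a simplified stand-in for the joint inversion described above.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    def ml_network_mb(detected_mb, nondetect_thresholds, sigma=0.35):
        """Censored maximum-likelihood network magnitude.

        detected_mb          : station magnitudes at detecting stations
        nondetect_thresholds : detection thresholds (mb units) at stations
                               that reported no arrival
        Detections contribute a Gaussian density about the network mb; each
        non-detection contributes Phi((threshold - mb)/sigma), which removes
        the upward bias of simply averaging the detections.
        """
        det = np.asarray(detected_mb, float)
        thr = np.asarray(nondetect_thresholds, float)

        def neg_log_likelihood(mb):
            ll = norm.logpdf(det, loc=mb, scale=sigma).sum()
            if thr.size:
                ll += norm.logcdf((thr - mb) / sigma).sum()
            return -ll

        return minimize_scalar(neg_log_likelihood,
                               bounds=(2.0, 7.0), method="bounded").x
    ```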

  16. Isostatic gravity map of the Monterey 30 x 60 minute quadrangle and adjacent areas, California

    USGS Publications Warehouse

    Langenheim, V.E.; Stiles, S.R.; Jachens, R.C.

    2002-01-01

    The digital dataset consists of one file (monterey_100k.iso) containing 2,385 gravity stations. The file, monterey_100k.iso, contains the principal facts of the gravity stations, with one point coded per line. The format of the data is described below. Each gravity station has a station name, location (latitude and longitude, NAD27 projection), elevation, and an observed gravity reading. The data are on the IGSN71 datum and the reference ellipsoid is the Geodetic Reference System 1967 (GRS67). The free-air gravity anomalies were calculated using standard formulas (Telford and others, 1976). The Bouguer, curvature, and terrain corrections were applied to the free-air anomaly at each station to determine the complete Bouguer gravity anomalies at a reduction density of 2.67 g/cc. An isostatic correction was then applied to remove the long-wavelength effect of deep crustal and/or upper mantle masses that isostatically support regional topography.

  17. A theoretical study on the bottlenecks of GPS phase ambiguity resolution in a CORS RTK Network

    NASA Astrophysics Data System (ADS)

    Odijk, D.; Teunissen, P.

    2011-01-01

    Crucial to the performance of GPS Network RTK positioning is that a user receives and applies correction information from a CORS Network. These corrections are necessary for the user to account for the atmospheric (ionospheric and tropospheric) delays and possibly orbit errors between his approximate location and the locations of the CORS Network stations. In order to provide the most precise corrections to users, the CORS Network processing should be based on integer resolution of the carrier phase ambiguities between the network's CORS stations. One of the main challenges is to reduce the convergence time, thus being able to quickly resolve the integer carrier phase ambiguities between the network's reference stations. Ideally, the network ambiguity resolution should be conducted within one single observation epoch, thus truly in real time. Unfortunately, single-epoch CORS Network RTK ambiguity resolution is currently not feasible and in the present contribution we study the bottlenecks preventing this. For current dual-frequency GPS the primary cause of these CORS Network integer ambiguity initialization times is the lack of a sufficiently large number of visible satellites. Although an increase in satellite number shortens the ambiguity convergence times, instantaneous CORS Network RTK ambiguity resolution is not feasible even with 14 satellites. It is further shown that increasing the number of stations within the CORS Network itself does not help ambiguity resolution much, since every new station introduces new ambiguities. The problem with CORS Network RTK ambiguity resolution is the presence of the atmospheric (mainly ionospheric) delays themselves and the fact that there are no external corrections that are sufficiently precise. We also show that external satellite clock corrections hardly contribute to CORS Network RTK ambiguity resolution, despite their quality, since the network satellite clock parameters and the ambiguities are almost completely uncorrelated. One positive is that the foreseen modernized GPS will have a very beneficial effect on CORS ambiguity resolution, because of an additional frequency with improved code precision.

  18. An "In Situ" Calibration Correction Procedure (KCICLO) Based on AOD Diurnal Cycle: Application to AERONET-El Arenosillo (Spain) AOD Data Series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cachorro, V. E.; Toledano, C.; Berjon, A.

    Aerosol optical depth (AOD) very often shows a distinct diurnal cycle pattern, which seems to be an artifact resulting from an incorrect calibration (or an equivalent effect, such as filter degradation). The shape of this fictitious AOD diurnal cycle varies as the inverse of the solar air mass (m) and the magnitude of the effect is greatest at midday. The observation of this effect is not easy at many field stations, and only those stations with good weather conditions permit an easier detection and the possibility of its correction. By taking advantage of this dependence on the air mass, we propose an improved “in situ” correction-calibration procedure for measured AOD data series. The method is named KCICLO after the determination of a constant K and the cyclic ("ciclo" in Spanish) behavior of the fictitious AOD. We estimate it has an accuracy of 0.2–0.5% for the calibration ratio constant K, or 0.002–0.005 in AOD at field stations. Although the KCICLO is an “in situ” calibration method, we recommend that it be used as an AOD correction method for field stations. At high-altitude sites, it may be used independently of the classical Langley method (CLM). However, we also recommend its use as a complement to the CLM, which it improves considerably. The application of this calibration correction method to the nearly 5-year AOD data series at the El Arenosillo (Huelva, southwestern Spain) station, belonging to the Aerosol Robotic Network (AERONET)-PHOTONS, shows that 8 (50%) of the 16 filters of the four analyzed Sun photometers were outside the 0.02 uncertainty of the AERONET specification. The largest departures reached values of 0.06. The results show the efficiency of the method and a significant improvement over other “in situ” methods, with no other information required beyond the same AOD data.
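
    A hedged sketch of the air-mass dependence the method exploits, following directly from the Beer-Lambert law; the calibration error K and the AOD level below are invented for the example.

      # V = V0* exp(-m * tau_true); if the assumed calibration constant V0 differs
      # from the true V0* by a factor K = V0*/V0, the retrieved AOD becomes
      #   tau_apparent = ln(V0/V)/m = tau_true - ln(K)/m,
      # i.e. a fictitious cycle proportional to 1/m, largest at midday (small m).
      import numpy as np

      def apparent_aod(tau_true, m, K):
          return tau_true - np.log(K) / m

      def kciclo_correct(tau_apparent, m, K_est):
          return tau_apparent + np.log(K_est) / m

      m = np.array([5.0, 2.0, 1.2, 2.0, 5.0])              # morning -> noon -> evening
      tau_app = apparent_aod(tau_true=0.15, m=m, K=1.02)   # 2% calibration error
      print(np.round(tau_app, 4))                          # spurious midday dip
      print(np.round(kciclo_correct(tau_app, m, K_est=1.02), 4))  # flat 0.15 recovered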

  19. 2-D or not 2-D, that is the question: A Northern California test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K; Malagnini, L; Phillips, W S

    2005-06-06

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz, however for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only 1-D coda correction is available it is still preferable over 2-D direct wave-based measures.

  20. 76 FR 72982 - Cable Statutory License: Specialty Station List; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-28

    ...: Ben Golant, Assistant General Counsel, Copyright GC/I&R, P.O. Box 70400, Southwest Station, Washington... objection (no evidence of construction or the type of programming broadcast should not be identified as...

  1. Site term from single-station sigma analysis of S-waves in western Turkey

    NASA Astrophysics Data System (ADS)

    Akyol, Nihal

    2018-05-01

    The main aim of this study is to obtain site terms from single-station sigma analysis and to compare them with the site functions resulting from different techniques. The dataset consists of 1764 records from 322 micro- and moderate-size local earthquakes recorded by 29 stations in western Turkey. Median models were derived from S-wave Fourier amplitude spectra for selected 22 frequencies, by utilizing the MLR procedure which performs the maximum likelihood (ML) estimation of mixed models where the fixed effects are treated as random (R) effects with infinite variance. At this stage, b (geometrical spreading coefficient) and Q (quality factor) values were decomposed, simultaneously. The residuals of the median models were examined by utilizing the single-station sigma analysis to obtain the site terms of 29 stations. Sigma for the median models is about 0.422 log10 units and decreases to about 0.308, when the site terms from the single-station sigma analysis were considered (27% reduction). The event-corrected within-event standard deviations for each frequency are rather stable, in the range 0.19-0.23 log10 units with an average value of 0.20 (± 0.01). The site terms from single-station sigma analysis were compared with the site function estimates from the horizontal-to-vertical-spectral-ratio (HVSR) and generalized inversion (INV) techniques by Akyol et al. (2013) and Kurtulmuş and Akyol (2015), respectively. Consistency was observed between the single-station sigma site terms and the INV site transfer functions. The results imply that the single-station sigma analysis could separate the site terms with respect to the median models.
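
    A simplified, mean-based sketch of the residual decomposition behind a single-station sigma analysis (a full treatment would use a mixed-effects estimator): remove a per-event term from the median-model residuals, then average the remaining within-event residuals per station to obtain its site term. The residual array is synthetic and its layout (events by stations) is an assumption for the example.

      import numpy as np

      rng = np.random.default_rng(0)
      n_events, n_stations = 40, 8
      residuals = rng.normal(0.0, 0.3, (n_events, n_stations))   # log10 residuals

      event_terms = residuals.mean(axis=1, keepdims=True)         # between-event part
      within_event = residuals - event_terms
      site_terms = within_event.mean(axis=0)                       # one term per station
      single_station = within_event - site_terms                   # site-corrected residuals

      print(np.round(site_terms, 3))
      print(round(within_event.std(), 3), round(single_station.std(), 3))  # sigma reduction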

  2. The impact of higher-order ionospheric effects on estimated tropospheric parameters in Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Zus, F.; Deng, Z.; Wickert, J.

    2017-08-01

    The impact of higher-order ionospheric effects on the estimated station coordinates and clocks in Global Navigation Satellite System (GNSS) Precise Point Positioning (PPP) is well documented in literature. Simulation studies reveal that higher-order ionospheric effects have a significant impact on the estimated tropospheric parameters as well. In particular, the tropospheric north-gradient component is most affected for low-latitude and midlatitude stations around noon. In a practical example we select a few hundred stations randomly distributed over the globe, in March 2012 (medium solar activity), and apply/do not apply ionospheric corrections in PPP. We compare the two sets of tropospheric parameters (ionospheric corrections applied/not applied) and find an overall good agreement with the prediction from the simulation study. The comparison of the tropospheric parameters with the tropospheric parameters derived from the ERA-Interim global atmospheric reanalysis shows that ionospheric corrections must be consistently applied in PPP and the orbit and clock generation. The inconsistent application results in an artificial station displacement which is accompanied by an artificial "tilting" of the troposphere. This finding is relevant in particular for those who consider advanced GNSS tropospheric products for meteorological studies.

  3. BARENTS16: a 1-D velocity model for the western Barents Sea

    NASA Astrophysics Data System (ADS)

    Pirli, Myrto; Schweitzer, Johannes

    2018-01-01

    A minimum 1-D seismic velocity model for routine seismic event location purposes was determined for the area of the western Barents Sea, using a modified version of the VELEST code. The resulting model, BARENTS16, and corresponding station corrections were produced using data from stations at regional distances, the vast majority located in the periphery of the recorded seismic activity, due to the unfavorable land-sea distribution. Recorded seismicity is approached through the listings of a joint bulletin, resulting from the merging of several international and regional bulletins for the region, as well as additional parametric data from temporary deployments. We discuss the challenges posed by this extreme network-seismicity geometry in terms of velocity estimation resolution and result stability. Although the conditions do not facilitate the estimation of meaningful station corrections at the farthermost stations, and even well-resolved corrections do not have a convincing contribution, we show that the process can still converge to a stable velocity average for the crust and upper mantle, in good agreement with a priori information about the regional structure and geology, which reduces adequately errors in event location estimates.

  4. A method for obtaining distributed surface flux measurements in complex terrain

    NASA Astrophysics Data System (ADS)

    Daniels, M. H.; Pardyjak, E.; Nadeau, D. F.; Barrenetxea, G.; Brutsaert, W. H.; Parlange, M. B.

    2011-12-01

    Sonic anemometers and gas analyzers can be used to measure fluxes of momentum, heat, and moisture over flat terrain, and with the proper corrections, over sloping terrain as well. While this method of obtaining fluxes is currently the most accurate available, the instruments themselves are costly, making installation of many stations impossible for most campaign budgets. Small, commercial automatic weather stations (Sensorscope) are available at a fraction of the cost of sonic anemometers or gas analyzers. Sensorscope stations use slow-response instruments to measure standard meteorological variables, including wind speed and direction, air temperature, humidity, surface skin temperature, and incoming solar radiation. The method presented here makes use of one sonic anemometer and one gas analyzer along with a dozen Sensorscope stations installed throughout the Val Ferret catchment in southern Switzerland in the summers of 2009, 2010 and 2011. Daytime fluxes are calculated using Monin-Obukhov similarity theory in conjunction with the surface energy balance at each Sensorscope station as well as at the location of the sonic anemometer and gas analyzer, where a suite of additional slow-response instruments were co-located. Corrections related to slope angle were made for wind speeds and incoming shortwave radiation measured by the horizontally-mounted cup anemometers and incoming solar radiation sensors respectively. A temperature correction was also applied to account for daytime heating inside the radiation shield on the slow-response temperature/humidity sensors. With these corrections, we find a correlation coefficient of 0.77 between u* derived using Monin-Obukhov similarity theory and that of the sonic anemometer. Calculated versus measured heat fluxes also compare well and local patterns of latent heat flux and measured surface soil moisture are correlated.
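
    A minimal bulk Monin-Obukhov sketch of the kind of flux estimate such slow-response stations permit: friction velocity and the Obukhov length are iterated from a mean wind speed and an air-surface temperature difference. The measurement height, roughness length, and Businger-Dyer stability functions used here are assumptions, not the campaign's actual configuration.

      import numpy as np

      KAPPA, G = 0.4, 9.81

      def psi_m(zeta):
          """Businger-Dyer momentum stability correction."""
          if zeta >= 0:
              return -5.0 * zeta
          x = (1.0 - 16.0 * zeta) ** 0.25
          return 2*np.log((1+x)/2) + np.log((1+x**2)/2) - 2*np.arctan(x) + np.pi/2

      def psi_h(zeta):
          """Businger-Dyer heat stability correction."""
          if zeta >= 0:
              return -5.0 * zeta
          y = np.sqrt(1.0 - 16.0 * zeta)
          return 2*np.log((1+y)/2)

      def surface_fluxes(U, T_air, T_skin, z=2.0, z0=0.01, rho=1.1, cp=1005.0):
          L = -1e6                                   # start near neutral
          for _ in range(30):                        # fixed-point iteration on L
              ustar = KAPPA*U / (np.log(z/z0) - psi_m(z/L))
              tstar = KAPPA*(T_air - T_skin) / (np.log(z/z0) - psi_h(z/L))
              L = ustar**2 * (T_air + 273.15) / (KAPPA*G*tstar) if tstar != 0 else -1e6
          H = -rho * cp * ustar * tstar              # sensible heat flux, W/m^2
          return ustar, H

      u_star, H = surface_fluxes(U=3.0, T_air=18.0, T_skin=24.0)
      print(round(u_star, 3), round(H, 1))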

  5. True-coincidence correction when using an LEPD for the determination of the lanthanides in the environment via k0-based INAA.

    PubMed

    Freitas, M C; De Corte, F

    1994-01-01

    As part of a recent study on the environmental effects caused by the operation of a coal-fired power station at Sines, Portugal, k0-based instrumental neutron activation analysis (INAA) was used for the determination of the lanthanides (and also of tantalum and uranium) in plant leaves and lichens. In view of the accuracy and sensitivity of the determinations, it was advantageous to make use of a low-energy photon detector (LEPD). To begin with, in the present article, a survey is given of the former developments leading to user-friendly procedures for detection efficiency calibration of the LEPD and for correction for true-coincidence (cascade summing) effects. As a continuation of this, computer coincidence correction factors are now tabulated for the relevant low-energetic gamma-rays of the analytically interesting lanthanide, tantalum, and uranium radionuclides. Also the 140.5-keV line of 99Mo/99mTc is included, molybdenum being the comparator chosen when counting using an LEPD.

  6. The Top-of-Instrument corrections for nuclei with AMS on the Space Station

    NASA Astrophysics Data System (ADS)

    Ferris, N. G.; Heil, M.

    2018-05-01

    The Alpha Magnetic Spectrometer (AMS) is a large acceptance, high precision magnetic spectrometer on the International Space Station (ISS). The top-of-instrument correction for nuclei flux measurements with AMS accounts for backgrounds due to the fragmentation of nuclei with higher charge. Upon entry in the detector, nuclei may interact with AMS materials and split into fragments of lower charge based on their cross-section. The redundancy of charge measurements along the particle trajectory with AMS allows for the determination of inelastic interactions and for the selection of high purity nuclei samples with small uncertainties. The top-of-instrument corrections for nuclei with 2 < Z ≤ 6 are presented.

  7. 75 FR 17452 - PPL Susquehanna, LLC.; Susquehanna Steam Electric Station, Units 1 And 2; Correction to Federal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-06

    ...This document corrects a notice appearing in the Federal Register on March 19, 2010 (75 FR 13322), that incorrectly stated the number of exemptions requested by the licensee and the corresponding implementation date. This action is necessary to correct erroneous information.

  8. An estimation of tropospheric corrections using GPS and synoptic data: Improving Urmia Lake water level time series from Jason-2 and SARAL/AltiKa satellite altimetry

    NASA Astrophysics Data System (ADS)

    Arabsahebi, Reza; Voosoghi, Behzad; Tourian, Mohammad J.

    2018-05-01

    Tropospheric correction is one of the most important corrections in satellite altimetry measurements. Tropospheric wet and dry path delays depend strongly on temperature, pressure and humidity. The tropospheric layer has particularly high variability over coastal regions due to humidity, wind and temperature gradients. Depending on the extent of the water body and the wind conditions over an inland water, the Wet Tropospheric Correction (WTC) ranges from a few centimeters to tens of centimeters. Therefore, extra care is needed to estimate tropospheric corrections for altimetric measurements over inland waters. This study assesses the role of tropospheric correction in altimetric measurements over Urmia Lake in Iran. For this purpose, four types of tropospheric corrections have been used: (i) microwave radiometer (MWR) observations, (ii) tropospheric corrections computed from meteorological models, (iii) GPS observations and (iv) synoptic station data. They have been applied to Jason-2 track no. 133 and SARAL/AltiKa tracks no. 741 and 356, corresponding to cycles 117-153 and 23-34, respectively. In addition, the corresponding measurements from the PISTACH and PEACHI projects, which include a new retracking method and an innovative wet tropospheric correction, have also been used. Our results show that GPS observation leads to the most accurate tropospheric correction. The results obtained from the PISTACH and PEACHI projects confirm those obtained with the standard SGDR, i.e., the role of GPS in improving the tropospheric corrections. It is inferred that the MWR data from the Jason-2 mission are appropriate for the tropospheric corrections, whereas the SARAL/AltiKa data are not, because Jason-2 possesses an enhanced WTC near the coast. Furthermore, virtual stations are defined for assessment of the results in terms of time series of Water Level Height (WLH). The results show that GPS tropospheric corrections lead to the most accurate WLH estimation for the selected virtual stations, improving the accuracy of the obtained WLH time series by about 5%.
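
    A sketch of how a pressure-based hydrostatic delay and a GPS-derived wet delay can be separated, assuming the Saastamoinen zenith hydrostatic delay model; the surface pressure, latitude, height and zenith total delay below are illustrative, and the mapping of zenith delays to the altimeter geometry is left out.

      import numpy as np

      def zenith_hydrostatic_delay(p_hpa, lat_deg, h_m):
          """Saastamoinen zenith hydrostatic delay (metres) from surface pressure."""
          return 0.0022768 * p_hpa / (1.0 - 0.00266*np.cos(2*np.radians(lat_deg))
                                      - 0.28e-6 * h_m)

      def wet_delay_from_gps(ztd_m, p_hpa, lat_deg, h_m):
          """Zenith wet delay = GPS zenith total delay minus the hydrostatic part."""
          return ztd_m - zenith_hydrostatic_delay(p_hpa, lat_deg, h_m)

      # Illustrative station values (pressure in hPa, height in metres):
      zhd = zenith_hydrostatic_delay(p_hpa=870.0, lat_deg=37.5, h_m=1275.0)
      zwd = wet_delay_from_gps(ztd_m=2.10, p_hpa=870.0, lat_deg=37.5, h_m=1275.0)
      print(round(zhd, 3), round(zwd, 3))   # ~1.98 m hydrostatic, ~0.12 m wet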

  9. Support and Maintenance of the International Monitoring System network

    NASA Astrophysics Data System (ADS)

    Pereira, Jose; Bazarragchaa, Sergelen; Kilgour, Owen; Pretorius, Jacques; Werzi, Robert; Beziat, Guillaume; Hamani, Wacel; Mohammad, Walid; Brely, Natalie

    2014-05-01

    The Monitoring Facilities Support Section of the Provisional Technical Secretariat (PTS) has as its main task to ensure optimal support and maintenance of an array of 321 monitoring stations and 16 radionuclide laboratories distributed worldwide. Raw seismic, infrasonic, hydroacoustic and radionuclide data from these facilities constitute the basic product delivered by the International Monitoring System (IMS). In the process of maintaining such a wide array of stations of different technologies, the Support Section contributes to ensuring station mission capability. Mission-capable data availability according to the IMS requirements should be at least 98% annually (no more than 7 days of downtime per year for waveform stations and 14 days for radionuclide stations) for continuously data-sending stations. In this presentation, we present our case regarding interventions at stations to address equipment supportability and maintainability, as these are particularly large activities requiring the removal of a substantial part of the station equipment and installation of new equipment. The objective is always to plan these activities while minimizing downtime and continuing to meet all IMS requirements, including those of data availability mentioned above. We postulate that these objectives are better achieved by planning and making use of preventive maintenance, as opposed to "run-to-failure" with associated corrective maintenance. We use two recently upgraded infrasound stations (IS39 Palau and IS52 BIOT) as a case study and establish a comparison between these results and several other stations where corrective maintenance was performed, to demonstrate our hypothesis.

  10. Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances

    2006-01-01

    In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction in an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.

  11. Regional travel-time residual studies and station correction from 1-D velocity models for some stations around Peninsular Malaysia and Singapore

    NASA Astrophysics Data System (ADS)

    Osagie, Abel U.; Nawawi, Mohd.; Khalil, Amin Esmail; Abdullah, Khiruddin

    2017-06-01

    We have investigated the average P-wave travel-time residuals for some stations around southern Thailand, Peninsular Malaysia and Singapore at regional distances. Six years (January 2010-December 2015) of event records from central and northern Sumatra were obtained from the digital seismic archives of the Incorporated Research Institutions for Seismology (IRIS). The selection criteria were magnitude above mb 4.5, depth less than 200 km and epicentral distance shorter than 1000 km. Within this window a total of 152 earthquakes were obtained. Furthermore, data were filtered based on the clarity of the seismic phases that were manually picked. A total of 1088 P-wave arrivals and 962 S-wave arrivals were hand-picked from 10 seismic stations around the Peninsula. Three stations (IPM, KUM, and KOM) from Peninsular Malaysia, four stations (BTDF, NTU, BESC and KAPK) from Singapore and three stations (SURA, SRIT and SKLT) located in the southern part of Thailand are used. Station NTU was chosen as the reference station because it recorded the largest number of events. Travel times were calculated using three 1-D models (the Preliminary Reference Earth Model, PREM, of Dziewonski and Anderson (1981); IASP91; and Lienert et al. (1986)) and an adopted two-point ray-tracing algorithm. For the three models, we corroborate our calculated travel times with the results from the TAUP travel-time calculation software. Relative to station NTU, our results show that the average P-wave travel-time residual for the PREM model ranges from -0.16 to 0.45 s for BESC and IPM, respectively. For the IASP91 model, the average residual ranges from -0.25 to 0.24 s for SRIT and SKLT, respectively, and for the Lienert et al. (1986) model it ranges from -0.22 to 0.30 s for KAPK and IPM, respectively. Generally, most stations have slightly positive residuals relative to station NTU. These corrections reflect the difference between actual and estimated model velocities along ray paths to stations and can compensate for heterogeneous velocity structure near individual stations. The computed average travel-time residuals can reduce errors attributable to station correction in the inversion of hypocentral parameters around the Peninsula. Due to the heterogeneity occasioned by the numerous fault systems, a better 1-D velocity model for the Peninsula is desired for more reliable hypocentral inversion and other seismic investigations.
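
    A small sketch of the relative station-term bookkeeping described above, assuming per-event residuals (observed minus model travel time) are already available; the station names follow the study but the numbers are invented.

      from collections import defaultdict

      # residuals[event][station] = observed_tt - predicted_tt (seconds)
      residuals = {
          "ev1": {"NTU": 0.20, "IPM": 0.55, "KUM": 0.35},
          "ev2": {"NTU": -0.10, "IPM": 0.30, "KUM": 0.05},
          "ev3": {"NTU": 0.05, "IPM": 0.45, "KUM": 0.20},
      }

      relative = defaultdict(list)
      for ev, per_station in residuals.items():
          ref = per_station["NTU"]                 # reference station residual
          for sta, res in per_station.items():
              if sta != "NTU":
                  relative[sta].append(res - ref)  # residual relative to NTU

      station_corrections = {sta: sum(v)/len(v) for sta, v in relative.items()}
      print(station_corrections)   # e.g. IPM ~ +0.38 s, KUM ~ +0.15 s relative to NTU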

  12. Temperature correction and usefulness of ocean bottom pressure data from cabled seafloor observatories around Japan for analyses of tsunamis, ocean tides, and low-frequency geophysical phenomena

    NASA Astrophysics Data System (ADS)

    Inazu, D.; Hino, R.

    2011-11-01

    Ocean bottom pressure (OBP) data obtained by cabled seafloor observatories deployed around Japan, are known to be significantly affected by temperature changes. This paper examines the relationship between the OBP and temperature records of six OBP gauges in terms of a regression coefficient and lag at a wide range of frequencies. No significant temperature dependency is recognized in secular variations, while substantial increases, at rates of the order of 1 hPa/year, are commonly evident in the OBP records. Strong temperature dependencies are apparent for periods of hours to days, and we correct the OBP data based on the estimated OBP-temperature relationship. At periods longer than days, the temperature corrections work well for extracting geophysical signals for OBP data at a station off Hokkaido (KPG2), while other corrected data show insufficient signal-to-noise ratios. At a tsunami frequency, the correction can reduce OBP fluctuations, due to rapid temperature changes, by as much as millimeters, and is especially effective for data at a station off Shikoku (MPG2) at which rapid temperature changes most frequently occur. A tidal analysis shows that OBP data at a station off Honshu (TM1), and at KPG2, are useful for studies on the long-term variations of tidal constituents.
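
    A hedged sketch of a lagged-regression correction of the kind described: pick the lag that maximizes the OBP-temperature cross-correlation, fit a regression coefficient at that lag, and subtract the predicted temperature-driven pressure. The series below are synthetic, and the single-coefficient, single-lag model is a simplification of a frequency-dependent analysis.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      temp = np.cumsum(rng.normal(0, 0.01, n))                  # slowly varying temperature
      obp = 0.8 * np.roll(temp, 12) + rng.normal(0, 0.02, n)    # pressure responds with a lag

      def best_lag(p, t, max_lag=50):
          lags = range(-max_lag, max_lag + 1)
          corr = [np.corrcoef(p, np.roll(t, k))[0, 1] for k in lags]
          return list(lags)[int(np.argmax(np.abs(corr)))]

      lag = best_lag(obp, temp)
      t_shifted = np.roll(temp, lag)
      coeff = np.polyfit(t_shifted, obp, 1)[0]        # regression coefficient (hPa per degC)
      obp_corrected = obp - coeff * t_shifted
      print(lag, round(coeff, 2), round(obp_corrected.std(), 3))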

  13. NASA's Accident Precursor Analysis Process and the International Space Station

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Lutomski, Michael

    2010-01-01

    This viewgraph presentation reviews the implementation of Accident Precursor Analysis (APA), as well as the evaluation of In-Flight Investigations (IFI) and Problem Reporting and Corrective Action (PRACA) data for the identification of unrecognized accident potentials on the International Space Station.

  14. Revision of earthquake hypocenter locations in GEOFON bulletin data using global source-specific station terms technique

    NASA Astrophysics Data System (ADS)

    Nooshiri, N.; Saul, J.; Heimann, S.; Tilmann, F. J.; Dahm, T.

    2015-12-01

    The use of a 1D velocity model for seismic event location is often associated with significant travel-time residuals. Particularly for regional stations in subduction zones, where the velocity structure strongly deviates from the assumed 1D model, residuals of up to ±10 seconds are observed even for clear arrivals, which leads to strongly biased locations. In fact, due to mostly regional travel-time anomalies, arrival times at regional stations do not match the location obtained with teleseismic picks, and vice versa. If the earthquake is weak and only recorded regionally, or if fast locations based on regional stations are needed, the location may be far off the corresponding teleseismic location. In this case, implementation of travel-time corrections may lead to a reduction of the travel-time residuals at regional stations and, in consequence, significantly improve the relative location accuracy. Here, we have extended the source-specific station terms (SSST) technique to regional and teleseismic distances and adapted the algorithm for probabilistic, non-linear, global-search earthquake location. The method has been applied to specific test regions using P and pP phases from the GEOFON bulletin data for all available station networks. By using this method, a set of timing corrections has been calculated for each station, varying as a function of source position. In this way, an attempt is made to correct for the systematic errors introduced by limitations and inaccuracies in the assumed velocity structure, without solving for a new earth model itself. In this presentation, we draw on examples of the application of this global SSST technique to relocate earthquakes from the Tonga-Fiji subduction zone and from the Chilean margin. Our results show a considerable decrease of the root-mean-square (RMS) residual in the final earthquake catalogs, a major reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations, and sharper images of the seismicity compared to the initial locations.
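
    A toy sketch of a source-specific station term for a single station: instead of one constant correction, the term varies with source position as a distance-weighted mean of residuals from neighbouring events recorded at that station. The geometry, weighting and residuals are synthetic; the actual implementation couples such terms with iterative probabilistic relocation.

      import numpy as np

      rng = np.random.default_rng(2)
      n_events = 200
      event_xy = rng.uniform(0, 500, (n_events, 2))              # km, map coordinates
      residuals = 0.01*event_xy[:, 0] - 2.5 + rng.normal(0, 0.5, n_events)  # systematic + noise

      def ssst(target_xy, event_xy, residuals, radius_km=100.0):
          d = np.linalg.norm(event_xy - target_xy, axis=1)
          near = d < radius_km
          if not near.any():
              return 0.0
          w = 1.0 / (d[near] + 10.0)                             # simple distance weighting
          return float(np.sum(w*residuals[near]) / np.sum(w))

      # Correction this station should get for a source at (50, 200) vs (450, 200):
      print(round(ssst(np.array([50.0, 200.0]), event_xy, residuals), 2))
      print(round(ssst(np.array([450.0, 200.0]), event_xy, residuals), 2))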

  15. Enhanced autocompensating quantum cryptography system.

    PubMed

    Bethune, Donald S; Navarro, Martha; Risk, William P

    2002-03-20

    We have improved the hardware and software of our autocompensating system for quantum key distribution by replacing bulk optical components at the end stations with fiber-optic equivalents and implementing software that synchronizes end-station activities, communicates basis choices, corrects errors, and performs privacy amplification over a local area network. The all-fiber-optic arrangement provides stable, efficient, and high-contrast routing of the photons. The low-bit error rate leads to high error-correction efficiency and minimizes data sacrifice during privacy amplification. Characterization measurements made on a number of commercial avalanche photodiodes are presented that highlight the need for improved devices tailored specifically for quantum information applications. A scheme for frequency shifting the photons returning from Alice's station to allow them to be distinguished from backscattered noise photons is also described.

  16. Local magnitude determinations for intermountain seismic belt earthquakes from broadband digital data

    USGS Publications Warehouse

    Pechmann, J.C.; Nava, S.J.; Terra, F.M.; Bernier, J.C.

    2007-01-01

    The University of Utah Seismograph Stations (UUSS) earthquake catalogs for the Utah and Yellowstone National Park regions contain two types of size measurements: local magnitude (ML) and coda magnitude (MC), which is calibrated against ML. From 1962 through 1993, UUSS calculated ML values for southern and central Intermountain Seismic Belt earthquakes using maximum peak-to-peak (p-p) amplitudes on paper records from one to five Wood-Anderson (W-A) seismographs in Utah. For ML determinations of earthquakes since 1994, UUSS has utilized synthetic W-A seismograms from U.S. National Seismic Network and UUSS broadband digital telemetry stations in the region, which numbered 23 by the end of our study period on 30 June 2002. This change has greatly increased the percentage of earthquakes for which ML can be determined. It is now possible to determine ML for all M ≥3 earthquakes in the Utah and Yellowstone regions and earthquakes as small as M <1 in some areas. To maintain continuity in the magnitudes in the UUSS earthquake catalogs, we determined empirical ML station corrections that minimize differences between MLs calculated from paper and synthetic W-A records. Application of these station corrections, in combination with distance corrections from Richter (1958) which have been in use at UUSS since 1962, produces ML values that do not show any significant distance dependence. ML determinations for the Utah and Yellowstone regions for 1981-2002 using our station corrections and Richter's distance corrections have provided a reliable data set for recalibrating the MC scales for these regions. Our revised ML values are consistent with available moment magnitude determinations for Intermountain Seismic Belt earthquakes. To facilitate automatic ML measurements, we analyzed the distribution of the times of maximum p-p amplitudes in synthetic W-A records. A 30-sec time window for maximum amplitudes, beginning 5 sec before the predicted Sg time, encompasses 95% of the maximum p-p amplitudes. In our judgment, this time window represents a good compromise between maximizing the chances of capturing the maximum amplitude and minimizing the risk of including other seismic events.
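
    A sketch of the per-station local-magnitude bookkeeping, assuming Richter's form ML = log10(A) + (-log A0)(distance) + station correction, averaged over stations. The -log A0 samples below are a coarse illustrative stand-in for the tabulated curve (anchored at 3.0 for 100 km), and the amplitudes and station corrections are invented.

      import numpy as np

      # (epicentral distance km, -log A0): placeholder samples of the tabulated curve
      NEG_LOG_A0 = [(0, 1.4), (50, 2.6), (100, 3.0), (200, 3.5), (400, 4.3), (600, 4.9)]

      def neg_log_a0(dist_km):
          d, v = zip(*NEG_LOG_A0)
          return np.interp(dist_km, d, v)

      def station_ml(amp_mm, dist_km, sta_corr=0.0):
          """ML from a synthetic Wood-Anderson amplitude (mm) at one station."""
          return np.log10(amp_mm) + neg_log_a0(dist_km) + sta_corr

      readings = [   # (amplitude mm on synthetic W-A, distance km, station correction)
          (1.8, 120.0, +0.05),
          (0.9, 210.0, -0.10),
          (2.4, 95.0,  0.00),
      ]
      ml_values = [station_ml(a, d, c) for a, d, c in readings]
      print(round(float(np.mean(ml_values)), 2))   # network ML (illustrative)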

  17. The engine maintenance scheduling by using reliability centered maintenance method and the identification of 5S application in PT. XYZ

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Panjaitan, N.; Saragih, A. F.

    2018-02-01

    PT. XYZ is a manufacturing company that processes fresh fruit bunches (FFB) into Crude Palm Oil (CPO) and Palm Kernel Oil (PKO). PT. XYZ consists of six work stations: receipt station, sterilizing station, threshing station, pressing station, clarification station, and kernelery station. So far, the company still applies a corrective maintenance system for its production machines, in which a machine is repaired only after damage occurs. The problem at PT. XYZ is the absence of planned maintenance scheduling, so that machines are often damaged, which can disrupt production. Another factor addressed in this research is the kernel station environment, which is inconvenient for operators: there are unused machines and equipment in the production area, the floor is slippery and muddy, fibers are scattered, the use of PPE is incomplete, and employee discipline is lacking. The most commonly damaged machine, at the seed processing station (kernel station), is the cake breaker conveyor machine. The solution proposed for this problem is a maintenance schedule developed with the reliability centered maintenance method, together with the application of 5S. The application of the reliability centered maintenance method identified four components that must be maintained on a scheduled (time-directed) basis: the bearing component every 37 days, the gearbox component every 97 days, the CBC pen component every 35 days, and the conveyor pedal component every 32 days. The 5S identification yielded proposed improvements to the work environment in accordance with the 5S principles: unused goods are to be removed from the production area, items grouped by their use, a cleaning procedure defined for the production area, inspections conducted on the use of PPE, and 5S slogans posted.

  18. Statistical Correction of Air Temperature Forecasts for City and Road Weather Applications

    NASA Astrophysics Data System (ADS)

    Mahura, Alexander; Petersen, Claus; Sass, Bent; Gilet, Nicolas

    2014-05-01

    A method for statistical correction of air and road surface temperature forecasts was developed based on analysis of long-term time series of meteorological observations and forecasts (from the HIgh Resolution Limited Area Model and the Road Conditions Model; 3 km horizontal resolution). It was tested for May-Aug 2012 and Oct 2012 - Mar 2013, respectively. The developed method is based mostly on forecasted meteorological parameters, with a minimal inclusion of observations (covering only a pre-history period). The first-iteration correction takes relevant temperature observations into account, but the further adjustment of air and road temperature forecasts is based purely on forecasted meteorological parameters. The method is model independent, i.e. it can be applied for temperature correction with other types of models having different horizontal resolutions. It is relatively fast due to the application of the singular value decomposition method to solve the matrix problem for the coefficients. Moreover, there is always a possibility for additional improvement through extra tuning of the temperature forecasts for some locations (stations), in particular where the MAEs are generally higher than elsewhere (see Gilet et al., 2014). For the city weather applications, a new operational procedure for statistical correction of the air temperature forecasts has been elaborated and implemented for the HIRLAM-SKA model runs at 00, 06, 12, and 18 UTC, covering forecast lengths up to 48 hours. The procedure includes segments for extraction of observations and forecast data, assigning these to forecast lengths, statistical correction of temperature, one- and multi-day statistical evaluation of model performance, decision-making on using corrections by stations, interpolation, visualisation and storage/backup. Pre-operational air temperature correction runs have been performed for mainland Denmark since mid-April 2013 and have shown good results. Tests also showed that the CPU time required for the operational procedure is relatively short (less than 15 minutes, including a large share spent on interpolation). They also showed that, in order to start correcting forecasts, there is no need for long-term pre-historical data (containing forecasts and observations); at least a couple of weeks is sufficient when a new observational station is included and added to the forecast point. For the road weather application, the operationalization of the statistical correction of the road surface temperature forecasts (for the RWM system's daily hourly runs covering forecast lengths up to 5 hours ahead) for the Danish road network (about 400 road stations) was also implemented, and it has been running in test mode since Sep 2013. The method can also be applied for correction of the dew point temperature and wind speed (as part of observations/forecasts at synoptic stations), since both of these meteorological parameters are part of the proposed system of equations. An evaluation of the method's performance for improvement of wind speed forecasts is planned as well, together with possibilities for wind direction improvements (which is more complex due to the multi-modal distribution of such data).
    The method worked for the entire domain of mainland Denmark (tested for 60 synoptic and 395 road stations), and hence it can also be applied to any geographical point within this domain, e.g. through interpolation to about 100 cities' locations (for the Danish national byvejr forecasts). Moreover, we can assume that the same method can be used in other geographical areas. An evaluation for other domains (with a focus on Greenland and the Nordic countries) is planned. In addition, a similar approach might also be tested for statistical correction of concentrations of chemical species, but that will require additional elaboration and evaluation.
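
    A hedged sketch of the coefficient-fitting step: build a design matrix of forecast predictors over a short pre-history window, solve the least-squares problem with an SVD-based solver, and apply the coefficients to correct the next raw forecast. The predictor set, window length and data are assumptions, not the operational HIRLAM-SKA configuration.

      import numpy as np

      rng = np.random.default_rng(3)
      days = 21                                          # ~3 weeks of pre-history
      t_fc = rng.normal(10, 5, days)                     # forecast 2 m temperature
      wind_fc = rng.uniform(1, 10, days)                 # forecast wind speed
      cloud_fc = rng.uniform(0, 1, days)                 # forecast cloud cover
      t_obs = t_fc - 1.2 + 0.15*wind_fc - 0.8*cloud_fc + rng.normal(0, 0.5, days)

      A = np.column_stack([np.ones(days), t_fc, wind_fc, cloud_fc])
      coef, *_ = np.linalg.lstsq(A, t_obs, rcond=None)   # lstsq uses an SVD internally

      new_fc = np.array([1.0, 8.0, 4.0, 0.3])            # tomorrow's raw predictors
      print(round(float(new_fc @ coef), 2))              # corrected temperature forecast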

  19. Assessing the acoustical climate of underground stations.

    PubMed

    Nowicka, Elzbieta

    2007-01-01

    Designing a proper acoustical environment--indispensable to speech recognition--in long enclosures is difficult. Although there is some literature on the acoustical conditions in underground stations, there is still little information about methods that make estimation of correct reverberation conditions possible. This paper discusses the assessment of the reverberation conditions of underground stations. A comparison of the measurements of reverberation time in Warsaw's underground stations with calculated data proves there are divergences between measured and calculated early decay time values, especially for long source-receiver distances. Rapid speech transmission index values for measured stations are also presented.

  20. Feed-forward alignment correction for advanced overlay process control using a standalone alignment station "Litho Booster"

    NASA Astrophysics Data System (ADS)

    Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi

    2018-03-01

    One of the main components of an On-Product Overlay (OPO) error budget is the process-induced wafer error. This necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution for improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool, and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.

  1. Modeling The Hydrology And Water Allocation Under Climate Change In Rural River Basins: A Case Study From Nam Ngum River Basin, Laos

    NASA Astrophysics Data System (ADS)

    Jayasekera, D. L.; Kaluarachchi, J.; Kim, U.

    2011-12-01

    Rural river basins with sufficient water availability to maintain economic livelihoods can be affected by seasonal fluctuations of precipitation and sometimes by droughts. In addition, climate change impacts can also alter future water availability. General Circulation Models (GCMs) provide credible quantitative estimates of future climate conditions, but such estimates are often characterized by bias and coarse-scale resolution, making it necessary to downscale the outputs for use in regional hydrologic models. This study develops a methodology to downscale and project future monthly precipitation in moderate-scale basins where data are limited. A stochastic framework for single-site and multi-site generation of weekly rainfall is developed while preserving the historical temporal and spatial correlation structures. The spatial correlations in the simulated occurrences and amounts are induced using spatially correlated yet serially independent random numbers. This method is applied to generate weekly precipitation data for a 100-year period in the Nam Ngum River Basin (NNRB), which has a land area of 16,780 km2 and is located in Lao P.D.R. The method is developed and applied using precipitation data from 1961 to 2000 for 10 selected weather stations that represent the basin rainfall characteristics. A bias-correction method, based on fitted theoretical probability distribution transformations, is applied to improve the monthly mean frequency, intensity and amount of raw GCM precipitation predicted at a given weather station using CGCM3.1 and ECHAM5 for the SRES A2 emission scenario. The bias-correction procedure adjusts GCM precipitation to approximate the long-term frequency and intensity distribution observed at a given weather station. The index of agreement and mean absolute error are determined to assess the overall ability and performance of the bias-correction method. The generated precipitation series, aggregated at a monthly time step, was perturbed by the change factors estimated using the corrected GCM and baseline scenarios for the future time periods 2011-2050 and 2051-2090. A network-based hydrologic and water resources model, WEAP, was used to simulate the current water allocation and management practices to identify the impacts of climate change in the 20th century. The results of this work are used to identify the multiple challenges faced by stakeholders and planners in water allocation for competing demands in the presence of climate change impacts.
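
    A small sketch of the change-factor perturbation step, assuming the monthly factors are ratios of bias-corrected GCM means (future window over baseline) applied multiplicatively to the stochastically generated series; all values below are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(4)
      gcm_baseline = rng.gamma(2.0, 60.0, (40, 12))            # 1961-2000 monthly totals, mm
      gcm_future = gcm_baseline * rng.uniform(0.8, 1.3, 12)    # bias-corrected future run

      # One factor per calendar month:
      change_factor = gcm_future.mean(axis=0) / gcm_baseline.mean(axis=0)

      generated = rng.gamma(2.0, 60.0, (100, 12))              # stochastic 100-yr series
      perturbed = generated * change_factor                    # broadcast over months
      print(np.round(change_factor, 2))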

  2. Understanding Laterally Varying Path Effects on P/S Ratios and their Effectiveness for Event Discrimination at Local Distances

    NASA Astrophysics Data System (ADS)

    Pyle, M. L.; Walter, W. R.

    2017-12-01

    Discrimination between underground explosions and naturally occurring earthquakes is an important endeavor for global security and test-ban treaty monitoring, and ratios of seismic P to S-wave amplitudes at regional distances have proven to be an effective discriminant. The use of the P/S ratio is rooted in the idea that explosive sources should theoretically only generate compressional energy. While, in practice, shear energy is observed from explosions, generally when corrections are made for magnitude and distance, P/S ratios from explosions are higher than those from surrounding earthquakes. At local distances (< 200 km) that might be needed to detect smaller events, however, this discriminant becomes less reliable. While ratios at some stations still show separation between earthquake and explosion populations, at other stations the populations are indistinguishable. There is no clear distance or azimuthal trend for which stations show discriminating abilities and which do not. A number of factors may play a role in differences we see between regional and local discrimination, including source effects such as depth and radiation pattern, and path effects such as laterally varying attenuation and focusing/defocusing from layers and scattering. We use data from the Source Physics Experiment (SPE) to investigate some of these effects. SPE is a series of chemical explosions at the Nevada National Security Site (NNSS) designed to improve our understanding and modeling capabilities of shear waves generated by explosions. Phase I consisted of 5 explosions in granite and Phase II will move to a contrasting dry alluvium geology. We apply a high-resolution 2D attenuation model to events near the NNSS to examine what effect path plays in local P/S ratios, and how well an earthquake-derived model can account for shallower explosion paths. The model incorporates both intrinsic attenuation and scattering effects and extends to 16 Hz, allowing us to make lateral path corrections and consider high-frequency ratios. Preliminary work suggests that while 2D path corrections modestly improve earthquake amplitude predictions, explosion amplitudes are not well matched, and so P/S ratios do not necessarily improve. Further work is needed to better understand the uses and limitation of 2D path corrections for local P/S ratios.

  3. A new dataset of Wood Anderson magnitude from the Trieste (Italy) seismic station

    NASA Astrophysics Data System (ADS)

    Sandron, Denis; Gentile, G. Francesco; Gentili, Stefania; Rebez, Alessandro; Santulin, Marco; Slejko, Dario

    2014-05-01

    The standard torsion Wood Anderson (WA) seismograph owes its fame to the fact that historically it has been used for the definition of the magnitude of an earthquake (Richter, 1935). With the progress of technology, digital broadband (BB) seismographs replaced it. However, for historical consistency and homogeneity with the old seismic catalogues, it is still important to continue computing the so-called Wood Anderson magnitude. In order to evaluate the WA magnitude, WA-equivalent synthetic seismograms are simulated by convolving the waveforms recorded by a BB instrument with a suitable transfer function. The value of static magnification that should be applied in order to simulate the WA instrument correctly is debated. The original WA instrument in Trieste operated from 1971 to 1992 and the WA magnitude (MAW) estimates were regularly reported in the seismic station bulletins. The calculation of the local magnitude was performed following Richter's formula (Richter, 1935), using the table of correction factors unmodified from those calibrated for California and without station corrections applied (Finetti, 1972). However, the WA amplitudes were computed as the vector sum rather than the arithmetic average of the horizontal components, resulting in a systematic overestimation of approximately 0.25, depending on the azimuth. In this work, we have retrieved the E-W and N-S components of the original recordings and re-computed MAW according to the original Richter (1935) formula. In 1992, the WA recordings were stopped, due to the long time required for the daily development of the photographic paper, the costs of the photographic paper and the progress of technology. After a decade of interruption, the WA was recovered and modernized by replacing the recording on photographic paper with an electronic device, and it presently continues to record earthquakes. The E-W and N-S component records were stored, but not published until now. Since 2004, next to the WA (a few decimeters apart), a Guralp 40-T BB seismometer has been installed, with its natural period extended to 60 s. The aim of the present work is twofold: on the one hand, to recover the whole data set of MAW values recorded from 1971 until now, with a correct estimate of magnitude, and on the other hand, to verify the WA static magnification by comparing the real WA data with those simulated from broadband seismometer recordings.

  4. Design Document for Differential GPS Ground Reference Station Pseudorange Correction Generation Algorithm

    DOT National Transportation Integrated Search

    1986-12-01

    The algorithms described in this report determine the differential corrections to be broadcast to users of the Global Positioning System (GPS) who require higher accuracy navigation or position information than the 30 to 100 meters that GPS normally ...
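
    The report's algorithms are not reproduced here, but the basic pseudorange-correction idea can be sketched as follows: the correction is the geometric range from the surveyed station position to the broadcast satellite position, minus the clock-corrected measured pseudorange. The reference-receiver clock, common to all satellites, is handled separately, and all values below are invented for illustration.

      import numpy as np

      C = 299_792_458.0   # speed of light, m/s

      def pseudorange_correction(station_ecef, sat_ecef, measured_pr, sat_clock_bias_s):
          """PRC in metres; a user adds it to their own measured pseudorange."""
          geometric_range = np.linalg.norm(np.asarray(sat_ecef) - np.asarray(station_ecef))
          return geometric_range - (measured_pr + C * sat_clock_bias_s)

      station = [-2694685.0, -4293642.0, 3857878.0]        # surveyed ECEF position (m)
      satellite = [15600000.0, -10800000.0, 18300000.0]    # from broadcast ephemeris (m)
      prc = pseudorange_correction(station, satellite, measured_pr=24_195_700.0,
                                   sat_clock_bias_s=1.2e-5)
      print(round(prc, 1))    # broadcast together with a range-rate correction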

  5. Accuracy of tretyakov precipitation gauge: Result of wmo intercomparison

    USGS Publications Warehouse

    Yang, Daqing; Goodison, Barry E.; Metcalfe, John R.; Golubev, Valentin S.; Elomaa, Esko; Gunther, Thilo; Bates, Roy; Pangburn, Timothy; Hanson, Clayton L.; Emerson, Douglas G.; Copaciu, Voilete; Milkovic, Janja

    1995-01-01

    The Tretyakov non-recording precipitation gauge has been used historically as the official precipitation measurement instrument in the Russian (formerly USSR) climatic and hydrological station network and in a number of other European countries. From 1986 to 1993, the accuracy and performance of this gauge were evaluated during the WMO Solid Precipitation Measurement Intercomparison at 11 stations in Canada, the USA, Russia, Germany, Finland, Romania and Croatia. The double fence intercomparison reference (DFIR) was the reference standard used at all stations in the Intercomparison. The Intercomparison data collected at the different sites are compatible with respect to the catch ratio (measured/DFIR) for the same gauge, when compared using mean wind speed at the height of the gauge orifice during the observation period. The Intercomparison data for the Tretyakov gauge were compiled from measurements made at these WMO intercomparison sites. These data represent a variety of climates, terrains and exposures. The effects of environmental factors, such as wind speed, wind direction, type of precipitation and temperature, on gauge catch ratios were investigated. Wind speed was found to be the most important factor determining the gauge catch, and air temperature had a secondary effect when precipitation was classified into snow, mixed and rain. The results of the analysis of gauge catch ratio versus wind speed and temperature on a daily time step are presented for various types of precipitation. Independent checks of the correction equations against the DFIR have been conducted at the Intercomparison stations and good agreement (differences less than 10%) has been obtained. The use of such adjustment procedures should significantly improve the accuracy and homogeneity of gauge-measured precipitation data over large regions of the former USSR and central Europe.
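
    A hedged sketch of how such catch-ratio equations are applied, with invented placeholder coefficients rather than the published WMO regressions: the catch ratio is a function of wind speed at gauge height by precipitation type, and the adjusted precipitation is the measured amount divided by the catch ratio.

      def catch_ratio(wind_ms, precip_type):
          """Catch ratio (gauge/DFIR) as a fraction; coefficients are placeholders."""
          if precip_type == "snow":
              cr = 100.0 - 8.0 * wind_ms          # strongest wind-induced undercatch
          elif precip_type == "mixed":
              cr = 100.0 - 5.0 * wind_ms
          else:                                    # rain: weak wind effect
              cr = 100.0 - 2.0 * wind_ms
          return max(cr, 20.0) / 100.0             # keep the ratio physically bounded

      def adjust_precip(measured_mm, wind_ms, precip_type):
          return measured_mm / catch_ratio(wind_ms, precip_type)

      print(round(adjust_precip(4.0, wind_ms=5.0, precip_type="snow"), 2))  # ~6.7 mm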

  6. Joint Inversion of Phase and Amplitude Data of Surface Waves for North American Upper Mantle

    NASA Astrophysics Data System (ADS)

    Hamada, K.; Yoshizawa, K.

    2015-12-01

    For the reconstruction of the laterally heterogeneous upper-mantle structure using surface waves, we generally use phase delay information of seismograms, which represents the average phase velocity perturbation along a ray path, while the amplitude information has been rarely used in the velocity mapping. Amplitude anomalies of surface waves contain a variety of information such as anelastic attenuation, elastic focusing/defocusing, geometrical spreading, and receiver effects. The effects of elastic focusing/defocusing are dependent on the second derivative of phase velocity across the ray path, and thus, are sensitive to shorter-wavelength structure than the conventional phase data. Therefore, suitably-corrected amplitude data of surface waves can be useful for improving the lateral resolution of phase velocity models. In this study, we collect a large-number of inter-station phase velocity and amplitude ratio data for fundamental-mode surface waves with a non-linear waveform fitting between two stations of USArray. The measured inter-station phase velocity and amplitude ratios are then inverted simultaneously for phase velocity maps and local amplification factor at receiver locations in North America. The synthetic experiments suggest that, while the phase velocity maps derived from phase data only reflect large-scale tectonic features, those from phase and amplitude data tend to exhibit better recovery of the strength of velocity perturbations, which emphasizes local-scale tectonic features with larger lateral velocity gradients; e.g., slow anomalies in Snake River Plain and Rio Grande Rift, where significant local amplification due to elastic focusing are observed. Also, the spatial distribution of receiver amplification factor shows a clear correlation with the velocity structure. Our results indicate that inter-station amplitude-ratio data can be of help in reconstructing shorter-wavelength structures of the upper mantle.

  7. 47 CFR 25.272 - General inter-system coordination procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... network control center which will have the responsibility to do the following: (1) Monitor space-to-Earth transmissions in its system (thus indirectly monitoring uplink earth station transmissions in its system) and (2... and correct the problem promptly. (b) [Reserved] (c) The transmitting earth station licensee shall...

  8. Improving the Accuracy of the AFWA-NASA (ANSA) Blended Snow-Cover Product over the Lower Great Lakes Region

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Foster, James L.; Kumar, Sujay; Chien, Janety Y. L.; Riggs, George A.

    2012-01-01

    The Air Force Weather Agency (AFWA) -- NASA blended snow-cover product, called ANSA, utilizes Earth Observing System standard snow products from the Moderate-Resolution Imaging Spectroradiometer (MODIS) and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) to map daily snow cover and snow-water equivalent (SWE) globally. We have compared ANSA-derived SWE with SWE values calculated from snow depths reported at 1500 National Climatic Data Center (NCDC) co-op stations in the Lower Great Lakes Basin. Compared to station data, the ANSA significantly underestimates SWE in densely-forested areas. We use two methods to remove some of the bias observed in forested areas to reduce the root-mean-square error (RMSE) between the ANSA- and station-derived SWE. First, we calculated a 5-year mean ANSA-derived SWE for the winters of 2005-06 through 2009-10, and developed a five-year mean bias-corrected SWE map for each month. For most of the months studied during the five-year period, the 5-year bias correction improved the agreement between the ANSA-derived and station-derived SWE. However, anomalous months, such as when there was very little snow on the ground compared to the 5-year mean, or months in which the snow was much greater than the 5-year mean, showed poorer results (as expected). We also used a 7-day running mean (7DRM) bias correction method using days just prior to the day in question to correct the ANSA data. This method was more effective in reducing the RMSE between the ANSA- and co-op-derived SWE values, and in capturing the effects of anomalous snow conditions.
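
    A minimal sketch of the 7-day running mean (7DRM) correction for a single grid-cell/station pair, assuming the bias for a given day is the mean ANSA-minus-station difference over the seven preceding days; the daily series below are synthetic, in millimetres.

      import numpy as np

      rng = np.random.default_rng(5)
      days = 60
      station_swe = np.clip(np.cumsum(rng.normal(1.0, 4.0, days)), 0, None)
      ansa_swe = np.clip(station_swe - 25.0 + rng.normal(0, 5.0, days), 0, None)  # underestimate

      corrected = ansa_swe.copy()
      for d in range(7, days):
          bias = np.mean(ansa_swe[d-7:d] - station_swe[d-7:d])   # mean bias of prior 7 days
          corrected[d] = max(ansa_swe[d] - bias, 0.0)

      rmse = lambda a, b: float(np.sqrt(np.mean((a - b)**2)))
      print(round(rmse(ansa_swe[7:], station_swe[7:]), 1),
            round(rmse(corrected[7:], station_swe[7:]), 1))      # RMSE before vs after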

  9. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide at both the commercial and the academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on that research, focusing on the usefulness of single-frequency permanent stations for densifying the existing CORS networks, especially for monitoring purposes. Two types of CORS network are available in Italy today: the so-called "regional network" and the "national network", with mean inter-station distances of about 25-30 km and 50-70 km respectively. These distances are adequate for many applications (e.g. mobile mapping) when geodetic instruments are used, but become less so when mass-market instruments are used or when the distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analysing the rover positioning performance in terms of quality, accuracy and reliability for both real-time and post-processing approaches. Single-frequency GNSS receivers impose some limits, in particular on the baseline length, on correctly fixing the phase ambiguities within the network, and on fixing the phase ambiguity for the rover. These factors play a crucial role in reaching a positioning accuracy at the centimetre level or better in a short time and with high reliability. The goal of this work is to investigate the real contribution of L1 mass-market permanent stations to a CORS network for both geodetic and low-cost receivers; in particular, we describe how the network products generated in real time and in post-processing can improve the accuracy and precision of a rover 5, 10 and 15 km from the nearest station. Tests have been carried out with different types of receivers (geodetic and mass-market) and antennas (patch and geodetic), and with several positioning approaches (static, stop-and-go and real-time) to make the analysis more complete. Good and interesting results were obtained: the approach will be useful for many types of applications (landslide monitoring, traffic control), especially where the inter-station distances of GNSS permanent stations exceed 30 km.

  10. Local magnitude calibration of the Hellenic Unified Seismic Network

    NASA Astrophysics Data System (ADS)

    Scordilis, E. M.; Kementzetzidou, D.; Papazachos, B. C.

    2016-01-01

    A new relation is proposed for the accurate determination of local magnitudes in Greece. This relation is based on a large number of synthetic Wood-Anderson (SWA) seismograms corresponding to 782 regional shallow earthquakes which occurred during the period 2007-2013 and were recorded by 98 digital broad-band stations. These stations are installed and operated by the following: (a) the National Observatory of Athens (HL), (b) the Department of Geophysics of the Aristotle University of Thessaloniki (HT), (c) the Seismological Laboratory of the University of Athens (HA), and (d) the Seismological Laboratory of the University of Patras (HP). The seismological networks of the above institutions constitute the recently (2004) established Hellenic Unified Seismic Network (HUSN). These records are used to calculate a refined geometrical spreading factor and an anelastic attenuation coefficient, representative of Greece and the surrounding area, suitable for the accurate calculation of local magnitudes in this region. Individual station corrections, which depend on the crustal structure variations in the vicinity of each station and on possible inconsistencies in instrument responses, are also considered in order to further improve magnitude estimation accuracy. Comparison of the local magnitudes calculated in this way with the corresponding original moment magnitudes, based on an independent dataset, revealed that these magnitude scales are equivalent over a wide range of values.
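
    For context, local magnitude relations of this kind typically take the general form below (a hedged illustration; a and b stand for the refined geometrical-spreading and anelastic-attenuation terms determined in the study, whose calibrated values are not reproduced here):

    $$ M_L = \log_{10} A_{ij} + a \log_{10}\!\left(\frac{R_{ij}}{100}\right) + b\,(R_{ij} - 100) + 3.0 + S_j $$

    where $A_{ij}$ is the maximum SWA amplitude of event $i$ at station $j$ (in mm), $R_{ij}$ the hypocentral distance (in km), and $S_j$ the individual correction for station $j$.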

  11. Space Station Human Factors Research Review. Volume 3: Space Station Habitability and Function: Architectural Research

    NASA Technical Reports Server (NTRS)

    Cohen, Marc M. (Editor); Eichold, Alice (Editor); Heers, Susan (Editor)

    1987-01-01

    Articles are presented on a space station architectural elements model study, space station group activities habitability module study, full-scale architectural simulation techniques for space stations, and social factors in space station interiors.

  12. REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Pasyanos, M E; Matzel, E

    2008-07-08

    We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling (e.g. Ford et al. 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing seismic events and discriminating between natural and man-made seismic events, such as earthquakes and mining activities, and nuclear weapons testing. We continue developing, testing, and refining size-, distance-, and location-based regional seismic amplitude corrections to facilitate the comparison of all events that are recorded at a particular seismic station. These corrections, calibrated for each station, reduce amplitude measurement scatter and improve discrimination performance. We test the methods on well-known (ground truth) datasets in the U.S. and then apply them to the uncalibrated stations in Eurasia, Africa, and other regions of interest to improve underground nuclear test monitoring capability.
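
    As a rough, hedged illustration of the amplitude-ratio measurement behind the P/S discriminant (without the MDAC magnitude, distance and site corrections discussed above), the sketch below computes a band-limited P/S ratio from windowed P- and S-wave segments; the names and the chosen band are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_rms(trace, dt, fmin, fmax):
    """RMS amplitude of a trace band-pass filtered between fmin and fmax (Hz)."""
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=1.0 / dt, output="sos")
    return np.sqrt(np.mean(sosfiltfilt(sos, trace) ** 2))

def log_p_over_s(p_window, s_window, dt, band=(6.0, 8.0)):
    """log10 P/S amplitude ratio in one frequency band (uncorrected sketch)."""
    return np.log10(band_rms(p_window, dt, *band) / band_rms(s_window, dt, *band))
```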

  13. Assimilation of SMOS Retrievals in the Land Information System

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2016-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm-3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve.
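
    A minimal sketch of the CDF-matching step described above: each retrieval is mapped to the model-climatology value with the same empirical cumulative probability. The aggregation of points by land cover or soil type, and all assimilation machinery, are left out; the function and argument names are illustrative.

```python
import numpy as np

def cdf_match(retrievals, retrieval_clim, model_clim):
    """Map SMOS retrievals onto the model climatology by matching empirical CDFs."""
    retrieval_sorted = np.sort(np.asarray(retrieval_clim))
    model_sorted = np.sort(np.asarray(model_clim))
    # empirical non-exceedance probability of each retrieval in its own climatology
    p = np.searchsorted(retrieval_sorted, retrievals) / float(len(retrieval_sorted))
    return np.quantile(model_sorted, np.clip(p, 0.0, 1.0))
```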

  14. Assimilation of SMOS Retrievals in the Land Information System

    PubMed Central

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2018-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm−3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve. PMID:29367795

  15. Design of an MSAT-X mobile transceiver and related base and gateway stations

    NASA Technical Reports Server (NTRS)

    Fang, Russell J. F.; Bhaskar, Udaya; Hemmati, Farhad; Mackenthun, Kenneth M.; Shenoy, Ajit

    1987-01-01

    This paper summarizes the results of a design study of the mobile transceiver, base station, and gateway station for NASA's proposed Mobile Satellite Experiment (MSAT-X). Major ground segment system design issues such as frequency stability control, modulation method, linear predictive coding vocoder algorithm, and error control technique are addressed. The modular and flexible transceiver design is described in detail, including the core, RF/IF, modem, vocoder, forward error correction codec, amplitude-companded single sideband, and input/output modules, as well as the flexible interface. Designs for a three-carrier base station and a 10-carrier gateway station are also discussed, including the interface with the controllers and with the public-switched telephone networks at the gateway station. Functional specifications are given for the transceiver, the base station, and the gateway station.

  16. Design of an MSAT-X mobile transceiver and related base and gateway stations

    NASA Astrophysics Data System (ADS)

    Fang, Russell J. F.; Bhaskar, Udaya; Hemmati, Farhad; Mackenthun, Kenneth M.; Shenoy, Ajit

    This paper summarizes the results of a design study of the mobile transceiver, base station, and gateway station for NASA's proposed Mobile Satellite Experiment (MSAT-X). Major ground segment system design issues such as frequency stability control, modulation method, linear predictive coding vocoder algorithm, and error control technique are addressed. The modular and flexible transceiver design is described in detail, including the core, RF/IF, modem, vocoder, forward error correction codec, amplitude-companded single sideband, and input/output modules, as well as the flexible interface. Designs for a three-carrier base station and a 10-carrier gateway station are also discussed, including the interface with the controllers and with the public-switched telephone networks at the gateway station. Functional specifications are given for the transceiver, the base station, and the gateway station.

  17. 78 FR 44028 - Review of Foreign Ownership Policies for Common Carrier and Aeronautical Radio Licensees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-23

    ... Foreign Ownership Policies for Common Carrier and Aeronautical Radio Licensees AGENCY: Federal... route and aeronautical fixed radio station licensees. DATES: Effective on August 9, 2013. FOR FURTHER... following corrections are made: Subpart F--Wireless Radio Services Applications and Proceedings [Corrected...

  18. Determination of differential arrival times by cross-correlating worldwide seismological data

    NASA Astrophysics Data System (ADS)

    Godano, M.; Nolet, G.; Zaroli, C.

    2012-12-01

    Cross-correlation delays are the preferred body wave observables in global tomography. Heterogeneity is the main factor influencing delay times found by cross-correlation. Not only the waveform, but also the arrival time itself is affected by differences in seismic velocity encountered along the way. An accurate method for estimating differential times of seismic arrivals across a regional array by cross-correlation was developed by VanDecar and Crosson [1990]. For the estimation of global travel time delays in different frequency bands, Sigloch and Nolet [2006] developed a method for the estimation of body wave delays using a matched filter, which requires the separate estimation of the source time function. Sigloch et al. [2008] found that waveforms often cluster in and opposite the direction of rupture propagation on the fault, confirming that the directivity effect is a major factor in shaping the waveform of large events. We propose a generalization of the VanDecar-Crosson method to which we add a correction for the directivity effect in the seismological data. The new method allows large events to be treated without the need to estimate the source time function for the computation of a matched synthetic waveform. The procedure consists of (1) the detection of the directivity effect in the data and the determination of a rupture model (unilateral or bilateral) explaining the differences in pulse duration among the stations, (2) the determination of an apparent fault rupture length explaining the pulse durations, (3) the removal of the delay due to the directivity effect in the pulse duration, by stretching or contracting the seismograms for directive and anti-directive stations respectively, and (4) the application of a generalized VanDecar and Crosson method using only delays between pairs of stations that have an acceptable correlation coefficient. We validate our method by performing tests on synthetic data. Results show that the errors between theoretical and measured differential arrival times are significantly reduced for the corrected data. We illustrate our method on data from several real earthquakes.
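
    For orientation, a bare-bones cross-correlation delay measurement between two station records is sketched below; sub-sample interpolation, the directivity stretching/contraction step, and the generalized VanDecar-Crosson inversion are not included, and the function name is illustrative.

```python
import numpy as np

def cc_delay(ref, trace, dt):
    """Delay (s) of `trace` relative to `ref` and the peak correlation coefficient."""
    ref = ref - ref.mean()
    trace = trace - trace.mean()
    cc = np.correlate(trace, ref, mode="full")
    lag = int(np.argmax(cc)) - (len(ref) - 1)
    coeff = cc.max() / (np.linalg.norm(ref) * np.linalg.norm(trace))
    return lag * dt, coeff
```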

  19. How well is black carbon in the Arctic atmosphere captured by models?

    NASA Astrophysics Data System (ADS)

    Eckhardt, Sabine; Berntsen, Terje; Cherian, Ribu; Daskalakis, Nikos; Heyes, Chris; Hodnebrog, Øivind; Kanakidou, Maria; Klimont, Zbigniew; Law, Kathy; Lund, Marianne; Myhre, Gunnar; Myriokefalitakis, Stelios; Olivie, Dirk; Quaas, Johannes; Quennehen, Boris; Raut, Jean-Christophe; Samset, Bjørn; Schulz, Michael; Skeie, Ragnhild; Stohl, Andreas

    2014-05-01

    A correct representation of the spatial distribution of aerosols in atmospheric models is essential for realistic simulations of deposition and calculations of radiative forcing. It has been observed that transport of black carbon (BC) into the Arctic and its scavenging are sometimes not captured accurately enough in chemistry transport models (CTMs) and global circulation models (GCMs). In this study we determine the discrepancies between measured equivalent BC (EBC) and modeled BC for several Arctic measurement stations as well as for Arctic aircraft campaigns. For this, we use the output of a set of 5 models based on the same emission dataset (ECLIPSE emissions, see eclipse.nilu.no) and evaluate the simulated concentrations at the measurement locations and times. Emissions are separated into different sources such as biomass burning, domestic heating, gas flaring, industry and the transport sector. We focus on the years 2008 and 2009, when many campaigns took place in the framework of the International Polar Year. Arctic stations like Barrow, Alert, Station Nord in Greenland and Zeppelin show a very pronounced winter/spring maximum in BC. While monthly averaged measured EBC values are around 80 ng/m^3, the models severely underestimate this, with some models simulating only a small percentage of the observed values. During summer, measured concentrations are an order of magnitude lower, and still underestimated by almost an order of magnitude in some models. However, the best models are correct within a factor of 2 in winter/spring and give realistic concentrations in summer. In order to get information on the vertical profile we used measurements from aircraft campaigns like ARCTAS, ARCPAC and HIPPO. It is found that BC at latitudes below 60 degrees is better captured by the models than BC at higher latitudes, even though it is overestimated at high altitudes. A systematic analysis of the performance of the different models is presented. With the dataset we use, we capture remote, polluted and fire-influenced conditions. We estimate the impact of model deficiencies on calculated BC radiative forcing by introducing scaling factors based on the model-measurement comparisons.

  20. First Industrial Tests of a Drum Monitor Matrix Correction for the Fissile Mass Measurement in Large Volume Historic Metallic Residues with the Differential Die-away Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, R.; Passard, C.; Perot, B.

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix effect correction method based on a drum monitor (a ³He proportional counter inside the measurement cavity). A previous study performed with the NML R&D measurement cell PROMETHEE 6 has shown the feasibility of the method and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. A next step of the study focused on the performance assessment of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the ²³⁹Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows the assay of the fissile mass with an uncertainty within a factor of 2, while the uncorrected matrix effect ranges over two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation vs. experiment benchmark has been carried out by performing dedicated calibration measurements with a representative drum and ²³⁵U samples. The preliminary comparison between calculation and experiment shows a satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  1. Research in Application of Geodetic GPS Receivers in Time Synchronization

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Zhang, P.; Sun, Z.; Wang, F.; Wang, X.

    2018-04-01

    In recent years, with the development of techniques for accurately determining satellite orbit and clock parameters and the spread of geodetic GPS receivers, the Common-View (CV) method proposed by Allan in 1980 has gained widespread application and achieved higher-accuracy time synchronization results. GPS Common View (GPS CV) is based on multi-channel geodetic GPS receivers located at different places that, following the same common-view schedule, receive the same GPS satellite signals at the same time. By computing, with appropriate weighting, the difference between each local receiver time and GPST, we obtain the difference between the local times of receivers installed at different stations and tied to external atomic clocks. Multi-channel geodetic GPS receivers have significant advantages over single-channel geodetic GPS receivers in long-baseline time synchronization, such as higher stability, higher accuracy and more common-view satellites. At present, receiver hardware delay and the influence of the surrounding environment are the main error factors affecting the accuracy of GPS common-view results, but most of these errors can be suppressed by smoothing the observation data and by using observations from different satellites in a multi-channel geodetic GPS receiver. After the cancellation of Selective Availability (SA), using a combination of precise satellite ephemerides, ionosphere-free dual-frequency P-code observations and accurately measured receiver hardware delays, time synchronization at the level of nanoseconds (ns) can be achieved. In this paper, 6 days of observation data from two IGS core stations with external atomic clocks (PTB and USNO, separated by about 6000 km) were used to verify the GPS common-view method. Analysis of the GPS observations shows that, with an elevation cut-off of 15°, there are at least 2-4 common-view satellites between the two stations, and 5 in a few tracking periods; even with a 30° cut-off there are at least 2 common-view satellites in each tracking period. Data processing used precise GPS satellite ephemerides, ionosphere-free dual-frequency P-code combination observations and the Black tropospheric delay model. Within each tracking period, all common-view GPS satellites are averaged with weights given by the root-mean-square error of each satellite, yielding a time comparison value between the two stations and hence the time synchronization result between PTB and USNO. Analysis of this result shows that the root-mean-square error of REFSV (the difference between the local frequency standard at the mid-point of the actual tracking length and the tracked satellite time, in units of 0.1 ns) changes linearly within one day; however, a jump occurs at day boundaries, mainly because the satellite positions change due to the interpolation of the two-day precise ephemerides across the day boundary. The overall trend of the time synchronization result declines and becomes stable within a week. We compared the time synchronization results (without the hardware delay correction) with those published by the International Bureau of Weights and Measures (BIPM); the comparison over the preceding week shows the same trend but a systematic bias, mainly caused by the hardware delays of the geodetic GPS receivers. Without the hardware delay correction, the comparison result lies between about 102 ns and 106 ns; taking into account the systematic bias caused mainly by hardware delay, the vast majority of the differences are within 2 ns and individual epochs do not exceed 4 ns. Therefore, it is feasible to use geodetic GPS receivers to achieve nanosecond-level time synchronization between two stations separated by thousands of kilometres, and multi-channel geodetic GPS receivers have an obvious advantage over single-channel receivers in the number of common-view satellites. To obtain higher-precision (e.g. sub-nanosecond) time synchronization results, carrier-phase observations and hardware delays should be taken into account, and more error factors should be considered, such as tropospheric delay correction, multipath effects, and hardware delay changes due to temperature changes.
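
    To make the common-view principle concrete, the sketch below forms the per-satellite differences of (local clock minus GPST) at two stations for one tracking period and takes a weighted average, so that the GPST term cancels; the input structures and names are illustrative, and hardware delay calibration is not included.

```python
import numpy as np

def common_view_offset(clk_a_minus_gpst, clk_b_minus_gpst, weights=None):
    """Clock offset A-B (s) from one tracking period of common-view satellites.
    Inputs are dicts {PRN: local clock minus GPST, in seconds}."""
    common = sorted(set(clk_a_minus_gpst) & set(clk_b_minus_gpst))
    if not common:
        raise ValueError("no common-view satellites in this tracking period")
    diffs = np.array([clk_a_minus_gpst[p] - clk_b_minus_gpst[p] for p in common])
    w = None if weights is None else np.array([weights[p] for p in common])
    return float(np.average(diffs, weights=w))
```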

  2. Assessing the implementation of bias correction in the climate prediction

    NASA Astrophysics Data System (ADS)

    Nadrah Aqilah Tukimat, Nurul

    2018-04-01

    Climate change is nowadays an increasingly pressing and irregular issue. The day-by-day increase of greenhouse gas (GHG) emissions into the atmosphere has a huge impact on weather fluctuations and global warming, so it is important to analyse the changes of climate parameters in the long term. However, the accuracy of climate simulations is always questioned, as it controls the reliability of the projection results. Thus, Linear Scaling (LS) was applied as a bias correction (BC) method to treat the gaps between observed and simulated results. Two rainfall stations in Pahang state were selected: Station Lubuk Paku and Station Temerloh. The Statistical Downscaling Model (SDSM) was used to relate local weather to atmospheric parameters in projecting the long-term rainfall trend. The results revealed that LS successfully reduced the error by up to 3% and produced better simulated climate results.
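
    A minimal sketch of the Linear Scaling (LS) bias correction for rainfall, assuming the common multiplicative form with one factor per calendar month (the abstract does not spell out the exact implementation, so the details below are assumptions):

```python
import numpy as np

def linear_scaling_factors(obs_by_month, sim_by_month):
    """One multiplicative factor per month: observed mean / simulated mean.
    Inputs are dicts {month 1..12: array of daily or monthly rainfall}."""
    return {m: (np.mean(obs_by_month[m]) / np.mean(sim_by_month[m])
                if np.mean(sim_by_month[m]) > 0 else 1.0)
            for m in range(1, 13)}

def apply_linear_scaling(sim_value, month, factors):
    """Bias-correct a simulated rainfall value with the factor for its month."""
    return sim_value * factors[month]
```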

  3. Unexplained Discontinuity in the US Radiosonde Temperature Data. Part 2; Stratosphere

    NASA Technical Reports Server (NTRS)

    Redder, Christopher R.; Luers, Jim K.; Eskridge, Robert E.

    2003-01-01

    In Part I of this paper, the United States (US) radiosonde temperature data are shown to have significant and unexplained inhomogeneities in the mid-troposphere. This part discusses the differences between observations taken at 0 and 12 UTC, especially in the stratosphere, by the Vaisala RS80 radiosondes that are integrated within the National Weather Service's (NWS) Micro-ART system. The results show that there is a large maximum in the horizontal distribution of the monthly means of the 0/12 UTC differences over the central US that is absent over Canada, and this maximum is as large as 5 C at 10 hPa. The vertical profiles of the root-mean-square of the monthly means are much larger in the US than those elsewhere. The data clearly show that the 0/12 UTC differences are largely artificial, especially over the central US, and originate in the post-processing software at observing stations, thus confirming the findings in Part I. Special flight data from the NWS's test facility at Sterling, Va. have been obtained. These data can be used to deduce the bias correction applied by Vaisala's post-processing system. By analyzing the correction data, it can be shown that the inconsistencies with non-US Vaisala RS80 data, as well as most of the large 0/12 UTC differences over the US, can be accounted for by the reported elapsed time (i.e. time since launch) being multiplied by a factor that is incorrectly applied by the post-processing software. After being presented with the findings in this paper, Vaisala further isolated the source of the inconsistencies to a software coding error in the radiation bias correction scheme. The error affects only the software installed at US stations.

  4. Observation of the Earth Liquid Core Resonance by Extensometers

    NASA Astrophysics Data System (ADS)

    Bán, Dóra; Mentes, Gyula; Kis, Márta; Koppán, András

    2018-05-01

    We performed Earth tidal measurements with quartz tube extensometers of the same type at several observatories (Budapest, Pécs and Sopronbánfalva in Hungary, and Vyhne in Slovakia). In this paper, the first attempts to reveal the effect of the Free Core Nutation (FCN) from strain measurements are described. The effect of the FCN on the P1, K1, Ψ1 and Φ1 tidal waves was studied on the basis of tidal results obtained in the four observatories. The effectiveness of correcting the tidal data for temperature, barometric pressure and ocean load was also investigated. The obtained K1/O1 ratios are close to the theoretical values, with the exception of the Pécs station. We found a discrepancy between the observed and theoretical P1/O1 values for all stations, with the exception of the Budapest station. The difference between the measured and theoretical Ψ1/O1 and Φ1/O1 ratios was very large regardless of the correction of the strain data. These discrepancies need further investigation. According to our results, fluid core resonance effects can also be detected by our quartz tube extensometers, but correction of the strain data for local effects is necessary.

  5. RTX Correction Accuracy and Real-Time Data Processing of the New Integrated SeismoGeodetic System with Real-Time Acceleration and Displacement Measurements for Earthquake Characterization Based on High-Rate Seismic and GPS Data

    NASA Astrophysics Data System (ADS)

    Zimakov, L. G.; Raczka, J.; Barrientos, S. E.

    2016-12-01

    We will discuss and show the results obtained from an integrated SeismoGeodetic System, model SG160-09, installed in Chile (Chilean National Network), Italy (University of Naples Network), and California. The SG160-09 provides the user high-rate GNSS and accelerometer data, full epoch-by-epoch measurement integrity and the ability to create combined GNSS and accelerometer high-rate (200 Hz) displacement time series in real time. The SG160-09 combines seismic recording with GNSS geodetic measurement in a single compact, ruggedized case. The system includes a low-power, 220-channel GNSS receiver powered by the latest Trimble-precise Maxwell™6 technology and supports tracking GPS, GLONASS and Galileo signals. The receiver incorporates on-board GNSS point positioning using Real-Time Precise Point Positioning (PPP) technology with satellite clock and orbit corrections delivered over IP networks. The seismic recording includes an ANSS Class A force-balance accelerometer with the latest low-power, 24-bit A/D converter, producing high-resolution seismic data. The SG160-09 processor acquires and packetizes both seismic and geodetic data and transmits them to the central station using an advanced error-correction protocol providing data integrity between the field and the processing center. The SG160-09 has been installed at three seismic stations in different geographic locations with different Trimble global reference station coverage. The hardware includes the SG160-09 system, an external Zephyr Geodetic-2 GNSS antenna, and both radio and high-speed Internet communication media. Both acceleration and displacement data were transmitted in real time to the centralized Data Acquisition Centers for real-time data processing. Command/control of the field station and real-time GNSS position corrections are provided via the Pivot platform. Data from the SG160-09 system were used for seismic event characterization along with data from traditional seismic and geodetic stations installed in the network. Our presentation will focus on the key improvements of the network installation with the SG160-09 system, the RTX correction accuracy obtained from the Trimble Global RTX tracking network, rapid data transmission, and real-time data processing for strong seismic events and aftershock characterization.

  6. 47 CFR 80.1125 - Search and rescue coordinating communications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... station involved may impose silence on stations which interfere with that traffic. This instruction may be... “silence, m'aider”; (2) In narrow-band direct-printing telegraphy normally using forward-error correcting mode, the signal SILENCE MAYDAY. However, the ARQ mode may be used when it is advantageous to do so. (f...

  7. Refined lateral energy correction functions for the KASCADE-Grande experiment based on Geant4 simulations

    NASA Astrophysics Data System (ADS)

    Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.

    2015-02-01

    In previous studies of KASCADE-Grande data, a Monte Carlo simulation code based on the GEANT3 program has been developed to describe the energy deposited by EAS particles in the detector stations. In an attempt to decrease the simulation time and ensure compatibility with the geometry description in standard KASCADE-Grande analysis software, several structural elements have been neglected in the implementation of the Grande station geometry. To improve the agreement between experimental and simulated data, a more accurate simulation of the response of the KASCADE-Grande detector is necessary. A new simulation code has been developed based on the GEANT4 program, including a realistic geometry of the detector station with structural elements that have not been considered in previous studies. The new code is used to study the influence of a realistic detector geometry on the energy deposited in the Grande detector stations by particles from EAS events simulated by CORSIKA. Lateral Energy Correction Functions are determined and compared with previous results based on GEANT3.

  8. Insights into Inpatients with Poor Vision: A High Value Proposition

    PubMed Central

    Press, Valerie G.; Matthiesen, Madeleine I.; Ranadive, Alisha; Hariprasad, Seenu M.; Meltzer, David O.; Arora, Vineet M.

    2015-01-01

    Background Vision impairment is an under-recognized risk factor for adverse events among hospitalized patients, yet vision is neither routinely tested nor documented for inpatients. Low-cost ($8 and up) non-prescription ‘readers’ may be a simple, high-value intervention to improve inpatients’ vision. We aimed to study the initial feasibility and efficacy of screening and correcting inpatients’ vision. Methods From June 2012 through January 2014 we tested whether non-prescription lenses corrected the vision of eligible participants who failed a vision screen (Snellen chart) performed by research assistants (RAs). Descriptive statistics and tests of comparison, including t-tests and chi-squared tests, were used when appropriate. All analyses were performed using Stata version 12 (StataCorp, College Station, TX). Results Over 800 participants’ vision was screened (n=853). Older (≥65 years; 56%) participants were more likely to have insufficient vision than younger (<65 years; 28%; p<0.001) participants. Non-prescription readers corrected the vision of the majority of eligible participants (82%, 95/116). Discussion Among an easily identified sub-group of inpatients with poor vision, low-cost ‘readers’ successfully corrected most participants’ vision. Hospitalists and other clinicians working in the inpatient setting can play an important role in identifying opportunities to provide high-value care related to patients’ vision. PMID:25755206

  9. Fusion of Location Fingerprinting and Trilateration Based on the Example of Differential Wi-Fi Positioning

    NASA Astrophysics Data System (ADS)

    Retscher, G.

    2017-09-01

    Positioning of mobile users in indoor environments with Wireless Fidelity (Wi-Fi) has become very popular, with location fingerprinting and trilateration being the most commonly employed methods. In both, the received signal strength (RSS) of the surrounding access points (APs) is scanned and used to estimate the user's position. Within the scope of this study, the advantageous qualities of both methods are identified and selected to benefit their combination. By fusing these technologies a higher performance for Wi-Fi positioning is achievable. For that purpose, a novel approach based on the well-known Differential GPS (DGPS) principle of operation is developed and applied. This approach for user localization and tracking is termed Differential Wi-Fi (DWi-Fi) by analogy with DGPS. From reference stations deployed in the area of interest, differential measurement corrections are derived and applied at the mobile user side. Hence, range or coordinate corrections can be estimated from a network of reference station observations, as is done in common CORS GNSS networks. A low-cost realization with Raspberry Pi units is employed for these reference stations. These units serve at the same time as APs broadcasting Wi-Fi signals and as reference stations scanning the receivable Wi-Fi signals of the surrounding APs. As the RSS measurements are carried out continuously at the reference stations, dynamically changing maps of RSS distributions, so-called radio maps, are derived. As in location fingerprinting, these radio maps represent the RSS fingerprints at certain locations. From the areal modelling of the correction parameters in combination with the dynamically updated radio maps, the location of the user can be estimated in real time. The novel approach is presented and its performance demonstrated in this paper.
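
    By analogy with DGPS, one very simple way to picture the differential idea is sketched below: a reference unit at a known position compares the RSS it currently scans from each AP with its stored radio-map value and broadcasts the difference, which the rover applies to its own scan before positioning. This is an illustrative reduction of the approach, not the areal correction modelling described in the paper; all names are hypothetical.

```python
def apply_dwifi_corrections(rover_rss, ref_scanned_rss, ref_radio_map):
    """Apply reference-station RSS corrections to a rover scan.
    All inputs are dicts {AP identifier: RSS in dBm}."""
    corrected = {}
    for ap, rss in rover_rss.items():
        if ap in ref_scanned_rss and ap in ref_radio_map:
            correction = ref_radio_map[ap] - ref_scanned_rss[ap]
            corrected[ap] = rss + correction
        else:
            corrected[ap] = rss  # no correction available for this AP
    return corrected
```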

  10. Revision of earthquake hypocentre locations in global bulletin data sets using source-specific station terms

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten

    2017-02-01

    Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ˜20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. Comparison with a high-resolution local catalogue reveals that our relocation process significantly improves the hypocentre locations compared to standard locations.
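
    As a rough sketch of the source-specific station term idea (not the authors' shrinking-box implementation), the helper below computes, for one station, a robust term for a target source position as the median travel-time residual of events within a given radius; in the shrinking-box scheme this radius is reduced at each iteration and the events are relocated in between.

```python
import numpy as np

def source_specific_station_term(residuals, event_xyz, target_xyz, radius_km):
    """Median residual (s) of events within radius_km of the target source position.
    residuals: (N,) travel-time residuals at one station; event_xyz: (N, 3) in km."""
    dist = np.linalg.norm(np.asarray(event_xyz) - np.asarray(target_xyz), axis=1)
    nearby = np.asarray(residuals)[dist <= radius_km]
    return float(np.median(nearby)) if nearby.size else 0.0
```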

  11. Dengue outbreak in a large military station: Have we learnt any lesson?

    PubMed

    Kunwar, R; Prakash, R

    2015-01-01

    An outbreak was reported from a large military station located in South India in 2013. In spite of instituting preventive measures early, it took more than 2 months to bring the outbreak under control. This paper brings out the lessons learnt and suggests a strategy for controlling similar outbreaks in future. The military station comprises 6 large Regimental Centres and many smaller units. The approximate strength of the serving personnel and their families is 25,000. Besides the unit Regimental Medical Officers, a large tertiary care hospital and a Station Health Organization are available to provide health care. A total of 266 patients, including 192 serving personnel and 74 of their dependents, were hospitalized for dengue between 15 May 2013 and 28 Jul 2013. Many dependents without severe symptoms were not hospitalized and were treated on an outpatient basis. Health advisories and instructions for constituting a Dengue Task Force (DTF) were issued well in advance. Preventive measures were instituted early, but the outbreak was controlled only after intervention from higher administrative authorities. Lessons learnt included: correct and timely perception of threat is essential; behavioural change of individuals is required; availability of adequate health functionaries is mandatory; and a complete dataset helps correct perception. A future strategy for control of dengue outbreaks should include repeated and timely surveys of the entire area for correct risk perception, assessment of behavioural change among individuals, and operational research to assess the impact of the ongoing public health campaign.

  12. RTK and DGPS measurements using INTERNET and GSM radiolink

    NASA Astrophysics Data System (ADS)

    Rogowski, J. B.; Rogowski, A.; Kujawa, L.

    2003-04-01

    The practical need for real-time GNSS positioning has driven the development of media for data transmission. DGPS corrections can be transmitted over an area of a few hundred kilometres (tested at the Polish Solec Kujawski radio station) on long waves. The RTK technique requires a greater capacity of the radio links and a shorter distance between the base stations. The RTK data from the base stations could be transmitted in the DARC system by local stations on UKF channels, but the local stations are not interested in propagating RTCM data. Experiences with RTK and DGPS measurements using data transmission over the Internet and GSM radio links are presented in the paper.

  13. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    PubMed Central

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-01-01

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months. PMID:26213941
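
    One plausible way to use the numerical correspondence between ATE and SR described above is a simple regression fit followed by a correction, sketched below; the linear form and the function names are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def fit_ate_vs_sr(sensed_at, reference_at, solar_radiation):
    """Fit AT error (sensed minus reference, degC) against solar radiation (W/m2)."""
    ate = np.asarray(sensed_at) - np.asarray(reference_at)
    slope, intercept = np.polyfit(np.asarray(solar_radiation), ate, 1)
    return slope, intercept

def correct_at(sensed_at, solar_radiation, slope, intercept):
    """Remove the SR-dependent error from a sensed air temperature."""
    return sensed_at - (slope * solar_radiation + intercept)
```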

  14. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    PubMed

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  15. Strategy to minimize the impact of the South Atlantic Anomaly effect on the DORIS station position estimation

    NASA Astrophysics Data System (ADS)

    Capdeville, H.; Moreaux, G.; Lemoine, J. M.

    2017-12-01

    All the Ultra-Stable Oscillators (USOs) of DORIS satellites are more or less sensitive to the South Atlantic Anomaly (SAA) effect. For the Jason-1 and SPOT-5 satellites, a corrective model has been developed and used for the realization of ITRF2014. However, Jason-2 is also impacted, not at the same level as Jason-1 but strongly enough to worsen the multi-satellite solution provided for ITRF2014 for the SAA stations. The latest DORIS satellites are also impacted by the SAA effect, in particular Jason-3. Thanks to the extremely precise time-tagging of the T2L2 experiment on board Jason-2, A. Belli and the GEOAZUR team managed to draw up a model that accurately represents the variations of the Jason-2 USO's frequency. This model will be evaluated by analyzing its impact on the position estimation of the SAA stations. While awaiting a DORIS data corrective model for the other satellites, Jason-3 and Sentinel-3A, we propose here different strategies to minimize the SAA effect on the orbit and, in particular, on the station position estimation. We will compare the DORIS positions of the SAA stations with collocated GNSS positions.

  16. Trends in the Vertical Distribution of Ozone: A Comparison of Two Analyses of Ozonesonde Data

    NASA Technical Reports Server (NTRS)

    Logan, J. A.; Megretskaia, I. A.; Miller, A. J.; Tiao, G. C.; Choi, D.; Zhang, L.; Bishop, L.; Stolarski, R.; Labow, G. J.; Hollandsworth, S. M.

    1998-01-01

    We present the results of two independent analyses of ozonesonde measurements of the vertical profile of ozone. For most of the ozonesonde stations we use data that were recently reprocessed and reevaluated to improve their quality and internal consistency. The two analyses give similar results for trends in ozone. We attribute differences in results primarily to differences in data selection criteria and in utilization of data correction factors, rather than in statistical trend models. We find significant decreases in stratospheric ozone at all stations in middle and high latitudes of the northern hemisphere from 1970 to 1996, with the largest decreases located between 12 and 21 km, and trends of -3 to -10 %/decade near 17 km. The decreases are largest at the Canadian and the most northerly Japanese station, and are smallest at the European stations, and at Wallops Island, U.S.A. The mean mid-latitude trend is largest, -7 %/decade, from 12 to 17.5 km for 1970-96. For 1980-96, the decrease is more negative by 1-2 %/decade, with a maximum trend of -9 %/decade in the lowermost stratosphere. The trends vary seasonally from about 12 to 17.5 km, with largest ozone decreases in winter and spring. Trends in tropospheric ozone are highly variable and depend on region. There are decreases or zero trends at the Canadian stations for 1970-96, and decreases of -2 to -8 %/decade for the mid-troposphere for 1980-96; the three European stations show increases for 1970-96, but trends are close to zero for two stations for 1980-96 and positive for one; there are increases in ozone for the three Japanese stations for 1970-96, but trends are either positive or zero for 1980-96; the U.S. stations show zero or slightly negative trends in tropospheric ozone after 1980. It is not possible to define reliably a mean tropospheric ozone trend for northern mid-latitudes, given the small number of stations and the large variability in trends. The integrated column trends derived from the sonde data are consistent with trends derived from both surface based and satellite measurements of the ozone column.

  17. Temperature trend biases

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Lindau, Ralf

    2016-04-01

    In an accompanying talk we show that well-homogenized national datasets warm more than temperatures from global collections averaged over the region of common coverage. In this poster we present auxiliary work about possible biases in the raw observations and about how well relative statistical homogenization can remove trend biases. There are several possible causes of cooling biases which have not been studied much. Siting could be an important factor. Urban stations tend to move away from the centre to better locations. Many stations started inside urban areas and are nowadays located further outside. Even for villages the temperature difference between the centre and the edge can be 0.5°C. When a city station moves to an airport, which often happened around WWII, this takes the station (largely) out of the urban heat island. During the 20th century the Stevenson screen was established as the dominant thermometer screen. This screen protected the thermometer much better against radiation than earlier designs. Deficits of earlier measurement methods have artificially warmed the temperatures in the 19th century. Newer studies suggest we may have underestimated the size of this bias. Currently we are in a transition to Automatic Weather Stations. The net global effect of this transition is not clear at this moment. Irrigation on average decreases the 2 m temperature by about 1 degree centigrade. At the same time, irrigation has increased significantly during the last century. People preferentially live in irrigated areas and weather stations serve agriculture. Thus it is possible that weather stations are more likely to be erected in irrigated areas than elsewhere. In this case irrigation could lead to a spurious cooling trend. In the Parallel Observations Science Team of the International Surface Temperature Initiative (ISTI-POST) we are studying the influence of the introduction of Stevenson screens and Automatic Weather Stations using parallel measurements, as well as the influence of relocations. Previous validation studies of statistical homogenization unfortunately have some caveats when it comes to large-scale trends. The main problem is that the validation datasets had a relatively large signal-to-noise ratio (SNR), i.e., a large break variance relative to the variance of the noise of the difference time series. Our recent work on multiple-breakpoint detection methods shows that the SNR is very important and that for an SNR around 0.5 the segmentation is about as good as a random segmentation. If the corrections are computed with a composite reference that also contains breaks, the correction will reduce the obvious breaks in the single stations due to network-wide transitions that are executed over short periods, but may not reduce the large-scale bias much. The joint correction method using a decomposition approach (ANOVA) can remove the bias when all breaks (predictors) are known. Any error in the predictors will, however, lead to under-correction of any large-scale trend biases.

  18. a Climatology of Global Precipitation.

    NASA Astrophysics Data System (ADS)

    Legates, David Russell

    A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.

  19. An Iterative, Geometric, Tilt Correction Method for Radiation and Albedo Observed by Automatic Weather Stations on Snow-Covered Surfaces: Application to Greenland

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zender, C. S.; van As, D.; Smeets, P.; van den Broeke, M.

    2015-12-01

    Surface melt and mass loss of the Greenland Ice Sheet may play crucial roles in global climate change due to their positive feedbacks and large fresh water storage. With few other regular meteorological observations available in this extreme environment, measurements from Automatic Weather Stations (AWS) are the primary data source for surface energy budget studies and for validating satellite observations and model simulations. However, station tilt, due to surface melt and compaction, results in considerable biases in the radiation and thus albedo measurements by AWS. In this study, we identify the tilt-induced biases in the climatology of surface radiative flux and albedo, and then correct them based on geometrical principles. Over all the AWS from the Greenland Climate Network (GC-Net), the Kangerlussuaq transect (K-transect) and the Programme for Monitoring of the Greenland Ice Sheet (PROMICE), only ~15% of clear days have the correct solar noon time, with the largest bias being 3 hours. Absolute hourly biases in the magnitude of surface insolation can reach up to 200 W/m2, with daily averages exceeding 100 W/m2. The biases are larger in the accumulation zone due to the systematic tilt at each station, although the variability of tilt angles is larger in the ablation zone. Averaged over the whole Greenland Ice Sheet in the melting season, the absolute bias in insolation is ~23 W/m2, enough to melt 0.51 m snow water equivalent. We estimate the tilt angles and their directions by comparing the simulated insolation at a horizontal surface with the insolation observed by these tilted AWS under clear-sky conditions. Our correction reduces the RMSE against satellite measurements and reanalysis by ~30 W/m2 relative to the uncorrected data, with correlation coefficients over 0.95 for both references. The corrected diurnal changes of albedo are smoother, with consistent semi-smiling patterns (see Fig. 1). The seasonal cycles and annual variabilities of albedo are in better agreement with previous studies (see Figs. 2 and 3). The consistent tilt-corrected shortwave radiation dataset derived here will provide better observations and validations for surface energy budget studies on the Greenland Ice Sheet, including albedo variation, surface melt simulations and cloud radiative forcing estimates.
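
    The geometric idea behind the tilt correction can be sketched as the ratio of direct-beam irradiance received by a tilted sensor to that received by a horizontal one. The following Python fragment is only an illustration under a direct-beam-only assumption, not the authors' iterative correction, and all angles are hypothetical.

    ```python
    import numpy as np

    def tilt_factor(sza_deg, saz_deg, tilt_deg, tilt_az_deg):
        """Ratio of direct-beam irradiance on a tilted sensor to that on a
        horizontal one, given solar zenith/azimuth and sensor tilt/azimuth (deg)."""
        sza, saz = np.radians(sza_deg), np.radians(saz_deg)
        tilt, taz = np.radians(tilt_deg), np.radians(tilt_az_deg)
        cos_inc = (np.cos(tilt) * np.cos(sza) +
                   np.sin(tilt) * np.sin(sza) * np.cos(saz - taz))
        return max(cos_inc, 0.0) / np.cos(sza)

    # a 5-degree tilt changes the apparent insolation noticeably at low sun
    print(round(tilt_factor(70.0, 120.0, 5.0, 120.0), 3))   # tilted toward the sun
    print(round(tilt_factor(70.0, 300.0, 5.0, 120.0), 3))   # tilted away from the sun
    ```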

  20. Replacing the CCSDS Telecommand Protocol with Next Generation Uplink

    NASA Technical Reports Server (NTRS)

    Kazz, Greg; Burleigh, Scott; Greenberg, Ed

    2012-01-01

    Better-performing Forward Error Correction on the forward link, along with adequate power in the data, opens an uplink operations trade space that enables missions to: command to greater distances in deep space (increased uplink margin); increase the size of the payload data (latency may be a factor); and provide space for the security header/trailer of the CCSDS Space Data Link Security Protocol. Note: these higher rates could be used for relief of emergency communication margins/rates and are not limited to improving top-end rate performance. A higher-performance uplink could also reduce the requirements on flight emergency antenna size and/or the performance required from ground stations. Use of a selective-repeat ARQ protocol may increase the uplink design requirements, but the resultant development is deemed acceptable due to the potential factor-of-4-to-8 increase in uplink data rate.

  1. NAND Flash Qualification Guideline

    NASA Technical Reports Server (NTRS)

    Heidecker, Jason

    2012-01-01

  2. Principal facts of gravity stations with gravity and magnetic profiles from the Southwest Nevada Test Site, Nye County, Nevada, as of January, 1982

    USGS Publications Warehouse

    Jansma, P.E.; Snyder, D.B.; Ponce, David A.

    1983-01-01

    Three gravity profiles and principal facts of 2,604 gravity stations in the southwest quadrant of the Nevada Test Site are documented in this data report. The residual gravity profiles show the gravity measurements and the smoothed curves derived from these points that were used in geophysical interpretations. The principal facts include station label, latitude, longitude, elevation, observed gravity value, and terrain correction for each station as well as the derived complete Bouguer and isostatic anomalies, reduced at 2.67 g/cm3. Accuracy codes, where available, further document the data.
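
    For orientation, a minimal sketch of how principal facts of this kind combine into a complete Bouguer anomaly is given below, using the standard free-air and infinite-slab constants and the report's reduction density of 2.67 g/cm3. The function name and station values are hypothetical and illustrative only.

    ```python
    def complete_bouguer(obs_mgal, theo_mgal, elev_m, terrain_mgal, density=2.67):
        """Complete Bouguer anomaly (mGal) from one station's principal facts:
        observed gravity, theoretical (latitude) gravity, elevation and terrain
        correction, using standard free-air and infinite-slab constants."""
        free_air = 0.3086 * elev_m              # free-air correction, mGal
        slab = 0.04193 * density * elev_m       # Bouguer slab correction, mGal
        return obs_mgal - theo_mgal + free_air - slab + terrain_mgal

    # hypothetical station at 1200 m elevation (all numbers illustrative)
    print(round(complete_bouguer(979100.0, 979300.0, 1200.0, 1.5), 1))
    ```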

  3. ACTS TDMA network control. [Advanced Communication Technology Satellite

    NASA Technical Reports Server (NTRS)

    Inukai, T.; Campanella, S. J.

    1984-01-01

    This paper presents basic network control concepts for the Advanced Communications Technology Satellite (ACTS) System. Two experimental systems, called the low-burst-rate and high-burst-rate systems, along with ACTS ground system features, are described. The network control issues addressed include frame structures, acquisition and synchronization procedures, coordinated station burst-time plan and satellite-time plan changes, on-board clock control based on ground drift measurements, rain fade control by means of adaptive forward-error-correction (FEC) coding and transmit power augmentation, and reassignment of channel capacities on demand. The NASA ground system, which includes a primary station, diversity station, and master control station, is also described.

  4. Overview: Human Factors Issues in Space Station Architecture

    NASA Technical Reports Server (NTRS)

    Cohen, M. M.

    1985-01-01

    An overview is presented of human factors issues in space station architecture. The status of the space station program is given. Habitability concerns such as vibroacoustics, lighting systems, privacy and work stations are discussed in detail.

  5. Generating daily weather data for ecosystem modelling in the Congo River Basin

    NASA Astrophysics Data System (ADS)

    Petritsch, Richard; Pietsch, Stephan A.

    2010-05-01

    Daily weather data are an important constraint for diverse applications in ecosystem research. In particular, temperature and precipitation are the main drivers of forest ecosystem productivity. Mechanistic modelling theory heavily relies on daily values of minimum and maximum temperature, precipitation, incident solar radiation and vapour pressure deficit. Although the number of climate measurement stations has increased during the last centuries, there are still regions with limited climate data. For example, in the WMO database there are only 16 stations located in Gabon with daily weather measurements. Additionally, the available time series are heavily affected by measurement errors or missing values. In the WMO record for Gabon, on average every second day is missing. Monthly means are more robust and may be estimated over larger areas. Therefore, a good alternative is to interpolate monthly mean values using a sparse network of measurement stations and, based on these monthly data, generate daily weather data with defined characteristics. The weather generator MarkSim was developed to produce climatological time series for crop modelling in the tropics. It provides daily values for maximum and minimum temperature, precipitation and solar radiation. The monthly means can either be derived from the internal climate surfaces or prescribed as additional inputs. We compared the generated outputs with observations from three climate stations in Gabon (Lastourville, Moanda and Mouilla) and found that maximum temperature and solar radiation were heavily overestimated during the long dry season. This is due to the internal dependency of the solar radiation estimates on precipitation. With no precipitation a cloudless sky is assumed and thus high incident solar radiation and a large diurnal temperature range. However, in reality it is cloudy in the Congo River Basin during the long dry season. Therefore, we applied a correction factor to solar radiation and temperature range based on the ratio of values on rainy days and days without rain, respectively. To assess the impact of our correction, we simulated the ecosystem behaviour using the climate data from Lastourville, Moanda and Mouilla with the mechanistic ecosystem model Biome-BGC. Differences in terms of the carbon, nitrogen and water cycles were subsequently analysed and discussed.
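
    A minimal sketch of such a rainy-day/dry-day correction factor is given below. The rescaling of dry-day solar radiation and all numbers are hypothetical; they only illustrate the ratio-based idea described above, not the authors' implementation.

    ```python
    import numpy as np

    def dry_day_correction(srad, precip, obs_ratio):
        """Rescale generated dry-day solar radiation so that the dry/wet-day ratio
        matches an observed ratio (obs_ratio = mean dry-day / mean wet-day value)."""
        srad = np.asarray(srad, dtype=float).copy()
        wet = np.asarray(precip) > 0.0
        gen_ratio = srad[~wet].mean() / srad[wet].mean()
        srad[~wet] *= obs_ratio / gen_ratio   # damp the generator's clear-sky assumption
        return srad

    # toy generated week: the generator assumes clear skies whenever precip == 0
    srad = [22.0, 21.5, 12.0, 23.0, 11.5, 22.5, 21.0]     # MJ m-2 d-1
    precip = [0.0, 0.0, 8.0, 0.0, 12.0, 0.0, 0.0]         # mm
    print(dry_day_correction(srad, precip, obs_ratio=1.1).round(1))
    ```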

  6. Interannual variability of mean sea level and its sensitivity to wind climate in an inter-tidal basin

    NASA Astrophysics Data System (ADS)

    Gerkema, Theo; Duran-Matute, Matias

    2017-12-01

    The relationship between the annual wind records from a weather station and annual mean sea level in an inter-tidal basin, the Dutch Wadden Sea, is examined. Recent, homogeneous wind records are used, covering the past 2 decades. It is demonstrated that even such a relatively short record is sufficient for finding a convincing relationship. The interannual variability of mean sea level is largely explained by the west-east component of the net wind energy, with some further improvement if one also includes the south-north component and the annual mean atmospheric pressure. Using measured data from a weather station is found to give a slight improvement over reanalysis data, but for both the correlation between annual mean sea level and wind energy in the west-east direction is high. For different tide gauge stations in the Dutch Wadden Sea and along the coast, we find the same qualitative characteristics, but even within this small region, different locations show a different sensitivity of annual mean sea level to wind direction. Correcting observed values of annual mean level for meteorological factors reduces the margin of error (expressed as 95 % confidence interval) by more than a factor of 4 in the trends of the 20-year sea level record. Supplementary data from a numerical hydrodynamical model are used to illustrate the regional variability in annual mean sea level and its interannual variability at a high spatial resolution. This study implies that climatic changes in the strength of winds from a specific direction may affect local annual mean sea level quite significantly.
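
    A minimal sketch of correcting annual mean sea level for meteorological predictors by least-squares regression is shown below. The predictor set mirrors the one described above, but all values are made up and the code is illustrative, not the study's analysis.

    ```python
    import numpy as np

    # hypothetical annual means: west-east wind energy, south-north wind energy,
    # mean pressure anomaly (hPa) and annual mean sea level (cm)
    we  = np.array([1.2, 0.8, 1.5, 1.0, 1.7, 0.9, 1.3, 1.1])
    sn  = np.array([0.3, 0.1, 0.4, 0.2, 0.5, 0.1, 0.3, 0.2])
    slp = np.array([-1.0, 0.5, -1.5, 0.0, -2.0, 1.0, -0.5, 0.2])
    msl = np.array([6.0, 3.5, 7.5, 5.0, 9.0, 3.0, 6.5, 5.2])

    X = np.column_stack([np.ones_like(we), we, sn, slp])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, msl, rcond=None)         # least-squares coefficients
    corrected = msl - X @ beta                             # residual: "weather-corrected" sea level
    print(beta.round(2))
    print(round(float(corrected.std()), 2))                # smaller scatter than the raw msl series
    ```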

  7. Validation of SCIAMACHY HDO/H2O measurements using the TCCON and NDACC-MUSICA networks

    NASA Astrophysics Data System (ADS)

    Scheepmaker, R. A.; Frankenberg, C.; Deutscher, N. M.; Schneider, M.; Barthlott, S.; Blumenstock, T.; Garcia, O. E.; Hase, F.; Jones, N.; Mahieu, E.; Notholt, J.; Velazco, V.; Landgraf, J.; Aben, I.

    2015-04-01

    Measurements of the atmospheric HDO/H2O ratio help us to better understand the hydrological cycle and improve models to correctly simulate tropospheric humidity and therefore climate change. We present an updated version of the column-averaged HDO/H2O ratio data set from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). The data set is extended with 2 additional years, now covering 2003-2007, and is validated against co-located ground-based total column δD measurements from Fourier transform spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change (NDACC, produced within the framework of the MUSICA project). Even though the time overlap among the available data is not yet ideal, we determined a mean negative bias in SCIAMACHY δD of -35 ± 30‰ compared to TCCON and -69 ± 15‰ compared to MUSICA (the uncertainty indicating the station-to-station standard deviation). The bias shows a latitudinal dependency, being largest (∼ -60 to -80‰) at the highest latitudes and smallest (∼ -20 to -30‰) at the lowest latitudes. We have tested the impact of an offset correction to the SCIAMACHY HDO and H2O columns. This correction leads to a humidity- and latitude-dependent shift in δD and an improvement of the bias by 27‰, although it does not lead to an improved correlation with the FTS measurements nor to a strong reduction of the latitudinal dependency of the bias. The correction might be an improvement for dry, high-altitude areas, such as the Tibetan Plateau and the Andes region. For these areas, however, validation is currently impossible due to a lack of ground stations. The mean standard deviation of single-sounding SCIAMACHY-FTS differences is ∼ 115‰, which is reduced by a factor ∼ 2 when we consider monthly means. When we relax the strict matching of individual measurements and focus on the mean seasonalities using all available FTS data, we find that the correlation coefficients between SCIAMACHY and the FTS networks improve from 0.2 to 0.7-0.8. Certain ground stations show a clear asymmetry in δD during the transition from the dry to the wet season and back, which is also detected by SCIAMACHY. This asymmetry points to a transition in the source region temperature or location of the water vapour and shows the added information that HDO/H2O measurements provide when used in combination with variations in humidity.
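
    For reference, the conversion from retrieved HDO and H2O columns to a column-averaged δD can be sketched as below. The VSMOW HDO/H2O ratio used here (3.1152e-4, i.e. twice the D/H ratio) and the example column values are assumptions for illustration, not values taken from the retrieval.

    ```python
    def delta_d(hdo_column, h2o_column, r_vsmow=3.1152e-4):
        """Column delta-D in permil from retrieved HDO and H2O total columns;
        r_vsmow is the assumed VSMOW HDO/H2O ratio."""
        return (hdo_column / h2o_column / r_vsmow - 1.0) * 1000.0

    # hypothetical total columns in molecules cm-2
    print(round(delta_d(1.3e19, 5.0e22), 1))   # about -165 permil
    ```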

  8. Validation of SCIAMACHY HDO/H2O measurements using the TCCON and NDACC-MUSICA networks

    NASA Astrophysics Data System (ADS)

    Scheepmaker, R. A.; Frankenberg, C.; Deutscher, N. M.; Schneider, M.; Barthlott, S.; Blumenstock, T.; Garcia, O. E.; Hase, F.; Jones, N.; Mahieu, E.; Notholt, J.; Velazco, V.; Landgraf, J.; Aben, I.

    2014-11-01

    Measurements of the atmospheric HDO/H2O ratio help us to better understand the hydrological cycle and improve models to correctly simulate tropospheric humidity and therefore climate change. We present an updated version of the column-averaged HDO/H2O ratio dataset from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). The dataset is extended with two additional years, now covering 2003-2007, and is validated against co-located ground-based total column δD measurements from Fourier-Transform Spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change (NDACC, produced within the framework of the MUSICA project). Even though the time overlap between the available data is not yet ideal, we determined a mean negative bias in SCIAMACHY δD of -35±30‰ compared to TCCON and -69±15‰ compared to MUSICA (the uncertainty indicating the station-to-station standard deviation). The bias shows a latitudinal dependency, being largest (∼ -60 to -80‰) at the highest latitudes and smallest (∼ -20 to -30‰) at the lowest latitudes. We have tested the impact of an offset correction to the SCIAMACHY HDO and H2O columns. This correction leads to a humidity and latitude dependent shift in δD and an improvement of the bias by 27‰, although it does not lead to an improved correlation with the FTS measurements nor to a strong reduction of the latitudinal dependency of the bias. The correction might be an improvement for dry, high-altitude areas, such as the Tibetan Plateau and the Andes region. For these areas, however, validation is currently impossible due to a lack of ground stations. The mean standard deviation of single-sounding SCIAMACHY-FTS differences is ∼ 115‰, which is reduced by a factor ∼ 2 when we consider monthly means. When we relax the strict matching of individual measurements and focus on the mean seasonalities using all available FTS data, we find that the correlation coefficients between SCIAMACHY and the FTS networks improve from 0.2 to 0.7-0.8. Certain ground stations show a clear asymmetry in δD during the transition from the dry to the wet season and back, which is also detected by SCIAMACHY. This asymmetry points to a transition in the source region temperature or location of the water vapor, and shows the added information that HDO/H2O measurements provide, if used in combination with variations in humidity.

  9. Distribution of Attenuation Factor Beneath the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Hashimoto, M.

    2001-12-01

    In this research, we tried to estimate the distribution of the attenuation factor of seismic waves, which is closely related to the above-mentioned inelastic parameters. Here the velocity records of events from the Freesia network and the J-array network were used. The events were selected based on the following criteria: (a) events with JMA magnitudes from 3.8 to 5.0 and hypocentral distances from 20 km to 200 km, (b) events with JMA magnitudes from 5.1 to 6.8 and hypocentral distances from 200 km to 10°, (c) depths of all events greater than 30 km with S/N ratios greater than 2. After correcting for the instrument response, P-wave spectra were estimated. Following Boatwright (1991), the observed spectra were modeled by theoretical spectra assuming the following relation: Aij(f) = Si(f) Pij(f) Cj(f). Brune's model (1970) was assumed for the source. Aij(f), Si(f), Pij(f), and Cj(f) are defined as the observed spectrum, source spectrum, propagation effect, and site effect, respectively. Frequency dependence of the attenuation factor was not assumed here. The global standard velocity model (AK135) is used for ray tracing. Ellipticity corrections and station elevation corrections are also applied. The block sizes are 50 km by 50 km laterally and increase vertically. As a result of the analysis, the attenuation structure beneath the Japanese Islands down to a depth of 180 km was reconstructed with relatively good resolution. The low-Q distribution is clearly seen in central Hokkaido, western Hokkaido, the Tohoku region, the Hida region, the Izu region, and southern Kyushu. The relatively sharp decrease in Q associated with the asthenosphere can be seen below a depth of 70 km.
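
    A minimal forward model of the quoted spectral factorization, with a Brune source and a frequency-independent-Q path term, can be sketched as follows. The corner frequency, travel time and Q value are hypothetical, and the site term is fixed at 1 for simplicity; in the study the path terms are inverted for in blocks rather than prescribed.

    ```python
    import numpy as np

    def brune_source(f, omega0, fc):
        """Brune (1970) omega-squared source spectrum."""
        return omega0 / (1.0 + (f / fc) ** 2)

    def path_attenuation(f, travel_time, Q):
        """Frequency-independent-Q path term, P(f) = exp(-pi * f * t / Q)."""
        return np.exp(-np.pi * f * travel_time / Q)

    # predicted amplitude spectrum A(f) = S(f) * P(f) * C with the site term C = 1
    f = np.array([1.0, 2.0, 5.0, 10.0, 20.0])               # Hz
    a_pred = brune_source(f, omega0=1.0, fc=4.0) * path_attenuation(f, travel_time=30.0, Q=300.0)
    print(a_pred.round(4))
    ```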

  10. Spatial and temporal air quality pattern recognition using environmetric techniques: a case study in Malaysia.

    PubMed

    Syed Abdul Mutalib, Sharifah Norsukhairin; Juahir, Hafizan; Azid, Azman; Mohd Sharif, Sharifah; Latif, Mohd Talib; Aris, Ahmad Zaharin; Zain, Sharifuddin M; Dominick, Doreena

    2013-09-01

    The objective of this study is to identify spatial and temporal patterns in the air quality at three selected Malaysian air monitoring stations based on an eleven-year database (January 2000-December 2010). Four statistical methods, Discriminant Analysis (DA), Hierarchical Agglomerative Cluster Analysis (HACA), Principal Component Analysis (PCA) and Artificial Neural Networks (ANNs), were selected to analyze the datasets of five air quality parameters, namely: SO2, NO2, O3, CO and particulate matter with a diameter size of below 10 μm (PM10). The three selected air monitoring stations share the characteristic of being located in highly urbanized areas and are surrounded by a number of industries. The DA results show that spatial characterizations allow successful discrimination between the three stations, while HACA shows the temporal pattern from the monthly and yearly factor analysis which correlates with severe haze episodes that have happened in this country at certain periods of time. The PCA results show that the major source of air pollution is mostly due to the combustion of fossil fuel in motor vehicles and industrial activities. The spatial pattern recognition (S-ANN) results show a better prediction performance in discriminating between the regions, with an excellent percentage of correct classification compared to DA. This study presents the necessity and usefulness of environmetric techniques for the interpretation of large datasets aiming to obtain better information about air quality patterns based on spatial and temporal characterizations at the selected air monitoring stations.

  11. The research of a solution on locating optimally a station for seismic disasters rescue in a city

    NASA Astrophysics Data System (ADS)

    Yao, Qing-Lin

    1995-02-01

    When stations for seismic disaster rescue (or similar facilities) are to be sited on a communication-line network, the general absolute center of a graph must be solved in order to reduce the required number of stations and operating resources and to locate a station that is optimal with respect to the distribution of rescue arrival times. The existing solution to this problem, proposed by Edward (1978), contains a serious error. In this article, the work of Edward (1978) is developed further in both formulae and figures, and a more correct solution is proposed and proved. The result from the new solution is then contrasted with that from the older one in an example of optimally locating a station for seismic disaster rescue.
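
    As a simplified illustration of the underlying graph-center idea, the sketch below computes the vertex 1-center of a small weighted network using Floyd-Warshall shortest paths. The paper's absolute center additionally allows points along edges; the example weights here are hypothetical.

    ```python
    import itertools
    import numpy as np

    def vertex_center(weights):
        """Vertex 1-center of a weighted graph: the node that minimises the
        maximum shortest-path distance (worst-case rescue arrival time)."""
        d = np.array(weights, dtype=float)
        n = len(d)
        for k, i, j in itertools.product(range(n), repeat=3):   # Floyd-Warshall
            d[i, j] = min(d[i, j], d[i, k] + d[k, j])
        eccentricity = d.max(axis=1)
        best = int(eccentricity.argmin())
        return best, float(eccentricity[best])

    INF = 1.0e9                      # "no direct line" between two nodes
    w = [[0,   4,   INF, 3  ],
         [4,   0,   2,   INF],
         [INF, 2,   0,   5  ],
         [3,   INF, 5,   0  ]]
    print(vertex_center(w))          # -> (0, 6.0): node 0 is the best vertex location
    ```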

  12. 75 FR 53843 - Airworthiness Directives; The Boeing Company Model 737-100 and -200 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... damaged fasteners of certain fuselage frames and stub beams, and corrective actions if necessary. For... hole of the frame at body station 639, stringer S-16, and corrective actions if necessary. For certain... terminates the repetitive inspections for the repaired or modified frame only. For airplanes on which the...

  13. 75 FR 27969 - Airworthiness Directives; The Boeing Company Model 737-100 and -200 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ... cracking and damaged fasteners of certain fuselage frames and stub beams, and corrective actions if... the inboard chord fastener hole of the frame at body station 639, stringer S-16, and corrective... inspections for the repaired or modified frame only. For airplanes on which the modification or repair is done...

  14. 75 FR 6154 - Airworthiness Directives; The Boeing Company Model 767 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... to include the vertical inner chord at STA 1809.5. This proposed AD results from reported fatigue... horizontal inner chord at STA 1809.5. We are proposing this AD to detect and correct fatigue cracking in the... and correct fatigue cracking in the bulkhead structure at station (STA) 1809.5 and the vertical inner...

  15. Oceanic Loading and Local Distortions at the Baksan, Russia, and Gran Sasso, Italy, Strain Stations

    NASA Astrophysics Data System (ADS)

    Milyukov, V. K.; Amoruso, A.; Crescentini, L.; Mironov, A. P.; Myasnikov, A. V.; Lagutkina, A. V.

    2018-03-01

    Reliable use of strain data in geophysical studies requires their preliminary correction for ocean loading and various local distortions. These effects, in turn, can be estimated from the tidal records which are contributed by solid and oceanic loading. In this work, we estimate the oceanic tidal loading at two European strain stations (Baksan, Russia, and Gran Sasso, Italy) by analyzing the results obtained with the different Earth and ocean models. The influence of local distortions on the strain measurements at the two stations is estimated.

  16. Corrective Action Investigation Plan for Corrective Action Unit 139: Waste Disposal Sites, Nevada Test Site, Nevada, Rev. No.: 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant Evenson

    2006-04-01

    Corrective Action Unit (CAU) 139 is located in Areas 3, 4, 6, and 9 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 139 is comprised of the seven corrective action sites (CASs) listed below: (1) 03-35-01, Burn Pit; (2) 04-08-02, Waste Disposal Site; (3) 04-99-01, Contaminated Surface Debris; (4) 06-19-02, Waste Disposal Site/Burn Pit; (5) 06-19-03, Waste Disposal Trenches; (6) 09-23-01, Area 9 Gravel Gertie; and (7) 09-34-01, Underground Detection Station. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives with the exception of CASs 09-23-01 and 09-34-01. Regarding these two CASs, CAS 09-23-01 is a gravel gertie where a zero-yield test was conducted with all contamination confined to below ground within the area of the structure, and CAS 09-34-01 is an underground detection station where no contaminants are present. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for the other five CASs where information is insufficient. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on January 4, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 139.

  17. Determination of Soil Erosion Risk in the Mustafakemalpasa River Basin, Turkey, Using the Revised Universal Soil Loss Equation, Geographic Information System, and Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ozsoy, Gokhan; Aksoy, Ertugrul; Dirim, M. Sabri; Tumsavas, Zeynal

    2012-10-01

    Sediment transport from steep slopes and agricultural lands into Uluabat Lake (a RAMSAR site) by the Mustafakemalpasa (MKP) River is a serious problem within the river basin. Predictive erosion models are useful tools for evaluating soil erosion and establishing soil erosion management plans. The Revised Universal Soil Loss Equation (RUSLE) is a commonly used erosion model for this purpose in Turkey and the rest of the world. This research integrates the RUSLE within a geographic information system environment to investigate the spatial distribution of annual soil loss potential in the MKP River Basin. The rainfall erosivity factor was developed from local annual precipitation data using a modified Fournier index; the topographic factor was developed from a digital elevation model; the K factor was determined from a combination of the soil map and the geological map; and the land cover factor was generated from Landsat-7 Enhanced Thematic Mapper (ETM) images. According to the model, the total soil loss potential of the MKP River Basin from erosion by water was 11,296,063 Mg year-1, with an average soil loss of 11.2 Mg ha-1 year-1. The RUSLE produces only local erosion values and cannot be used to estimate the sediment yield for a watershed. To estimate the sediment yield, sediment-delivery ratio equations were used and compared with the sediment-monitoring reports of the Dolluk stream gauging station on the MKP River, which collected data for >41 years (1964-2005). This station monitors the overall sediment yield coming from the Orhaneli and Emet Rivers. The measured sediment in the Emet and Orhaneli sub-basins is 1,082,010 Mg year-1 and was estimated to be 1,640,947 Mg year-1 for the same two sub-basins. The measured sediment yield of the gauge station is 127.6 Mg km-2 year-1, whereas it was estimated to be 170.2 Mg km-2 year-1. The close match between the sediment amounts estimated using the RUSLE-geographic information system (GIS) combination and the measured values from the Dolluk sediment gauge station shows that the potential soil erosion risk of the MKP River Basin can be estimated correctly and reliably using the RUSLE function generated in a GIS environment.
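
    The structure of the RUSLE estimate and of the modified Fournier index can be sketched as follows. The factor values and monthly precipitation figures are hypothetical, and the units follow the usual RUSLE conventions rather than the study's calibrated inputs.

    ```python
    def fournier_index(monthly_precip_mm):
        """Modified Fournier index: sum of squared monthly totals over the annual
        total, a common building block of the RUSLE rainfall erosivity (R) factor."""
        annual = sum(monthly_precip_mm)
        return sum(p * p for p in monthly_precip_mm) / annual

    def rusle(R, K, LS, C, P=1.0):
        """RUSLE annual soil loss A = R * K * LS * C * P (Mg ha-1 yr-1 when the
        factors are given in the usual SI-style units)."""
        return R * K * LS * C * P

    monthly = [80, 70, 65, 60, 45, 25, 10, 8, 20, 55, 85, 95]   # hypothetical regime, mm
    print(round(fournier_index(monthly), 1))
    print(round(rusle(R=900.0, K=0.030, LS=2.5, C=0.18), 2))    # illustrative factor values
    ```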

  18. Nanosecond-level time synchronization of autonomous radio detector stations for extensive air showers

    NASA Astrophysics Data System (ADS)

    The Pierre Auger Collaboration

    2016-01-01

    To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. We developed a technique to correct for the GPS drifts, and an independent method is used to cross-check that indeed we reach a nanosecond-scale timing accuracy by this correction. First, we operate a ``beacon transmitter'' which emits defined sine waves detected by AERA antennas recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition to this, we observe radio pulses emitted by commercial airplanes, the position of which we determine in real time from Automatic Dependent Surveillance Broadcasts intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA.
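
    The airplane cross-check can be sketched as comparing the measured arrival-time difference of a pulse at two stations with the difference expected from the known source position. The geometry and times below are hypothetical and the function name is only illustrative.

    ```python
    import numpy as np

    C_LIGHT = 299_792_458.0   # m/s

    def clock_offset_ns(source_xyz, sta_a_xyz, sta_b_xyz, t_a, t_b):
        """Relative clock offset between two radio stations from one pulse of a
        source at a known position: measured arrival-time difference minus the
        geometrically expected one (returned in nanoseconds)."""
        d_a = np.linalg.norm(np.subtract(source_xyz, sta_a_xyz))
        d_b = np.linalg.norm(np.subtract(source_xyz, sta_b_xyz))
        expected = (d_b - d_a) / C_LIGHT     # s
        measured = t_b - t_a                 # s
        return (measured - expected) * 1.0e9

    # hypothetical geometry: an airplane at 10 km altitude, two stations 1.5 km apart
    source = (2000.0, 5000.0, 10000.0)
    sta_a, sta_b = (0.0, 0.0, 0.0), (1500.0, 0.0, 0.0)
    print(round(clock_offset_ns(source, sta_a, sta_b, t_a=4.0e-5, t_b=3.94573e-5), 1))  # ~ +12 ns
    ```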

  19. Nanosecond-level time synchronization of autonomous radio detector stations for extensive air showers

    DOE PAGES

    Aab, Alexander

    2016-01-29

    To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. We developed a technique to correct for the GPS drifts, and an independent method is used to cross-check that we indeed reach nanosecond-scale timing accuracy by this correction. First, we operate a “beacon transmitter” which emits defined sine waves detected by AERA antennas recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition to this, we observe radio pulses emitted by commercial airplanes, the position of which we determine in real time from Automatic Dependent Surveillance Broadcasts intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA.

  20. Very Long Baseline Interferometry Applied to Polar Motion, Relativity and Geodesy. Ph.D. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Ma, C.

    1978-01-01

    The causes and effects of diurnal polar motion are described. An algorithm is developed for modeling the effects on very long baseline interferometry observables. Five years of radio-frequency very long baseline interferometry data from stations in Massachusetts, California, and Sweden are analyzed for diurnal polar motion. It is found that the effect is larger than predicted by McClure. Corrections to the standard nutation series caused by the deformability of the earth have a significant effect on the estimated diurnal polar motion scaling factor and the post-fit residual scatter. Simulations of high precision very long baseline interferometry experiments taking into account both measurement uncertainty and modeled errors are described.

  1. Assessing reference evapotranspiration at regional scale based on remote sensing, weather forecast and GIS tools

    NASA Astrophysics Data System (ADS)

    Ramírez-Cuesta, J. M.; Cruz-Blanco, M.; Santos, C.; Lorite, I. J.

    2017-03-01

    Reference evapotranspiration (ETo) is a key component in efficient water management, especially in arid and semi-arid environments. However, accurate ETo assessment at the regional scale is complicated by the limited number of weather stations and the strict requirements in terms of their location and surrounding physical conditions for the collection of valid weather data. In an attempt to overcome this limitation, new approaches based on the use of remote sensing techniques and weather forecast tools have been proposed. Use of the Land Surface Analysis Satellite Application Facility (LSA SAF) tool and Geographic Information Systems (GIS) have allowed the design and development of innovative approaches for ETo assessment, which are especially useful for areas lacking available weather data from weather stations. Thus, by identifying the best-performing interpolation approaches (such as the Thin Plate Splines, TPS) and by developing new approaches (such as the use of data from the most similar weather station, TS, or spatially distributed correction factors, CITS), errors as low as 1.1% were achieved for ETo assessment. Spatial and temporal analyses reveal that the generated errors were smaller during spring and summer as well as in homogenous topographic areas. The proposed approaches not only enabled accurate calculations of seasonal and daily ETo values, but also contributed to the development of a useful methodology for evaluating the optimum number of weather stations to be integrated into a weather station network and the appropriateness of their locations. In addition to ETo, other variables included in weather forecast datasets (such as temperature or rainfall) could be evaluated using the same innovative methodology proposed in this study.

  2. Estimating small amplitude tremor sources

    NASA Astrophysics Data System (ADS)

    Katakami, S.; Ito, Y.; Ohta, K.

    2017-12-01

    Various types of slow earthquakes have recently been observed at both the updip and downdip edges of the coseismic slip areas [Obara and Kato, 2016]. The frequent occurrence of slow earthquakes may help us to reveal the physics underlying megathrust events, as useful analogs. Maeda and Obara [2009] estimated the spatiotemporal distribution of seismic energy radiation from low-frequency tremors. They applied their method only to tremors whose hypocenters had been determined with a multiple-station method. However, Katakami et al. (2016) recently identified many continuous tremors with small amplitudes that were not recorded at multiple stations. These small events should be important for revealing the whole slow-earthquake activity and for understanding the strain conditions around the plate boundary in subduction zones. First, we apply the modified frequency scanning method (mFSM) at a single station to NIED Hi-net data in southwestern Japan to capture the whole tremor activity, including weak-signal tremors. Second, we developed a method to identify the tremor source area by using the differences in apparent tremor energy between stations derived with mFSM. We estimated the apparent source tremor energy after correcting for both the site amplification factor and geometrical spreading. Finally, we locate the tremor source area where the differences in apparent tremor energy between each pair of sites are smallest. We checked the validity of this analysis by using only tremors that had already been detected by the envelope correlation method [Idehara et al., 2014]. We calculated the average amplitude in a 5-minute window after tremor onset at each station as the apparent tremor energy. Our results are largely consistent with the hypocenters determined by the envelope correlation method. We successfully determined apparent tremor source areas of weak continuous tremors after estimating possible tremor occurrence time windows using mFSM.
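
    A minimal sketch of reducing station amplitudes to an apparent source amplitude, assuming a simple 1/r geometrical spreading and known site amplification factors, is given below. The amplitudes, site factors and distances are hypothetical; the study's energy estimates involve additional steps.

    ```python
    import numpy as np

    def apparent_source_amplitude(obs_amp, site_factor, hypo_dist_km):
        """Reduce an observed envelope amplitude to an apparent source amplitude
        by removing the station site amplification and assuming 1/r spreading."""
        return np.asarray(obs_amp) / np.asarray(site_factor) * np.asarray(hypo_dist_km)

    # the same toy tremor seen at three stations; corrected values should roughly agree
    obs  = [2.0e-7, 1.1e-7, 0.6e-7]    # envelope amplitudes, m/s
    site = [2.0, 1.5, 1.0]             # site amplification factors
    dist = [30.0, 45.0, 60.0]          # hypocentral distances, km
    print(apparent_source_amplitude(obs, site, dist))
    ```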

  3. Enabler operator station

    NASA Technical Reports Server (NTRS)

    Bailey, Andrea; Kietzman, John; King, Shirlyn; Stover, Rae; Wegner, Torsten

    1992-01-01

    The objective of this project was to design an onboard operator station for the conceptual Lunar Work Vehicle (LWV). The LWV would be used in the colonization of a lunar outpost. The details that follow, however, are for an Earth-bound model. The operator station is designed to be dimensionally correct for an astronaut wearing the current space shuttle EVA suit (which include life support). The proposed operator station will support and restrain an astronaut as well as to provide protection from the hazards of vehicle rollover. The threat of suit puncture is eliminated by rounding all corners and edges. A step-plate, located at the front of the vehicle, provides excellent ease of entry and exit. The operator station weight requirements are met by making efficient use of rigid members, semi-rigid members, and woven fabrics.

  4. Research on station management in subway operation safety

    NASA Astrophysics Data System (ADS)

    Li, Yiman

    2017-10-01

    The management of subway stations is an important part of the safe operation of an urban subway. In order to ensure the safety of subway operation, it is necessary to study the relevant factors that affect station management. Safeguarding safe subway operation while improving the quality of service promotes the sustained and healthy development of subway stations. This paper discusses the factors influencing subway operation accidents and station management, analyzes the specific content of station management for safe subway operation, and develops effective countermeasures. The aim is to improve the operational quality and safety factor of subway operations.

  5. Benefit of Complete State Monitoring For GPS Realtime Applications With Geo++ Gnsmart

    NASA Astrophysics Data System (ADS)

    Wübbena, G.; Schmitz, M.; Bagge, A.

    Today, the demand for precise positioning at the cm-level in realtime is growing worldwide. An indication for this is the number of operational RTK network installations, which use permanent reference station networks to derive corrections for distance-dependent GPS errors and to supply corrections to RTK users in realtime. Generally, the inter-station distances in RTK networks are selected at several tens of km in range and operational installations cover areas of up to 50000 km x km. However, the separation of the permanent reference stations can be increased to several hundred km, while a correct modeling of all error components is applied. Such networks can be termed sparse RTK networks, which cover larger areas with a reduced number of stations. The undifferenced GPS observable is best suited for this task, estimating the complete state of a permanent GPS network in a dynamic recursive Kalman filter. A rigorous adjustment of all simultaneous reference station data is required. The sparse network design essentially supports the state estimation through its large spatial extension. The benefit of the approach and its state modeling of all GPS error components is a successful ambiguity resolution in realtime over long distances. The above concepts are implemented in the operational GNSMART (GNSS State Monitoring and Representation Technique) software of Geo++. It performs a state monitoring of all error components at the mm-level, because for RTK networks this accuracy is required to sufficiently represent the distance-dependent errors for kinematic applications. One key issue of the modeling is the estimation of clocks and hardware delays in the undifferenced approach. This pre-requisite subsequently allows for the precise separation and modeling of all other error components. Generally most of the estimated parameters are considered as nuisance parameters with respect to pure positioning tasks. As the complete state vector of GPS errors is available in a GPS realtime network, additional information besides position can be derived, e.g. regional precise satellite clocks, orbits, total ionospheric electron content, tropospheric water vapor distribution, and also dynamic reference station movements. The models of GNSMART are designed to work with regional, continental or even global data. Results from GNSMART realtime networks with inter-station distances of several hundred km are presented to demonstrate the benefits of the operationally implemented concepts.

  6. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  7. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  8. International water and steam quality standards for thermal power station drum-type and waste heat recovery boilers with the treatment of boiler water with phosphates and NaOH

    NASA Astrophysics Data System (ADS)

    Petrova, T. I.; Orlov, K. A.; Dooley, R. B.

    2017-01-01

    One of the ways for improving the operational reliability and economy of thermal power station equipment, including combined-cycle equipment, is to decrease the rates of the corrosion of constructional materials and the formation of scales in the water-steam circuit. These processes can be reduced to a minimum via the use of water with a minimum content of admixtures and the correction treatment of a heat-transfer fluid. The International Association for the Properties of Water and Steam (IAPWS), which unites specialists from every country of the world, has developed water and steam quality standards for power station equipment of different types on the basis of theoretical studies and long-term experience in the operation of power plants in 21 countries. Different water chemistry regimes are currently used at conventional and combined-cycle thermal power stations. This paper describes the conditions for the implementation of water chemistry regimes with the use of sodium salts of phosphoric acid and NaOH for the quality correction of boiler water. Water and steam quality standards and some recommendations for their maintenance under different operational conditions are given for each of the considered water chemistry regimes. The standards are designed for the water-steam circuit of conventional and combined-cycle thermal power stations. It is pointed out that the quality control of a heat-transfer fluid must be especially careful at combined-cycle thermal power stations with frequent startups and shutdowns.

  9. ILRS Station Reporting

    NASA Technical Reports Server (NTRS)

    Noll, Carey E.; Pearlman, Michael Reisman; Torrence, Mark H.

    2013-01-01

    Network stations provided system configuration documentation upon joining the ILRS. This information, found in the various site and system log files available on the ILRS website, is essential to the ILRS analysis centers, combination centers, and general user community. Therefore, it is imperative that station personnel inform the ILRS community in a timely fashion when changes to the system occur. This poster provides some information about the various documentation that must be maintained. The ILRS network consists of over fifty global sites actively ranging to over sixty satellites as well as five lunar reflectors. Information about these stations is available on the ILRS website (http://ilrs.gsfc.nasa.gov/network/stations/index.html). The ILRS Analysis Centers must have current information about the stations and their system configuration in order to use their data in the generation of derived products. However, not all information available on the ILRS website is as up-to-date as necessary for correct analysis of their data.

  10. Homogenisation of minimum and maximum air temperature in northern Portugal

    NASA Astrophysics Data System (ADS)

    Freitas, L.; Pereira, M. G.; Caramelo, L.; Mendes, L.; Amorim, L.; Nunes, L.

    2012-04-01

    Homogenization of minimum and maximum air temperature has been carried out for northern Portugal for the period 1941-2010. The database corresponds to the values of the monthly arithmetic averages calculated from daily values observed at stations within the network of stations managed by the national Institute of Meteorology (IM). Some of the weather stations of IM's network have been collecting data for more than a century; however, during the entire observing period, some factors have affected the climate series and have to be considered, such as changes in the station surroundings and changes related to the replacement of manually operated instruments. Besides these typical changes, of particular interest are station relocations to rural areas or to the urban-rural interface and the installation of automatic weather stations in the vicinity of the principal or synoptic stations with the aim of replacing them. The information from these relocated and new stations was merged to produce just one representative time series for each site. This process started at the end of the 1990s, and the information from the time-series fusion process constitutes the set of metadata used. Two basic procedures were performed: (i) preliminary statistical and quality control analysis; and (ii) detection and correction of homogeneity problems. In the first case, quality control software was developed and used, specifically dedicated to the detection of outliers based on the quartile values of the time series itself. The analysis of homogeneity was performed using MASH (Multiple Analysis of Series for Homogenisation) and HOMER, a software application developed and recently made available within the COST Action ES0601 (COST-ES0601, 2012). Both methods provide fast quality control of the original data and were developed for automatic processing, analysing, homogeneity testing and adjusting of climatological data, but manual usage is also possible. Results obtained with both methods will be presented, compared and discussed, along with the results of the sensitivity tests performed with each. COST-ES0601, 2012: "ACTION COST-ES0601 - Advances in homogenisation methods of climate series: an integrated approach HOME". Available at http://www.homogenisation.org/v_02_15/ [accessed 3 January 2012].

  11. Determination of the 4D-Tropospheric Water Vapor Distribution by GPS for the Assimilation into Numerical Weather Prediction Models

    NASA Astrophysics Data System (ADS)

    Perler, D.; Geiger, A.; Rothacher, M.

    2011-12-01

    Water vapor is involved in many atmospheric processes and is therefore a crucial quantity in numerical weather prediction (NWP). Recent flood events in Switzerland have pointed out several deficiencies in planning and prediction methods used for flood risk mitigation. Investigations have shown that one of the limiting factors to forecast such events with NWP models is the insufficient knowledge of the water vapor distribution in the atmosphere. Global Navigation Satellite System (GNSS) ground-based tomography is a technique to monitor the 4D distribution of water vapor in the troposphere and has the potential to considerably improve the initial water vapor field used in NWP. We developed a GNSS tomography software called AWATOS-2 which is based on the Kalman filter technique and provides different parameterizations of the tropospheric wet refractivity field (Perler et al., 2010; Perler et al., 2011). The software can be used for the assimilation of different observations such as GNSS zero-differences, GNSS double-differences and any kind of point observations (e.g. balloons, aircraft). In this talk, we present the results of a long-term study where GPS double-difference delays have been processed. The tomographic solutions have been investigated in view of their assimilation into local NWP models. The data set comprises observations from 46 GPS stations collected during 1 year. The core area of the investigation is located in Central Europe. We analyzed the performance of different voxel parameterizations used in the tomographic reconstruction of the troposphere and developed a new bias correction model which minimizes systematic differences. The correction model reduces the root-mean-square error (RMSE) with respect to the NWP model from 4.6 ppm to 3.0 ppm. After bias correction, high-elevation stations still show high RMSEs. In the presentation, we will discuss the treatment of such stations in terms of assimilation into NWP models and will show how sophisticated voxel parameterizations improve the accuracy. Perler, D.; Hurter, F.; Brockmann, E.; Leuenberger, D.; Ruffieux, D.; Geiger, A. and Rothacher, M. (2010). In Proceedings of the 7th Management Committee (MC7) and Working Group (WG) Meeting, Cologne (Germany), 8 pp. Perler, D.; Geiger, A. and Hurter, F. (2011). 4D GPS water vapor tomography: new parameterized approaches. J. Geodesy 85(8), pp. 539-550, DOI 10.1007/s00190-011-0454-2.

  12. Analysis on influencing factors of EV charging station planning based on AHP

    NASA Astrophysics Data System (ADS)

    Yan, F.; Ma, X. F.

    2016-08-01

    As a new means of transport, the electric vehicle (EV) is of great significance for alleviating the energy crisis. EV charging station planning has far-reaching significance for the development of the EV industry. This paper analyzes the factors influencing EV charging station planning, applies the analytic hierarchy process (AHP) to these influencing factors, obtains the weight of each factor, and thereby provides a basis for evaluating EV charging station planning schemes.
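
    The AHP weighting step can be sketched as extracting the principal eigenvector of a reciprocal pairwise-comparison matrix. The criteria and comparison values below are hypothetical and only illustrate the mechanics, not the paper's factor set.

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """AHP priority weights: the principal eigenvector of a reciprocal
        pairwise-comparison matrix, normalised to sum to one."""
        vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
        principal = np.abs(vecs[:, np.argmax(vals.real)].real)
        return principal / principal.sum()

    # hypothetical criteria: traffic flow, grid capacity, land cost
    pairwise = [[1.0,   3.0,   5.0],
                [1/3.0, 1.0,   2.0],
                [1/5.0, 1/2.0, 1.0]]
    print(ahp_weights(pairwise).round(3))   # roughly [0.65, 0.23, 0.12]
    ```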

  13. An extended linear scaling method for downscaling temperature and its implication in the Jhelum River basin, Pakistan, and India, using CMIP5 GCMs

    NASA Astrophysics Data System (ADS)

    Mahmood, Rashid; JIA, Shaofeng

    2017-11-01

    In this study, the linear scaling method used for the downscaling of temperature was extended from monthly scaling factors to daily scaling factors (SFs) to improve the daily variations in the corrected temperature. In the original linear scaling (OLS), mean monthly SFs are used to correct the future data, but mean daily SFs are used to correct the future data in the extended linear scaling (ELS) method. The proposed method was evaluated in the Jhelum River basin for the period 1986-2000, using the observed maximum temperature (Tmax) and minimum temperature (Tmin) of 18 climate stations and the simulated Tmax and Tmin of five global climate models (GCMs) (GFDL-ESM2G, NorESM1-ME, HadGEM2-ES, MIROC5, and CanESM2), and the method was also compared with OLS to observe the improvement. Before the evaluation of ELS, these GCMs were also evaluated using their raw data against the observed data for the same period (1986-2000). Four statistical indicators, i.e., error in mean, error in standard deviation, root mean square error, and correlation coefficient, were used for the evaluation process. The evaluation results with GCMs' raw data showed that GFDL-ESM2G and MIROC5 performed better than other GCMs according to all the indicators but with unsatisfactory results that confine their direct application in the basin. Nevertheless, after the correction with ELS, a noticeable improvement was observed in all the indicators except correlation coefficient because this method only adjusts (corrects) the magnitude. It was also noticed that the daily variations of the observed data were better captured by the corrected data with ELS than OLS. Finally, the ELS method was applied for the downscaling of five GCMs' Tmax and Tmin for the period of 2041-2070 under RCP8.5 in the Jhelum basin. The results showed that the basin would face hotter climate in the future relative to the present climate, which may result in increasing water requirements in public, industrial, and agriculture sectors; change in the hydrological cycle and monsoon pattern; and lack of glaciers in the basin.
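
    A minimal sketch of the daily (additive) scaling-factor idea for temperature is given below. The calendar-day dictionaries and values are toy data, and the sketch omits details such as leap days and station-by-station processing.

    ```python
    import numpy as np

    def daily_scaling_factors(obs_by_day, hist_by_day):
        """Additive daily scaling factor per calendar day: mean observed minus
        mean GCM-historical temperature for that day of the year."""
        return {d: obs_by_day[d].mean() - hist_by_day[d].mean() for d in obs_by_day}

    def correct_future(future_by_day, sf):
        """Apply the day-of-year scaling factors to future GCM temperatures."""
        return {d: future_by_day[d] + sf[d] for d in future_by_day}

    # toy example with two calendar days only (values in degrees C)
    obs  = {1: np.array([4.0, 5.0, 3.5]), 2: np.array([4.2, 5.1, 3.9])}
    hist = {1: np.array([6.0, 6.5, 5.5]), 2: np.array([6.1, 6.4, 5.9])}
    fut  = {1: np.array([7.0, 8.0]),      2: np.array([7.2, 8.1])}
    sf = daily_scaling_factors(obs, hist)
    print({d: v.round(2) for d, v in correct_future(fut, sf).items()})
    ```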

  14. Assessing the validity of station location assumptions made in the calculation of the geomagnetic disturbance index, Dst

    USGS Publications Warehouse

    Gannon, Jennifer

    2012-01-01

    In this paper, the effects of the assumptions made in the calculation of the Dst index with regard to longitude sampling, hemisphere bias, and latitude correction are explored. The insights gained from this study will allow operational users to better understand the local implications of the Dst index and will lead to future index formulations that are more physically motivated. We recompute the index using 12 longitudinally spaced low-latitude stations, including the traditional 4 (in Honolulu, Kakioka, San Juan, and Hermanus), and compare it to the standard United States Geological Survey definitive Dst. We look at the hemisphere balance by comparing stations at equal geomagnetic latitudes in the Northern and Southern hemispheres. We further separate the 12-station time series into two hemispheric indices and find that there are measurable differences in the traditional Dst formulation due to the undersampling of the Southern Hemisphere in comparison with the Northern Hemisphere. To analyze the effect of latitude correction, we plot latitudinal variation in a disturbance observed during the year 2005 using two separate longitudinal observatory chains. We separate these by activity level and find that while the traditional cosine form fits the latitudinal distributions well for low levels of activity, at higher levels of disturbance the cosine form does not fit the observed variation. This suggests that the traditional latitude scaling is insufficient during active times. The effect of the Northern Hemisphere bias and the inadequate latitude scaling is such that the standard correction underestimates the true disturbance by 10–30 nT for storms of main phase magnitude deviation greater than 150 nT in the traditional Dst index.
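
    For orientation, a traditional-style combination of low-latitude H disturbances into a Dst-like value with a 1/cos(latitude) scaling can be sketched as below. The disturbances and latitudes are hypothetical, and the sketch omits the Sq and secular-baseline removal of the operational index.

    ```python
    import numpy as np

    def dst_like(h_disturbance_nT, geomag_lat_deg):
        """Traditional-style Dst estimate: average the horizontal (H) disturbances
        of low-latitude stations after a 1/cos(geomagnetic latitude) scaling."""
        lat = np.radians(np.asarray(geomag_lat_deg, dtype=float))
        return float(np.mean(np.asarray(h_disturbance_nT) / np.cos(lat)))

    # hypothetical storm-time disturbances (nT) at four low-latitude observatories
    print(round(dst_like([-120.0, -95.0, -110.0, -105.0],
                         [21.0, 27.0, 28.0, -34.0]), 1))   # about -122 nT
    ```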

  15. Analytical Evaluation of a Method of Midcourse Guidance for Rendezvous with Earth Satellites

    NASA Technical Reports Server (NTRS)

    Eggleston, John M.; Dunning, Robert S.

    1961-01-01

    A digital-computer simulation was made of the midcourse or ascent phase of a rendezvous between a ferry vehicle and a space station. The simulation involved a closed-loop guidance system in which both the relative position and relative velocity between ferry and station are measured (by simulated radar) and the relative-velocity corrections required to null the miss distance are computed and applied. The results are used to study the effectiveness of a particular set of guidance equations and to study the effects of errors in the launch conditions and errors in the navigation data. A number of trajectories were investigated over a variety of initial conditions for cases in which the space station was in a circular orbit and also in an elliptic orbit. Trajectories are described in terms of a rotating coordinate system fixed in the station. As a result of this study the following conclusions are drawn. Successful rendezvous can be achieved even with launch conditions which are substantially less accurate than those obtained with present-day techniques. The average total-velocity correction required during the midcourse phase is directly proportional to the radar accuracy but the miss distance is not. Errors in the time of booster burnout or in the position of the ferry at booster burnout are less important than errors in the ferry velocity at booster burnout. The use of dead bands to account for errors in the navigational (radar) equipment appears to depend upon a compromise between the magnitude of the velocity corrections to be made and the allowable miss distance at the termination of the midcourse phase of the rendezvous. When approximate guidance equations are used, there are limits on their accuracy which are dependent on the angular distance about the earth to the expected point of rendezvous.

  16. Remotely-interrogated high data rate free space laser communications link

    DOEpatents

    Ruggiero, Anthony J [Livermore, CA

    2007-05-29

    A system and method of remotely extracting information from a communications station by interrogation with a low power beam. Nonlinear phase conjugation of the low power beam results in a high power encoded return beam that automatically tracks the input beam and is corrected for atmospheric distortion. Intracavity nondegenerate four wave mixing is used in a broad area semiconductor laser in the communications station to produce the return beam.

  17. Newberry EGS Seismic Velocity Model

    DOE Data Explorer

    Templeton, Dennise

    2013-10-01

    We use ambient noise correlation (ANC) to create a detailed image of the subsurface seismic velocity at the Newberry EGS site down to 5 km. We collected continuous data for the 22 stations in the Newberry network, together with 12 additional stations from the nearby CC, UO and UW networks. The data were instrument corrected, whitened and converted to single bit traces before cross correlation according to the methodology in Benson (2007). There are 231 unique paths connecting the 22 stations of the Newberry network. The additional networks extended that to 402 unique paths crossing beneath the Newberry site.
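
    The processing chain summarized above (spectral whitening and one-bit conversion of each trace before cross-correlation of station pairs) can be sketched generically as follows. This is an illustration of the cited ambient-noise workflow on synthetic data, not the project's processing code, and the instrument-correction step is assumed to have been applied already.

```python
import numpy as np

def whiten(trace, eps=1e-10):
    """Spectral whitening: keep the phase, flatten the amplitude spectrum."""
    spec = np.fft.rfft(trace)
    return np.fft.irfft(spec / (np.abs(spec) + eps), n=len(trace))

def one_bit(trace):
    """One-bit normalization: keep only the sign of each sample."""
    return np.sign(trace)

def cross_correlate(a, b, max_lag):
    """Frequency-domain cross-correlation for lags -max_lag..+max_lag samples."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()
    cc = np.fft.irfft(np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft)), nfft)
    return np.roll(cc, max_lag)[: 2 * max_lag + 1]

# Illustrative hour-long synthetic traces at 20 Hz; station 2 sees the same
# noise field delayed by 2 s plus incoherent noise.
fs = 20.0
rng = np.random.default_rng(0)
sta1 = rng.standard_normal(int(3600 * fs))
sta2 = np.roll(sta1, int(2.0 * fs)) + 0.5 * rng.standard_normal(sta1.size)
max_lag = int(10 * fs)
ccf = cross_correlate(one_bit(whiten(sta2)), one_bit(whiten(sta1)), max_lag)
print("peak lag (s):", (np.argmax(ccf) - max_lag) / fs)
```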

  18. Human factors in space station architecture 1: Space station program implications for human factors research

    NASA Technical Reports Server (NTRS)

    Cohen, M. M.

    1985-01-01

    The space station program is based on a set of premises on mission requirements and the operational capabilities of the space shuttle. These premises will influence the human behavioral factors and conditions on board the space station. These include: launch in the STS Orbiter payload bay, orbital characteristics, power supply, microgravity environment, autonomy from the ground, crew make-up and organization, distributed command control, safety, and logistics resupply. The most immediate design impacts of these premises will be upon the architectural organization and internal environment of the space station.

  19. Determination of Focal Mechanisms of Non-Volcanic Tremors Based on S-Wave Polarization Data Corrected for the Effects of Anisotropy

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Uchide, T.; Takeda, N.

    2014-12-01

    We propose a method to determine focal mechanisms of non-volcanic tremors (NVTs) based on S-wave polarization angles. The successful retrieval of polarization angles in low S/N tremor signals owes much to the observation that NVTs propagate slowly and therefore do not change their location immediately. This feature of NVTs enables us to use a longer window to compute a polarization angle (e.g., one minute or longer), resulting in a stack of particle motions. Following Zhang and Schwartz (1994), we first correct for the splitting effect to recover the source polarization angle (anisotropy-corrected angle). This is a key step, because shear-wave splitting distorts the particle motion excited by a seismic source. We then determine the best double-couple solution using anisotropy-corrected angles of multiple stations. The present method was applied to a tremor sequence at Kii Peninsula, southwest Japan, which occurred at the beginning of April 2013. A standard splitting and polarization analysis was applied in a one-minute-long moving window to determine the splitting parameters as well as the anisotropy-corrected angles. A grid search was performed at each hour to determine the best double-couple solution satisfying the one-hour average polarization angles. Most solutions show NW-dipping low-angle planes consistent with the plate boundary or SE-dipping high-angle planes. Because of the 180-degree ambiguity in polarization angles, the present method alone cannot distinguish the compressional quadrant from the dilatational one. Together with the observation of very low-frequency earthquakes near the present study area (Ito et al., 2007), it is reasonable to consider that they represent shear slip on low-angle thrust faults. It is also noted that some of the solutions contain a strike-slip component. Acknowledgements: Seismograph stations used in this study include permanent stations operated by NIED (Hi-net), JMA, and the Earthquake Research Institute, together with the Geological Survey of Japan, AIST. This work was supported by JSPS KAKENHI Grant Number 24540463.

  20. KENNEDY SPACE CENTER, FLA. - A rudder speed brake actuator sits on an air-bearing pallet to undergo X-raying. Four actuators to be installed on the orbiter Discovery are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

    NASA Image and Video Library

    2004-03-08

    KENNEDY SPACE CENTER, FLA. - A rudder speed brake actuator sits on an air-bearing pallet to undergo X-raying. Four actuators to be installed on the orbiter Discovery are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

  1. Atmospheric refraction correction for Ka-band blind pointing on the DSS-13 beam waveguide antenna

    NASA Technical Reports Server (NTRS)

    Perez-Borroto, I. M.; Alvarez, L. S.

    1992-01-01

    An analysis of the atmospheric refraction corrections at the DSS-13 34-m diameter beam waveguide (BWG) antenna for the period Jul. - Dec. 1990 is presented. The current Deep Space Network (DSN) atmospheric refraction model and its sensitivity with respect to sensor accuracy are reviewed. Refraction corrections based on actual atmospheric parameters are compared with the DSS-13 station default corrections for the six-month period. Average blind-pointing improvement during the worst month would have amounted to 5 mdeg at 10 deg elevation using actual surface weather values. This would have resulted in an average gain improvement of 1.1 dB.

  2. Effect of Receiver Choosing on Point Positions Determination in Network RTK

    NASA Astrophysics Data System (ADS)

    Bulbul, Sercan; Inal, Cevat

    2016-04-01

    Developments in GNSS techniques now allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) relative to a single reference station, but to avoid systematic errors in this method the distance between the reference station and the rover receiver must be shorter than 10 km. To overcome this restriction, the idea of using more than one reference station was proposed and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany, and Japan have established CORS networks. The CORS-TR network, which has 146 reference stations, was established in Turkey in 2009 and follows the active CORS approach: the CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and the atmospheric corrections are calculated continuously. In this study, RTK measurements based on CORS-TR were made at a selected point with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). The epoch interval was 5 seconds and the measurement time 1 hour. For each receiver and each correction technique, the means and the differences between the maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated statistically and the resulting graphics interpreted. For each receiver and correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions each less than ±1.5 cm. At the measurement point, the VRS correction technique was generally found to perform better than the other correction techniques.
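
    The reported precision measures (max-minus-min spreads, per-axis root mean square values, and 2D/3D positioning precisions) can be reproduced from any epoch-wise coordinate series; a minimal sketch, assuming east/north/up residuals in metres and using the scatter about the mean as the precision measure:

```python
import numpy as np

def positioning_stats(enu):
    """enu: array of shape (epochs, 3) with east, north, up coordinates in metres."""
    enu = np.asarray(enu, dtype=float)
    spread = enu.max(axis=0) - enu.min(axis=0)      # max-minus-min per axis
    rms_axis = enu.std(axis=0, ddof=1)              # per-axis RMS about the mean
    rms_2d = np.sqrt(np.sum(rms_axis[:2] ** 2))     # horizontal (2D) precision
    rms_3d = np.sqrt(np.sum(rms_axis ** 2))         # 3D precision
    return spread, rms_axis, rms_2d, rms_3d

# Illustrative: 1 h of 5 s epochs (720 samples) with cm-level scatter
rng = np.random.default_rng(0)
series = rng.normal(0.0, [0.008, 0.008, 0.012], size=(720, 3))
spread, rms_axis, rms_2d, rms_3d = positioning_stats(series)
print(f"2D precision ±{100 * rms_2d:.1f} cm, 3D precision ±{100 * rms_3d:.1f} cm")
```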

  3. Real Time GPS- Satellite Clock Estimation Development of a RTIGS Web Service

    NASA Astrophysics Data System (ADS)

    Opitz, M.; Weber, R.; Caissy, M.

    2006-12-01

    For the past three years, the IGS (International GNSS Service) Real-Time Working Group has disseminated raw observation data from a subset of stations of the IGS network via the Internet. These observation data can be used to establish real-time integrity monitoring of the IGS predicted orbits (Ultra Rapid, IGU, orbits) and clocks, in line with the recommendations of the 2004 IGS Workshop in Bern. The Institute of Geodesy and Geophysics of TU Vienna is developing, in cooperation with the IGS Real-Time Working Group, the software "RTR-Control", which currently provides real-time integrity monitoring of predicted IGU clock corrections to GPS time. Our poster presents the results of a prototype version that has been in operation since August of this year. In addition, RTR-Control allows the comparison of pseudoranges measured at any permanent station in the global network with theoretical pseudoranges calculated on the basis of the IGU orbits. The programme can thus diagnose incorrectly predicted satellite orbits and clocks as well as detect multipath-distorted pseudoranges in real time. RTR-Control calculates satellite clock corrections every 15 seconds with respect to the most recent IGU clocks (updated at 6-hour intervals). The clock estimates are referenced to a stable station clock (H-maser) with a small offset to GPS time, and the resulting real-time satellite clocks are corrected for individual outliers and modelling errors. The most recent GPS satellite clock corrections (updated every 60 seconds) are published in real time via the Internet. The user group interested in rigorous integrity monitoring comprises, on the one hand, the components of the IGS itself, to qualify the issued orbital data, and, on the other hand, all users of the IGS Ultra Rapid products (e.g. for real-time PPP).

  4. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
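
    The Sagnac correction mentioned above accounts for the Earth's rotation during signal propagation. A commonly used form for a one-way path gives a delay of 2*omega_E*A_z/c^2, where A_z is the equatorial projection of the area of the triangle spanned by the geocentre and the two endpoints. The sketch below uses that standard formulation with rough, purely illustrative ECEF coordinates; it is not the measurement system's actual software.

```python
OMEGA_E = 7.2921151467e-5   # Earth rotation rate, rad/s
C = 299792458.0             # speed of light, m/s

def sagnac_correction(tx_ecef, rx_ecef):
    """One-way Sagnac delay (seconds) for a signal from tx to rx, both given
    as ECEF (x, y, z) in metres. Positive for eastward propagation."""
    x1, y1, _ = tx_ecef
    x2, y2, _ = rx_ecef
    area_z = 0.5 * (x1 * y2 - y1 * x2)   # equatorial projection of the triangle area
    return 2.0 * OMEGA_E * area_z / C**2

# Illustrative geometry: a rough GEO satellite position and a mid-latitude site
sat = (-7.32e6, -41.5e6, 0.0)            # approximate GEO position, metres
site = (1.11e6, -4.84e6, 3.99e6)         # approximate ECEF of a mid-latitude station
print(f"Sagnac delay ~ {sagnac_correction(sat, site) * 1e9:.1f} ns")
```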

  5. Atmospheric emission characterization of Marcellus shale natural gas development sites.

    PubMed

    Goetz, J Douglas; Floerchinger, Cody; Fortner, Edward C; Wormhoudt, Joda; Massoli, Paola; Knighton, W Berk; Herndon, Scott C; Kolb, Charles E; Knipping, Eladio; Shaw, Stephanie L; DeCarlo, Peter F

    2015-06-02

    Limited direct measurements of criteria pollutants emissions and precursors, as well as natural gas constituents, from Marcellus shale gas development activities contribute to uncertainty about their atmospheric impact. Real-time measurements were made with the Aerodyne Research Inc. Mobile Laboratory to characterize emission rates of atmospheric pollutants. Sites investigated include production well pads, a well pad with a drill rig, a well completion, and compressor stations. Tracer release ratio methods were used to estimate emission rates. A first-order correction factor was developed to account for errors introduced by fenceline tracer release. In contrast to observations from other shale plays, elevated volatile organic compounds, other than CH4 and C2H6, were generally not observed at the investigated sites. Elevated submicrometer particle mass concentrations were also generally not observed. Emission rates from compressor stations ranged from 0.006 to 0.162 tons per day (tpd) for NOx, 0.029 to 0.426 tpd for CO, and 67.9 to 371 tpd for CO2. CH4 and C2H6 emission rates from compressor stations ranged from 0.411 to 4.936 tpd and 0.023 to 0.062 tpd, respectively. Although limited in sample size, this study provides emission rate estimates for some processes in a newly developed natural gas resource and contributes valuable comparisons to other shale gas studies.
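
    In a tracer release ratio method, a tracer gas is released at a known rate near the source, and the downwind enhancement ratio of pollutant to tracer scales that known rate into a pollutant emission rate. A minimal sketch of the ratio step, ignoring the fenceline correction factor developed in the paper; the molar masses and all numbers are illustrative.

```python
def emission_rate_tpd(tracer_release_kg_h, d_pollutant_ppb, d_tracer_ppb,
                      mw_pollutant, mw_tracer):
    """Pollutant emission rate (metric tonnes per day) from the downwind
    enhancement ratio of pollutant to tracer (both in ppb above background)
    and the known tracer release rate (kg/h). Assumes co-located, well-mixed plumes."""
    molar_ratio = d_pollutant_ppb / d_tracer_ppb
    kg_per_h = tracer_release_kg_h * molar_ratio * (mw_pollutant / mw_tracer)
    return kg_per_h * 24.0 / 1000.0

# Illustrative numbers only: tracer released at 5 kg/h, CH4/tracer enhancement ratio 30
print(f"CH4 ~ {emission_rate_tpd(5.0, 300.0, 10.0, 16.04, 44.01):.2f} t/day")
```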

  6. Mesospheric radar wind comparisons at high and middle southern latitudes

    NASA Astrophysics Data System (ADS)

    Reid, Iain M.; McIntosh, Daniel L.; Murphy, Damian J.; Vincent, Robert A.

    2018-05-01

    We compare hourly averaged neutral winds derived from two meteor radars operating at 33.2 and 55 MHz to estimate the errors in these measurements. We then compare the meteor radar winds with those from a medium-frequency partial reflection radar operating at 1.94 MHz. These three radars are located at Davis Station, Antarctica. We then consider a middle-latitude 55 MHz meteor radar wind comparison with a 1.98 MHz medium-frequency partial reflection radar to determine how representative the Davis results are. At both sites, the medium-frequency radar winds are clearly underestimated, and the underestimation increases from 80 km to the maximum height of 98 km. Correction factors are suggested for these results.
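
    A height-dependent correction factor of the kind suggested above can be estimated by regressing the meteor-radar winds (taken as the reference) against the simultaneous medium-frequency radar winds in each height bin; the slope through the origin is the factor by which the MF winds should be multiplied. A hedged sketch on synthetic wind pairs, not the authors' analysis code.

```python
import numpy as np

def mf_correction_factor(mf_wind, meteor_wind):
    """Least-squares slope (through the origin) of meteor vs. MF winds.
    Multiplying the MF winds by this factor matches them, on average,
    to the meteor-radar reference."""
    mf = np.asarray(mf_wind, float)
    met = np.asarray(meteor_wind, float)
    ok = np.isfinite(mf) & np.isfinite(met)
    return np.sum(mf[ok] * met[ok]) / np.sum(mf[ok] ** 2)

# Illustrative: MF radar underestimating a 40 m/s wind by ~30% at one height
rng = np.random.default_rng(1)
truth = 40 * np.sin(np.linspace(0, 6 * np.pi, 500))
mf = 0.7 * truth + rng.normal(0, 3, truth.size)
met = truth + rng.normal(0, 5, truth.size)
print(f"correction factor ~ {mf_correction_factor(mf, met):.2f}")
```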

  7. Peak ground motion predictions with empirical site factors using Taiwan Strong Motion Network recordings

    NASA Astrophysics Data System (ADS)

    Chung, Jen-Kuang

    2013-09-01

    A stochastic method called the random vibration theory (Boore, 1983) has been used to estimate the peak ground motions caused by shallow moderate-to-large earthquakes in the Taiwan area. Adopting Brune's ω-square source spectrum, attenuation models for PGA and PGV were derived from path-dependent parameters which were empirically modeled from about one thousand accelerograms recorded at reference sites, mostly located in mountain areas and recognized as rock sites without soil amplification. Consequently, the predicted horizontal peak ground motions at the reference sites are generally comparable to those observed. A total of 11,915 accelerograms recorded at 735 free-field stations of the Taiwan Strong Motion Network (TSMN) were used to estimate the site factors by taking the motions from the predictive models as references. Results from soil sites reveal site amplification factors of approximately 2.0 ~ 3.5 for PGA and about 1.3 ~ 2.6 for PGV. Finally, as a result of amplitude corrections with those empirical site factors, about 75% of the analyzed earthquakes are well constrained in ground motion predictions, having average misfits ranging from 0.30 to 0.50. In addition, two simple indices, R0.57 and R0.38, are proposed in this study to evaluate the validity of intensity map prediction for public information reports. The average percentages of qualified stations with peak acceleration residuals less than R0.57 and R0.38 can reach 75% and 54%, respectively, for most earthquakes. Such a performance would be good enough to produce a faithful intensity map for a moderate scenario event in the Taiwan region.

  8. 78 FR 70499 - An Inquiry Into the Commission's Policies and Rules Regarding AM Radio Service Directional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-26

    ... Antenna Performance Verification AGENCY: Federal Communications Commission. ACTION: Final rule; correction... as follows: Subpart BB--Disturbance of AM Broadcast Station Antenna Patterns * * * * * Federal...

  9. SOUL: the Single conjugated adaptive Optics Upgrade for LBT

    NASA Astrophysics Data System (ADS)

    Pinna, E.; Esposito, S.; Hinz, P.; Agapito, G.; Bonaglia, M.; Puglisi, A.; Xompero, M.; Riccardi, A.; Briguglio, R.; Arcidiacono, C.; Carbonaro, L.; Fini, L.; Montoya, M.; Durney, O.

    2016-07-01

    We present here SOUL: the Single conjugated adaptive Optics Upgrade for LBT. SOUL will upgrade the wavefront sensors, replacing the existing CCD detector with an EMCCD camera, and the rest of the system in order to enable closed-loop operation at a faster cycle rate and with a higher number of slopes. Thanks to the reduced noise and the higher pixel count and frame rate, we expect a gain (for a given SR) of around 1.5-2 magnitudes at all wavelengths (reaching SR > 70% in I band with 0.6 arcsec seeing), and the sky coverage will be multiplied by a factor of 5 at all galactic latitudes. By upgrading the SCAO systems at all four focal stations, SOUL will provide these benefits to the LBTI interferometer in 2017 and to the two LUCI NIR spectro-imagers in 2018. In the same year the SOUL correction will also be exploited by the new generation of LBT instruments: V-SHARK, SHARK-NIR and iLocater.

  10. Egnos Limitations over Central and Eastern Poland - Results of Preliminary Tests of Egnos-Eupos Integration Project

    NASA Astrophysics Data System (ADS)

    Jaworski, Leszek; Swiatek, Anna; Zdunek, Ryszard

    2013-09-01

    The problem of insufficient accuracy of the EGNOS correction over the territory of Poland, located at the edge of the EGNOS coverage, is well known. The EEI PECS project (EGNOS EUPOS Integration) aims to improve the EGNOS correction by using GPS observations from Polish ASG-EUPOS stations. One of the EEI project tasks was the identification of EGNOS performance limitations over Poland and of services for the EGNOS-EUPOS combination. Two sets of data were used for these goals: statistical, theoretical data obtained using SBAS simulator software, and real data obtained during measurements. The real measurements comprised two types: static and dynamic. Static measurements are being made continuously using a Septentrio PolaRx2 receiver; the SRC permanent station operates within the IMAGE/PERFECT project. Dynamic measurements were carried out using the Mobile GPS Laboratory (MGL). Receivers (geodetic and navigation) were working in two modes: determining the navigation position from standalone GPS, and determining the navigation position from GPS plus the EGNOS correction. The paper presents the results of the measurement analyses and the conclusions on which the next EEI project tasks are based.

  11. Enabler operator station. [lunar surface vehicle

    NASA Technical Reports Server (NTRS)

    Bailey, Andrea; Keitzman, John; King, Shirlyn; Stover, Rae; Wegner, Torsten

    1992-01-01

    The objective of this project was to design an onboard operator station for the conceptual Lunar Work Vehicle (LWV). This LWV would be used in the colonization of a lunar outpost. The details that follow, however, are for an earth-bound model. Several recommendations are made in the appendix as to the changes needed in material selection for the lunar environment. The operator station is designed dimensionally correct for an astronaut wearing the current space shuttle EVA suit (which includes life support). The proposed operator station will support and restrain an astronaut as well as provide protection from the hazards of vehicle rollover. The threat of suit puncture is eliminated by rounding all corners and edges. A step-plate, located at the front of the vehicle, provides excellent ease of entry and exit. The operator station weight requirements are met by making efficient use of grid members, semi-rigid members and woven fabrics.

  12. Validation and correction of rainfall data from the WegenerNet high density network in southeast Austria

    NASA Astrophysics Data System (ADS)

    O, Sungmin; Foelsche, U.; Kirchengast, G.; Fuchsberger, J.

    2018-01-01

    Eight years of daily rainfall data from WegenerNet were analyzed by comparison with data from Austrian national weather stations. WegenerNet includes 153 ground level weather stations in an area of about 15 km × 20 km in the Feldbach region in southeast Austria. Rainfall has been measured by tipping bucket gauges at 150 stations of the network since the beginning of 2007. Since rain gauge measurements are considered close to true rainfall, there are increasing needs for WegenerNet data for the validation of rainfall data products such as remote sensing based estimates or model outputs. Serving these needs, this paper aims at providing a clearer interpretation of WegenerNet rainfall data for users in hydro-meteorological communities. Five clusters, each consisting of one national weather station and its four closest WegenerNet stations, allowed a close comparison of the datasets between stations. Linear regression analysis and error estimation with statistical indices were conducted to quantitatively evaluate the WegenerNet daily rainfall data. It was found that rainfall data between the stations show good linear relationships with an average correlation coefficient (r) of 0.97, while WegenerNet sensors tend to underestimate rainfall according to the regression slope (0.87). For the five clusters investigated, the bias and relative bias were -0.97 mm d-1 and -11.5 % on average (except data from new sensors). The average bias and relative bias, however, could be reduced by about 80 % through a simple linear regression-slope correction, with the assumption that the underestimation in WegenerNet data was caused by systematic errors. The results from the study have been employed to improve WegenerNet data for user applications so that a new version of the data (v5) is now available at the WegenerNet data portal (www.wegenernet.org).
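
    The slope correction described above amounts to fitting the WegenerNet daily totals against the reference-station totals through the origin and dividing the WegenerNet values by the fitted slope, under the stated assumption that the underestimation is a multiplicative systematic error. A minimal sketch with invented daily totals:

```python
import numpy as np

def slope_correct(wegenernet_mm, reference_mm):
    """Fit wegenernet = slope * reference through the origin (slope ~0.87 in the
    study) and return the slope plus the slope-corrected WegenerNet rainfall."""
    w = np.asarray(wegenernet_mm, float)
    r = np.asarray(reference_mm, float)
    slope = np.sum(r * w) / np.sum(r ** 2)
    return slope, w / slope

# Invented daily totals (mm): WegenerNet reading ~13% low against the reference gauge
ref = np.array([0.0, 5.2, 12.0, 0.4, 30.1, 8.7])
wn = 0.87 * ref
slope, corrected = slope_correct(wn, ref)
print(f"slope = {slope:.2f}, corrected = {corrected.round(1)}")
```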

  13. 47 CFR 74.705 - TV broadcast analog station protection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... from the authorized maximum radiated power (without depression angle correction), the horizontal... application for a new UHF low power TV or TV translator construction permit, a change of channel, or a major...

  14. 47 CFR 74.705 - TV broadcast analog station protection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... from the authorized maximum radiated power (without depression angle correction), the horizontal... application for a new UHF low power TV or TV translator construction permit, a change of channel, or a major...

  15. 47 CFR 74.705 - TV broadcast analog station protection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... from the authorized maximum radiated power (without depression angle correction), the horizontal... application for a new UHF low power TV or TV translator construction permit, a change of channel, or a major...

  16. 47 CFR 74.705 - TV broadcast analog station protection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... from the authorized maximum radiated power (without depression angle correction), the horizontal... application for a new UHF low power TV or TV translator construction permit, a change of channel, or a major...

  17. 47 CFR 74.705 - TV broadcast analog station protection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... from the authorized maximum radiated power (without depression angle correction), the horizontal... application for a new UHF low power TV or TV translator construction permit, a change of channel, or a major...

  18. Geomagnetic activity during 10 - 11 solar cycles that has been observed by old Russian observatories.

    NASA Astrophysics Data System (ADS)

    Seredyn, Tomasz; Wysokinski, Arkadiusz; Kobylinski, Zbigniew; Bialy, Jerzy

    2016-07-01

    A good knowledge of solar-terrestrial relations during past solar activity cycles could provide the appropriate tools for correct space weather forecasting. The paper focuses on the analysis of historical collections of ground-based magnetic observations and their operational indices from the period of sunspot solar cycles 10-11, 1856-1878 (Bartels rotations 324-635). We use hourly observations of the H and D geomagnetic field components registered at the Russian stations St. Petersburg-Pavlovsk, Barnaul, Ekaterinburg, Nertshinsk and Sitka, and compare them to the data obtained from the Helsinki observatory. We compare these records directly and also compute, from the data of each of the above-mentioned stations, the IHV indices introduced by Svalgaard (2003), which are used for further comparisons in epochs of assumed different polarity of the heliospheric magnetic field. We also use the local C9 index derived by Zosimovich (1981) from the St. Petersburg-Pavlovsk data. Solar activity is represented by sunspot numbers. Correlation and continuous wavelet analyses are applied to assess the consistency of the records from the different magnetic stations. We pay particular attention to magnetic storms in the investigated period, especially the Carrington event of 1-2 September 1859. In general, the studied magnetic time series correctly capture the variability of geomagnetic activity. Geomagnetic activity shows some delay relative to solar activity, seen especially during the descending and minimum phases of the even 11-year cycle. A similar pattern appears in solar cycles 16-17.

  19. Tasking and control of a squad of robotic vehicles

    NASA Astrophysics Data System (ADS)

    Lewis, Christopher L.; Feddema, John T.; Klarer, Paul

    2001-09-01

    Sandia National Laboratories has developed a squad of robotic vehicles as a test-bed for investigating cooperative control strategies. The squad consists of eight RATLER vehicles and a command station. The RATLERs are medium-sized all-electric vehicles containing a PC104 stack for computation, control, and sensing. Three separate RF channels are used for communications; one for video, one for command and control, and one for differential GPS corrections. Using DGPS and IR proximity sensors, the vehicles are capable of autonomously traversing fairly rough terrain. The control station is a PC running Windows NT. A GUI has been developed that allows a single operator to task and monitor all eight vehicles. To date, the following mission capabilities have been demonstrated: 1. Way-Point Navigation, 2. Formation Following, 3. Perimeter Surveillance, 4. Surround and Diversion, and 5. DGPS Leap Frog. This paper describes the system and briefly outlines each mission capability. The DGPS Leap Frog capability is discussed in more detail. This capability is unique in that it demonstrates how cooperation allows the vehicles to accurately navigate beyond the RF communication range. One vehicle stops and uses its corrected GPS position to re-initialize its receiver to become the DGPS correction station for the other vehicles. Error in position accumulates each time a new vehicle takes over the DGPS duties. The accumulation in error is accurately modeled as a random walk phenomenon. This paper demonstrates how useful accuracy can be maintained beyond the vehicle's range.
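
    Because each new correction station initializes from a position that already carries the previous hand-off's error, the hand-off errors add as independent steps and the position uncertainty grows with the square root of the number of hops, which is exactly random-walk behaviour. A small Monte-Carlo sketch of that growth; the per-hop error is an assumed value, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_hop = 0.05          # assumed per-hand-off position error, metres (1-sigma, per axis)
n_hops, n_trials = 10, 20000

# Each hop adds an independent 2D error; accumulate over hops for many trials
steps = rng.normal(0.0, sigma_hop, size=(n_trials, n_hops, 2))
accumulated = np.cumsum(steps, axis=1)                 # error after each hop
rms_radial = np.sqrt(np.mean(np.sum(accumulated**2, axis=2), axis=0))

for k in (1, 4, 9):
    predicted = sigma_hop * np.sqrt(2 * (k + 1))       # sqrt-of-hops growth (2D radial RMS)
    print(f"after {k + 1:2d} hops: simulated {rms_radial[k]:.3f} m, "
          f"random-walk prediction {predicted:.3f} m")
```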

  20. Spatial and temporal evolution of climatic factors and its impacts on potential evapotranspiration in Loess Plateau of Northern Shaanxi, China.

    PubMed

    Li, C; Wu, P T; Li, X L; Zhou, T W; Sun, S K; Wang, Y B; Luan, X B; Yu, X

    2017-07-01

    Agriculture is very sensitive to climate change, and correct forecasting of climate change is a great help to accurate allocation of irrigation water. The use of irrigation water is influenced by crop water demand and precipitation. Potential evapotranspiration (ET0) is a measure of the ability of the atmosphere to remove water from the surface through the processes of evaporation and transpiration, assuming no control on water supply. It plays an important role in assessing crop water requirements, regional dry-wet conditions, and other factors of water resource management. This study analyzed the spatial and temporal evolution processes and characteristics of major meteorological parameters at 10 stations in the Loess Plateau of northern Shaanxi (LPNS). By using the Mann-Kendall trend test with trend-free pre-whitening and the ArcGIS platform, the potential evapotranspiration of each station was quantified by using the Penman-Monteith equation, and the effects of climatic factors on potential evapotranspiration were assessed by analyzing the contribution rate and sensitivity of the climatic factors. The results showed that the climate in LPNS has become warmer and drier. In terms of the sensitivity of ET0 to the variation of each climatic factor in LPNS, relative humidity (0.65) had the highest sensitivity, followed by daily maximum temperature, wind speed, sunshine hours, and daily minimum temperature (-0.05). In terms of the contribution rate of each factor to ET0, daily maximum temperature (5.16%) had the highest value, followed by daily minimum temperature, sunshine hours, relative humidity, and wind speed (1.14%). This study provides a reference for the management of agricultural water resources and for countermeasures to climate change. According to the climate change and the characteristics of the study area, farmers in the region should increase irrigation to guarantee crop water demand. Copyright © 2017. Published by Elsevier B.V.
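
    The FAO-56 form of the Penman-Monteith equation commonly used for ET0 combines a radiation term and an aerodynamic term, ET0 = [0.408*Delta*(Rn - G) + gamma*(900/(T + 273))*u2*(es - ea)] / [Delta + gamma*(1 + 0.34*u2)]. The sketch below implements that standard form (with a simplified vapour-pressure treatment) and estimates a sensitivity by finite differences; the input values are illustrative, not data from the study.

```python
import math

def fao56_et0(t_mean, rn, g, u2, rh_mean, pressure_kpa=101.3):
    """Daily reference evapotranspiration (mm/day), FAO-56 Penman-Monteith.
    t_mean [deg C], rn/g [MJ m-2 day-1], u2 [m/s at 2 m], rh_mean [%].
    Simplified: es and ea are taken at the mean temperature."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))   # saturation vapour pressure, kPa
    ea = es * rh_mean / 100.0                                   # actual vapour pressure, kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of the es curve
    gamma = 0.000665 * pressure_kpa                             # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative daily values plus a finite-difference sensitivity to relative humidity
base = dict(t_mean=22.0, rn=14.0, g=0.0, u2=2.1, rh_mean=55.0)
et0 = fao56_et0(**base)
d_rh = 1.0
sens = (fao56_et0(**{**base, "rh_mean": base["rh_mean"] + d_rh}) - et0) / d_rh
print(f"ET0 = {et0:.2f} mm/day, dET0/dRH = {sens:.3f} mm/day per % RH")
```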

  1. Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas

    2011-01-01

    This paper describes the implementation of a Digital Signal Processing (DSP) subsystem for a single Long Wavelength Array (LWA) station. The LWA is a radio telescope that will consist of many phased array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The Digital Signal Processing subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA Analog Receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication.
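
    The per-signal processing described above (a coarse integer-sample delay, a fine fractional-sample delay implemented as an FIR filter, a gain adjustment, and a 2x2 matrix applied to each polarization pair before summation into beams) can be sketched generically as below. The filter design and all parameters are illustrative assumptions, not the LWA firmware's actual coefficients.

```python
import numpy as np

def fractional_delay_fir(frac, n_taps=16):
    """Windowed-sinc FIR approximating a delay of `frac` samples (0 <= frac < 1)."""
    n = np.arange(n_taps) - (n_taps - 1) / 2.0
    h = np.sinc(n - frac) * np.hamming(n_taps)
    return h / h.sum()

def delay_and_correct(x_pol, y_pol, delay_samples, gain, jones):
    """Apply coarse+fine delay, a scalar gain, and a 2x2 polarization
    correction (Jones-like matrix) to one dual-polarization signal."""
    coarse, frac = int(np.floor(delay_samples)), delay_samples % 1.0
    fir = fractional_delay_fir(frac)
    out = []
    for sig in (x_pol, y_pol):
        d = np.roll(sig, coarse)                 # coarse delay (FIFO-like integer shift)
        d = np.convolve(d, fir, mode="same")     # fine (fractional) delay
        out.append(gain * d)
    xy = np.vstack(out)                          # shape (2, nsamples)
    return jones @ xy                            # 2x2 polarization correction

# Beam = sum of delayed/corrected signals over antennas (illustrative, 3 antennas)
rng = np.random.default_rng(0)
nsamp = 4096
beam = np.zeros((2, nsamp))
for ant in range(3):
    x, y = rng.normal(size=nsamp), rng.normal(size=nsamp)
    beam += delay_and_correct(x, y, delay_samples=ant * 2.37, gain=1.0, jones=np.eye(2))
```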

  2. Shuttle to Space Station. Heart Assist Implant. Hubble Update. X-30 Mock-Up

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Shuttle to Space Station, Heart Assist Implant, Hubble Update, and X-30 Mockup are the four parts discussed in this video. The first part, Shuttle to Space Station, focuses on the construction and function of the Space Station Freedom. Part two, Heart Assist Implant, discusses a newly developed electromechanical device that helps reduce heart attacks by using electric shocks. Interviews with the co-inventor and patients are also included. A brief introduction to the Hubble Telescope, the problem behind its poor image quality (mirror aberration), and the plan to correct this problem are the three issues discussed in part three, Hubble Update. The last part, part four, reviews the X-30 Mockup designed by the staff and students of Mississippi State University.

  3. Artist's Concept of International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Pictured is an artist's concept of the International Space Station (ISS) with solar panels fully deployed. In addition to the use of solar energy, the ISS will employ at least three types of propulsive support systems for its operation. The first type is to reboost the Station to correct orbital altitude to offset the effects of atmospheric and other drag forces. The second function is to maneuver the ISS to avoid collision with orbiting bodies (space junk). The third is for attitude control to position the Station in the proper attitude for various experiments, temperature control, reboost, etc. The ISS, a gateway to permanent human presence in space, is a multidisciplinary laboratory, technology test bed, and observatory that will provide an unprecedented undertaking in scientific, technological, and international experimentation by cooperation of sixteen countries.

  4. International Space Station (ISS)

    NASA Image and Video Library

    2004-04-15

    Pictured is an artist's concept of the International Space Station (ISS) with solar panels fully deployed. In addition to the use of solar energy, the ISS will employ at least three types of propulsive support systems for its operation. The first type is to reboost the Station to correct orbital altitude to offset the effects of atmospheric and other drag forces. The second function is to maneuver the ISS to avoid collision with orbiting bodies (space junk). The third is for attitude control to position the Station in the proper attitude for various experiments, temperature control, reboost, etc. The ISS, a gateway to permanent human presence in space, is a multidisciplinary laboratory, technology test bed, and observatory that will provide an unprecedented undertaking in scientific, technological, and international experimentation by cooperation of sixteen countries.

  5. Non-linear motions in reprocessed GPS station position time series

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Gendt, Gerd

    2010-05-01

    Global Positioning System (GPS) data from about 400 globally distributed stations covering the time span from 1998 to 2007 were reprocessed using the GFZ Potsdam EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, with the purpose of determining weekly precise coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution GT1 is based on an absolute phase center variation model, ITRF2005 as the a priori reference frame, and other new models. The solution also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.

  6. Applications of a Next-Generation MDAC Discrimination Procedure Using Two-Dimensional Grids of Regional P/S Spectral Ratios

    DTIC Science & Technology

    2008-09-01

    explosions (UNEs) at the Semipalatinsk Test Site and regional earthquakes recorded by station WMQ (Urumchi, China). Measurements from the grids are... Semipalatinsk , Lop Nor, Novaya Zemlya, and Nevada Test Sites (STS, LNTS, NZTS, NTS, respectively) and regional earthquakes. We used phase-specific window...stations (triangles) within 2000 km of STS and LNTS. Semipalatinsk Test Site Figure 2 shows Pn/Lg spectral ratios, corrected for site and distance

  7. Trends and uncertainties in U.S. cloud cover from weather stations and satellite data

    NASA Astrophysics Data System (ADS)

    Free, M. P.; Sun, B.; Yoo, H. L.

    2014-12-01

    Cloud cover data from ground-based weather observers can be an important source of climate information, but the record of such observations in the U.S. is disrupted by the introduction of automated observing systems and other artificial shifts that interfere with our ability to assess changes in cloudiness at climate time scales. A new dataset using 54 National Weather Service (NWS) and 101 military stations that continued to make human-augmented cloud observations after the 1990s has been adjusted using statistical changepoint detection and visual scrutiny. The adjustments substantially reduce the trends in U.S. mean total cloud cover while increasing the agreement between the cloud cover time series and those of physically related climate variables such as diurnal temperature range and number of precipitation days. For 1949-2009, the adjusted time series give a trend in U.S. mean total cloud of 0.11 ± 0.22 %/decade for the military data, 0.55 ± 0.24 %/decade for the NWS data, and 0.31 ± 0.22 %/decade for the combined dataset. These trends are less than half those in the original data. For 1976-2004, the original data give a significant increase but the adjusted data show an insignificant trend of -0.17 (military stations) to 0.66 %/decade (NWS stations). The differences between the two sets of station data illustrate the uncertainties in the U.S. cloud cover record. We compare the adjusted station data to cloud cover time series extracted from several satellite datasets: ISCCP (International Satellite Cloud Climatology Project), PATMOS-x (AVHRR Pathfinder Atmospheres Extended) and CLARA-a1 (CM SAF cLoud Albedo and RAdiation), and the recently developed PATMOS-x diurnally corrected dataset. Like the station data, satellite cloud cover time series may contain inhomogeneities due to changes in the observing systems and problems with retrieval algorithms. Overall we find good agreement between interannual variability in most of the satellite data and that in our station data, with the diurnally corrected PATMOS-x product generally showing the best match. For the satellite period 1984-2007, trends in the U.S. mean cloud cover from satellite data vary widely among the datasets, and all are more negative than those in the station data, with PATMOS-x having the trends closest to those in the station data.

  8. Determination of soil erosion risk in the Mustafakemalpasa River Basin, Turkey, using the revised universal soil loss equation, geographic information system, and remote sensing.

    PubMed

    Ozsoy, Gokhan; Aksoy, Ertugrul; Dirim, M Sabri; Tumsavas, Zeynal

    2012-10-01

    Sediment transport from steep slopes and agricultural lands into the Uluabat Lake (a RAMSAR site) by the Mustafakemalpasa (MKP) River is a serious problem within the river basin. Predictive erosion models are useful tools for evaluating soil erosion and establishing soil erosion management plans. The Revised Universal Soil Loss Equation (RUSLE) function is a commonly used erosion model for this purpose in Turkey and the rest of the world. This research integrates the RUSLE within a geographic information system environment to investigate the spatial distribution of annual soil loss potential in the MKP River Basin. The rainfall erosivity factor was developed from local annual precipitation data using a modified Fournier index; the topographic factor was developed from a digital elevation model; the K factor was determined from a combination of the soil map and the geological map; and the land cover factor was generated from Landsat-7 Enhanced Thematic Mapper (ETM) images. According to the model, the total soil loss potential of the MKP River Basin from erosion by water was 11,296,063 Mg year(-1), with an average soil loss of 11.2 Mg year(-1). The RUSLE produces only local erosion values and cannot be used to estimate the sediment yield for a watershed. To estimate the sediment yield, sediment-delivery ratio equations were used and compared with the sediment-monitoring reports of the Dolluk stream gauging station on the MKP River, which collected data for >41 years (1964-2005). This station captures the overall sediment yield coming from the Orhaneli and Emet Rivers. The measured sediment in the Emet and Orhaneli sub-basins is 1,082,010 Mg year(-1) and was estimated to be 1,640,947 Mg year(-1) for the same two sub-basins. The measured sediment yield of the gauge station is 127.6 Mg km(-2) year(-1) but was estimated to be 170.2 Mg km(-2) year(-1). The close match between the sediment amounts estimated using the RUSLE-geographic information system (GIS) combination and the measured values from the Dolluk sediment gauge station shows that the potential soil erosion risk of the MKP River Basin can be estimated correctly and reliably using the RUSLE function generated in a GIS environment.
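
    RUSLE estimates the average annual soil loss per cell as the product of five factors, A = R * K * LS * C * P, and a sediment delivery ratio (SDR) then scales the basin-total gross erosion to a sediment yield at the outlet. A hedged sketch of both steps on small raster-like arrays; the factor values are invented, and the area-based SDR power law shown is a generic choice, not necessarily the equation used in the study.

```python
import numpy as np

def rusle_soil_loss(r, k, ls, c, p):
    """Cell-by-cell RUSLE soil loss A = R * K * LS * C * P."""
    return r * k * ls * c * p

def sediment_yield(gross_erosion_mg, basin_area_km2, alpha=0.41, beta=-0.3):
    """Basin sediment yield via a generic area-based sediment delivery ratio,
    SDR = alpha * area**beta (coefficients here are illustrative assumptions)."""
    sdr = alpha * basin_area_km2 ** beta
    return sdr, sdr * gross_erosion_mg

# Illustrative 1-ha raster cells with invented factor values
r  = np.full((3, 3), 950.0)               # rainfall erosivity
k  = np.full((3, 3), 0.035)               # soil erodibility
ls = np.array([[0.5, 2.0, 6.0]] * 3)      # topographic (slope length/steepness) factor
c  = np.full((3, 3), 0.25)                # cover management
p  = np.ones((3, 3))                      # support practices
a = rusle_soil_loss(r, k, ls, c, p)       # Mg ha-1 yr-1 per cell
gross = a.sum()                           # Mg yr-1 (cells are 1 ha each)
sdr, yield_mg = sediment_yield(gross, basin_area_km2=0.09)
print(f"mean loss {a.mean():.1f} Mg ha-1 yr-1, SDR {sdr:.2f}, yield {yield_mg:.0f} Mg yr-1")
```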

  9. Spacecraft Station-Keeping Trajectory and Mission Design Tools

    NASA Technical Reports Server (NTRS)

    Chung, Min-Kun J.

    2009-01-01

    Two tools were developed for designing station-keeping trajectories and estimating delta-v requirements for designing missions to a small body such as a comet or asteroid. This innovation uses NPOPT, a non-sparse, general-purpose sequential quadratic programming (SQP) optimizer, and the Two-Level Differential Corrector (T-LDC) in LTool (Libration point mission design Tool) to design three kinds of station-keeping scripts: vertical hovering, horizontal hovering, and orbiting. The T-LDC is used to differentially correct several trajectory legs that join hovering points. In a vertical hover, the maximum and minimum range points must be connected smoothly while maintaining the spacecraft's range from the small body, all within the law of gravity and the solar radiation pressure. The same is true for a horizontal hover. A PatchPoint is an LTool class that denotes a space-time event with some extra information for differential correction, including a set of constraints to be satisfied by the T-LDC. Given a set of PatchPoints, each with its own constraint, the T-LDC differentially corrects the entire trajectory by connecting each trajectory leg joined by PatchPoints while satisfying all specified constraints at the same time. Both vertical and horizontal hovering are needed to minimize the delta-v spent for station keeping. A Python interface to NPOPT has been written so that it can be used from an LTool script. In vertical hovering, the spacecraft stays along the line joining the Sun and the small body. An instantaneous delta-v toward the anti-Sun direction is applied at the closest approach to the small body for station keeping. For example, the spacecraft hovers between the minimum range (2 km) point and the maximum range (2.5 km) point from the asteroid 1989ML. Horizontal hovering buys more time for a spacecraft to recover if, for any reason, a planned thrust fails, by returning almost to the initial position some time later via a near-elliptical orbit around the small body. The mapping or staging orbit may be similarly generated using the T-LDC with a set of constraints. Some delta-v tables are generated for several different asteroid masses.

  10. An Updated Global Grid Point Surface Air Temperature Anomaly Data Set: 1851-1990 (revised 1991) (NDP-020)

    DOE Data Explorer

    Jones, P. D. [University of East Anglia, Norwich, United Kingdom; Raper, S. C.B. [University of East Anglia, Norwich, United Kingdom; Cherry, B. S.G. [University of East Anglia, Norwich, United Kingdom; Goodess, C. M. [University of East Anglia, Norwich, United Kingdom; Wigley, T. M. L. [University of East Anglia, Norwich, United Kingdom; Santer, B. [University of East Anglia, Norwich, United Kingdom; Kelly, P. M. [University of East Anglia, Norwich, United Kingdom; Bradley, R. S. [University of Massachusetts, Amherst, Massachusetts (USA); Diaz, H. F. [National Oceanic and Atmospheric Administration (NOAA), Environmental Research Laboratories, Boulder, CO (United States).

    1991-01-01

    This NDP presents land-based monthly surface-air-temperature anomalies (departures from a 1951-1970 reference period mean) on a 5° latitude by 10° longitude global grid. Monthly surface-air-temperature anomalies (departures from a 1957-1975 reference period mean) for the Antarctic (grid points from 65°S to 85°S) are presented in a similar way as a separate data set. The data were derived primarily from the World Weather Records and from the archives of the United Kingdom Meteorological Office. This long-term record of temperature anomalies may be used in studies addressing possible greenhouse-gas-induced climate changes. To date, the data have been employed in producing regional, hemispheric, and global time series for determining whether recent (i.e., post-1900) warming trends have taken place. The present updated version of this data set is identical to the earlier version for all records from 1851-1978 except for the addition of the Antarctic surface-air-temperature anomalies beginning in 1957. Beginning with the 1979 data, this package differs from the earlier version in several ways. Erroneous data for some sites have been corrected after a review of the actual station temperature data, and inconsistencies in the representation of missing values have been removed. For some grid locations, data have been added from stations that had not contributed to the original set. Data from satellites have also been used to correct station records where large discrepancies were evident. The present package also extends the record by adding monthly surface-air-temperature anomalies for the Northern (grid points from 85°N to 0°) and Southern (grid points from 5°S to 60°S) Hemispheres for 1985-1990. In addition, this updated package presents the monthly-mean-temperature records for the individual stations that were used to produce the set of gridded anomalies. The periods of record vary by station. Northern Hemisphere data have been corrected for inhomogeneities, while Southern Hemisphere data are presented in uncorrected form.

  11. Preliminary maintenance experience for DSS 13 unattended operations demonstration. [Deep Space Network

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Lorden, G.

    1979-01-01

    The maintenance data base collected for 15 weeks of recent unattended and automated operation of DSS 13 is summarized. During this period, DSS 13 was receiving spacecraft telemetry while being controlled remotely from JPL in Pasadena. Corrective and preventive maintenance manhours are reported by subsystem for DSS 13 including the equipment added for the automation demonstration. The corrective and preventive maintenance weekly manhours at DSS 13 averaged 22 and 40, respectively. The antenna hydraulic and electronic systems accounted for about half of the preventive and corrective maintenance manhours for a comparable attended DSN station, DSS 11.

  12. Concentrating Solar Power Projects - Liddell Power Station | Concentrating

    Science.gov Websites

    Technology: Linear Fresnel reflector. Turbine capacity: 3.0 MW net, 3.0 MW gross. Status: currently non-operational. Start year: 2012.

  13. Interferometry theory for the block 2 processor

    NASA Technical Reports Server (NTRS)

    Thomas, J. B.

    1987-01-01

    Presented is the interferometry theory for the Block 2 processor, including a high-level functional description and a discussion of data structure. The analysis covers the major processing steps: cross-correlation, fringe counter-rotation, transformation to the frequency domain, phase calibration, bandwidth synthesis, and extraction of the observables of amplitude, phase, phase rate, and delay. Also included are analyses for fractional bitshift correction, station clock error, ionosphere correction, and effective frequencies for the observables.

  14. Reevaluation of air surveillance station siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    2016-07-06

    DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data in order to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
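
    In the Waite approach, each directional sector around the facility receives a weight proportional to the product of the wind-rose frequency toward that sector and the population within it, and the available stations are then apportioned according to the normalized weights. A sketch of that allocation step; the wind and population numbers are invented for illustration.

```python
import numpy as np

def waite_allocation(wind_freq_toward, population, n_stations):
    """Allocate monitoring stations to directional sectors in proportion to
    wind-frequency * population weights (largest-remainder rounding)."""
    w = np.asarray(wind_freq_toward, float) * np.asarray(population, float)
    weights = w / w.sum()
    ideal = weights * n_stations
    alloc = np.floor(ideal).astype(int)
    # hand out the remaining stations to the largest fractional remainders
    for i in np.argsort(ideal - alloc)[::-1][: n_stations - alloc.sum()]:
        alloc[i] += 1
    return weights, alloc

# Illustrative 8-sector example with 12 available stations
wind = [0.20, 0.15, 0.10, 0.05, 0.05, 0.10, 0.15, 0.20]   # frequency of wind blowing toward each sector
pop  = [12000, 3000, 800, 1500, 9000, 400, 2500, 6000]    # population per sector
weights, alloc = waite_allocation(wind, pop, 12)
print(alloc, alloc.sum())
```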

  15. Continuous H/V spectral ratio analysis of ambient noise: a necessity to understand microzonation results obtained by mobile stations

    NASA Astrophysics Data System (ADS)

    Van Noten, Koen; Lecocq, Thomas

    2016-04-01

    Estimating the resonance frequency (f0) and amplification factor of unconsolidated sediments by H/V spectral ratio (HVSR) analysis of seismic ambient noise has been widely used since Nakamura's proposal in 1989. To measure f0 properly, Nakamura suggested performing microzonation surveys at night, when the artificial microtremor is small and does not fully disrupt the ambient seismic noise. As nightly fieldwork is not always a reasonable demand, we propose an alternative workflow of Nakamura's technique to improve the quality of HVSR results obtained by ambient noise measurements of mobile stations during the day. This new workflow includes the automated H/V calculation of continuous seismic data from a stationary or permanent station installed near the microzonation site for as long as the survey lasts, in order to control the error in the HVSR analysis obtained by the mobile stations. In this presentation, we apply this workflow to one year of seismic data at two different case studies: a rural site with a shallow bedrock depth of 30 m, and an urban site (Brussels, capital of Belgium, bedrock depth of 110 m) where human activity is continuous 24 h/day. By means of an automated python script, the fundamental peak frequency and the H/V amplitude are automatically picked from H/V spectra that are calculated from 50% overlapping, 30 minute windows during the whole year. Afterwards, the f0 and amplitude picks are averaged per hour/per day for the whole year. In both case studies, the H/V amplitude and the fundamental frequency vary considerably, with up to ~15% difference between the daily and nightly measurements. As bedrock depth is known from boreholes at both sites, we concluded that the nightly picked f0 is the true one. Our results thus suggest that changes in the determined f0 and H/V amplitude are dominantly caused by the human behaviour which is stored in the ambient seismic noise (e.g. later onset of traffic in a weekend, quiet Sundays, differences between daily/nightly activity, ...). Consequently, performing a continuous HVSR analysis next to your microzonation site allows you to characterise the deviation of the measured f0 from the true f0 during the period of investigation (in our case during the whole year). Because mobile stations are affected by the same variations stored in the ambient noise, a correction factor can be applied to the calculated f0 of individual measurements during the microzonation survey and a proper Vs can be estimated. Based on these results we recommend that microzonation with mobile stations should always be accompanied by a stationary seismic station to characterise the ambient noise and to control the error.
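
    The H/V spectral ratio for each window is typically computed by dividing a combined horizontal amplitude spectrum (here the quadratic mean of the two horizontal components) by the vertical amplitude spectrum after smoothing, and f0 is read off as the frequency of the main peak. A generic sketch for a single window on synthetic data; the windowing and smoothing choices are assumptions, not the authors' exact processing.

```python
import numpy as np

def hv_spectral_ratio(north, east, vert, fs, smooth_bins=11):
    """Return (frequencies, H/V ratio) for one window of 3-component noise."""
    def amp_spec(x):
        x = x - x.mean()
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        kernel = np.ones(smooth_bins) / smooth_bins          # simple moving-average smoothing
        return np.convolve(spec, kernel, mode="same")
    n, e, v = amp_spec(north), amp_spec(east), amp_spec(vert)
    h = np.sqrt((n**2 + e**2) / 2.0)                         # quadratic mean of the horizontals
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    return freqs, h / np.maximum(v, 1e-12)

# Illustrative synthetic window: 30 min at 100 Hz with a ~1 Hz horizontal resonance
fs = 100.0
t = np.arange(0.0, 1800.0, 1.0 / fs)
rng = np.random.default_rng(3)
vert = rng.normal(size=t.size)
north = rng.normal(size=t.size) + 3 * np.sin(2 * np.pi * 1.0 * t)
east = rng.normal(size=t.size) + 3 * np.sin(2 * np.pi * 1.0 * t + 0.7)
freqs, hv = hv_spectral_ratio(north, east, vert, fs)
band = (freqs > 0.2) & (freqs < 20)
print(f"picked f0 ~ {freqs[band][np.argmax(hv[band])]:.2f} Hz")
```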

  16. A Voice Enabled Procedure Browser for the International Space Station

    NASA Technical Reports Server (NTRS)

    Rayner, Manny; Chatzichrisafis, Nikos; Hockey, Beth Ann; Farrell, Kim; Renders, Jean-Michel

    2005-01-01

    Clarissa, an experimental voice enabled procedure browser that has recently been deployed on the International Space Station (ISS), is to the best of our knowledge the first spoken dialog system in space. This paper gives background on the system and the ISS procedures, then discusses the research developed to address three key problems: grammar-based speech recognition using the Regulus toolkit; SVM based methods for open microphone speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations.

  17. The ac power system testbed

    NASA Technical Reports Server (NTRS)

    Mildice, J.; Sundberg, R.

    1987-01-01

    The object of this program was to design, build, test, and deliver a high frequency (20 kHz) Power System Testbed which would electrically approximate a single, separable power channel of an IOC Space Station. That program is described, including the technical background, and the results are discussed showing that the major assumptions about the characteristics of this class of hardware (size, mass, efficiency, control, etc.) were substantially correct. This testbed equipment was completed and delivered and is being operated as part of the Space Station Power System Test Facility.

  18. Teleseismic Investigations of the Malawi and Luangwa Rift Zones: Ongoing Observations From the SAFARI Experiment

    NASA Astrophysics Data System (ADS)

    Reed, C. A.; Gao, S. S.; Liu, K. H.; Yu, Y.; Chindandali, P. R. N.; Massinque, B.; Mdala, H. S.; Mutamina, D. M.

    2015-12-01

    In order to evaluate the influence of crustal and mantle heterogeneities upon the initiation of the Malawi rift zone (MRZ) and reactivation of the Zambian Luangwa rift zone (LRZ) subject to Cenozoic plate boundary stress fields and mantle buoyancy forces, we installed and operated 33 Seismic Arrays For African Rift Initiation (SAFARI) three-component broadband seismic stations in Malawi, Mozambique, and Zambia from 2012 to 2014. During the twenty-four month acquisition period, nearly 6200 radial receiver functions (RFs) were recorded. Stations situated within the MRZ, either along the coastal plains or within the Shire Graben toward the south, report an average crustal thickness of 42 km relative to approximately 46 km observed at stations located along the rift flanks. This implies the juvenile MRZ is characterized by a stretching factor not exceeding 1.1. Meanwhile, P-to-S velocity ratios within the MRZ increase from 1.71 to 1.82 in southernmost Malawi, indicating a substantial modification of the crust during Recent rifting. Time-series stacking of approximately 5500 RFs recorded by the SAFARI and 44 neighboring network stations reveals an apparent uplift of 10 to 15 km along both the 410- and 660-km mantle transition zone (MTZ) discontinuities beneath the MRZ and LRZ which, coupled with an apparently normal 250-km MTZ thickness, implies a first-order high-velocity contribution from thickened lithosphere. Preliminary manual checking of SAFARI shear-wave splitting (SWS) measurements provides roughly 650 high-quality XKS phases following a component re-orientation to correct station misalignments. Regional azimuthal variations in SWS fast orientations are observed, from rift-parallel in the vicinity of the LRZ to rift-oblique in the MRZ. A major 60° rotation in the fast orientation occurs at approximately 31°E, possibly resulting from the modulation of mantle flow around a relatively thick lithospheric keel situated between the two rift zones.

  19. DSN 70-meter antenna X- and S-band calibration. Part 1: Gain measurements

    NASA Technical Reports Server (NTRS)

    Richter, P. H.; Slobin, S. D.

    1989-01-01

    Aperture efficiency measurements made during 1988 on the three 70-m stations (DSS-14, DSS-43, and DSS-63) at X-band (8420 MHz) and S-band (2295 MHz) have been analyzed and reduced to yield best estimates of antenna gain versus elevation. The analysis has been carried out by fitting the gain data to a theoretical expression based on the Ruze formula. Newly derived flux density and source-size correction factors for the natural radio calibration sources used in the measurements have been used in the reduction of the data. Peak gains measured at the three stations were 74.18 (plus or minus 0.10) dBi at X-band, and 63.34 (plus or minus 0.03) dBi at S-band, with corresponding peak aperture efficiencies of 0.687 (plus or minus 0.015) and 0.762 (plus or minus 0.006), respectively. The values quoted assume no atmosphere is present, and the estimated absolute accuracy of the gain measurements is approximately plus or minus 0.2 dB at X-band and plus or minus 0.1 dB at S-band (1-sigma values).
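
    Fitting gain-versus-elevation data to a Ruze-type expression means modelling the RMS surface error as a function of elevation (gravity deformation is often approximated as quadratic about a rigging elevation) and letting the gain fall off as exp[-(4*pi*eps/lambda)^2]. A hedged sketch using scipy's curve_fit; the parameterization of eps(el) and all numbers are assumptions for illustration, not the calibration procedure actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 299792458.0
WAVELENGTH = C / 8.42e9        # X-band (8420 MHz), metres

def ruze_gain(elev_deg, g0_dbi, eps0_mm, el_rig_deg, curv_mm):
    """Gain (dBi) vs. elevation: a peak gain G0 degraded by a Ruze loss whose RMS
    surface error grows quadratically away from an assumed rigging elevation."""
    eps_m = (eps0_mm + curv_mm * (elev_deg - el_rig_deg) ** 2) * 1e-3
    loss = np.exp(-(4 * np.pi * eps_m / WAVELENGTH) ** 2)
    return g0_dbi + 10 * np.log10(loss)

# Illustrative "measured" gains with noise, then a curve fit to recover the parameters
elev = np.linspace(10, 85, 16)
true = ruze_gain(elev, 74.2, 0.25, 45.0, 1e-3)
meas = true + np.random.default_rng(7).normal(0, 0.05, elev.size)
popt, _ = curve_fit(ruze_gain, elev, meas, p0=[74.0, 0.3, 40.0, 5e-4])
print("fitted peak gain %.2f dBi near rigging elevation %.1f deg" % (popt[0], popt[2]))
```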

  20. The ionospheric eclipse factor method (IEFM) and its application to determining the ionospheric delay for GPS

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Tscherning, C. C.; Knudsen, P.; Xu, G.; Ou, J.

    2008-01-01

    A new method for modeling the ionospheric delay using global positioning system (GPS) data is proposed, called the ionospheric eclipse factor method (IEFM). It is based on establishing a concept referred to as the ionospheric eclipse factor (IEF) λ of the ionospheric pierce point (IPP) and the IEF’s influence factor (IFF) λ̄. The IEF can be used to make a relatively precise distinction between ionospheric daytime and nighttime, whereas the IFF is advantageous for describing the IEF’s variations with day, month, season and year, associated with seasonal variations of total electron content (TEC) of the ionosphere. By combining λ and λ̄ with the local time t of IPP, the IEFM has the ability to precisely distinguish between ionospheric daytime and nighttime, as well as efficiently combine them during different seasons or months over a year at the IPP. The IEFM-based ionospheric delay estimates are validated by combining an absolute positioning mode with several ionospheric delay correction models or algorithms, using GPS data at an international Global Navigation Satellite System (GNSS) service (IGS) station (WTZR). Our results indicate that the IEFM may further improve ionospheric delay modeling using GPS data.

  1. An alternative ionospheric correction model for global navigation satellite systems

    NASA Astrophysics Data System (ADS)

    Hoque, M. M.; Jakowski, N.

    2015-04-01

    The ionosphere is recognized as a major error source for single-frequency operations of global navigation satellite systems (GNSS). To enhance single-frequency operations, the global positioning system (GPS) uses an ionospheric correction algorithm (ICA) driven by 8 coefficients broadcast in the navigation message every 24 h. Similarly, the global navigation satellite system Galileo uses the electron density NeQuick model for ionospheric correction. The Galileo satellite vehicles (SVs) transmit 3 ionospheric correction coefficients as driver parameters of the NeQuick model. In the present work, we propose an alternative ionospheric correction algorithm called the Neustrelitz TEC broadcast model NTCM-BC that is also applicable for global satellite navigation systems. Like the GPS ICA or Galileo NeQuick, the NTCM-BC can be optimized on a daily basis by utilizing GNSS data obtained on the previous day at monitor stations. To drive the NTCM-BC, 9 ionospheric correction coefficients need to be uploaded to the SVs for broadcasting in the navigation message. Our investigation using GPS data from about 200 worldwide ground stations shows that the 24-h-ahead prediction performance of the NTCM-BC is better than that of the GPS ICA and comparable to the Galileo NeQuick model. We found that the 95th percentiles of the prediction error are about 16.1, 16.1 and 13.4 TECU for the GPS ICA, Galileo NeQuick and NTCM-BC, respectively, during a selected quiet ionospheric period, whereas the corresponding numbers are about 40.5, 28.2 and 26.5 TECU during a selected geomagnetically perturbed period. However, in terms of complexity the NTCM-BC is easier to handle than the Galileo NeQuick and in this respect comparable to the GPS ICA.

  2. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran (AEOI). This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for the charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  3. MAGNITUDE STUDIES CONDUCTED UNDER PROJECTS VT/5054 AND VT/5055.

    DTIC Science & Technology

    statistical model for Blue Mountains Seismological Observatory, Cumberland Plateau Seismological Observatory, Tonto Forest Seismological Observatory, Uinta Basin Seismological Observatory, and Wichita Mountains Seismological Observatory. Azimuthal dependence of station correction is not established at any of

  4. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function to utilize constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks. The first is the massive amount of time and labour required to locate a large number of seismic events manually; the second is to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that the individual P and S phases arrive at different stations in different orders, making correct phase picking nearly impossible. Using these very complicated waveforms as input, ISSA scans the entire model space for possible combinations of origin time and location of seismic sources. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events, and the precise arrival times of P and S phases are difficult to determine. We use one of the largest aftershocks that can be located by conventional methods as our reference event to calibrate the controlling parameters of ISSA. These parameters include the overall Vp/Vs ratio (because a precise S velocity model was unavailable), the length of the scanning time window, and the weighting factor for each station. Our results show that ISSA is not only more efficient in locating earthquake clusters/aftershocks, but is also capable of identifying many events missed by conventional phase-picking methods.
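
    As a rough illustration of the source-scanning idea, the sketch below stacks normalized station envelopes at predicted arrival times over trial locations and origin times; it is a hypothetical simplification, not the authors' modified brightness function or their P/S ground-motion analyzer:

      import numpy as np

      def brightness(env, dt, travel_times, n_grid, origin_times):
          """env[k]: normalized absolute amplitude (envelope) of station k.
          travel_times[k][i]: predicted travel time from grid node i to station k.
          Returns the stacked brightness for every (grid node, origin time) pair."""
          n_sta = len(env)
          br = np.zeros((n_grid, len(origin_times)))
          for i in range(n_grid):
              for j, t0 in enumerate(origin_times):
                  total = 0.0
                  for k in range(n_sta):
                      idx = int(round((t0 + travel_times[k][i]) / dt))
                      if 0 <= idx < len(env[k]):
                          total += env[k][idx]
                  br[i, j] = total / n_sta   # bright spots mark likely sources
          return br

    The location/origin-time pairs with the highest brightness are taken as candidate events; the improvements described above refine how the envelopes are built (separating P and S) and how multiple phases jointly constrain the stack.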

  5. KENNEDY SPACE CENTER, FLA. - An X-ray machine is in place to take images of four rudder speed brake actuators to be installed on the orbiter Discovery. The actuators are being X-rayed at the Cape Canaveral Air Force Station’s Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

    NASA Image and Video Library

    2004-03-08

    KENNEDY SPACE CENTER, FLA. - An X-ray machine is in place to take images of four rudder speed brake actuators to be installed on the orbiter Discovery. The actuators are being X-rayed at the Cape Canaveral Air Force Station’s Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.

  6. ITG: A New Global GNSS Tropospheric Correction Model

    PubMed Central

    Yao, Yibin; Xu, Chaoqian; Shi, Junbo; Cao, Na; Zhang, Bao; Yang, Junjian

    2015-01-01

    Tropospheric correction models are receiving increasing attention, as they play a crucial role in Global Navigation Satellite System (GNSS) applications. The most commonly used models to date include the GPT2 series and TropGrid2. In this study, we analyzed the advantages and disadvantages of existing models and developed a new model called the Improved Tropospheric Grid (ITG). ITG considers annual, semi-annual and diurnal variations, and includes multiple tropospheric parameters. The amplitude and initial phase of the diurnal variation are estimated as a periodic function. ITG provides temperature, pressure, the weighted mean temperature (Tm) and Zenith Wet Delay (ZWD). We conducted a performance comparison between the proposed ITG model and previous ones, in terms of meteorological measurements from 698 observation stations, Zenith Total Delay (ZTD) products from 280 International GNSS Service (IGS) stations and Tm from Global Geodetic Observing System (GGOS) products. Results indicate that ITG offers the best performance overall. PMID:26196963

  7. Terrestrial reference frame solution with the Vienna VLBI Software VieVS and implication of tropospheric gradient estimation

    NASA Astrophysics Data System (ADS)

    Spicakova, H.; Plank, L.; Nilsson, T.; Böhm, J.; Schuh, H.

    2011-07-01

    The Vienna VLBI Software (VieVS) has been developed at the Institute of Geodesy and Geophysics at TU Vienna since 2008. In this presentation, we introduce the module Vie_glob, the part of VieVS that allows parameter estimation from multiple VLBI sessions in a so-called global solution. We focus on the determination of the terrestrial reference frame (TRF) using all suitable VLBI sessions since 1984. We compare different analysis options, such as the choice of loading corrections or of the model for the tropospheric delays. We show the effect on station heights of neglecting atmosphere loading corrections at the observation level. Time series of station positions (using a previously determined TRF as a priori values) are presented and compared to other estimates of site positions from individual IVS (International VLBI Service for Geodesy and Astrometry) Analysis Centers.

  8. The Austrian radiation monitoring network ARAD - best practice and added value

    NASA Astrophysics Data System (ADS)

    Olefs, Marc; Baumgartner, Dietmar; Obleitner, Friedrich; Bichler, Christoph; Foelsche, Ulrich; Pietsch, Helga; Rieder, Harald; Weihs, Philipp; Geyer, Florian; Haiden, Thomas; Schöner, Wolfgang

    2016-04-01

    The Austrian RADiation monitoring network (ARAD) has been established to advance the national climate monitoring and to support satellite retrieval, atmospheric modelling and solar energy techniques development. Measurements cover the downwelling solar and thermal infrared radiation using instruments according to Baseline Surface Radiation Network (BSRN) standards. A unique feature of ARAD is its vertical dimension of five stations, covering an air column between about 200 m a.s.l. (Vienna) and 3100 m a.s.l. (BSRN site Sonnblick). The contribution outlines the aims and scopes of ARAD, its measurement and calibration standards, methods, strategies and station locations. ARAD network operation uses innovative data processing for quality assurance and quality control, applying manual and automated control algorithms. A combined uncertainty estimate for the broadband shortwave radiation fluxes at all five ARAD stations indicates that accuracies range from 1.5 to 23 %. If a directional response error of the pyranometers and the temperature response of the instruments and the data acquisition system (DAQ) is corrected, this expanded uncertainty reduces to 1.4 to 5.2 %. Thus, for large signals (global: 1000 W m-2, diffuse: 500 W m-2) BSRN target accuracies are met or closely met for 70 % of valid measurements at the ARAD stations after this correction. For small signals (50 W m-2), the targets are not achieved as a result of uncertainties associated with the DAQ or the instrument sensitivities. Additional accuracy gains can be achieved in future by additional measurements and corrections. However, for the measurement of direct solar radiation improved instrument accuracy is needed. ARAD could serve as a powerful example for establishing state-of-the-art radiation monitoring at the national level with a multiple-purpose approach. Instrumentation, guidelines and tools (such as the data quality control) developed within ARAD are best practices which could be adopted in other regions, thus saving high development costs.

  9. The Austrian radiation monitoring network ARAD - best practice and added value

    NASA Astrophysics Data System (ADS)

    Olefs, M.; Baumgartner, D. J.; Obleitner, F.; Bichler, C.; Foelsche, U.; Pietsch, H.; Rieder, H. E.; Weihs, P.; Geyer, F.; Haiden, T.; Schöner, W.

    2015-10-01

    The Austrian RADiation monitoring network (ARAD) has been established to advance the national climate monitoring and to support satellite retrieval, atmospheric modelling and solar energy techniques development. Measurements cover the downwelling solar and thermal infrared radiation using instruments according to Baseline Surface Radiation Network (BSRN) standards. A unique feature of ARAD is its vertical dimension of five stations, covering an air column between about 200 m a.s.l. (Vienna) and 3100 m a.s.l. (BSRN site Sonnblick). The paper outlines the aims and scopes of ARAD, its measurement and calibration standards, methods, strategies and station locations. ARAD network operation uses innovative data processing for quality assurance and quality control, applying manual and automated control algorithms. A combined uncertainty estimate for the broadband shortwave radiation fluxes at all five ARAD stations indicates that accuracies range from 1.5 to 23 %. If a directional response error of the pyranometers and the temperature response of the instruments and the data acquisition system (DAQ) is corrected, this expanded uncertainty reduces to 1.4 to 5.2 %. Thus, for large signals (global: 1000 W m-2, diffuse: 500 W m-2) BSRN target accuracies are met or closely met for 70 % of valid measurements at the ARAD stations after this correction. For small signals (50 W m-2), the targets are not achieved as a result of uncertainties associated with the DAQ or the instrument sensitivities. Additional accuracy gains can be achieved in future by additional measurements and corrections. However, for the measurement of direct solar radiation improved instrument accuracy is needed. ARAD could serve as a powerful example for establishing state-of-the-art radiation monitoring at the national level with a multiple-purpose approach. Instrumentation, guidelines and tools (such as the data quality control) developed within ARAD are best practices which could be adopted in other regions, thus saving high development costs.

  10. Helicopter flight test demonstration of differential GPS

    NASA Technical Reports Server (NTRS)

    Denaro, R. P.; Beser, J.

    1985-01-01

    An off-line post-mission processing facility is being established by NASA Ames Research Center to analyze differential GPS flight tests. The current and future differential systems are described, comprising an airborne segment in an SH-3 helicopter, a GPS ground reference station, and a tracking system. The post-mission processing system provides for extensive measurement analysis and differential computation. Both differential range residual corrections and navigation corrections are possible. Some preliminary flight tests were conducted in a landing approach scenario and statically. Initial findings indicate the possible need for filter matching between airborne and ground systems (if used in a navigation correction technique), the advisability of correction smoothing before airborne incorporation, and the insensitivity of accuracy to either of the differential techniques or to update rates.

  11. Lessons Learned in over Two Decades of GPS/GNSS Data Center Support

    NASA Astrophysics Data System (ADS)

    Boler, F. M.; Estey, L. H.; Meertens, C. M.; Maggert, D.

    2014-12-01

    The UNAVCO Data Center in Boulder, Colorado, curates, archives, and distributes geodesy data and products, mainly GPS/GNSS data from 3,000 permanent stations and 10,000 campaign sites around the globe. Although now having core support from NSF and NASA, the archive began around 1992 as a grass-roots effort of a few UNAVCO staff and community members to preserve data going back to 1986. Open access to this data is generally desired, but the Data Center in fact operates under an evolving suite of data access policies ranging from open access to nondisclosure for special cases. Key to processing this data is having the correct equipment metadata; reliably obtaining this metadata continues to be a challenge, in spite of modern cyberinfrastructure and tools, mostly due to human errors or lack of consistent operator training. New metadata problems surface when trying to design and publish modern Digital Object Identifiers for data sets where PIs, funding sources, and historical project names now need to be corrected and verified for data sets going back almost three decades. Originally, the data was GPS-only based on three signals on two carrier frequencies. Modern GNSS covers GPS modernization (three more signals and one additional carrier) as well as open signals and carriers of additional systems such as GLONASS, Galileo, BeiDou, and QZSS, requiring ongoing adaptive strategies to assess the quality of modern datasets. Also, new scientific uses of these data benefit from higher data rates than was needed for early tectonic applications. In addition, there has been a migration from episodic campaign sites (hence sparse data) to continuously operating stations (hence dense data) over the last two decades. All of these factors make it difficult to realistically plan even simple data center functions such as on-line storage capacity.

  12. Mean Tide Level Data in the PSMSL Mean Sea Level Dataset

    NASA Astrophysics Data System (ADS)

    Matthews, Andrew; Bradshaw, Elizabeth; Gordon, Kathy; Jevrejeva, Svetlana; Rickards, Lesley; Tamisiea, Mark; Williams, Simon; Woodworth, Philip

    2016-04-01

    The Permanent Service for Mean Sea Level (PSMSL) is the internationally recognised global sea level data bank for long term sea level change information from tide gauges. Established in 1933, the PSMSL continues to be responsible for the collection, publication, analysis and interpretation of sea level data. The PSMSL operates under the auspices of the International Council for Science (ICSU), is a regular member of the ICSU World Data System and is associated with the International Association for the Physical Sciences of the Oceans (IAPSO) and the International Association of Geodesy (IAG). The PSMSL continues to work closely with other members of the sea level community through the Intergovernmental Oceanographic Commission's Global Sea Level Observing System (GLOSS). Currently, the PSMSL data bank holds over 67,000 station-years of monthly and annual mean sea level data from over 2250 tide gauge stations. Data from each site are quality controlled and, wherever possible, reduced to a common datum, whose stability is monitored through a network of geodetic benchmarks. PSMSL also distributes a data bank of measurements taken from in-situ ocean bottom pressure recorders. Most of the records in the main PSMSL dataset indicate mean sea level (MSL), derived from high-frequency tide gauge data, with sampling typically once per hour or higher. However, some of the older data is based on mean tide level (MTL), which is obtained from measurements taken at high and low tide only. While usually very close, MSL and MTL can occasionally differ by many centimetres, particularly in shallow water locations. As a result, care must be taken when using long sea level records that contain periods of MTL data. Previously, periods during which the values indicated MTL rather than MSL were noted in the documentation, and sometimes suggested corrections were supplied. However, these comments were easy to miss, particularly in large scale studies that used multiple stations from across a wide area. Therefore, the PSMSL have decided to begin applying a correction to all mixed MTL/MSL records in its datum-controlled RLR dataset, where a suitable correction is available. These corrections will be clearly flagged, allowing users of PSMSL data to quickly identify these values and ignore these data, or apply a different correction. Here we describe the corrections applied to the PSMSL dataset, how users can find MTL data and the corrections made, and some caveats and warnings that need to be considered.
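
    A minimal synthetic illustration of why MTL and MSL can differ, assuming a semidiurnal tide plus a shallow-water overtide (purely pedagogical; this is not the PSMSL correction procedure):

      import numpy as np

      # 30 days of hourly sea level: a semidiurnal constituent plus an M4-like
      # overtide, which distorts high and low waters asymmetrically.
      t = np.arange(0.0, 30 * 24, 1.0)                     # hours
      eta = 1.0 * np.cos(2 * np.pi * t / 12.42) \
          + 0.15 * np.cos(4 * np.pi * t / 12.42 + 1.0)     # metres

      msl = eta.mean()                                      # mean of hourly values
      highs = [eta[i] for i in range(1, len(eta) - 1)
               if eta[i] > eta[i - 1] and eta[i] > eta[i + 1]]
      lows = [eta[i] for i in range(1, len(eta) - 1)
              if eta[i] < eta[i - 1] and eta[i] < eta[i + 1]]
      mtl = 0.5 * (np.mean(highs) + np.mean(lows))          # mean tide level
      print(f"MSL = {msl:.3f} m, MTL = {mtl:.3f} m, MTL - MSL = {(mtl - msl) * 100:.1f} cm")

    In shallow-water locations the overtides can be large, which is why mixed MTL/MSL records need the flagged corrections described above.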

  13. Regional downscaling of temporal resolution in near-surface wind from statistically downscaled Global Climate Models (GCMs) for use in San Francisco Bay coastal flood modeling

    NASA Astrophysics Data System (ADS)

    O'Neill, A.; Erikson, L. H.; Barnard, P.

    2013-12-01

    While Global Climate Models (GCMs) provide useful projections of near-surface wind vectors into the 21st century, resolution is not sufficient enough for use in regional wave modeling. Statistically downscaled GCM projections from Multivariate Adaptive Constructed Analogues (MACA) provide daily near-surface winds at an appropriate spatial resolution for wave modeling within San Francisco Bay. Using 30 years (1975-2004) of climatological data from four representative stations around San Francisco Bay, a library of example daily wind conditions for four corresponding over-water sub-regions is constructed. Empirical cumulative distribution functions (ECDFs) of station conditions are compared to MACA GFDL hindcasts to create correction factors, which are then applied to 21st century MACA wind projections. For each projection day, a best match example is identified via least squares error among all stations from the library. The best match's daily variation in velocity components (u/v) is used as an analogue of representative wind variation and is applied at 3-hour increments about the corresponding sub-region's projected u/v values. High temporal resolution reconstructions using this methodology on hindcast MACA fields from 1975-2004 accurately recreate extreme wind values within the San Francisco Bay, and because these extremes in wind forcing are of key importance in wave and subsequent coastal flood modeling, this represents a valuable method of generating near-surface wind vectors for use in coastal flood modeling.
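
    A sketch of the ECDF-based correction step described above, expressed as quantile mapping between station observations and MACA hindcast winds (function and variable names are illustrative, not from the study):

      import numpy as np

      def ecdf_corrected(projected, hindcast, observed):
          """Map each projected wind value through the hindcast ECDF and back
          through the inverse ECDF of the station observations (quantile mapping)."""
          hind_sorted = np.sort(hindcast)
          obs_sorted = np.sort(observed)
          # non-exceedance probability of each projected value in the hindcast ECDF
          p = np.searchsorted(hind_sorted, projected, side="right") / len(hind_sorted)
          p = np.clip(p, 0.0, 1.0)
          # corrected value: the station-observed quantile at the same probability
          return np.quantile(obs_sorted, p)

    The best-match analogue day is then chosen by least-squares error against the corrected library, and its sub-daily u/v variation is superimposed at 3-hour increments about the projected values, as described above.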

  14. DSS command software update

    NASA Technical Reports Server (NTRS)

    Stinnett, W. G.

    1980-01-01

    The modifications, additions, and testing results for a version of the Deep Space Station command software, generated for support of the Voyager Saturn encounter, are discussed. The software update requirements included efforts to: (1) recode portions of the software to permit recovery of approximately 2000 words of memory; (2) correct five Voyager Ground Data System liens; (3) provide the capability to automatically turn off the command processor assembly local printer during periods of low activity; and (4) correct anomalies existing in the software.

  15. Biomedical and Human Factors Requirements for a Manned Earth Orbiting Station

    NASA Technical Reports Server (NTRS)

    Helvey, W.; Martell, C.; Peters, J.; Rosenthal, G.; Benjamin, F.; Albright, G.

    1964-01-01

    The primary objective of this study is to determine which biomedical and human factors measurements must be made aboard a space station to assure adequate evaluation of the astronaut's health and performance during prolonged space flights. The study has employed, where possible, a medical and engineering systems analysis to define the pertinent life sciences and space station design parameters and their influence on a measurement program. The major areas requiring evaluation in meeting the study objectives include a definition of the space environment, man's response to the environment, selection of measurement and data management techniques, experimental program, space station design requirements, and a trade-off analysis with final recommendations. The space environment factors that are believed to have a significant effect on man were evaluated. This includes those factors characteristic of the space environment (e.g. weightlessness, radiation) as well as those created within the space station (e.g. toxic contaminants, capsule atmosphere). After establishing the general features of the environment, an appraisal was made of the anticipated response of the astronaut to each of these factors. For thoroughness, the major organ systems and functions of the body were delineated, and a determination was made of their anticipated response to each of the environmental categories. A judgment was then made on the medical significance or importance of each response, which enabled a determination of which physiological and psychological effects should be monitored. Concurrently, an extensive list of measurement techniques and methods of data management was evaluated for applicability to the space station program. The various space station configurations and design parameters were defined in terms of the biomedical and human factors requirements to provide the measurements program. Research designs of experimental programs for various station configurations, mission durations, and crew sizes were prepared, and, finally, a trade-off analysis of the critical variables in the station planning was completed with recommendations to enhance the confidence in the measurement program.

  16. The space station: Human factors and productivity

    NASA Technical Reports Server (NTRS)

    Gillan, D. J.; Burns, M. J.; Nicodemus, C. L.; Smith, R. L.

    1986-01-01

    Human factors researchers and engineers are making inputs into the early stages of the design of the Space Station to improve both the quality of life and work on-orbit. Effective integration of the human factors information related to various Intravehicular Activity (IVA), Extravehicular Activity (EVA), and telerobotics systems during the Space Station design will result in increased productivity, increased flexibility of the Space Station's systems, lower cost of operations, improved reliability, and increased safety for the crew onboard the Space Station. The major features of productivity examined include the cognitive and physical effort involved in work, the accuracy of worker output and the ability to maintain performance at a high level of accuracy, the speed and temporal efficiency with which a worker performs, crewmember satisfaction with the work environment, and the relation between performance and cost.

  17. Changing Objective Structured Clinical Examinations Stations at Lunchtime During All Day Postgraduate Surgery Examinations Improves Examiner Morale and Stress.

    PubMed

    Brennan, Peter A; Scrimgeour, Duncan S; Patel, Sheena; Patel, Roshnee; Griffiths, Gareth; Croke, David T; Smith, Lee; Arnett, Richard

    Human factors are important causes of error, but little is known about their possible effect during objective structured clinical examinations (OSCE). We have previously identified stress and pressure in OSCE examiners in the postgraduate intercollegiate Membership of the Royal College of Surgeons (MRCS) examination. After modifying examination delivery by changing OSCE stations at lunchtime, with no demonstrable effect on candidate outcome, we resurveyed examiners to ascertain whether examiner experience was improved. Examiners (n = 180) from all 4 surgical colleges in the United Kingdom and Ireland were invited to complete the previously validated human factors questionnaire used in 2014. Aggregated scores for each of 4 previously identified factors were compared with the previous data. Unit-weighted z-scores and nonparametric Kruskal-Wallis methods were used to test the hypothesis that there was no difference among the median factor z-scores for each college. Individual Mann-Whitney-Wilcoxon tests (with appropriate Bonferroni corrections) were used to determine any differences between factors and the respective colleges. A total of 141 completed questionnaires were evaluated (78% response rate) and compared with 108 responses (90%) from the original study. Analysis was based on 26 items common to both studies. In 2014, the college with the highest candidate numbers (England) was significantly different in 1 factor (stress and pressure) compared with the Edinburgh (Mann-Whitney-Wilcoxon: W = 1524, p < 0.001) and Glasgow colleges (Mann-Whitney-Wilcoxon: W = 104, p = 0.004). No differences were found among colleges in the same factor in 2016 (Kruskal-Wallis: χ²(3) = 1.73, p = 0.63). Analysis of responses found inconsistency among examiners regarding mistakes or omissions made when candidates were performing well. After making changes to OSCE delivery, factor scores relating to examiner stress and pressure are now improved and consistent across the surgical colleges. Stress and pressure can occur in OSCE examiners, and examination delivery should ideally minimize these issues; improving examiner morale is also likely to benefit candidates. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  18. Automatic identification of IASLC-defined mediastinal lymph node stations on CT scans using multi-atlas organ segmentation

    NASA Astrophysics Data System (ADS)

    Hoffman, Joanne; Liu, Jiamin; Turkbey, Evrim; Kim, Lauren; Summers, Ronald M.

    2015-03-01

    Station-labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. Stations are usually assigned manually in clinical radiology practice by qualitative visual assessment of CT scans, which is time consuming and highly variable. In this paper, we developed a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinum region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, with the segmentation of those anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific color-coded CT image maps. Thus, a lymph node station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater. Overall, 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.

  19. TEM and Gravity Data for Roosevelt Hot Springs, Utah FORGE Site

    DOE Data Explorer

    Hardwick, Christian; Nash, Greg

    2018-02-05

    This submission includes gravity data, in text format and as a GIS point shapefile, and transient electromagnetic (TEM) raw data. Each text file additionally contains location data (UTM Zone 12, NAD83) and elevation (meters) data for that station. The gravity data shapefile was in part downloaded from PACES, University of Texas at El Paso, http://gis.utep.edu/subpages/GMData.html, and in part collected by the Utah Geological Survey (UGS) as part of the DOE GTO-supported Utah FORGE geothermal energy project near Milford, Utah. The PACES data were examined and scrubbed to eliminate any questionable data. A 2.67 g/cm^3 reduction density was used for the Bouguer correction. There is also metadata attached to the GIS shapefile. The attribute table column headers for the gravity data shapefile are explained below.
      name: the individual gravity station name
      HAE: height above ellipsoid [meter]
      NGVD29: vertical datum for geoid [meter]
      obs: observed gravity
      ERRG: gravity measurement error [mGal]
      IZTC: inner zone terrain correction [mGal]
      OZTC: outer zone terrain correction [mGal]
      Gfa: free air gravity
      gSBGA: Bouguer horizontal slab
      sCBGA: Complete Bouguer anomaly
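
    As a reminder of how the quoted reduction density enters the processing (standard infinite-slab form, not specific to this dataset):

      \delta g_\mathrm{B} = 2\pi G \rho h \approx 0.04193\,\rho\,h\ \ \mathrm{mGal}

    with \rho in g/cm^3 and h in metres, so the adopted \rho = 2.67 g/cm^3 gives roughly 0.112 mGal per metre of elevation; the inner- and outer-zone terrain corrections (IZTC, OZTC) then refine this slab approximation.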

  20. Use of Faraday-rotation data from beacon satellites to determine ionospheric corrections for interplanetary spacecraft navigation

    NASA Technical Reports Server (NTRS)

    Royden, H. N.; Green, D. W.; Walson, G. R.

    1981-01-01

    Faraday-rotation data from the linearly polarized 137-MHz beacons of the ATS-1, SIRIO, and Kiku-2 geosynchronous satellites are used to determine the ionospheric corrections to the range and Doppler data for interplanetary spacecraft navigation. JPL operates the Deep Space Network of tracking stations for NASA; these stations monitor Faraday rotation with dual orthogonal, linearly polarized antennas, Teledyne polarization tracking receivers, analog-to-digital converter/scanners, and other support equipment. Computer software examines the Faraday data, resolves the pi ambiguities, constructs a continuous Faraday-rotation profile and converts the profile to columnar zenith total electron content at the ionospheric reference point; a second program computes the line-of-sight ionospheric correction for each pass of the spacecraft over each tracking complex. Line-of-sight ionospheric electron content using mapped Faraday-rotation data is compared with that using dispersive Doppler data from the Voyager spacecraft; a difference of about 0.4 meters, or 5 x 10^16 electrons/sq m, is obtained. The technique of determining the electron content of interplanetary plasma by subtraction of the ionospheric contribution is demonstrated on the plasma torus surrounding the orbit of Io.
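
    The standard relations behind this processing, written as a sketch (constants in SI units; the paper's exact implementation may differ): the one-way Faraday rotation of a linearly polarized signal and the corresponding group delay in range are

      \Omega \approx \frac{2.36\times10^{4}}{f^{2}} \int N_e\,B\cos\theta\,\mathrm{d}l \ \ [\mathrm{rad}],
      \qquad
      \Delta\rho \approx \frac{40.3}{f^{2}}\,\mathrm{TEC}\ \ [\mathrm{m}]

    so a measured rotation profile, combined with a model value of B cos\theta at the ionospheric reference height, yields the columnar TEC, which in turn gives the line-of-sight range correction at the spacecraft downlink frequency.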

  1. Atmosphere Mitigation in Precise Point Positioning Ambiguity Resolution for Earthquake Early Warning in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Geng, J.; Bock, Y.; Reuveni, Y.

    2014-12-01

    Earthquake early warning (EEW) is a time-critical system and typically relies on seismic instruments in the area around the source to detect P waves (or S waves) and rapidly issue alerts. Thanks to the rapid development of real-time Global Navigation Satellite Systems (GNSS), a good number of sensors have been deployed in seismic zones, such as the western U.S. where over 600 GPS stations are collecting 1-Hz high-rate data along the Cascadia subduction zone, San Francisco Bay area, San Andreas fault, etc. GNSS sensors complement the seismic sensors by recording the static offsets while seismic data provide highly-precise higher frequency motions. An optimal combination of GNSS and accelerometer data (seismogeodesy) has advantages compared to GNSS-only or seismic-only methods and provides seismic velocity and displacement waveforms that are precise enough to detect P wave arrivals, in particular in the near source region. Robust real-time GNSS and seismogeodetic analysis is challenging because it requires a period of initialization and continuous phase ambiguity resolution. One of the limiting factors is unmodeled atmospheric effects, both of tropospheric and ionospheric origin. One mitigation approach is to introduce atmospheric corrections into precise point positioning with ambiguity resolution (PPP-AR) of clients/stations within the monitored regions. NOAA generates hourly predictions of zenith troposphere delays at an accuracy of a few centimeters, and 15-minute slant ionospheric delays of a few TECU (Total Electron Content Unit) accuracy from both geodetic and meteorological data collected at hundreds of stations across the U.S. The Scripps Orbit and Permanent Array Center (SOPAC) is experimenting with a regional ionosphere grid using a few hundred stations in southern California, and the International GNSS Service (IGS) routinely estimates a Global Ionosphere Map using over 100 GNSS stations. With these troposphere and ionosphere data as additional observations, we can shorten the initialization period and improve the ambiguity resolution efficiency of PPP-AR. We demonstrate this with data collected by a cluster of Real-Time Earthquake Analysis for Disaster mItigation (READI) network stations in southern California operated by UNAVCO/PBO and SOPAC.

  2. Tossing on a Rotating Space Station

    NASA Astrophysics Data System (ADS)

    Paetkau, Mark

    2004-10-01

    The following analysis was inspired by a question posed by a listener of a radio science show. The listener asked: "If an astronaut in a space station that was rotating to simulate gravity threw a ball up, where would the ball go?" The physicist answered, "The ball would travel straight across the space station (assuming an open structure)." The main point is that to an outside observer the ball would not "fall" back down as on Earth. As I pondered this it occurred to me that while the answer is correct, it is a special case of a more general solution. Below is an analysis of the motions a thrown object can undergo on a rotating space station. The first part of the discussion is aimed at lower-level undergraduates who have a basic understanding of vectors and circular motion, and the motion is described from the point of view of an external reference frame. Further analysis of the motion by an observer on the space station is appropriate for upper-level students.
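
    A short numerical sketch of the more general case (illustrative values only): the ball follows a straight line in the inertial frame, and the path the astronaut sees is simply that line expressed in the rotating frame.

      import numpy as np

      R, w = 100.0, 0.3          # station radius [m] and spin rate [rad/s] (assumed values)
      v_throw = 5.0              # throw speed toward the hub [m/s]

      r0 = np.array([R, 0.0])                                   # thrower standing on the rim
      v0 = np.array([-v_throw, 0.0]) + np.array([0.0, R * w])   # radial throw + rim velocity

      t = np.linspace(0.0, 10.0, 501)
      x = r0[0] + v0[0] * t                      # straight line in the inertial frame
      y = r0[1] + v0[1] * t
      ang = -w * t                               # rotate coordinates into the station frame
      x_rot = x * np.cos(ang) - y * np.sin(ang)
      y_rot = x * np.sin(ang) + y * np.cos(ang)
      # (x_rot, y_rot) is the curved trajectory seen on board; for a throw straight
      # "up" the inertial-frame path crosses the station, as stated in the text.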

  3. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.
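
    For context, the familiar thin-sample four-probe relation that such correction factors enter is shown below (general textbook form; the precise definition and normalization of F in the paper and in the ASTM/JIS standards may differ):

      \rho = \frac{\pi t}{\ln 2}\,\frac{V}{I}\,F,
      \qquad
      R_\mathrm{s} = \frac{\rho}{t} \approx 4.532\,\frac{V}{I}\,F

    where t is the sample thickness and F \to 1 for an infinitely extended thin sheet with equally spaced collinear probes; for a finite disk, F accounts for the sample geometry and probe placement.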

  4. Broad-band Lg Attenuation Tomography in Eastern Eurasia and The Resolution, Uncertainty and Data Predication

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Xu, X.

    2017-12-01

    The broad-band Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q values are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods. Because path 1/Q values are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel time residuals). We thus rigorously analyze the statistics of these errors collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz and reject about 15% of the data as outliers in each frequency band. The resultant variance of Δ/Q decreases with frequency as 1/f². The 1/Q tomography using the screened data is now a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and the model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but the path coverage is rather sparse due to the very strict recording geometry. The TS method absorbs the effects of non-unit site response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of TS to make those data also free of site effects. The site-corrected TS data substantially improve the path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are quantitatively assessed by spread functions (derived from the resolution matrix) and the covariance matrix. The reliably retrieved Q models correlate well with the distinct tectonic blocks characterized by the most recent major deformations and vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.
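
    In its simplest form, the two-station (TS) measurement described above follows from the Lg spectral amplitude model A(f,\Delta) = S(f)\,G(\Delta)\,P(f)\,\exp[-\pi f \Delta/(Q(f)v)], so that for two stations on a common source-station path (a sketch only; the site terms P are absorbed by TS and removed by RTS/RTE, as noted in the abstract):

      \frac{1}{Q(f)} \;=\; \frac{v}{\pi f\,(\Delta_2-\Delta_1)}\,
      \ln\!\left[\frac{A_1(f)\,G(\Delta_2)}{A_2(f)\,G(\Delta_1)}\right]

    where \Delta_1 < \Delta_2 are the epicentral distances, G is the simplified 1D geometrical spreading correction, and v is the Lg group velocity; the uncompensated 3D structural effects mentioned above enter through the breakdown of this 1D G(\Delta).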

  5. 47 CFR 73.35 - Calculation of improvement factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... for an allotment (See § 73.30) in the 1605-1705 kHz band filed by an existing fulltime AM station licensed in the 535-1605 kHz band will be ranked according to the station's calculated improvement factor... calculations excluding the subject station. The cumulative gain in the above service area is the numerator of...

  6. 47 CFR 73.35 - Calculation of improvement factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... for an allotment (See § 73.30) in the 1605-1705 kHz band filed by an existing fulltime AM station licensed in the 535-1605 kHz band will be ranked according to the station's calculated improvement factor... calculations excluding the subject station. The cumulative gain in the above service area is the numerator of...

  7. 47 CFR 73.35 - Calculation of improvement factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... for an allotment (See § 73.30) in the 1605-1705 kHz band filed by an existing fulltime AM station licensed in the 535-1605 kHz band will be ranked according to the station's calculated improvement factor... calculations excluding the subject station. The cumulative gain in the above service area is the numerator of...

  8. 47 CFR 73.35 - Calculation of improvement factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... for an allotment (See § 73.30) in the 1605-1705 kHz band filed by an existing fulltime AM station licensed in the 535-1605 kHz band will be ranked according to the station's calculated improvement factor... calculations excluding the subject station. The cumulative gain in the above service area is the numerator of...

  9. 47 CFR 73.35 - Calculation of improvement factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... for an allotment (See § 73.30) in the 1605-1705 kHz band filed by an existing fulltime AM station licensed in the 535-1605 kHz band will be ranked according to the station's calculated improvement factor... calculations excluding the subject station. The cumulative gain in the above service area is the numerator of...

  10. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, allowing an improvement of geodetic networks at a high sampling rate and a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. The outliers caused by unknown problems in the measurement system can be easily detected and quantified.
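
    A crude sketch of the net-based selection criterion (illustrative only: the published scheme applies a bilinear baseline correction to the acceleration records, with per-station parameters, whereas this toy version removes piecewise-constant offsets directly from velocity and grid-searches a shared pair of timing parameters):

      import numpy as np

      def correct(vel, dt, t1, t2):
          """Remove mean offsets between t1 and t2, and after t2, from a velocity trace."""
          t = np.arange(len(vel)) * dt
          out = vel.copy()
          mid, late = (t >= t1) & (t < t2), t >= t2
          if mid.any():
              out[mid] -= vel[mid].mean()
          if late.any():
              out[late] -= vel[late].mean()
          return out

      def net_based_timing(vel_a, vel_b, dt, t1_grid, t2_grid):
          """Choose (t1, t2) maximizing the correlation between the corrected
          velocity traces of two neighbouring stations."""
          best, best_cc = None, -np.inf
          for t1 in t1_grid:
              for t2 in t2_grid:
                  if t2 <= t1:
                      continue
                  cc = np.corrcoef(correct(vel_a, dt, t1, t2),
                                   correct(vel_b, dt, t1, t2))[0, 1]
                  if cc > best_cc:
                      best, best_cc = (t1, t2), cc
          return best, best_cc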

  11. Vertical and horizontal seismometric observations of tides

    NASA Astrophysics Data System (ADS)

    Lambotte, S.; Rivera, L.; Hinderer, J.

    2006-01-01

    Tidal signals have been largely studied with gravimeters, strainmeters and tiltmeters, but can also be retrieved from digital records of the output of long-period seismometers, such as the STS-1, particularly if they are properly isolated. Horizontal components are often noisier than vertical ones, due to their sensitivity to tilt at long periods. Hence, horizontal components are often disturbed by local effects such as topography, geology and cavity effects, which imply strain-tilt coupling. We use data series (longer than 1 month) from several permanent broadband seismological stations to examine these disturbances. We search for a minimal set of observable signals (tilts, horizontal and vertical displacements, strains, gravity) necessary to reconstruct the seismological record. Such an analysis gives a set of coefficients (per component for each studied station) which are stable over years and can therefore be used systematically to correct data for these disturbances without heavy numerical computation. Special attention is devoted to ocean loading for stations close to oceans (e.g. Matsushiro station in Japan (MAJO)), and to pressure corrections when barometric data are available. Interesting observations are made for the vertical seismometric components; in particular, we found an admittance between pressure and data 10 times larger than for gravimeters at periods longer than 1 day, while this admittance reaches the usual value of -3.5 nm/s²/mbar for periods below 3 h. This observation may be due to instrumental noise, but the exact mechanism is not yet understood.

  12. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated based on continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematic analysis. After verification by comparing single-pass flow stress correction with traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and validated by multistage relative softening calculations. This research provides a broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and friction optimization in finite element analysis.

  13. P-wave Velocity Structure in the Lowermost 600 km of the Mantle beneath Western Pacific Inferred from Travel Times and Amplitudes Observed with NECESSArray

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Kawakatsu, H.; Chen, Y. J.; Ning, J.; Grand, S. P.; Niu, F.; Obayashi, M.; Miyakawa, K.; Idehara, K.; Tonegawa, T.; Iritani, R.; Necessarray Project Team

    2011-12-01

    NECESSArray is a large-scale broadband seismic array deployed in northeastern China. Although its primary aims are to reveal the fate of the subducted Pacific plate and to address several tectonic issues, it is also useful as a large-aperture array for studying the deep Earth. Here, we examine P-wave travel times observed with NECESSArray to determine the P-wave velocity structure in the lower mantle beneath the Western Pacific. Relative travel times with respect to those predicted by PREM are measured on short-period seismograms from 15 earthquakes that occurred in the Tonga, Fiji, and Kermadec regions from September 2009 to April 2010 (so far), using the adaptive stacking method [Rawlinson and Kennett, 2004]. The residuals are defined as fluctuations with respect to the average over the whole array for each event. The station correction is defined as the median value of the residuals at each station. After applying the station corrections and distance corrections for a surface focus, we combine all the residuals and finally obtain a characteristic residual variation as a function of epicentral distance from 80 to 95 degrees. The travel time residuals show an inverted-V pattern with a maximum delay of 0.2 s at 87 degrees relative to the reference level at 80 and 95 degrees. To interpret this pattern simply through Herglotz-Wiechert inversion, we assume that the velocity structure more than 600 km above the core-mantle boundary (CMB) is identical to PREM and find that the difference of the P-wave velocities from those of PREM gradually increases with depth, reaches a maximum velocity reduction of 0.15%, and then abruptly returns to PREM values at 270 km above the CMB. The thickness of the small-velocity-gradient layer at the base of the mantle is reduced to 130 km instead of PREM's value of 150 km. P-wave amplitudes are used as supplementary data. Station corrections for amplitude are inferred from 6 deep Fiji earthquakes in the distance range 75 to 90 degrees, for which the focal mechanisms are corrected using the Global CMT solutions and theoretical amplitude variations due to elastic and anelastic structure, computed with the reflectivity method, are taken into account. The corrected amplitudes that are sensitive to the velocity structure just above the CMB are obtained from 3 earthquakes that occurred in the Kermadec Islands (latitudes from 29.2 S to 31.6 S) in the distance range from 86 to 96 degrees. Although these events are located close to one another, the data from the southernmost event indicate a significantly rapid amplitude decay, those from the northernmost event indicate a moderate amplitude decay, and those from the middle event show a large scatter. This observation suggests that a rapid horizontal change of the D" structure exists at the southwestern edge of the sampled region.
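
    The station-correction step quoted above (removal of the array mean per event, then a per-station median) can be written compactly; this is a sketch consistent with the description in the abstract, not the authors' code:

      import numpy as np

      def station_corrections(residuals):
          """residuals[i, j]: relative travel-time residual of event i at station j
          (NaN where the station did not record the event)."""
          demeaned = residuals - np.nanmean(residuals, axis=1, keepdims=True)  # per-event array mean removed
          return np.nanmedian(demeaned, axis=0)                                # per-station median over events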

  14. Pn-waves Travel-time Anomaly beneath Taiwan from Dense Seismic Array Observations and its Possible Tectonic Implications

    NASA Astrophysics Data System (ADS)

    Lin, Y. Y.; Huang, B. S.; Ma, K. F.; Hsieh, M. C.

    2015-12-01

    We investigated the travel times of Pn waves, which are of great importance for understanding the Moho structure in the Taiwan region. Although several high-quality tomographic studies have been carried out, observations of Pn waves are still the most comprehensive way to elucidate the Moho structure. Mapping the Moho structure of Taiwan has been challenging due to the small spatial dimension of the island and its two subduction systems. To decipher the tectonic structure and improve understanding of earthquake hazard, the island of Taiwan has been instrumented with several dense seismic networks, including 71 short-period stations of the Central Weather Bureau Seismic Network (CWBSN) and 42 broadband stations of the Broadband Array in Taiwan for Seismology (BATS). High-quality seismic records from these stations are used to identify precise Pn-wave arrival times. After station-elevation correction, we measure the difference between the observed and theoretical Pn arrivals from the IASP91 model for each station. To correct for uncertainties in earthquake location and origin time, we estimate the relative Pn anomaly, ΔtPn, between each station and a reference station. The pattern of ΔtPn reflects the depth anomaly of the Moho beneath Taiwan. In general, Pn waves are commonly observed from shallow earthquakes at epicentral distances larger than 120 km. We searched the global catalog since 2005 with the criteria M > 5.5, focal depth < 30 km and epicentral distance > 150 km. Twelve medium-sized earthquakes from northern Luzon are considered for analysis. We choose station TWKB, at the southernmost point of Taiwan, as the reference station because all events are from the south. The results indicate clearly different ΔtPn patterns for different back-azimuths. The ΔtPn pattern of the first group of events, from the south-southeast, indicates that the Pn arrivals are delayed suddenly where the Pn waves pass through the Central Range, suggesting that the Moho deepens rapidly there. However, we cannot recognize the same pattern for the events from due south in the second group. The ΔtPn pattern of the second group shows a clear, gentle gradient from south to north through the island of Taiwan, which may be related to a smoothly dipping Moho. Both ΔtPn patterns reveal large delays in northern Taiwan, which may be related to the subduction structure to the north.

  15. Development of A Tsunami Magnitude Scale Based on DART Buoy Data

    NASA Astrophysics Data System (ADS)

    Leiva, J.; Polet, J.

    2016-12-01

    The quantification of tsunami energy has evolved through time, with a number of magnitude and intensity scales employed in the past century. Most of these scales rely on coastal measurements, which may be affected by complexities due to near-shore bathymetric effects and coastal geometries. Moreover, these datasets are generated by tsunami inundation, and thus cannot serve as a means of assessing potential tsunami impact prior to coastal arrival. With the introduction of a network of ocean buoys provided through the Deep-ocean Assessment and Reporting of Tsunamis (DART) project, a dataset has become available that can be exploited to further our current understanding of tsunamis and the earthquakes that excite them. The DART network consists of 39 stations that have produced estimates of sea-surface height as a function of time since 2003, and are able to detect deep-ocean tsunami waves. Data collected at these buoys over the past decade reveal that at least nine major tsunami events, such as the 2011 Tohoku and 2013 Solomon Islands events, produced substantial wave amplitudes across a large distance range that can be used in a DART-data-based tsunami magnitude scale. We present preliminary results from the development of a tsunami magnitude scale that follows the methods used in the development of the local magnitude scale by Charles Richter. Analogous to the use of seismic ground motion amplitudes in the calculation of local magnitude, maximum ocean height displacements due to the passage of tsunami waves are related to distance from the source in a least-squares exponential regression analysis. The regression produces attenuation curves based on the DART data, a site correction term, attenuation parameters, and an amplification factor. Initially, single-event regressions are used to constrain the attenuation parameters. Additional iterations use the parameters of these event-based fits as a starting point to obtain a stable solution, and include the calculation of station corrections, in order to obtain a final amplification factor for each event, which is used to calculate its tsunami magnitude.
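
    A sketch of a Richter-style joint inversion for event magnitudes, attenuation parameters and station corrections (the attenuation form below is a generic assumption for illustration; the study's exponential regression and amplification factor are not reproduced exactly):

      import numpy as np

      def fit_tsunami_scale(amps, dists, ev_idx, st_idx, n_events, n_stations):
          """Solve log10 A = M_ev - a*log10(D) - b*D + S_st in a least-squares
          sense, with the station terms constrained to sum to zero."""
          n_obs = len(amps)
          y = np.concatenate([np.log10(amps), [0.0]])
          G = np.zeros((n_obs + 1, n_events + 2 + n_stations))
          for k in range(n_obs):
              G[k, ev_idx[k]] = 1.0                      # event magnitude term
              G[k, n_events] = -np.log10(dists[k])       # geometric decay term
              G[k, n_events + 1] = -dists[k]             # exponential-like decay term
              G[k, n_events + 2 + st_idx[k]] = 1.0       # station correction term
          G[n_obs, n_events + 2:] = 1.0                  # constraint: station terms sum to zero
          m, *_ = np.linalg.lstsq(G, y, rcond=None)
          return (m[:n_events],                          # event magnitudes
                  m[n_events], m[n_events + 1],          # attenuation parameters a, b
                  m[n_events + 2:])                      # station corrections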

  16. Chordwise and compressibility corrections to slender-wing theory

    NASA Technical Reports Server (NTRS)

    Lomax, Harvard; Sluder, Loma

    1952-01-01

    Corrections to slender-wing theory are obtained by assuming a spanwise distribution of loading and determining the chordwise variation which satisfies the appropriate integral equation. Such integral equations are set up in terms of the given vertical induced velocity on the center line or, depending on the type of wing plan form, its average value across the span at a given chord station. The chordwise distribution is then obtained by solving these integral equations. Results are shown for flat-plate rectangular and triangular wings.

  17. Sediment Core Descriptions: R/V KANA KEOKI 1972 Cruise, Eastern and Western Pacific Ocean,

    DTIC Science & Technology

    1976-06-01

    of ship tracks and coring stations are shown. Corrected satellite navigation-determined coordinates for each coring operation are indicated, and water depth, length of core, and age of oldest sediment in the cores are given.

  18. Site correction of stochastic simulation in southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan

    2014-05-01

    Peak ground acceleration (PGA) of a disastrous earthquake is of concern in both civil engineering and seismology. At present, ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor that participates in strong-motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic-wave amplification by the local alluvial layers (Anderson et al., 1986). In past studies, the stochastic method has been shown to perform well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, the site correction was conducted with an empirical transfer function applied to the rock-site response from stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA are calculated. We further compare the estimated PGA to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between alluvial and rock sites (Borcherdt, 1970). Because of the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with a stochastic point-source model. Several target events were then chosen for stochastic point-source simulation of the halfspace response, and the empirical transfer function for each station was multiplied by the simulated halfspace response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw = 7.6) and the 2010 Jiashian earthquake (Mw = 6.4). Because a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered. Both the stochastic point-source and the finite-fault models were used to check the result of our correction.
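
    A minimal sketch of the spectral-ratio site correction outlined above might look like the following; it assumes equally sampled, equal-length traces and keeps the phase of the simulated rock-site motion, which is a simplification rather than the authors' exact procedure.

```python
# Sketch only: empirical transfer function (alluvium / rock spectral ratio)
# applied to a stochastically simulated halfspace (rock-site) trace.
import numpy as np

def empirical_transfer_function(obs_soil, sim_rock):
    """Ratio of Fourier amplitude spectra (alluvial site / rock site)."""
    return np.abs(np.fft.rfft(obs_soil)) / np.abs(np.fft.rfft(sim_rock))

def apply_site_correction(sim_rock_new_event, etf):
    """Multiply the simulated halfspace spectrum by the empirical transfer
    function; the phase is kept from the simulation (one simple choice)."""
    spec = np.fft.rfft(sim_rock_new_event)
    return np.fft.irfft(spec * etf, n=len(sim_rock_new_event))

# hypothetical synthetic traces, same length and sampling rate
rng = np.random.default_rng(0)
obs_soil, sim_rock = rng.normal(size=2048), rng.normal(size=2048)
etf = empirical_transfer_function(obs_soil, sim_rock)
corrected = apply_site_correction(rng.normal(size=2048), etf)
```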

  19. Factors Shaping the Evolution of Electronic Documentation Systems. Research Activity No. IM.4.

    ERIC Educational Resources Information Center

    Dede, C. J.; And Others

    The first of 10 sections in this report focuses on factors that will affect the evolution of Space Station Project (SSP) documentation systems. The goal of this project is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge about the space station which…

  20. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the percent deviation between both detectors from 14.8% to 3.4% agreement.
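
    The daisy-chaining and correction steps can be sketched as below; the polynomial form of k(d) and every numerical value are placeholders, not the published Monte Carlo factors used in the study.

```python
# Sketch only: daisy-chained output factor plus a published-style correction
# curve k(field diameter); readings and coefficients are hypothetical.
import numpy as np

def daisy_chain(reading_cone, reading_intermediate,
                reading_ref_intermediate, reading_ref_10x10):
    """Chain the small-field detector through an intermediate field so that
    the result is referenced to the 10.4 cm x 10.4 cm field."""
    return (reading_cone / reading_intermediate) * \
           (reading_ref_intermediate / reading_ref_10x10)

def corrected_output_factor(of_measured, diameter_mm, k_coeffs):
    """Apply k(d) = c0 + c1*d + c2*d**2 (assumed polynomial form)."""
    k = np.polyval(k_coeffs[::-1], diameter_mm)
    return of_measured * k

of_edge = daisy_chain(0.62, 0.85, 0.86, 1.00)            # hypothetical readings
print(corrected_output_factor(of_edge, 5.0, [1.10, -0.015, 4e-4]))
```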

  1. Assessment of airborne environmental bacteria and related factors in 25 underground railway stations in Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Hwang, Sung Ho; Yoon, Chung Sik; Ryu, Kyong Nam; Paik, Samuel Y.; Cho, Jun Ho

    2010-05-01

    This study assessed bacterial concentrations in indoor air at 25 underground railway stations in Seoul, Korea, and investigated various related factors including the presence of platform screen doors (PSD), depth of the station, year of construction, temperature, relative humidity, and number of passengers. A total of 72 aerosol samples were collected from all the stations. Concentrations of total airborne bacteria (TAB) ranged from not detected (ND) to 4997 CFU m-3, with an overall geometric mean (GM) of 191 CFU m-3. Airborne bacteria were detected at 23 stations (92%) and Gram-negative bacteria (GNB) were detected at two stations (8%). TAB concentrations at four stations (16%) exceeded 800 CFU m-3, the Korean indoor bio-aerosol guideline. The results showed that stations without PSD had higher TAB concentrations than those with PSD, though not at statistically significant levels, and that TAB concentrations at deeper stations were significantly higher than those at shallower stations. Based on this study, it is recommended that mitigation measures be applied to improve the indoor air quality (IAQ) of underground railway stations in Seoul, with focused attention on deeper stations.

  2. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to obtain a precisely and correctly edged image in this process, since its matching with the reference is directly related to the accuracy of the ground-station output images. An edge detection method based on neural networks is applied in the ground processing of MI images to obtain sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
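
    For context, the conventional Sobel gradient-magnitude edge map that the neural-network result is benchmarked against can be computed as in the following sketch (synthetic image, plain NumPy; this is not the paper's method).

```python
# Reference-point sketch: Sobel gradient-magnitude edge detection.
import numpy as np

def sobel_edges(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(win * kx)          # horizontal gradient
            gy[i, j] = np.sum(win * ky)          # vertical gradient
    return np.hypot(gx, gy)                      # gradient magnitude

edges = sobel_edges(np.random.default_rng(0).random((64, 64)))
```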

  3. [Atmospheric correction of visible-infrared band FY-3A/MERSI data based on 6S model].

    PubMed

    Wu, Yong-Li; Luan, Qing; Tian, Guo-Zhen

    2011-06-01

    Based on observation data from the meteorological stations in Taiyuan City and its surrounding areas in Shanxi Province, the atmospheric parameters for the 6S model were supplied, and the atmospheric correction of visible-infrared band (250 m resolution) FY-3A/MERSI data was conducted. After atmospheric correction, the dynamic range of the visible-infrared band FY-3A/MERSI data was widened, reflectivity increased, the histogram peak was higher, and the distribution histogram was smoother. At the same time, the threshold value of the NDVI data reflecting vegetation condition increased, and its peak was higher and closer to the real data. Moreover, the color-composite image of the corrected data contained more abundant information, with increased brightness and enhanced contrast, and the information it reflected was closer to reality.
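
    The NDVI referred to above is the usual normalized difference of near-infrared and red reflectance; the following one-liner is a generic reminder, with the reflectance values and band assignment for FY-3A/MERSI assumed for illustration only.

```python
# Generic NDVI from atmospherically corrected surface reflectances.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(np.array([0.35]), np.array([0.08])))  # denser vegetation -> higher NDVI
```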

  4. Generation of Unbiased Ionospheric Corrections in Brazilian Region for GNSS positioning based on SSR concept

    NASA Astrophysics Data System (ADS)

    Monico, J. F. G.; De Oliveira, P. S., Jr.; Morel, L.; Fund, F.; Durand, S.; Durand, F.

    2017-12-01

    Mitigation of ionospheric effects on GNSS (Global Navigation Satellite System) signals is very challenging, especially for GNSS positioning applications based on the SSR (State Space Representation) concept, which requires knowledge of spatially correlated errors at a considerable accuracy level (centimeters). The presence of satellite and receiver hardware biases in GNSS measurements hampers the proper estimation of ionospheric corrections, reducing their physical meaning. This problem can lead to ionospheric corrections biased by several meters, often with negative values, which is physically impossible. In this contribution, we discuss a strategy to obtain SSR ionospheric corrections, based on GNSS measurements from CORS (Continuous Operation Reference Stations) networks, with minimal presence of hardware biases and consequently with physical meaning. Preliminary results are presented on the generation and application of such corrections for simulated users located in the Brazilian region under a high level of ionospheric activity.
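
    The role of hardware biases can be illustrated with the standard geometry-free pseudorange combination (textbook relations, not the authors' SSR algorithm); the pseudoranges and DCB value below are invented.

```python
# Illustrative sketch: slant TEC from the geometry-free combination P2 - P1
# and the offset introduced by an uncorrected differential code bias (DCB).
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1/L2 carrier frequencies, Hz
K = 40.3                                # standard ionospheric constant, m^3 s^-2

def slant_tec(p1, p2, dcb_metres=0.0):
    """STEC (electrons/m^2) from P2 - P1; dcb_metres lumps satellite and
    receiver code biases. Leaving it at zero reproduces the kind of biased
    corrections the abstract warns about."""
    gamma = (1.0 / F2**2) - (1.0 / F1**2)
    return (p2 - p1 - dcb_metres) / (K * gamma)

p1, p2 = 22_000_000.00, 22_000_003.50   # hypothetical pseudoranges (m)
print(slant_tec(p1, p2) / 1e16, "TECU (no DCB correction)")
print(slant_tec(p1, p2, dcb_metres=1.2) / 1e16, "TECU (DCB removed)")
```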

  5. Automated Planning for a Deep Space Communications Station

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Fisher, Forest; Mutz, Darren; Chien, Steve

    1999-01-01

    This paper describes the application of Artificial Intelligence planning techniques to the problem of antenna track plan generation for a NASA Deep Space Communications Station. The described system enables an antenna communications station to automatically respond to a set of tracking goals by correctly configuring the appropriate hardware and software to provide the requested communication services. To perform this task, the Automated Scheduling and Planning Environment (ASPEN) has been applied to automatically produce antenna tracking plans that are tailored to support a set of input goals. In this paper, we describe the antenna automation problem, the ASPEN planning and scheduling system, how ASPEN is used to generate antenna track plans, the results of several technology demonstrations, and future work utilizing dynamic planning technology.

  6. Planning assistance for the NASA 30/20 GHz program. Network control architecture study.

    NASA Technical Reports Server (NTRS)

    Inukai, T.; Bonnelycke, B.; Strickland, S.

    1982-01-01

    Network Control Architecture for a 30/20 GHz flight experiment system operating in the Time Division Multiple Access (TDMA) was studied. Architecture development, identification of processing functions, and performance requirements for the Master Control Station (MCS), diversity trunking stations, and Customer Premises Service (CPS) stations are covered. Preliminary hardware and software processing requirements as well as budgetary cost estimates for the network control system are given. For the trunking system control, areas covered include on board SS-TDMA switch organization, frame structure, acquisition and synchronization, channel assignment, fade detection and adaptive power control, on board oscillator control, and terrestrial network timing. For the CPS control, they include on board processing and adaptive forward error correction control.

  7. Diurnal measurements with prototype CMOS Omega receivers

    NASA Technical Reports Server (NTRS)

    Burhans, R. W.

    1976-01-01

    Diurnal signals from eight Omega channels have been monitored at 10.2 kHz for selected station pairs. All eight Omega stations have been received at least 50 percent of the time over a 24-hour period during the month of October 1976. The data presented confirm the expected performance of the CMOS Omega sensor processor in being able to dig signals out of a noisy environment. Of particular interest are possibilities for the use of antipodal reception phenomena and the need for some way of correcting for multi-modal propagation effects.

  8. Space-based augmentation for global navigation satellite systems.

    PubMed

    Grewal, Mohinder S

    2012-03-01

    This paper describes space-based augmentation for global navigation satellite systems (GNSS). Space-based augmentations increase the accuracy and integrity of the GNSS, thereby enhancing users' safety. The corrections for ephemeris, ionospheric delay, and clocks are calculated from reference station measurements of GNSS data in wide-area master stations and broadcast via geostationary earth orbit (GEO) satellites. This paper discusses the clock models, satellite orbit determination, ionospheric delay estimation, multipath mitigation, and GEO uplink subsystem (GUS) as used in the Wide Area Augmentation System developed by the FAA.

  9. AC power system breadboard

    NASA Technical Reports Server (NTRS)

    Wappes, Loran J.; Sundberg, R.; Mildice, J.; Peterson, D.; Hushing, S.

    1987-01-01

    The object of this program was to design, build, test, and deliver a high-frequency (20-kHz) Power System Breadboard which would electrically approximate a pair of dual redundant power channels of an IOC Space Station. This report describes that program, including the technical background, and discusses the results, showing that the major assumptions about the characteristics of this class of hardware (size, mass, efficiency, control, etc.) were substantially correct. This testbed equipment has been completed and delivered to LeRC, where it is operating as a part of the Space Station Power System Test Facility.

  10. Flight evaluation of an engine static pressure noseprobe in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Foote, C. H.; Jaekel, R. F.

    1981-01-01

    The flight testing of an inlet static pressure probe and instrumented inlet case produced results consistent with sea-level and altitude stand testing. The F-15 flight test verified the basic relationship of total-to-static pressure ratio versus corrected airflow and automatic distortion downmatch with the engine pressure ratio control mode. Additionally, the backup control inlet case statics demonstrated sufficient accuracy for backup control fuel flow scheduling, and the station 6 manifolded production probe was in agreement with the flight test station 6 total pressure probes.

  11. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
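
    A hedged sketch of the slowness-scaled relative-location idea is given below: the scaling factor multiplies the travel-time gradient (horizontal slowness) before a linear least-squares solve for the relative epicentre; the geometry, differential times, and scale value are illustrative only.

```python
# Sketch of a relative epicentre estimate with a slowness scaling factor
# applied to the model travel-time gradient, as the abstract proposes.
import numpy as np

def relative_epicentre(dt, azimuth_deg, slowness_s_per_km, scale=1.0):
    """Solve dt_i = -scale*p_i*(dx*sin(az_i) + dy*cos(az_i)) + dt0
    for (dx, dy, dt0) by linear least squares; dx east, dy north (km)."""
    az = np.radians(azimuth_deg)
    p = scale * np.asarray(slowness_s_per_km)
    G = np.column_stack([-p * np.sin(az), -p * np.cos(az), np.ones_like(p)])
    m, *_ = np.linalg.lstsq(G, np.asarray(dt), rcond=None)
    return m  # dx (km), dy (km), origin-time shift (s)

dt = [0.12, -0.05, 0.02, -0.09]                  # differential arrival times (s)
az = [30.0, 120.0, 210.0, 300.0]                 # source-to-station azimuths (deg)
p = [0.125, 0.125, 0.125, 0.125]                 # ~Pn-like slowness, s/km
print(relative_epicentre(dt, az, p, scale=0.8))  # scale < 1 mimics a reduced gradient
```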

  12. Retrieval of suspended sediment concentrations using Landsat-8 OLI satellite images in the Orinoco River (Venezuela)

    NASA Astrophysics Data System (ADS)

    Yepez, Santiago; Laraque, Alain; Martinez, Jean-Michel; De Sa, Jose; Carrera, Juan Manuel; Castellanos, Bartolo; Gallay, Marjorie; Lopez, Jose L.

    2018-01-01

    In this study, 81 Landsat-8 scenes acquired from 2013 to 2015 were used to estimate the suspended sediment concentration (SSC) in the Orinoco River at its main hydrological station at Ciudad Bolivar, Venezuela. This gauging station monitors an upstream area corresponding to 89% of the total catchment area where the mean discharge is of 33,000 m3·s-1. SSC spatial and temporal variabilities were analyzed in relation to the hydrological cycle and to local geomorphological characteristics of the river mainstream. Three types of atmospheric correction models were evaluated to correct the Landsat-8 images: DOS, FLAASH, and L8SR. Surface reflectance was compared with monthly water sampling to calibrate a SSC retrieval model using a bootstrapping resampling. A regression model based on surface reflectance at the Near-Infrared wavelengths showed the best performance: R2 = 0.92 (N = 27) for the whole range of SSC (18 to 203 mg·l-1) measured at this station during the studied period. The method offers a simple new approach to estimate the SSC along the lower Orinoco River and demonstrates the feasibility and reliability of remote sensing images to map the spatiotemporal variability in sediment transport over large rivers.
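
    The calibration step can be sketched as a linear fit of SSC against NIR surface reflectance with bootstrap resampling, as below; the linear form and all data values are assumptions for illustration, not the paper's retrieval model.

```python
# Sketch only: NIR-reflectance vs SSC calibration with a bootstrap resample.
import numpy as np

rng = np.random.default_rng(1)
rho_nir = np.array([0.02, 0.035, 0.05, 0.07, 0.09, 0.11])   # surface reflectance
ssc     = np.array([18.0, 45.0, 75.0, 110.0, 160.0, 203.0]) # in-situ SSC, mg/l

def fit(x, y):
    # simple linear model SSC = b0 + b1 * rho_NIR (the paper's form may differ)
    b1, b0 = np.polyfit(x, y, 1)
    return b0, b1

slopes = []
for _ in range(1000):                        # bootstrap resampling of the fit
    idx = rng.integers(0, len(ssc), len(ssc))
    if np.unique(rho_nir[idx]).size < 2:     # skip degenerate resamples
        continue
    slopes.append(fit(rho_nir[idx], ssc[idx])[1])

b0, b1 = fit(rho_nir, ssc)
print(f"SSC ~ {b0:.1f} + {b1:.1f}*rho_NIR, slope 95% CI "
      f"{np.percentile(slopes, [2.5, 97.5])}")
```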

  13. Robust Real-Time Wide-Area Differential GPS Navigation

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P. (Inventor); Bertiger, William I. (Inventor); Lichten, Stephen M. (Inventor); Mannucci, Anthony J. (Inventor); Muellerschoen, Ronald J. (Inventor); Wu, Sien-Chong (Inventor)

    1998-01-01

    The present invention provides a method and a device for providing superior differential GPS positioning data. The system includes a group of GPS receiving ground stations covering a wide area of the Earth's surface. Unlike other differential GPS systems, wherein the known position of each ground station is used to geometrically compute an ephemeris for each GPS satellite, the present system utilizes real-time computation of satellite orbits based on GPS data received from fixed ground stations through a Kalman-type filter/smoother whose output adjusts a real-time orbital model. The orbital model produces and outputs orbital corrections allowing satellite ephemerides to be known with considerably greater accuracy than from the GPS system broadcasts. The modeled orbits are propagated ahead in time and differenced with actual pseudorange data to compute clock offsets at rapid intervals to compensate for SA clock dither. The orbital and clock calculations are based on dual-frequency GPS data, which allow computation of the estimated signal delay at each ionospheric point. These delay data are used in real time to construct and update an ionospheric shell map of total electron content, which is output as part of the orbital correction data, thereby allowing single-frequency users to estimate ionospheric delay with an accuracy approaching that of dual-frequency users.

  14. Study of the GPS inter-frequency calibration of timing receivers

    NASA Astrophysics Data System (ADS)

    Defraigne, P.; Huang, W.; Bertrand, B.; Rovera, D.

    2018-02-01

    When calibrating Global Positioning System (GPS) stations dedicated to timing, the hardware delays of P1 and P2, the P(Y)-codes on frequencies L1 and L2, are determined separately. In the international atomic time (TAI) network the GPS stations of the time laboratories are calibrated relatively against reference stations. This paper aims at determining the consistency between the P1 and P2 hardware delays (called dP1 and dP2) of these reference stations, and to look at the stability of the inter-signal hardware delays dP1-dP2 of all the stations in the network. The method consists of determining the dP1-dP2 directly from the GPS pseudorange measurements corrected for the frequency-dependent antenna phase center and the frequency-dependent ionosphere corrections, and then to compare these computed dP1-dP2 to the calibrated values. Our results show that the differences between the computed and calibrated dP1-dP2 are well inside the expected combined uncertainty of the two quantities. Furthermore, the consistency between the calibrated time transfer solution obtained from either single-frequency P1 or dual-frequency P3 for reference laboratories is shown to be about 1.0 ns, well inside the 2.1 ns uB uncertainty of a time transfer link based on GPS P3 or Precise Point Positioning. This demonstrates the good consistency between the P1 and P2 hardware delays of the reference stations used for calibration in the TAI network. The long-term stability of the inter-signal hardware delays is also analysed from the computed dP1-dP2. It is shown that only variations larger than 2 ns can be detected for a particular station, while variations of 200 ps can be detected when differentiating the results between two stations. Finally, we also show that in the differential calibration process as used in the TAI network, using the same antenna phase center or using different positions for L1 and L2 signals gives maximum differences of 200 ps on the hardware delays of the separate codes P1 and P2; however, the final impact on the P3 combination is less than 10 ps.

  15. Deuterium excess in precipitation of Alpine regions - moisture recycling.

    PubMed

    Froehlich, Klaus; Kralik, Martin; Papesch, Wolfgang; Rank, Dieter; Scheifinger, Helfried; Stichler, Willibald

    2008-03-01

    The paper evaluates long-term seasonal variations of the deuterium excess (d-excess = δ2H − 8·δ18O) in precipitation at stations located north and south of the main ridge of the Austrian Alps. It demonstrates that sub-cloud evaporation during precipitation and continental moisture recycling are local and regional processes, respectively, controlling these variations. In general, sub-cloud evaporation decreases and moisture recycling increases the d-excess. Therefore, evaluation of d-excess variations in terms of moisture recycling, the main aim of this paper, includes determination of the effect of sub-cloud evaporation. Since sub-cloud evaporation is governed by the saturation deficit and the distance between cloud base and the ground, its effect on the d-excess is expected to be lower at mountain than at lowland/valley stations. To quantify this difference, we examined long-term seasonal d-excess variations measured at three selected mountain and adjoining valley stations. The altitude differences between mountain and valley stations ranged from 470 to 1665 m. Adapting the 'falling water drop' model by Stewart [J. Geophys. Res., 80(9), 1133-1146 (1975)], we estimated that the long-term average of sub-cloud evaporation at the selected mountain stations (altitudes between about 1600 and 2250 m.a.s.l.) is less than 1 % of the precipitation and causes a decrease of the d-excess of less than 2 per thousand. For the selected valley stations, the corresponding evaporated fraction is at maximum 7 % and the difference in d-excess ranges up to 8 per thousand. The estimated d-excess differences have been used to correct the measured long-term d-excess values at the selected stations. Finally, the fraction of water vapour recycled by evaporation of surface water, including soil water from the ground, has been estimated. For the two mountain stations Patscherkofel and Feuerkogel, which are located north of the main ridge of the Alps, the maximum seasonal change of the corrected d-excess (July/August) has been estimated to be between 5 and 6 per thousand, and the corresponding recycled fraction to be between 2.5 and 3 % of the local precipitation. It has been found that the estimated recycled fractions are in good agreement with values derived from other approaches.
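
    The d-excess definition used above, and the addition of an assumed sub-cloud-evaporation offset, reduce to a few lines; the 2 per mil offset mirrors the upper bound quoted for mountain stations, and the isotope values are hypothetical.

```python
# Tiny sketch of the deuterium-excess definition and an assumed correction.
def d_excess(delta_2h, delta_18o):
    return delta_2h - 8.0 * delta_18o            # per mil

measured_d = d_excess(-80.0, -11.5)              # = 12.0 per mil (hypothetical)
corrected_d = measured_d + 2.0                   # add back assumed sub-cloud evaporation effect
print(measured_d, corrected_d)
```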

  16. Spatio-temporal distribution of energy radiation from low frequency tremor

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Obara, K.

    2007-12-01

    Recent fine-scale hypocenter locations of low frequency tremors (LFTs) estimated by a cross-correlation technique (Shelly et al. 2006; Maeda et al. 2006) and the new finding of very low frequency earthquakes (Ito et al. 2007) suggest that these slow events occur at the plate boundary in association with slow slip events (Obara and Hirose, 2006). However, the number of tremors detected by the above techniques is limited, since continuous tremor waveforms are very complicated. Although an envelope correlation method (ECM) (Obara, 2002) enables us to locate LFT epicenters without arrival-time picks, it fails to locate LFTs precisely, especially during the most active stage of tremor activity, because of the low correlation of the envelope amplitudes. To reveal the total energy release of LFTs, we propose here a new method for estimating the location of LFTs, together with the radiated energy from the tremor source, using envelope amplitudes. The tremor amplitude observed at NIED Hi-net stations in western Shikoku decays simply in proportion to the reciprocal of the source-receiver distance after correction for the site-amplification factor, even though the phases of the tremor are very complicated. We therefore model the observed mean-square envelope amplitude as time-dependent energy radiation with a geometrical spreading factor. The model does not require an origin time for the tremor, since we assume that the source radiates energy continuously. Travel-time differences between stations estimated by the ECM technique are also incorporated into our locating algorithm together with the amplitude information. Three-component, 1-hour Hi-net velocity continuous waveforms with a pass-band of 2-10 Hz are used for the inversion, after correction for the site amplification factors at each station estimated by the coda normalization method (Takahashi et al. 2005) applied to normal earthquakes in the region. The source location and energy are estimated by applying a least-squares inversion to each 1-min window iteratively. As a first application of our method, we estimated the spatio-temporal distribution of energy radiation for the May 2006 episodic tremor and slip event that occurred in the western Shikoku, Japan, region. Tremor locations and their radiated energy are estimated every minute. We counted the number of located LFTs and summed their total energy on each grid node with 0.05-degree spacing for each day to characterize the spatio-temporal distribution of tremor energy release. The resultant spatial distribution of radiated energy is concentrated in a specific region. Additionally, we see daily changes in the released energy, both in location and in amount, which correspond to the migration of tremor activity. The spatio-temporal distribution of tremor energy radiation is in good agreement with the spatio-temporal slip distribution of the slow slip event estimated from Hi-net tiltmeter records (Hirose et al. 2007). This suggests that small continuous tremors occur in association with the rupture process of slow slip.
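
    A much-simplified sketch of locating a continuous source from site-corrected envelope amplitudes that decay as 1/r is shown below (geometrical spreading only, grid search in map view); the station geometry and amplitudes are invented, and the full method additionally uses travel-time differences.

```python
# Conceptual sketch: grid-search location from 1/r-decaying envelope amplitudes.
import numpy as np

stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])  # km
amps = np.array([2.0e-7, 1.1e-7, 0.9e-7, 0.6e-7])   # site-corrected RMS envelopes

best = None
for x in np.arange(0.0, 50.0, 1.0):
    for y in np.arange(0.0, 50.0, 1.0):
        r = np.hypot(stations[:, 0] - x, stations[:, 1] - y) + 1e-3
        # For A_i = S / r_i, the least-squares source strength is
        # S = sum(A_i/r_i) / sum(1/r_i^2).
        s = np.sum(amps / r) / np.sum(1.0 / r**2)
        misfit = np.sum((amps - s / r) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, x, y, s)
print("source ~", best[1:3], "relative source strength", best[3])
```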

  17. Using inferential sensors for quality control of Everglades Depth Estimation Network water-level data

    USGS Publications Warehouse

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps is dependent on high quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to immediately identify with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time consuming and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data.The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors, or virtual sensors, were developed for each station that make accurate estimates of the process measured by the hard sensor (water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). In the event that a station does malfunction, ADAM provides an accurate estimate for the period of missing data. The ADAM software also is used in the quality assurance and quality control of the data. The virtual signals are compared to the real-time data, and if the difference between the two signals exceeds a certain tolerance, corrective action to the data and (or) the gaging station can be taken. The ADAM software is automated so that, each morning, the real-time EDEN data are compared to the inferential sensor signals and digital reports highlighting potential erroneous real-time data are generated for appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
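
    The inferential-sensor idea can be illustrated with a least-squares regression on proximal stations plus a tolerance check, as in the sketch below; the ADAM software's actual models are not reproduced here, so the coefficients, noise levels, and the 0.05 m tolerance are assumptions.

```python
# Simplified illustration of a virtual (inferential) water-level sensor.
import numpy as np

def fit_virtual_sensor(neighbors_hist, target_hist):
    """neighbors_hist: (n_samples, n_stations) water levels; returns weights
    of a linear model with intercept."""
    X = np.column_stack([neighbors_hist, np.ones(len(target_hist))])
    w, *_ = np.linalg.lstsq(X, target_hist, rcond=None)
    return w

def check(neighbors_now, observed_now, w, tol=0.05):
    """Flag the real-time reading if it departs from the virtual signal."""
    est = float(np.dot(np.append(neighbors_now, 1.0), w))
    return est, abs(observed_now - est) > tol

rng = np.random.default_rng(2)
nb = rng.normal(1.0, 0.1, size=(200, 3))                       # proximal stations (m)
tgt = 0.5 * nb[:, 0] + 0.3 * nb[:, 1] + 0.2 * nb[:, 2] + rng.normal(0, 0.005, 200)
w = fit_virtual_sensor(nb, tgt)
print(check(np.array([1.02, 0.98, 1.01]), observed_now=1.25, w=w))  # likely flagged
```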

  18. The ecology of microorganisms in a small closed system: Potential benefits and problems for space station

    NASA Technical Reports Server (NTRS)

    Rodgers, E. B.

    1986-01-01

    The inevitable presence on the space station of microorganisms associated with crew members and their environment will have the potential for both benefits and a range of problems including illness and corrosion of materials. This report reviews the literature presenting information about microorganisms pertinent to Environmental Control and Life Support (ECLS) on the space station. The perspective of the report is ecological, viewing the space station as an ecosystem in which biological relationships are affected by factors such as zero gravity and by closure of a small volume of space. Potential sites and activities of microorganisms on the space station and their environmental limits, microbial standards for the space station, monitoring and control methods, effects of space factors on microorganisms, and extraterrestrial contamination are discussed.

  19. Human factors issues in telerobotic systems for Space Station Freedom servicing

    NASA Technical Reports Server (NTRS)

    Malone, Thomas B.; Permenter, Kathryn E.

    1990-01-01

    Requirements for Space Station Freedom servicing are described and the state-of-the-art for telerobotic system on-orbit servicing of spacecraft is defined. The projected requirements for the Space Station Flight Telerobotic Servicer (FTS) are identified. Finally, the human factors issues in telerobotic servicing are discussed. The human factors issues are basically three: the definition of the role of the human versus automation in system control; the identification of operator-device interface design requirements; and the requirements for development of an operator-machine interface simulation capability.

  20. Fade durations in satellite-path mobile radio propagation

    NASA Technical Reports Server (NTRS)

    Schmier, Robert G.; Bostian, Charles W.

    1986-01-01

    Fades on satellite to land mobile radio links are caused by several factors, the most important of which are multipath propagation and vegetative shadowing. Designers of vehicular satellite communications systems require information about the statistics of fade durations in order to overcome or compensate for the fades. Except for a few limiting cases, only the mean fade duration can be determined analytically, and all other statistics must be obtained experimentally or via simulation. This report describes and presents results from a computer program developed at Virginia Tech to simulate satellite path propagation of a mobile station in a rural area. It generates rapidly-fading and slowly-fading signals by separate processes that yield correct cumulative signal distributions and then combines these to simulate the overall signal. This is then analyzed to yield the statistics of fade duration.

  1. On the impact of reducing global geophysical fluid model deformations in SLR data processing

    NASA Astrophysics Data System (ADS)

    Weigelt, Matthias; Thaller, Daniela

    2016-04-01

    Mass redistributions in the atmosphere, oceans and the continental hydrology cause elastic loading deformations of the Earth's crust and thus systematically influence Earth-bound observation systems such as VLBI, GNSS or SLR. Causing non-linear station variations, these loading deformations have a direct impact on the estimated station coordinates and an indirect impact on other parameters of global space-geodetic solutions, e.g. Earth orientation parameters, geocenter coordinates, satellite orbits or troposphere parameters. Generally, the impact can be mitigated by co-parameterisation or by reducing deformations derived from global geophysical fluid models. Here, we focus on the latter approach. A number of data sets modelling the (non-tidal) loading deformations are generated by various groups. They show regionally and locally significant differences and consequently the impact on the space-geodetic solutions heavily depends on the available network geometry. We present and discuss the differences between these models and choose SLR as the space-geodetic technique of interest in order to discuss the impact of atmospheric, oceanic and hydrological loading on the parameters of space-geodetic solutions when correcting for the global geophysical fluid models at the observation level. Special emphasis is given to a consistent usage of models for geometric and gravimetric corrections during the data processing. We quantify the impact of the different deformation models on the station coordinates and discuss the improvement in the Earth orientation parameters and the geocenter motion. We also show that a significant reduction in the RMS of the station coordinates can be achieved depending on the model of choice.

  2. Calculating the diffusive flux of persistent organic pollutants between sediments and the water column on the Palos Verdes shelf superfund site using polymeric passive samplers.

    PubMed

    Fernandez, Loretta A; Lao, Wenjian; Maruya, Keith A; Burgess, Robert M

    2014-04-01

    Passive samplers were deployed to the seafloor at a marine Superfund site on the Palos Verdes Shelf, California, USA, and used to determine water concentrations of persistent organic pollutants (POPs) in the surface sediments and near-bottom water. A model of Fickian diffusion across a thin water boundary layer at the sediment-water interface was used to calculate flux of contaminants due to molecular diffusion. Concentrations at four stations were used to calculate the flux of DDE, DDD, DDMU, and selected PCB congeners from sediments to the water column. Three passive sampling materials were compared: PE strips, POM strips, and SPME fibers. Performance reference compounds (PRCs) were used with PE and POM to correct for incomplete equilibration, and the resulting POP concentrations, determined by each material, agreed within 1 order of magnitude. SPME fibers, without PRC corrections, produced values that were generally much lower (1 to 2 orders of magnitude) than those measured using PE and POM, indicating that SPME may not have been fully equilibrated with waters being sampled. In addition, diffusive fluxes measured using PE strips at stations outside of a pilot remedial sand cap area were similar to those measured at a station inside the capped area: 240 to 260 ng cm(-2) y(-1) for p,p'-DDE. The largest diffusive fluxes of POPs were calculated at station 8C, the site where the highest sediment concentrations have been measured in the past, 1100 ng cm(-2) y(-1) for p,p'-DDE.
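
    The boundary-layer diffusion model named above reduces to Fick's first law across a thin layer; in the sketch below the diffusivity, layer thickness, and concentrations are assumed values chosen only to yield a flux of the same order as the p,p'-DDE fluxes quoted in the abstract.

```python
# Back-of-the-envelope sketch: J = D * (C_porewater - C_overlying) / delta.
D = 5.0e-6            # cm^2/s, free-water diffusivity of a DDE-like compound (assumed)
delta = 0.05          # cm, water boundary-layer thickness (assumed)
c_pw = 8.0e-2         # ng/cm^3 dissolved concentration in surface-sediment porewater (assumed)
c_w = 2.0e-3          # ng/cm^3 dissolved concentration in near-bottom water (assumed)

flux = D * (c_pw - c_w) / delta            # ng cm^-2 s^-1
print(flux * 3.15e7, "ng cm^-2 yr^-1")     # order-of-magnitude comparison with the text
```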

  3. Coastal Vertical Land motion in the German Bight

    NASA Astrophysics Data System (ADS)

    Becker, Matthias; Fenoglio, Luciana; Reckeweg, Florian

    2017-04-01

    In the framework of the ESA Sea Level Climate Change Initiative (CCI) we analyse a set of GNSS equipped tide gauges at the German Bight. Main goals are the determination of tropospheric zenith delay corrections for altimetric observations, precise coordinates in ITRF2008 and vertical land motion (VLM) rates of the tide gauge stations. These are to be used for georeferencing the tide gauges and the correction of tide gauge observations for VLM. The set of stations includes 38 GNSS stations. 19 stations are in the German Bight, where 15 of them belong to the Bundesanstalt für Gewässerkunde, 3 to EUREF and 1 to GREF. These stations are collocated with tide gauges (TGs). The other 19 GNSS stations in the network belong to EUREF, IGS and GREF. We analyse data in the time span from 2008 till the end of 2016 with the Bernese PPP processing approach. Data are partly rather noisy and disturbed by offsets and data gaps at the coastal TG sites. Special effort is therefore put into a proper estimation of the VLM. We use FODITS (Ostini2012), HECTOR (Bos et al, 2013), CATS (Williams, 2003) and the MIDAS approach of Blewitt (2016) to robustly derive rates and realistic error estimates. The results are compared to those published by the European Permanent Network (EPN), ITRF and the Système d'Observation du Niveau des Eaux Littorales (SONEL) for common stations. Vertical motion is small in general, at the -1 to -2 mm/yr level for most coastal stations. A comparison of the standard deviations of the velocity differences to EPN with the mean values of the estimated velocity standard deviations for our solution shows a very good agreement of the estimated velocities and their standard deviations with the reference solution from EPN. In the comparison with results by SONEL the standard deviation of the differences is slightly higher. The discrepancies may arise from differences in the time span analyzed and gaps, offsets and data preprocessing. The combined estimation of functional and stochastic parameters is rather sensitive to the characteristics of the time series and thus the estimated velocity also depends on the applied stochastic model and on the selected parameters. The GPS vertical land motion rates are finally compared to the difference between sea level rates measured by co-located altimetry and by tide gauge station data, which gives another estimation of VLM.

  4. 78 FR 65171 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... airplane undergoing a passenger- to-freighter conversion. This AD requires doing a general visual... service information identified in this AD, contact Boeing Commercial Airplanes, Attention: Data & Services... proposed to require doing a general visual inspection of the station 1920 splice clip for correct fastener...

  5. WILBER and PyWEED: Event-based Seismic Data Request Tools

    NASA Astrophysics Data System (ADS)

    Falco, N.; Clark, A.; Trabant, C. M.

    2017-12-01

    WILBER and PyWEED are two user-friendly tools for requesting event-oriented seismic data. Both tools provide interactive maps and other controls for browsing and filtering event and station catalogs, and downloading data for selected event/station combinations, where the data window for each event/station pair may be defined relative to the arrival time of seismic waves from the event to that particular station. Both tools allow data to be previewed visually, and can download data in standard miniSEED, SAC, and other formats, complete with relevant metadata for performing instrument correction. WILBER is a web application requiring only a modern web browser. Once the user has selected an event, WILBER identifies all data available for that time period, and allows the user to select stations based on criteria such as the station's distance and orientation relative to the event. When the user has finalized their request, the data is collected and packaged on the IRIS server, and when it is ready the user is sent a link to download. PyWEED is a downloadable, cross-platform (Macintosh / Windows / Linux) application written in Python. PyWEED allows a user to select multiple events and stations, and will download data for each event/station combination selected. PyWEED is built around the ObsPy seismic toolkit, and allows direct interaction and control of the application through a Python interactive console.
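
    Since PyWEED is built around ObsPy, the same event-relative request workflow can be sketched with generic ObsPy calls, as below; this is not the tools' internal code, and the network/station/channel choices are arbitrary examples.

```python
# Generic ObsPy sketch of the event-relative request workflow that WILBER and
# PyWEED automate.
from obspy.clients.fdsn import Client
from obspy.taup import TauPyModel
from obspy.geodetics import locations2degrees

client = Client("IRIS")
model = TauPyModel(model="iasp91")

cat = client.get_events(minmagnitude=7.0, limit=1, orderby="time")
ev = cat[0]
origin = ev.preferred_origin() or ev.origins[0]

inv = client.get_stations(network="IU", station="ANMO", level="response")
sta = inv[0][0]

dist_deg = locations2degrees(origin.latitude, origin.longitude,
                             sta.latitude, sta.longitude)
arrival = model.get_travel_times(source_depth_in_km=origin.depth / 1000.0,
                                 distance_in_degree=dist_deg,
                                 phase_list=["P"])[0]
p_time = origin.time + arrival.time

# data window defined relative to the predicted P arrival, as in both tools
st = client.get_waveforms("IU", "ANMO", "00", "BHZ", p_time - 60, p_time + 600)
st.remove_response(inventory=inv, output="VEL")   # instrument correction
st.write("example.mseed", format="MSEED")
```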

  6. Adaptive optics correction into single mode fiber for a low Earth orbiting space to ground optical communication link using the OPALS downlink.

    PubMed

    Wright, Malcolm W; Morris, Jeffery F; Kovalik, Joseph M; Andrews, Kenneth S; Abrahamson, Matthew J; Biswas, Abhijit

    2015-12-28

    An adaptive optics (AO) testbed was integrated to the Optical PAyload for Lasercomm Science (OPALS) ground station telescope at the Optical Communications Telescope Laboratory (OCTL) as part of the free space laser communications experiment with the flight system on board the International Space Station (ISS). Atmospheric turbulence induced aberrations on the optical downlink were adaptively corrected during an overflight of the ISS so that the transmitted laser signal could be efficiently coupled into a single mode fiber continuously. A stable output Strehl ratio of around 0.6 was demonstrated along with the recovery of a 50 Mbps encoded high definition (HD) video transmission from the ISS at the output of the single mode fiber. This proof of concept demonstration validates multi-Gbps optical downlinks from fast slewing low-Earth orbiting (LEO) spacecraft to ground assets in a manner that potentially allows seamless space to ground connectivity for future high data-rates network.

  7. Hyperspectral radiometer for automated measurement of global and diffuse sky irradiance

    NASA Astrophysics Data System (ADS)

    Kuusk, Joel; Kuusk, Andres

    2018-01-01

    An automated hyperspectral radiometer for the measurement of global and diffuse sky irradiance, SkySpec, has been designed for providing the SMEAR-Estonia research station with spectrally-resolved solar radiation data. The spectroradiometer has been carefully studied in the optical radiometry laboratory of Tartu Observatory, Estonia. Recorded signals are corrected for spectral stray light as well as for changes in dark signal and spectroradiometer spectral responsivity due to temperature effects. Comparisons with measurements of shortwave radiation fluxes made at the Baseline Surface Radiation Network (BSRN) station at Tõravere, Estonia, and with fluxes simulated using the atmospheric radiative transfer model 6S and Aerosol Robotic Network (AERONET) data showed that the spectroradiometer is a reliable instrument that provides accurate estimates of integrated fluxes and of their spectral distribution. The recorded spectra can be used to estimate the amount of atmospheric constituents such as aerosol and column water vapor, which are needed for the atmospheric correction of spectral satellite images.

  8. Optical correction and quality of vision of the French soldiers stationed in the Republic of Djibouti in 2009.

    PubMed

    Vignal, Rodolphe; Ollivier, Lénaïck

    2011-03-01

    To ensure vision readiness on the battlefield, the French military has been providing its soldiers with eyewear since World War I. A military refractive surgery program was initiated in 2008. A prospective questionnaire-based investigation on optical correction and quality of vision among active duty members with visual deficiencies stationed in Djibouti, Africa, was conducted in 2009. It revealed that 59.3% of the soldiers were wearing spectacles, 21.2% were wearing contact lenses--despite official recommendations--and 8.5% had undergone refractive surgery. Satisfaction rates were high with refractive surgery and contact lenses; 33.6% of eyeglass wearers were planning to have surgery. Eye dryness and night vision disturbances were the most reported symptoms following surgery. Military optical devices were under-prescribed before deployment. This suggests that additional and more effective studies on the use of military optical devices should be performed and policy supporting refractive surgery in military populations should be strengthened.

  9. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) (uncorrected average of readings) − 3 dB(A) (distance correction factor) + 2 dB(A) (ground surface correction factor) = 87 dB(A) (corrected reading) ...

  10. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) (uncorrected average of readings) − 3 dB(A) (distance correction factor) + 2 dB(A) (ground surface correction factor) = 87 dB(A) (corrected reading) ...

  11. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) (uncorrected average of readings) − 3 dB(A) (distance correction factor) + 2 dB(A) (ground surface correction factor) = 87 dB(A) (corrected reading) ...
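
    The corrected reading in the example above is plain arithmetic; expressed in code (values taken directly from the regulation text):

```python
# Worked example from 49 CFR 325.79: apply the correction factors to the
# uncorrected average of readings.
uncorrected = 88                  # dB(A), average of readings
distance_correction = -3          # dB(A)
ground_surface_correction = +2    # dB(A)
print(uncorrected + distance_correction + ground_surface_correction)  # 87 dB(A)
```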

  12. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by a factor of 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
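
    The correction-factor definition the study works with, as usually written in the small-field formalism, can be spelled out as below; the dose and reading values are placeholders, not results from the paper.

```python
# Sketch of the standard small-field correction-factor definition.
def k_qclin_qmsr(dw_clin, dw_msr, m_clin, m_msr):
    """k = (Dw_clin/Dw_msr) / (M_clin/M_msr): Monte Carlo dose-to-water ratio
    divided by the detector reading ratio."""
    return (dw_clin / dw_msr) / (m_clin / m_msr)

# hypothetical MC doses and detector readings; k < 1 indicates over-response
print(k_qclin_qmsr(dw_clin=0.62, dw_msr=1.00, m_clin=0.655, m_msr=1.00))
```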

  13. Role of post-field raw data processing: a multi-site and full factorial uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Sabbatini, Simone; Fratini, Gerardo; Arriga, Nicola; Papale, Dario

    2013-04-01

    Uncertainties in Eddy Covariance flux measurements are a fundamental issue that is not yet completely solved. The complexity of the method, which involves many non-standardized processing steps, is one of the sources of such uncertainty. The goal of our work is to quantify the uncertainty deriving from post-field raw data processing, which is needed to calculate fluxes from the collected turbulence measurements. The methodology we propose is a full-factorial design, with a number of selected processing steps used as factors. We applied this approach to 15 European flux stations representative of different ecosystems (forests, croplands and grasslands), climates (Mediterranean, Nordic, arid and humid) and instrumental setups (e.g. open- vs. closed-path systems). We then processed one year of raw data from each of the selected stations so as to cover all possible combinations of the available options (levels) for all the critical processing steps, i.e.: angle-of-attack correction; coordinate rotation; trend removal; time-lag compensation; low- and high-frequency spectral correction; correction for air density fluctuations; and length of the flux averaging interval. The software we used is EddyPro™. Finally, we calculated the cumulative NEE (the response) for each processing combination and performed an analysis of variance of the factorial design. In addition to the global uncertainty, this statistical approach gives information about the factors that contribute most to the uncertainty, as well as the most relevant two-level interactions between factors. Here we present partial results for the first sites analysed. For the beech forest of Sorø, Denmark (Gill R2 anemometer and closed-path GA, tube length = 50 m), the factor that contributes most to the variance in 2007 (40.4 %) is the trend removal, with an uncertainty of 7.5 %. It is followed by the angle of attack (16.1 % of the total variability, uncertainty 3.5 %) and the interaction between trend removal and time-lag compensation (11.42 % of variance explained). The overall uncertainty is about 8.7 % (cumulative NEE = -440.22 ± 38.36 g C m-2 year-1). The oak forest of Roccarespampani, Italy (Metek anemometer and closed-path GA, tube length = 22 m) was a sink in 2006. The coordinate rotation has the main influence on the variance (46.14 %, against an uncertainty of 4 %), followed by the averaging interval with 17.34 % (uncertainty 2 %) and their interaction (9.70 % of variance explained). The total uncertainty is 4.8 % (NEE = -515.19 ± 24.7 g C m-2 year-1). The mixed forest of Norunda, Sweden, was a source of CO2 in 2008 (Metek anemometer and closed-path GA, tube length = 8 m). For this site we found a strong influence of the coordinate rotation (74.02 %, with an uncertainty of 32.5 %), while the trend removal explains 17.41 % of the variance (uncertainty 15 %), against a total uncertainty of about 26.8 % (155.04 ± 41.50 g C m-2 year-1).
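
    The bookkeeping of a full-factorial processing study can be sketched as follows; the factor names, the placeholder process() function, and the simple main-effect variance shares stand in for real EddyPro runs and the full analysis of variance.

```python
# Schematic of a full-factorial processing experiment: run every combination
# of options and attribute variance in cumulative NEE to the factors.
from itertools import product
import numpy as np

factors = {
    "detrend":  ["block_average", "linear_detrend"],
    "rotation": ["double_rotation", "planar_fit"],
    "time_lag": ["constant", "covariance_max"],
}

def process(combo):
    # placeholder for a full EddyPro run returning cumulative NEE (g C m-2 yr-1)
    rng = np.random.default_rng(hash(combo) % 2**32)
    return -450.0 + rng.normal(0.0, 15.0)

runs = {combo: process(combo) for combo in product(*factors.values())}
nee = np.array(list(runs.values()))
total_var = nee.var()

for i, name in enumerate(factors):
    # rough main-effect share: variance of the level means for this factor
    level_means = [np.mean([v for c, v in runs.items() if c[i] == lev])
                   for lev in factors[name]]
    print(name, "explains ~", np.var(level_means) / total_var * 100, "% of variance")
```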

  14. Stray-Light Correction of the Marine Optical Buoy

    NASA Technical Reports Server (NTRS)

    Brown, Steven W.; Johnson, B. Carol; Flora, Stephanie J.; Feinholz, Michael E.; Yarbrough, Mark A.; Barnes, Robert A.; Kim, Yong Sung; Lykke, Keith R.; Clark, Dennis K.

    2003-01-01

    In ocean-color remote sensing, approximately 90% of the flux at the sensor originates from atmospheric scattering, with the water-leaving radiance contributing the remaining 10% of the total flux. Consequently, errors in the measured top-of-the atmosphere radiance are magnified a factor of 10 in the determination of water-leaving radiance. Proper characterization of the atmosphere is thus a critical part of the analysis of ocean-color remote sensing data. It has always been necessary to calibrate the ocean-color satellite sensor vicariously, using in situ, ground-based results, independent of the status of the pre-flight radiometric calibration or the utility of on-board calibration strategies. Because the atmosphere contributes significantly to the measured flux at the instrument sensor, both the instrument and the atmospheric correction algorithm are simultaneously calibrated vicariously. The Marine Optical Buoy (MOBY), deployed in support of the Earth Observing System (EOS) since 1996, serves as the primary calibration station for a variety of ocean-color satellite instruments, including the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Japanese Ocean Color Temperature Scanner (OCTS) , and the French Polarization and Directionality of the Earth's Reflectances (POLDER). MOBY is located off the coast of Lanai, Hawaii. The site was selected to simplify the application of the atmospheric correction algorithms. Vicarious calibration using MOBY data allows for a thorough comparison and merger of ocean-color data from these multiple sensors.

  15. Correction of phase velocity bias caused by strong directional noise sources in high-frequency ambient noise tomography: a case study in Karamay, China

    NASA Astrophysics Data System (ADS)

    Wang, K.; Luo, Y.; Yang, Y.

    2016-12-01

    We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that ambient noise sources are unevenly distributed and the most energetic ambient noise mainly comes from azimuths of 40o-70o. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show clearly azimuthal dependence, and direct dispersion measurements from the cross-correlations are strongly biased by the dominant noise energy. This bias means that the dispersion measurements from the cross-correlations do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not retrieve Empirical Green's Functions accurately. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure. The iterative inversion procedure, based on plane-wave modeling, includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The testing results show that: (1) the amplitudes of the phase velocity bias caused by directional noise sources are significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) the phase velocity bias can be corrected by the iterative inversion procedure, and the convergence of the inversion depends on the starting phase velocity map and the complexity of the media. By applying the iterative approach to the real data in Karamay, we further show that phase velocity maps converge after ten iterations and that the phase velocity map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than that based on uncorrected ones. As ambient noise in the high frequency band (>1 Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.

  16. Improving Earth/Prediction Models to Improve Network Processing

    NASA Astrophysics Data System (ADS)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operating Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.
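
    A minimal Python sketch of the ROC-style threshold trade-off mentioned above, using synthetic amplitude distributions and a hypothetical false-alarm target (not USAEDS data or software):

      import numpy as np

      # Minimal sketch: choose a detection threshold from empirical noise and signal
      # amplitude samples by sweeping thresholds and picking the lowest one whose
      # false-alarm (Type 1) rate meets a target, which minimizes misses (Type 2).
      rng = np.random.default_rng(0)
      noise_amps = rng.lognormal(mean=0.0, sigma=0.4, size=5000)    # assumed noise amplitudes
      signal_amps = rng.lognormal(mean=1.0, sigma=0.4, size=1000)   # assumed signal amplitudes

      target_false_alarm = 0.01
      thresholds = np.linspace(noise_amps.min(), signal_amps.max(), 500)
      best = None
      for t in thresholds:
          p_false = np.mean(noise_amps >= t)   # Type 1: noise declared a detection
          p_miss = np.mean(signal_amps < t)    # Type 2: signal not detected
          if p_false <= target_false_alarm:
              best = (t, p_false, p_miss)
              break  # lowest threshold meeting the false-alarm target
      print(best)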

  17. Rising to the challenge: acute stress appraisals and selection centre performance in applicants to postgraduate specialty training in anaesthesia.

    PubMed

    Roberts, Martin J; Gale, Thomas C E; McGrath, John S; Wilson, Mark R

    2016-05-01

    The ability to work under pressure is a vital non-technical skill for doctors working in acute medical specialties. Individuals who evaluate potentially stressful situations as challenging rather than threatening may perform better under pressure and be more resilient to stress and burnout. Training programme recruitment processes provide an important opportunity to examine applicants' reactions to acute stress. In the context of multi-station selection centres for recruitment to anaesthesia training programmes, we investigated the factors influencing candidates' pre-station challenge/threat evaluations and the extent to which their evaluations predicted subsequent station performance. Candidates evaluated the perceived stress of upcoming stations using a measure of challenge/threat evaluation-the cognitive appraisal ratio (CAR)-and consented to release their demographic details and station scores. Using regression analyses we determined which candidate and station factors predicted variation in the CAR and whether, after accounting for these factors, the CAR predicted candidate performance in the station. The CAR was affected by the nature of the station and candidate gender, but not age, ethnicity, country of training or clinical experience. Candidates perceived stations involving work related tasks as more threatening. After controlling for candidates' demographic and professional profiles, the CAR significantly predicted station performance: 'challenge' evaluations were associated with better performance, though the effect was weak. Our selection centre model can help recruit prospective anaesthetists who are able to rise to the challenge of performing in stressful situations but results do not support the direct use of challenge/threat data for recruitment decisions.

  18. Exposure assessment in front of a multi-band base station antenna.

    PubMed

    Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter

    2011-04-01

    This study investigates occupational exposure to electromagnetic fields in front of a multi-band base station antenna for mobile communications at 900, 1800, and 2100 MHz. The finite-difference time-domain method was used first to validate the antenna model against measurement results published in the literature and then to investigate the specific absorption rate (SAR) in two heterogeneous, anatomically correct human models (Virtual Family male and female) at distances from 10 to 1000 mm. Special attention was given to simultaneous exposure to fields of the three different frequencies, their interaction, and the additivity of the SAR resulting from each frequency. The results show that the highest frequency, 2100 MHz, results in the highest spatial-peak SAR averaged over 10 g of tissue, while the whole-body SAR is similar at all three frequencies. At distances > 200 mm from the antenna, the whole-body SAR is the more limiting factor for compliance with exposure guidelines, while at shorter distances the spatial-peak SAR may be more limiting. For the evaluation of combined exposure, a simple summation of the spatial-peak SAR maxima at each frequency gives a good estimate, which was also found to depend on the distribution of transmitting power between the different frequency bands. Copyright © 2010 Wiley-Liss, Inc.

  19. The International Space Station Solar Alpha Rotary Joint Anomaly Investigation

    NASA Technical Reports Server (NTRS)

    Harik, Elliot P.; McFatter, Justin; Sweeney, Daniel J.; Enriquez, Carlos F.; Taylor, Deneen M.; McCann, David S.

    2010-01-01

    The Solar Alpha Rotary Joint (SARJ) is a single-axis pointing mechanism used to orient the solar power generating arrays relative to the sun for the International Space Station (ISS). Approximately 83 days after its on-orbit installation, one of the two SARJ mechanisms aboard the ISS began to exhibit high drive motor current draw. Increased structural vibrations near the joint were also observed. Subsequent inspections via Extravehicular Activity (EVA) discovered that the nitrided case-hardened steel bearing race on the outboard side of the joint had extensive damage to one of its three rolling surfaces. A far-reaching investigation of the anomaly was undertaken. The investigation included metallurgical inspections, coupon tests, traction kinematics tests, detailed bearing measurements, and thermal and structural analyses. The results of the investigation showed that the anomaly had most probably been caused by high bearing edge stresses that resulted from inadequate lubrication of the rolling contact. The profile of the roller bearings and the metallurgical properties of the race ring were also found to be significant contributing factors. To mitigate the impact of the damage, astronauts cleaned and lubricated the race ring surface with grease. This corrective action led to significantly improved performance of the mechanism both in terms of drive motor current and induced structural vibration.

  1. The effect of submerged aquatic vegetation expansion on a declining turbidity trend in the Sacramento-San Joaquin River Delta

    USGS Publications Warehouse

    Hestir, E.L.; Schoellhamer, David H.; Jonathan Greenberg,; Morgan-King, Tara L.; Ustin, S.L.

    2016-01-01

    Submerged aquatic vegetation (SAV) has well-documented effects on water clarity. SAV beds can slow water movement and reduce bed shear stress, promoting sedimentation and reducing suspension. However, estuaries have multiple controls on turbidity that make it difficult to determine the effect of SAV on water clarity. In this study, we investigated the effect of primarily invasive SAV expansion on a concomitant decline in turbidity in the Sacramento-San Joaquin River Delta. The objective of this study was to separate the effects of decreasing sediment supply from the watershed from increasing SAV cover to determine the effect of SAV on the declining turbidity trend. SAV cover was determined by airborne hyperspectral remote sensing and turbidity data from long-term monitoring records. The turbidity trends were corrected for the declining sediment supply using suspended-sediment concentration data from a station immediately upstream of the Delta. We found a significant negative trend in turbidity from 1975 to 2008, and when we removed the sediment supply signal from the trend it was still significant and negative, indicating that a factor other than sediment supply was responsible for part of the turbidity decline. Turbidity monitoring stations with high rates of SAV expansion had steeper and more significant turbidity trends than those with low SAV cover. Our findings suggest that SAV is an important (but not sole) factor in the turbidity decline, and we estimate that 21–70 % of the total declining turbidity trend is due to SAV expansion.

  2. Is solar correction for long-term trend studies stable?

    NASA Astrophysics Data System (ADS)

    Laštovička, Jan

    2017-04-01

    When calculating long-term trends in the ionosphere, the effect of the 11-year solar cycle (i.e. of solar activity) must be removed from the data, because it is much stronger than the long-term trend. When a data series is analyzed for a trend, the usual approach is first to calculate, from all of the data, their dependence on solar activity and to create an observational model of that dependence. The model values are then subtracted from the observations and the trend is computed from the residuals. This implicitly assumes that the solar activity dependence is stable over the whole period of the data series. But what happens if that is not the case? As an ionospheric parameter we consider foE from the two European stations with the best long data series of ionospheric E-layer parameters, Slough/Chilton and Juliusruh, over 1975-2014 (40 years). Noon-time medians (10-14 LT) are analyzed. The trend pattern after removing the solar influence with a single correction for the whole period is complex. For yearly average values at both stations, foE first decreases slightly in 1975-1990, then the trend levels off or increases very slightly in 1990-2005, and finally in 2006-2014 a remarkable decrease is observed. This does not seem physically plausible. However, when the solar correction is calculated separately for the three periods above, we obtain a smooth, slightly negative trend that changes after the mid-1990s into no trend, coinciding with the change in the ozone trend. While the solar corrections for the first two periods are similar (though not equal), the solar activity dependence of foE in the third period (lower solar activity) is clearly different. The foF2 trend also revealed some effect of an unstable solar correction. Thus the stability of the solar correction should be carefully tested when calculating ionospheric trends. This could perhaps explain some of the differences between previously published trend results.
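
    A minimal Python sketch of the removal procedure described above (synthetic numbers, not the Slough/Chilton or Juliusruh series): fit the dependence on a solar-activity proxy, subtract it, and take the trend of the residuals; the point made above is that refitting the solar model separately for sub-periods can change this trend.

      import numpy as np

      # Synthetic example: an ionospheric parameter driven by a solar proxy (e.g. F10.7)
      # plus a weak long-term trend and noise.
      rng = np.random.default_rng(1)
      years = np.arange(1975, 2015)
      f107 = 120 + 60 * np.sin(2 * np.pi * (years - 1975) / 11) + rng.normal(0, 5, years.size)
      foE = 3.0 + 0.004 * (f107 - 120) - 0.002 * (years - 1975) + rng.normal(0, 0.01, years.size)

      # Step 1: observational model of the solar-activity dependence (linear here).
      a, b = np.polyfit(f107, foE, 1)
      residuals = foE - (a * f107 + b)

      # Step 2: trend from the residuals over the whole period.
      trend_per_year, _ = np.polyfit(years, residuals, 1)
      print(f"residual foE trend ~ {trend_per_year:.4f} MHz/yr")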

  3. Performance Assessment in the PILOT Experiment On Board Space Stations Mir and ISS.

    PubMed

    Johannes, Bernd; Salnitski, Vyacheslav; Dudukin, Alexander; Shevchenko, Lev; Bronnikov, Sergey

    2016-06-01

    The aim of this investigation into the performance and reliability of Russian cosmonauts in hand-controlled docking of a spacecraft on a space station (experiment PILOT) was to enhance overall mission safety and crew training efficiency. The preliminary findings on the Mir space station suggested that a break in docking training of about 90 d significantly degraded performance. Intensified experiment schedules on the International Space Station (ISS) have allowed for a monthly experiment using an on-board simulator. Therefore, instead of just three training tasks as on Mir, five training flights per session have been implemented on the ISS. This experiment was run in parallel with, but independently of, the operational docking training the cosmonauts receive. First, performance was compared between the experiments on the two space stations by nonparametric testing. Performance differed significantly between space stations preflight, in flight, and postflight. Second, performance was analyzed with linear mixed-effects (LME) modeling of all variances. The fixed factors space station, mission phase, training task number, and their interaction were analyzed. Cosmonauts were designated as a random factor. All fixed factors were found to be significant, and the interaction between station and mission phase was also significant. In summary, performance on the ISS was shown to be significantly improved, thus enhancing mission safety. Additional approaches to docking performance assessment and prognosis are presented and discussed.

  4. Free Space Laser Communication Experiments from Earth to the Lunar Reconnaissance Orbiter in Lunar Orbit

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Zellar, Ronald S.; Fong, Wai H; Krainak, Michael A.; Neumann, Gregory A.; Smith, David E.

    2013-01-01

    Laser communication and ranging experiments were successfully conducted from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit. The experiments used 4096-ary pulse position modulation (PPM) for the laser pulses during one-way LRO Laser Ranging (LR) operations. Reed-Solomon forward error correction codes were used to correct the PPM symbol errors due to atmospheric turbulence and pointing jitter. The signal fading was measured and the results were compared to the model.
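
    A small Python sketch of the symbol mapping in 4096-ary PPM, which carries log2(4096) = 12 bits per detected laser pulse through the pulse's slot position (illustrative only, not the LRO flight or ground implementation; the Reed-Solomon coding layer is not shown):

      import math

      M = 4096
      bits_per_symbol = int(math.log2(M))            # 12 bits per PPM symbol

      def ppm_encode(bits):
          """Map a 12-bit string (e.g. '101100111000') to a PPM slot index 0..4095."""
          assert len(bits) == bits_per_symbol
          return int(bits, 2)

      def ppm_decode(slot_index):
          """Recover the 12-bit string from the detected slot index."""
          return format(slot_index, f"0{bits_per_symbol}b")

      slot = ppm_encode("101100111000")
      print(bits_per_symbol, slot, ppm_decode(slot))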

  5. Research in geodesy and geophysics based upon radio-interferometric observations of extragalactic radio sources. Final report, December 1984-December 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, T.A.; Davis, J.L.; Gwinn, C.R.

    1986-10-01

    This report consists of a collection of reprints and preprints. Subjects included: description of Mk-III system for very-long-baseline interferometry (VLBI); geodetic results from the Mk-I and Mk-III systems for VLBI; effects of modeling atmospheric propagation on estimates of baseline length and station height; an improved model for the dry propagation delay; corrections to IAU 1980 nutation series based on VLBI data and geophysical interpretation of those corrections; and a review of the contributions of VLBI to geodynamic studies.

  6. A multi-factor GIS method to identify optimal geographic locations for electric vehicle (EV) charging stations

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqin; Iman, Kory

    2018-05-01

    Fuel-based transportation is one of the major contributors to poor air quality in the United States. Electric vehicles (EVs) are potentially the cleanest transportation technology available. This research developed a spatial suitability model to identify optimal geographic locations for installing EV charging stations for the travelling public. The model takes into account a variety of positive and negative factors to identify prime locations for installing EV charging stations in the Wasatch Front, Utah, where automobile emissions cause severe air pollution due to atmospheric inversion conditions near the valley floor. A walkable factor grid was created to store index scores from the input factor layers and determine prime locations. Twenty-seven input factors, including land use, demographics, and employment centers, were analyzed. Each factor layer was analyzed to produce a summary statistic table to determine site suitability. Potential locations expected to exhibit high EV charging usage were identified and scored. A hot spot map was created to show areas of high, moderate, and low suitability for installing EV charging stations. A spatially well distributed EV charging system was then developed, aiming to reduce "range anxiety" among the travelling public. This spatial methodology addresses the complex problem of locating and establishing a robust EV charging station infrastructure, helping decision makers build clean transportation infrastructure and ultimately reduce air pollution.
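
    A minimal Python sketch of the weighted-overlay scoring idea behind such a suitability model (hypothetical layers and weights, not the study's 27 factors or its GIS implementation):

      import numpy as np

      # Weighted overlay on a regular grid: positive factors raise and negative factors
      # lower each cell's index score; the result is classified for a hot-spot style map.
      rng = np.random.default_rng(2)
      shape = (50, 50)                                  # grid cells
      layers = {
          "land_use":   (rng.random(shape),  0.4),      # (normalized layer, weight)
          "employment": (rng.random(shape),  0.3),
          "population": (rng.random(shape),  0.3),
          "flood_zone": (rng.random(shape), -0.5),      # negative factor
      }
      score = sum(layer * w for layer, w in layers.values())

      low, high = np.quantile(score, [0.33, 0.66])
      classes = np.digitize(score, [low, high])         # 0 = low, 1 = moderate, 2 = high
      print(np.bincount(classes.ravel()))               # cell counts per suitability class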

  7. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for the determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. The percent change of the correction factors for the chambers is presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference conditions.
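
    A small Python sketch of the comparison as defined above, i.e. the per-field ratio of detector response to PSD response after normalizing both at 5×5 cm² (hypothetical readings, not the study's measurements):

      # Hypothetical normalized output-factor readings for illustration only.
      psd     = {"5x5": 1.000, "3x3": 0.952, "1x1": 0.780, "0.6x0.6": 0.650}
      chamber = {"5x5": 1.000, "3x3": 0.950, "1x1": 0.740, "0.6x0.6": 0.590}

      for field in psd:
          ratio = chamber[field] / psd[field]    # detector response relative to the PSD
          print(f"{field} cm^2: detector/PSD = {ratio:.3f}")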

  8. Space station crew safety alternatives study. Volume 3: Safety impact of human factors

    NASA Technical Reports Server (NTRS)

    Rockoff, L. A.; Raasch, R. F.; Peercy, R. L., Jr.

    1985-01-01

    Space station concepts accumulated over the preceding 15 years, aimed at an Initial Operational Capability (IOC) in the early 1990s, were considered. Twenty-five threats to the space station are identified, and selected threats are addressed in terms of their impact on safety criteria, escape and rescue, and human factors safety concerns. Of the 25 threats identified, eight are discussed, including strategy options for threat control: fire, biological or toxic contamination, injury/illness, explosion, loss of pressurization, radiation, meteoroid penetration, and debris. Of particular interest here is volume three (of five volumes), pertaining to the safety impact of human factors.

  9. Space Station Furnace Facility Management Information System (SSFF-MIS) Development

    NASA Technical Reports Server (NTRS)

    Mead, Robert M.

    1996-01-01

    This report summarizes the chronology, results, and lessons learned from the development of the SSFF-MIS. This system has been nearly two years in development and has yielded some valuable insights into specialized MIS development. Attachment A contains additions, corrections, and deletions by the COTR.

  10. The two-way time synchronization system via a satellite voice channel

    NASA Technical Reports Server (NTRS)

    Heng-Qiu, Zheng; Ren-Huan, Zhang; Yong-Hui, HU

    1994-01-01

    A newly developed two-way time synchronization system is described in this paper. The system uses one voice channel of an SCPC satellite digital communication earth station, whose bandwidth is only 45 kHz, thus greatly saving satellite resources. The system is composed of one master station and one or several, up to sixty-two, secondary stations. The master and secondary stations are equipped with the same equipment, including a set of timing equipment, a synthetic data terminal for time synchronization, and an interface unit between the data terminal and the satellite earth station. The synthetic data terminal for time synchronization also has an IRIG-B code generator and a translator. The data terminal of the master station is the key part of the whole system. The synchronization process is fully automatic and is controlled by the master station. Employing an autoscanning technique and a conversational mode, the system accomplishes the following tasks: establishing contact with each secondary station in turn, establishing a coarse time synchronization, calibrating the date (years, months, days) and time of day (hours, minutes, seconds), precisely measuring the time difference between the local station and the opposite station, exchanging measurement data, statistically processing the data, rejecting error terms, printing the data, and calculating the clock difference and correcting the phase, thus realizing real-time synchronization from one point to multiple points. We also designed an adaptive phase circuit to eliminate the phase ambiguity of the PSK demodulator. The experiments have shown that the time synchronization accuracy is better than 2 μs. The system has been put into regular operation.
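
    The basic two-way exchange underlying such a system can be sketched as follows (a minimal Python illustration with assumed numbers, not the system described above): because the signals traverse nearly the same path in both directions, the propagation delay cancels, and the clock difference is half the difference of the two stations' measured intervals, apart from residual equipment-delay terms.

      prop_delay = 0.260           # s, assumed one-way satellite path delay (same both ways)
      offset_true = 1.5e-6         # s, secondary clock ahead of master (to be recovered)

      # Each station times the arrival of the other station's 1 PPS against its own 1 PPS:
      ti_master = prop_delay - offset_true      # secondary's pulse arrives "early" on the master's clock
      ti_secondary = prop_delay + offset_true   # master's pulse arrives "late" on the secondary's clock

      offset_est = (ti_secondary - ti_master) / 2.0
      print(f"recovered clock offset: {offset_est * 1e6:.3f} microseconds")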

  11. Corrective Action Investigation Plan for Corrective Action Unit 165: Areas 25 and 26 Dry Well and Washdown Areas, Nevada Test Site, Nevada (including Record of Technical Change Nos. 1, 2, and 3) (January 2002, Rev. 0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 165 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 165 consists of eight Corrective Action Sites (CASs): CAS 25-20-01, Lab Drain Dry Well; CAS 25-51-02, Dry Well; CAS 25-59-01, Septic System; CAS 26-59-01, Septic System; CAS 25-07-06, Train Decontamination Area; CAS 25-07-07, Vehicle Washdown; CAS 26-07-01, Vehicle Washdown Station; and CAS 25-47-01, Reservoir and French Drain. All eight CASs are located in the Nevada Test Site, Nevada. Six of these CASs are located in Area 25 facilities and two CASs are located in Area 26 facilities. The eight CASs at CAU 165 consist of dry wells, septic systems, decontamination pads, and a reservoir. The six CASs in Area 25 are associated with the Nuclear Rocket Development Station that operated from 1958 to 1973. The two CASs in Area 26 are associated with facilities constructed for Project Pluto, a series of nuclear reactor tests conducted between 1961 and 1964 to develop a nuclear-powered ramjet engine. Based on site history, the scope of this plan will be a two-phased approach to investigate the possible presence of hazardous and/or radioactive constituents at concentrations that could potentially pose a threat to human health and the environment. The Phase I analytical program for most CASs will include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons, polychlorinated biphenyls, and radionuclides. If laboratory data obtained from the Phase I investigation indicate the presence of contaminants of concern, the process will continue with a Phase II investigation to define the extent of contamination. Based on the results of Phase I sampling, the analytical program for the Phase II investigation may be reduced. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  12. Satellite clock corrections estimation to accomplish real time ppp: experiments for brazilian real time network

    NASA Astrophysics Data System (ADS)

    Marques, Haroldo; Monico, João; Aquino, Marcio; Melo, Weyller

    2014-05-01

    The real-time PPP method requires the availability of real-time precise orbits and satellite clock corrections. Currently, it is possible to apply the clock and orbit solutions made available by the Federal Agency for Cartography and Geodesy (BKG) within the context of the IGS real-time Pilot Project, or to use the operational predicted IGU ephemeris. The accuracy of the satellite positions available in the IGU products is sufficient for many applications requiring good quality; however, the satellite clock corrections are not accurate enough (3 ns ~ 0.9 m) to accomplish real-time PPP at the same level of accuracy. Therefore, for real-time PPP applications it is necessary to further research and develop appropriate methodologies for estimating satellite clock corrections in real time with better accuracy. Some investigations have proposed estimating the satellite clock corrections using GNSS code and phase observables at the double-difference level between satellites and epochs (MERVART, DOUSA, 2007). Another possibility consists of applying a Kalman filter in the network PPP mode (HAUSCHILD, 2010), and it is also possible to integrate both methods, using network PPP and observables at the double-difference level in specific time intervals (ZHANG; LI; GUO, 2010). For this work, the methodology adopted consists of estimating the satellite clock corrections based on data adjustment in the PPP mode, but for a network of GNSS stations. The clock solution can be obtained using two types of observables: code smoothed by carrier phase, or undifferenced code together with carrier phase. In the former, we estimate the receiver clock error, satellite clock correction, and troposphere, considering that the phase ambiguities are eliminated when differences between consecutive epochs are applied. When using undifferenced code and phase, the ambiguities may be estimated together with the receiver clock errors, satellite clock corrections, and troposphere parameters. In both strategies it is also possible to correct the troposphere delay from a numerical weather forecast model instead of estimating it. The prediction of the satellite clock correction can be performed by fitting a straight line or a second-degree polynomial to the time series of estimated satellite clocks. To estimate the satellite clock corrections and to accomplish real-time PPP, two pieces of software have been developed, "RT_PPP" and "RT_SAT_CLOCK", respectively. RT_PPP is able to process GNSS code and phase data using precise ephemerides and precise satellite clock corrections together with the several corrections required for PPP. In RT_SAT_CLOCK we apply a Kalman filter algorithm to estimate satellite clock corrections in the network PPP mode; in this case, all PPP corrections must be applied for each station. The experiments were run in real-time and post-processed mode (simulating real time) considering data from the Brazilian continuous GPS network and also from the IGS network in a global satellite clock solution. We used the IGU ephemeris for the satellite positions and estimated the satellite clock corrections, performing updates as soon as new ephemeris files became available. Experiments were also carried out to assess the accuracy of the estimated clocks when using the Brazilian Numerical Weather Forecast Model (BNWFM) from CPTEC/INPE, when using the ZTD from the European Centre for Medium-Range Weather Forecasts (ECMWF) together with the Vienna Mapping Function (VMF), and when estimating the troposphere together with the clocks and ambiguities in the Kalman filter. The daily precision of the estimated satellite clock corrections reached the order of 0.15 nanoseconds. The clocks were applied in real-time PPP for Brazilian network stations and also for flight tests of Brazilian airplanes, and the results show that it is possible to accomplish real-time PPP in static and kinematic modes with accuracies of the order of 10 and 20 cm, respectively.
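
    A minimal Python sketch of the clock prediction step mentioned above, fitting a low-order polynomial to a short, synthetic series of estimated clock corrections (not output from RT_SAT_CLOCK):

      import numpy as np

      # Synthetic time series of estimated satellite clock corrections (seconds).
      t = np.arange(0.0, 300.0, 30.0)   # s, epochs of the estimates
      clk = 5.0e-9 + 2.0e-11 * t + np.random.default_rng(3).normal(0, 5e-11, t.size)

      coeffs = np.polyfit(t, clk, 2)    # second-degree polynomial (a straight line also works)
      t_pred = 330.0                    # next epoch to predict
      clk_pred = np.polyval(coeffs, t_pred)
      print(f"predicted clock correction at t={t_pred:.0f} s: {clk_pred * 1e9:.3f} ns")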

  13. Investigation of International Space Station Major Constituent Analyzer Anomalous ORU 02 Performance

    NASA Technical Reports Server (NTRS)

    Gardner, Ben D.; Burchfield, David E.; Pargellis, Andrew; Erwin, Phillip M.; Thoresen, Souzan; Gentry, Grey; Granahan, John; Matty, Chris

    2013-01-01

    The Major Constituent Analyzer (MCA) is a mass spectrometer based system that measures the major atmospheric constituents on the International Space Station. In 2011, two MCA ORU 02 analyzer assemblies experienced premature on-orbit failures. These failures were determined to be the result of off-nominal ion source filament performance. Recent product improvements to ORU 02 designed to improve the lifetime of the ion pump also constrained the allowable tuning criteria for the ion source filaments. This presentation describes the filament failures as well as the corrective actions implemented to preclude such failures in the future.

  14. Principal facts for gravity stations in the Elko, Steptoe Valley, Coyote Spring Valley, and Sheep Range areas, eastern and southern Nevada

    USGS Publications Warehouse

    Berger, D.L.; Schaefer, D.H.; Frick, E.A.

    1990-01-01

    Principal facts for 537 gravity stations in the carbonate-rock province of eastern and southern Nevada are tabulated and presented. The gravity data were collected in support of groundwater studies in several valleys. The study areas include the Elko area, northern Steptoe Valley, Coyote Spring Valley, and the western Sheep Range area. The data for each site include values for latitude, longitude, altitude, observed gravity, free-air anomaly, terrain correction, and Bouguer anomaly (calculated using a bedrock density of 2.67 g/cm³). (USGS)
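
    A minimal Python sketch of the simple Bouguer reduction implied by the tabulated quantities, with hypothetical station values and the conventional free-air and infinite-slab constants:

      # Hypothetical station values; constants are the standard 0.3086 mGal/m free-air
      # gradient and 0.04193 mGal per (g/cm^3 * m) Bouguer slab term.
      g_obs = 979_456.12      # mGal, observed gravity
      g_normal = 979_642.85   # mGal, normal gravity at the station latitude
      h = 1350.0              # m, station altitude
      terrain_corr = 1.42     # mGal, terrain correction
      rho = 2.67              # g/cm^3, assumed bedrock density

      free_air_corr = 0.3086 * h           # mGal
      bouguer_slab = 0.04193 * rho * h     # mGal (infinite-slab approximation)

      free_air_anomaly = g_obs - g_normal + free_air_corr
      bouguer_anomaly = free_air_anomaly - bouguer_slab + terrain_corr
      print(f"free-air anomaly: {free_air_anomaly:.2f} mGal, "
            f"Bouguer anomaly: {bouguer_anomaly:.2f} mGal")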

  15. A three-station lightning detection system

    NASA Technical Reports Server (NTRS)

    Ruhnke, L. H.

    1972-01-01

    A three-station network is described which senses magnetic and electric fields of lightning. Directional and distance information derived from the data are used to redundantly determine lightning position. This redundancy is used to correct consistent propagation errors. A comparison is made of the relative accuracy of VLF direction finders with a newer method to determine distance to and location of lightning by the ratio of magnetic-to-electric field as observed at 400 Hz. It was found that VLF direction finders can determine lightning positions with only one-half the accuracy of the method that uses the ratio of magnetic-to-electric field.

  16. Potential solar radiation and land cover contributions to digital climate surface modeling

    NASA Astrophysics Data System (ADS)

    Puig, Pol; Batalla, Meritxell; Pesquer, Lluís; Ninyerola, Miquel

    2016-04-01

    Overview: We have designed a series of ad-hoc experiments to study the role of factors that a priori carry a strong weight in developing digital models of temperature and precipitation, such as solar radiation and land cover. Empirical test beds were designed to improve climate digital models (mean air temperature and total precipitation) using general statistical techniques (multiple regression) with residual correction (interpolated with inverse distance weighting). Aim: To understand what roles these two factors (solar radiation and land cover) play, so that they can be incorporated into the process of generating temperature and rainfall maps. Study area: The Iberian Peninsula and, nested within it, Catalonia and the Catalan Pyrenees. Data: The dependent variables used in all experiments are data from meteorological stations: precipitation (PL), mean temperature (MT), average minimum temperature (MN), and average maximum temperature (MX). These data were obtained monthly from AEMET (Agencia Estatal de Meteorología). The station data series cover the period 1950 to 2010. Methodology: The idea is to design ad-hoc experiments, based on a more spatially equitable sample, to detect the role of radiation. Regarding the quantitative influence of solar radiation on air temperature, the difficulty lies in the fact that many weather stations are located in areas where solar radiation is similar. This means that the role of the radiation variable remains "off" when, intuitively, we would expect it to strongly influence temperature. We have developed a multiple regression analysis between these meteorological variables as the dependent ones (temperature and rainfall) and several geographical variables as the independent ones: altitude (ALT), latitude (LAT), continentality (CON), and solar radiation (RAD). In the experiment with land cover, we used the NDVI index as a proxy of land cover and added this variable to the independents to improve the models. Results: Solar radiation improves the models only under certain conditions and in certain areas, especially in the Pyrenees. The vegetation index NDVI, and therefore the land cover on which the station is located, helps improve the rainfall and temperature models, with varying degrees of improvement depending on the modelled variables and months.
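
    A minimal Python sketch of the modeling approach described above, multiple regression on geographic predictors followed by inverse-distance-weighted interpolation of the station residuals (synthetic stations and predictors, not the AEMET data):

      import numpy as np

      # Synthetic stations: coordinates, predictors, and a temperature-like variable.
      rng = np.random.default_rng(4)
      n = 80
      x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)        # km, station coordinates
      alt, lat, rad = rng.uniform(0, 2500, n), y, rng.uniform(10, 30, n)
      temp = 15.0 - 0.0065 * alt - 0.05 * lat + 0.1 * rad + rng.normal(0, 0.5, n)

      # Multiple regression of the station variable on the geographic predictors.
      X = np.column_stack([np.ones(n), alt, lat, rad])
      beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
      residuals = temp - X @ beta

      def idw(x0, y0, power=2.0):
          """Inverse-distance-weighted interpolation of the station residuals."""
          d = np.hypot(x - x0, y - y0) + 1e-6
          w = d ** -power
          return np.sum(w * residuals) / np.sum(w)

      # Prediction at an unsampled grid cell (hypothetical predictors for that cell).
      cell = np.array([1.0, 1200.0, 42.0, 22.0])
      pred = cell @ beta + idw(55.0, 40.0)
      print(f"predicted mean temperature: {pred:.2f} degC")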

  17. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal components. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
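
    A small Python sketch of that vertical-to-horizontal adjustment (the magnitude value is hypothetical; the 1.8 amplitude ratio is the one quoted above, and interpreting "adding the logarithm of the scale factor" as adding log10(1.8) is an assumption):

      import math

      ml_vertical = 4.20                              # hypothetical Ml measured on the vertical channel
      ml_horizontal_equiv = ml_vertical + math.log10(1.8)
      print(f"{ml_horizontal_equiv:.2f}")             # ~4.46, i.e. +0.26 magnitude units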

  18. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
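
    A highly simplified Python sketch of the censoring idea behind a maximum-likelihood network magnitude (hypothetical station values and an assumed single-station scatter; this is not the joint inversion or the Kelly and Lacoss threshold estimation itself): stations that do not report contribute the probability that their amplitude fell below the station threshold, which counters the upward bias of a simple mean over reporting stations.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm

      observed_mb = np.array([5.02, 5.10, 4.95, 5.20])   # station mb (corrections assumed applied)
      thresholds = np.array([5.3, 5.4])                  # detection thresholds of silent stations
      sigma = 0.35                                       # assumed single-station scatter

      def neg_log_likelihood(m):
          ll = norm.logpdf(observed_mb, loc=m, scale=sigma).sum()
          ll += norm.logcdf(thresholds, loc=m, scale=sigma).sum()   # censored (non-reporting) terms
          return -ll

      m_ml = minimize_scalar(neg_log_likelihood, bounds=(3.0, 7.0), method="bounded").x
      print(f"simple mean: {observed_mb.mean():.2f}, ML magnitude: {m_ml:.2f}")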

  19. Sub-soil contamination due to oil spills in zones surrounding oil pipeline-pump stations and oil pipeline right-of-ways in Southwest-Mexico.

    PubMed

    Iturbe, Rosario; Flores, Carlos; Castro, Alejandrina; Torres, Luis G

    2007-10-01

    Oil spills from oil pipelines are a very frequent problem in Mexico. Petroleos Mexicanos (PEMEX), very concerned with the environmental agenda, has been developing inspection and correction plans for zones around oil pipeline pumping stations and pipeline rights-of-way. These stations are located at regular intervals of kilometres along the pipelines. In this study, two sections of an oil pipeline and two pipeline pumping station zones are characterized in terms of the presence of Total Petroleum Hydrocarbons (TPHs) and Polycyclic Aromatic Hydrocarbons (PAHs). The study comprises sampling of the areas, delimitation of the contamination in the vertical and horizontal extension, analysis of the sampled soils for TPH content and, in some cases, the 16 PAHs considered as priority pollutants by the USEPA, calculation of the contaminated areas and volumes (according to Mexican legislation, specifically NOM-EM-138-ECOL-2002) and, finally, a proposal for the remediation techniques best suited to the contamination levels and the localization of the contaminants.

  20. Automated delay measurement system for an Earth station for Two-Way Satellite Time and Frequency Transfer

    NASA Technical Reports Server (NTRS)

    Dejong, Gerrit; Polderman, Michel C.

    1995-01-01

    The measurement of the difference between the transmit and receive delays of the signals in a Two-Way Satellite Time and Frequency Transfer (TWSTFT) Earth station is crucial for its nanosecond time transfer capability. Monitoring the change of this delay difference with time, temperature, humidity, or barometric pressure is also important for improving TWSTFT capabilities. An automated system for this purpose has been developed from the initial design at NMi-VSL. It calibrates separately the transmit and receive delays in cables, amplifiers, upconverters and downconverters, and antenna feeds. The obtained results can be applied as corrections to the TWSTFT measurement when a calibration session is performed before and after a measurement session. Preliminary results obtained at NMi-VSL will be shown, as well as, if available, results from a manual version of the system that is planned to be circulated in September 1994, together with a USNO portable station, on a calibration trip to European TWSTFT Earth stations.

  1. Total ozone trends over the USA during 1979-1991 from Dobson spectrophotometer observations

    NASA Technical Reports Server (NTRS)

    Komhyr, Walter D.; Grass, Robert D.; Koenig, Gloria L.; Quincy, Dorothy M.; Evans, Robert D.; Leonard, R. Kent

    1994-01-01

    Ozone trends for 1979-1991, determined from Dobson spectrophotometer observations made at eight stations in the United States, are augmented with trend data from four foreign cooperative stations operated by NOAA/CMDL. Results are based on provisional data archived routinely throughout the years at the World Ozone Data Center in Toronto, Canada, with calibration corrections applied to some of the data. Trends through 1990 exhibit values of -0.3% to -0.5% per year at mid-to-high latitudes in the northern hemisphere. With the addition of 1991 data, however, the trends become less negative, indicating that ozone increased in many parts of the world during 1991. Stations located within the ±20° N-S latitude band exhibit no ozone trends. Early 1992 data show decreased ozone values at some of the stations. At South Pole, Antarctica, October ozone values have remained low during the past 3 years.

  2. Time Averaged Transmitter Power and Exposure to Electromagnetic Fields from Mobile Phone Base Stations

    PubMed Central

    Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo

    2014-01-01

    Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
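
    A minimal Python sketch of the duty-factor calculation described above (synthetic power samples, not data from the Swisscom network):

      import numpy as np

      # Duty factor F = time-averaged transmitted power / maximum output power,
      # evaluated here over a simulated 24 h of traffic-dependent output.
      rng = np.random.default_rng(5)
      p_max = 20.0                                        # W, assumed maximum output power
      hourly_power = p_max * rng.uniform(0.15, 0.55, 24)  # W, assumed traffic-driven averages
      duty_factor = hourly_power.mean() / p_max
      print(f"24 h duty factor F = {duty_factor:.2f}")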

  3. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors ([Formula: see text]) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of [Formula: see text] to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated for simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work lead to important conclusions about the use of detector-specific beam correction factors ([Formula: see text]) in a treatment planning system. The use of [Formula: see text] for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of [Formula: see text] for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.

  4. Application of the dynamic calibration method to international monitoring system stations in Central Asia using natural seismicity data

    NASA Astrophysics Data System (ADS)

    Kedrov, O. K.; Kedrov, E. O.; Sergeyeva, N. A.; Zabarinskaya, L. P.; Gordon, V. R.

    2008-05-01

    The dynamic calibration method (DCM), using natural seismicity data and initially elaborated in [Kedrov, 2001; Kedrov et al., 2001; Kedrov and Kedrov, 2003], is applied to International Monitoring System (IMS) stations in Central Asia. The algorithm of the method is refined and a program is designed for calibrating diagnostic parameters (discriminants) that characterize a seismic source on the source-station traces. The DCM calibration of stations with respect to the region under study is performed by choosing attenuation coefficients that adapt the diagnostic parameters to the conditions in a reference region; in this method, the stable Eurasia region is used as the reference. The calibration used numerical data samples taken from the archive of the International Data Centre (IDC) for the IMS stations MKAR, BVAR, EIL, ASF, and CMAR. In this paper, we used discriminants in the spectral and time domains that have the form D_i = X_i - a_m*m_b - b_Δ*log Δ and are independent of the magnitude m_b and the epicentral distance Δ; these discriminants were elaborated in [Kedrov et al., 1990; Kedrov and Lyuke, 1999] on the basis of a method used for identification of events at regional distances in Eurasia. Prerequisites of the DCM are the assumptions that the coefficient a_m is region-independent and that the coefficient b_Δ depends only on the geotectonic characteristics of the medium and not on the source type. Thus, b_Δ can be evaluated only from a sample of earthquakes in the region studied; it is used for adapting the discriminants D(X_i) in the region studied to the reference region. The algorithm is constructed in such a way that corrected values of D(X_i) are calculated from the found values of the calibration coefficients b_Δ, after which natural events in the region under study are selected by filtering. Empirical estimates of the filtering efficiency vary from station to station in the range of 95-100%. The DCM was independently tested using records obtained at the IRIS (Incorporated Research Institutions for Seismology) stations BRVK and MAKZ from explosions detonated in India on May 11, 1998, and Pakistan on May 28, 1998; these stations are similar in location and recording instrumentation characteristics to the IMS stations BVAR and MKAR. This test resulted in correct recognition of the source type and thereby directly confirmed the validity of the proposed method of calibrating stations with natural seismicity data. It is shown that the calibration coefficients b_Δ for traces with similar signal propagation conditions (e.g., the traces from Iran to the stations EIL and ASF) are comparable for nearly all diagnostic parameters. We conclude that the method of dynamic calibration of stations using natural seismicity data, in a region where no explosions have been detonated, can be significant for rapid and inexpensive calibration of IMS stations. The DCM can also be used for recognition of industrial chemical explosions that are sometimes erroneously classified in regional catalogs as earthquakes.
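
    A small Python sketch of evaluating a discriminant of the stated form with hypothetical coefficients and measurements (the base of the logarithm is assumed here to be 10):

      import math

      # D_i = X_i - a_m * m_b - b_delta * log10(delta); b_delta is the trace-dependent
      # calibration coefficient chosen by the dynamic calibration method. All values
      # below are hypothetical and for illustration only.
      a_m = 0.9          # assumed magnitude coefficient (region-independent per the method)
      b_delta = -1.2     # assumed attenuation coefficient for the source-station trace
      x_i = 2.35         # assumed measured spectral/time-domain parameter
      m_b = 5.1          # body-wave magnitude
      delta_deg = 18.0   # epicentral distance, degrees

      d_i = x_i - a_m * m_b - b_delta * math.log10(delta_deg)
      print(f"D_i = {d_i:.2f}")  # compared against a decision line to classify the source type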

  5. Costs Associated With Compressed Natural Gas Vehicle Fueling Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.; Gonzales, J.

    2014-09-01

    This document is designed to help fleets understand the cost factors associated with fueling infrastructure for compressed natural gas (CNG) vehicles. It provides estimated cost ranges for various sizes and types of CNG fueling stations and an overview of factors that contribute to the total cost of an installed station. The information presented is based on input from professionals in the natural gas industry who design, sell equipment for, and/or own and operate CNG stations.

  6. Spatial Trends in Evapotranspiration Components over Africa between 1979 and 2012 and Their Relative Influence on Crop Water Use

    NASA Astrophysics Data System (ADS)

    Estes, L. D.; Chaney, N.; Herrera-Estrada, J.; Caylor, K. K.; Sheffield, J.; Wood, E. F.

    2013-12-01

    Understanding how climate change will affect crop water use (evapotranspiration) is fundamental to understanding food security. This is particularly true in sub-Saharan Africa, where crops are largely grown in dryland systems, and agricultural production is expected to expand dramatically this century. Yet analyzing how climate change has impacted crop evapotranspiration (ET) has been hampered by the lack of long-term and spatially continuous meteorological data. Here we use a newly developed, spatio-temporally corrected meteorological dataset to 1) identify trends in individual ET components [rainfall (RF), temperature (T), specific humidity (SH), windspeed (WS), long- and shortwave radiation (LWR, SWR)] in Africa since 1979 and 2) determine the impact of these trends on crop water use. The meteorological data was developed from the Princeton University global meteorological dataset (PGF), which merges gridded station data, satellite retrievals and reanalysis to create a 1.0° resolution, 3-hourly weather dataset for the years 1948-2012. The PGF was downscaled to 0.25° resolution using bilinear interpolation, correcting T, SH, and LWR for elevation, and then merged (using state-space estimation) with meteorological station data (~1000, obtained from the Global Summary of Day database) and corrected for temporal inhomogeneities (due to instrument changes, etc) and gap-filled for missing days. This resulted in a bias-corrected gridded set of daily observations for the variables of interest over southern, East, and West Africa (Central Africa was excluded because of insufficient station observations) for the period 1979-2012, focusing on the satellite period. Using Kendall-Theil Robust line and Mann-Kendall tests, we identify and map significant (p<0.05) trends in ET components in each 0.25° cell over the time period. To estimate the crop water use impact of significant changes in ET components, we undertook a series of crop modeling experiments to isolate the impact of each component. The experiments were based on generated meteorological datasets representing average weather during the first (1979-1989, hereafter '1985') and last 10 years of the period (2002-2012, or '2008'). For each ET component showing significant trends, we created a counterfactual dataset representing average weather for 2002-2012, but with the trend in that component removed (we adjusted other variables' climatologies to preserve covariances). We used these three datasets, together with gridded estimates of crop planting dates and soil texture, to estimate maize evapotranspiration using the FAO-56 method, where reference ET is modified by maize specific coefficients representing a) maize potential transpiration and b) maize water stress. We used the ET estimates resulting from the 1985 and 2008 datasets to map maize water use changes for the study period. To isolate the impact of individual ET components on overall maize water use, we calculated the differences between the ET estimates from the 2008 dataset and from each counterfactual weather scenario. This analysis demonstrates how improved forcing data can improve understanding of the impact that understudied global change factors (e.g. changing WS) have on crop response.
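
    A minimal Python sketch of the per-cell trend screening step, a Theil-Sen slope with a Mann-Kendall significance test (no-ties approximation), applied to a synthetic series rather than the PGF/GSOD data:

      import numpy as np
      from itertools import combinations
      from scipy.stats import norm

      rng = np.random.default_rng(6)
      years = np.arange(1979, 2013)
      series = 25.0 + 0.02 * (years - 1979) + rng.normal(0, 0.3, years.size)  # e.g. T in degC

      pairs = list(combinations(range(years.size), 2))
      slopes = [(series[j] - series[i]) / (years[j] - years[i]) for i, j in pairs]
      theil_sen_slope = np.median(slopes)

      s = sum(np.sign(series[j] - series[i]) for i, j in pairs)
      n = years.size
      var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S, no ties assumed
      z = (s - np.sign(s)) / np.sqrt(var_s)             # continuity-corrected test statistic
      p_value = 2 * (1 - norm.cdf(abs(z)))
      print(f"Theil-Sen slope: {theil_sen_slope:.4f} per yr, Mann-Kendall p = {p_value:.4f}")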

  7. Accuracy and coverage of the modernized Polish Maritime differential GPS system

    NASA Astrophysics Data System (ADS)

    Specht, Cezary

    2011-01-01

    The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange correction factors and ancillary information, which are broadcast from selected marine reference stations. The DGPS service's position and integrity information satisfy the requirements of coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized (in 2009) to meet the requirements set out in the IMO resolution for a future GNSS, while also preserving backward signal compatibility of user equipment. Once installation of the new L1/L2 reference equipment was finalized, performance tests were carried out. The paper presents results of the coverage modeling and accuracy measurement campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results allowed verification of the coverage area of the differential signal from the reference station and calculation of the repeatable and absolute accuracy of the system after the technical modernization. The obtained field-strength coverage area and position statistics (215,000 fixes) were compared with past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no campaigns have been performed on differential Galileo. However, since the signals, signal processing, and receiver techniques are comparable to those known from DGPS, all satellite differential GNSS systems use the same transmission standard (RTCM), and maritime DGPS radiobeacons are standardized in all radio communication aspects (frequency, binary rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. Coverage of the reference station was calculated with dedicated software, which computes the signal strength level from transmitter parameters or from a field signal-strength measurement campaign carried out at representative points. The software works on a Baltic Sea vector map with ground electrical parameters, and models the atmospheric noise level in the transmission band.

  8. Concentrating Solar Power Projects - Colorado Integrated Solar Project |

    Science.gov Websites

    The parabolic trough system was integrated with the utility's Cameo Station Unit 2 (approximately 2 MWe equivalent). Technology: Parabolic trough. Status: Currently Non-Operational. Start Year: 2010. Country: United States.

  9. 47 CFR 73.319 - FM multiplex subcarrier technical standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... specific authorization from the FCC provided the generator can be connected to the transmitter without... cease operation. The licensee may be required to verify the corrective measures with supporting data. Such data must be retained at the station and be made available to the FCC upon request. [48 FR 28455...

  10. Forests of Wisconsin, 2016

    Treesearch

    Cassandra M. Kurtz

    2017-01-01

    Publication updated February 9, 2018 to correct the number of forest field plots (page 1). This resource update provides an overview of forest resources in Wisconsin based on an inventory conducted by the U.S. Forest Service, Forest Inventory and Analysis (FIA) program at the Northern Research Station in cooperation with the Wisconsin Department of...

  11. GPS Monitor Station Upgrade Program at the Naval Research Laboratory

    NASA Technical Reports Server (NTRS)

    Galysh, Ivan J.; Craig, Dwin M.

    1996-01-01

    One of the measurements made by the Global Positioning System (GPS) monitor stations is the continuous pseudo-range of all passing GPS satellites. The pseudo-range contains GPS and monitor station clock errors as well as GPS satellite navigation errors. Currently the time at a GPS monitor station is obtained from the GPS constellation and has an inherent inaccuracy as a result. Improved timing accuracy at the GPS monitoring stations will improve GPS performance. The US Naval Research Laboratory (NRL) is developing hardware and software for the GPS monitor station upgrade program to improve the monitor station clock accuracy. This upgrade will provide a method, independent of the GPS satellite constellation, of measuring and correcting monitor station time to US Naval Observatory (USNO) time. The hardware consists of a high-performance atomic cesium frequency standard (CFS) and a computer which is used to ensemble the CFS with the two CFSs currently located at the monitor station by use of a dual-mixer system. The dual-mixer system achieves phase measurements between the high-performance CFS and the existing monitor station CFSs to within 400 femtoseconds. Time transfer between USNO and a given monitor station is achieved via a two-way satellite time transfer modem. The computer at the monitor station disciplines the CFS based on a comparison with the one pulse per second sent from the master site at USNO. The monitor station computer is also used to perform housekeeping functions, as well as recording the health status of all three CFSs. This information is sent to the USNO through the time transfer modem. Laboratory time synchronization results in the sub-nanosecond range have been observed, along with the ability to maintain the monitor station CFS frequency to within 3.0 x 10^-14 of the master site at USNO.

  12. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects short-term data for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and for the uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests that the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable but also more expensive monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling, and population exposure studies.
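
    The sector-division idea lends itself to a compact implementation. The sketch below is a simplified reading of the approach, not the authors' code: hourly NO2 observations are first normalised by hour-of-day and month-of-year factors (to remove the uneven diurnal/seasonal weighting mentioned above) and then averaged within wind-direction sectors. The column names and the number of sectors are assumptions.

        import numpy as np
        import pandas as pd

        def sector_means(df, n_sectors=8, value="no2", wdir="wind_dir"):
            """Diurnally/seasonally corrected mean concentration per wind-direction sector.
            `df` is assumed to have a DatetimeIndex and columns `value` and `wdir` (degrees)."""
            out = df.copy()
            overall = out[value].mean()
            # Normalise each observation by its hour-of-day and month-of-year factors so that
            # sectors sampled at unrepresentative times are not biased.
            hour_factor = out.groupby(out.index.hour)[value].mean() / overall
            month_factor = out.groupby(out.index.month)[value].mean() / overall
            out["corrected"] = out[value] / (hour_factor.reindex(out.index.hour).to_numpy() *
                                             month_factor.reindex(out.index.month).to_numpy())
            # Assign each record to a wind-direction sector (sector 0 centred on north).
            width = 360.0 / n_sectors
            out["sector"] = ((out[wdir] + width / 2.0) % 360.0 // width).astype(int)
            return out.groupby("sector")["corrected"].mean()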

  13. The indirect effects on the computation of geoid undulations

    NASA Technical Reports Server (NTRS)

    Wichiencharoen, C.

    1982-01-01

    The indirect effects on the geoid computation due to the second method of Helmert's condensation were studied. When Helmert's anomalies are used in Stokes' equation, there are three types of corrections to the free air geoid. The first correction, the indirect effect on geoid undulation due to the potential change in Helmert's reduction, had a maximum value of 0.51 meters in the test area covering the United States. The second correction, the attraction change effect on geoid undulation, had a maximum value of 9.50 meters when the 10 deg cap was used in Stokes' equation. The last correction, the secondary indirect effect on geoid undulation, was found negligible in the test area. The corrections were applied to uncorrected free air geoid undulations at 65 Doppler stations in the test area and compared with the Doppler undulations. Based on the assumption that the Doppler coordinate system has a z shift of 4 meters with respect to the geocenter, these comparisons showed that the corrections presented in this study yielded improved values of gravimetric undulations.
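
    For orientation, the leading term of the first correction listed above (the primary indirect effect of Helmert's second condensation on the geoid) is commonly written, for a constant topographic density ρ, as

        \delta N_{\mathrm{ind}} \approx -\frac{\pi G \rho H^{2}}{\gamma}

    where G is the gravitational constant, H the elevation of the computation point and γ the normal gravity; higher-order terrain-dependent terms are omitted. This standard textbook form is quoted here only as context for the magnitudes reported in the study, not as a restatement of its exact formulas.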

  14. Effect of baseline corrections on displacements and response spectra for several recordings of the 1999 Chi-Chi, Taiwan, earthquake

    USGS Publications Warehouse

    Boore, D.M.

    2001-01-01

    Displacements derived from many of the accelerogram recordings of the 1999 Chi-Chi, Taiwan, earthquake show drifts when only a simple baseline derived from the pre-event portion of the record is removed from the records. The appearance of the velocity and displacement records suggests that changes in the zero level of the acceleration are responsible for these drifts. The source of the shifts in zero level is unknown, but in at least one case it is almost certainly due to tilting of the ground. This article illustrates the effect on the ground velocity, ground displacement, and response spectra of several schemes for accounting for these baseline shifts. A wide range of final displacements can be obtained for various choices of baseline correction, and comparison with nearby GPS stations (none of which are colocated with the accelerograph stations) does not help in choosing the appropriate baseline correction. The results suggest that final displacements estimated from the records should be used with caution. The most important conclusion for earthquake engineering purposes, however, is that the response spectra for periods less than about 20 sec are usually unaffected by the baseline correction. Although limited to the analysis of only a small number of recordings, the results may have more general significance both for the many other recordings of this earthquake and for data that will be obtained in the future from similar high-quality accelerograph networks now being installed or soon to be installed in many parts of the world.
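
    To make the notion of a baseline-correction scheme concrete, the sketch below applies one simple scheme in Python: remove the pre-event mean, integrate to velocity, fit a straight line to the late part of the velocity trace and remove the implied constant acceleration offset. It is illustrative only and is not one of the specific schemes compared in the article; the sample indices are hypothetical.

        import numpy as np

        def simple_baseline_correction(acc, dt, pre_event_samples, fit_start):
            """One simple baseline scheme (illustrative, not the article's exact procedures):
            1) remove the pre-event mean from the acceleration,
            2) integrate to velocity,
            3) fit a straight line to velocity beyond `fit_start` and subtract its slope
               (a constant acceleration offset) from the acceleration in that segment,
            4) re-integrate to velocity and displacement."""
            acc = np.asarray(acc, dtype=float) - np.mean(acc[:pre_event_samples])
            vel = np.cumsum(acc) * dt
            t = np.arange(len(acc)) * dt
            slope, intercept = np.polyfit(t[fit_start:], vel[fit_start:], 1)
            acc_corr = acc.copy()
            acc_corr[fit_start:] -= slope          # slope of velocity = residual acceleration offset
            vel_corr = np.cumsum(acc_corr) * dt
            disp_corr = np.cumsum(vel_corr) * dt
            return acc_corr, vel_corr, disp_corr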

  15. Improved Phase Corrections for Transoceanic Tsunami Data in Spatial and Temporal Source Estimation: Application to the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Ho, Tung-Cheng; Satake, Kenji; Watada, Shingo

    2017-12-01

    Systematic travel time delays of up to 15 min relative to the linear long waves for transoceanic tsunamis have been reported. A phase correction method, which converts the linear long waves into dispersive waves, was previously proposed to consider seawater compressibility, the elasticity of the Earth, and gravitational potential change associated with tsunami motion. In the present study, we improved this method by incorporating the effects of ocean density stratification, actual tsunami raypath, and actual bathymetry. The previously considered effects amounted to approximately 74% of the correction of the travel time delay, while the ocean density stratification, actual raypath, and actual bathymetry contributed approximately 13%, 4%, and 9% on average, respectively. The improved phase correction method accounted for almost all the travel time delay at far-field stations. We performed single and multiple time window inversions for the 2011 Tohoku tsunami using the far-field data (>3 h travel time) to investigate the initial sea surface displacement. The inversion result from only far-field data was similar to but smoother than that from near-field data and all stations, including a large sea surface rise increasing toward the trench followed by a migration northward along the trench. For the forward simulation, our results showed good agreement between the observed and computed waveforms at both near-field and far-field tsunami gauges, as well as with satellite altimeter data. The present study demonstrates that the improved method provides a more accurate estimate for the waveform inversion and forward prediction of far-field data.

  16. Amplification Factors for Spectral Acceleration Using Borehole Seismic Array in Taiwan

    NASA Astrophysics Data System (ADS)

    Lai, T. S.; Yih-Min, W.; Chao, W. A.; Chang, C. H.

    2017-12-01

    In order to reduce surface noise and obtain high-quality seismic recordings, 54 borehole seismic arrays had been installed in Taiwan by the Central Weather Bureau (CWB) by the end of 2016. Each array includes two force-balance accelerometers, one at the surface and the other inside the borehole, as well as one broadband seismometer inside the borehole. The downhole instruments are placed at depths between 120 and 400 m. The background noise level is lower at the borehole stations, but the amplitudes recorded by the borehole sensors are smaller than those at surface stations for the same earthquake because of the different geological conditions. As a result, earthquake magnitudes estimated from borehole stations are smaller than those from surface stations, and so far CWB uses only the surface stations in magnitude determination. In this study, we investigate the site effects between the surface and downhole sensors of the borehole seismic arrays. Using the spectral ratio derived by the two-station spectral method as the transfer function, we simulate the waveforms that would be recorded at the surface from the borehole recordings. In the future, through the transfer function, the borehole stations can be included in the estimation of earthquake magnitude, and the resulting amplification factors can provide information on near-surface site effects for ground motion simulation applications.
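
    A minimal sketch of the two-station spectral-ratio idea is given below, assuming pairs of co-recorded, equal-length surface and borehole traces: the empirical transfer function is the event-averaged ratio of amplitude spectra, which can then be applied to a borehole record to approximate the surface recording. Function and variable names are ours, not the study's.

        import numpy as np

        def spectral_ratio(surface_records, borehole_records, dt):
            """Average surface/borehole amplitude spectral ratio over co-recorded events.
            Inputs are lists of equal-length numpy arrays (one pair per earthquake)."""
            ratios = []
            for surf, bore in zip(surface_records, borehole_records):
                f = np.fft.rfftfreq(len(surf), dt)
                ratios.append(np.abs(np.fft.rfft(surf)) / (np.abs(np.fft.rfft(bore)) + 1e-12))
            return f, np.mean(ratios, axis=0)      # empirical transfer function T(f)

        def simulate_surface(borehole, dt, freqs, transfer):
            """Apply the transfer function to a borehole record to approximate the surface
            recording (amplitude only; the phase is kept from the borehole trace)."""
            spec = np.fft.rfft(borehole)
            t_interp = np.interp(np.fft.rfftfreq(len(borehole), dt), freqs, transfer)
            return np.fft.irfft(spec * t_interp, n=len(borehole))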

  17. VLBI height corrections due to gravitational deformation of antenna structures

    NASA Astrophysics Data System (ADS)

    Sarti, P.; Negusini, M.; Abbondanza, C.; Petrov, L.

    2009-12-01

    From an analysis of regional European VLBI data we evaluate the impact of a VLBI signal path correction model developed to account for gravitational deformations of the antenna structures. The model was derived from a combination of terrestrial surveying methods applied to telescopes at Medicina and Noto in Italy. We find that the model corrections shift the derived height components of these VLBI telescopes' reference points downward by 14.5 and 12.2 mm, respectively. Neither other parameter estimates nor other station positions are affected. Such systematic height errors are much larger than the formal VLBI random errors and imply the possibility of significant VLBI frame scale distortions, of major concern for the International Terrestrial Reference Frame (ITRF) and its applications. This demonstrates the urgent need to investigate gravitational deformations in other VLBI telescopes and eventually correct for them in routine data analysis.

  18. Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2015-12-01

    The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.

  19. Resource sharing on CSMA/CD networks in the presence of noise. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dinschel, Duane Edward

    1987-01-01

    Resource sharing on carrier sense multiple access with collision detection (CSMA/CD) networks can be accomplished by using window-control algorithms for bus contention. The window-control algorithms are designed to grant permission to transmit to the station with the minimum contention parameter. Proper operation of the window-control algorithm requires that all stations sense the same state of the network in each contention slot. Noise causes the state of the network to appear as a collision. False collisions can cause the window-control algorithm to terminate without isolating any stations. A two-phase window-control protocol and an approximate recurrence equation with noise as a parameter are developed to improve the performance of the window-control algorithms in the presence of noise. The results are compared through simulation, with the approximate recurrence equation yielding the best overall performance. Noise is an even bigger problem when it is not detected by all stations. In such cases it is possible for the window boundaries of the contending stations to become out of phase. Consequently, it is possible to isolate a station other than the one with the minimum contention parameter. To guarantee proper isolation of the minimum, a broadcast phase must be added after the termination of the algorithm. The protocol required to correct the window-control algorithm when noise is not detected by all stations is discussed.

  20. Based new WiMax simulation model to investigate QoS with OPNET modeler in scheduling environment

    NASA Astrophysics Data System (ADS)

    Saini, Sanju; Saini, K. K.

    2012-11-01

    WiMAX stands for Worldwide Interoperability for Microwave Access. It is considered a major part of broadband wireless networking and is based on the IEEE 802.16 standard. WiMAX provides innovative fixed as well as mobile platforms for broadband internet access anywhere, anytime, with different transmission modes. This paper presents a WiMAX simulation model designed with OPNET Modeler 14 to measure the delay, load and throughput performance factors. Various scheduling algorithms, such as FIFO, PQ and WFQ, are introduced to compare four types of scheduling service, each with its own QoS needs, using the OPNET Modeler support for WiMAX networks. The results show approximately equal load and throughput, while the delay values vary among the different base stations. The simulation results indicate the correctness and effectiveness of this approach.

  1. Control of the probe influence on the flow field in LP steam turbine

    NASA Astrophysics Data System (ADS)

    Kolovratník, Michal; Yun, Kukchol; Bartoš, Ondřej

    Light extinction probes are usually used for measuring the properties of fine droplets in the wet steam expanding in steam turbines. The paper presents CFD modelling of the extinction probe's influence on the wet steam flow field at the measurement position. The aim is to obtain basic information about the influence of the flow field deviation on the measured data, in other words, about the necessity of correcting the measured data. The basic modelling procedure is described, as well as the assumed simplifications and the factor accounting for the change in steam density in the measuring slot of the probe. The model is based on experimental data obtained during developmental measurements in the 1090 MW steam turbine at the Temelín power station. The experimental measurement was done in cooperation with Doosan Škoda Power s.r.o.

  2. Effects of miso- and mesoscale obstructions on PAM winds obtained during project NIMROD. [Portable Automated Mesonet

    NASA Technical Reports Server (NTRS)

    Fujita, T. T.; Wakimoto, R. M.

    1982-01-01

    Data from 27 PAM (Portable Automated Mesonet) stations, operational as a phase of project NIMROD (Northern Illinois Meteorological Research on Downburst), are presented. It was found that PAM-measured winds are influenced by the mesoscale obstruction of the Chicago metropolitan area, as well as by the misoscale obstruction of identified trees and buildings. The mesoscale obstruction was estimated within the range of near zero to 50%, increasing toward the city limits, while the misoscale obstruction was estimated as being as large as 58% near obstructing trees, which were empirically calculated to cause a wind speed deficit extending 50-80 times their height. Despite a statistical analysis based on one million PAM winds, wind speed and stability transmission factors could not be accurately calculated; thus, in order to calculate the airflow free of obstacles, PAM-measured winds must be corrected.

  3. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.

  4. Accuracy and Availability of Egnos - Results of Observations

    NASA Astrophysics Data System (ADS)

    Felski, Andrzej; Nowak, Aleksander; Woźniak, Tomasz

    2011-01-01

    According to the SBAS concept, the user should receive timely and correct information about system integrity together with corrections to the pseudorange measurements, which leads to better accuracy of coordinates. In theory the whole system is permanently monitored by RIMS stations, so it should be impossible to deliver faulty information to the user. The quality of the system is guaranteed inside the border of the system coverage; however, in the eastern part of Poland lower accuracy and availability of the system are still observed. This was the impulse to start observation and analysis of the real accuracy and availability of the EGNOS service in the context of supporting air operations at local airports and as a supplement to hydrographic operations in the Polish Exclusive Zone. Registrations were conducted at three PANSA stations situated at the airports in Warsaw, Krakow and Rzeszow and at the PNA station in Gdynia. Measurements at the PANSA stations were recorded continuously, month by month, up to the end of September 2011. These stations are based on Septentrio PolaRx2e receivers and have been engaged in the EGNOS Data Collection Network run by EUROCONTROL. The advantage of these registrations is the uniformity of the receivers. Apart from these registrations, additional measurements in Gdynia were made with different receivers, mainly dedicated to sea navigation: CSI Wireless 1, NOVATEL OEMV, Sperry Navistar, Crescent V-100 and R110, as well as Magellan FX420. The main objective of the analyses was the accuracy and availability of the EGNOS service at each point and for different receivers. Accuracy was analyzed separately for each coordinate. Finally, the temporal and spatial correlations of the coordinates, their availability and accuracy were investigated. The findings show that the present accuracy of the EGNOS service is about 1.5 m (95%), but the availability of the service is questionable. The accuracy of the present EGNOS service meets the parameters of APV I and even APV II requirements, as well as any maritime and hydrographic needs. However, introducing this service into practice demands better availability, because gaps in receiving proper information from the system appear too often and last too long at the moment. Additionally, a very random character of availability was noticed, with no correlation of this parameter between the different observation points. Despite correct EGNOS operation, the accuracy of the coordinates is not predictable under local conditions. In the authors' opinion, Local Airport Monitoring should therefore be deployed if EGNOS is to serve local airport operations.

  5. Custom auroral electrojet indices calculated by using MANGO value-added services

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.; Moore, W. B.; King, T. A.

    2009-12-01

    A set of computational routines called MANGO, Magnetogram Analysis for the Network of Geophysical Observatories, is utilized to calculate customized versions of the auroral electrojet indices, AE, AL, and AU. MANGO is part of an effort to enhance data services available to users of the Heliophysics VxOs, specifically for the Virtual Magnetospheric Observatory (VMO). The MANGO value-added service package is composed of a set of IDL routines that decompose ground magnetic field observations to isolate secular, diurnal, and disturbance variations of the magnetic field, station by station. Each MANGO subroutine has been written in modular fashion to allow "plug and play"-style flexibility, and each has been designed to account for failure modes and noisy data so that the programs will run to completion producing as much derived data as possible. The capabilities of the MANGO service package will be demonstrated through their application to the study of auroral electrojet current flow during magnetic substorms. Traditionally, the AE indices are calculated by using data from about twelve ground stations located at northern auroral zone latitudes spread longitudinally around the world. Magnetogram data are corrected for secular variation prior to calculating the standard version of the indices, but the data are not corrected for diurnal variations. A custom version of the AE indices will be created by using the MANGO routines, including a step to subtract diurnal curves from the magnetic field data at each station. The custom AE indices provide more accurate measures of auroral electrojet activity due to isolation of the substorm electrojet magnetic field signature. The improvements in the accuracy of the custom AE indices over the traditional indices are largest during the northern hemisphere summer, when the range of diurnal variation reaches its maximum.
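
    Once the secular and diurnal (quiet-day) curves have been subtracted station by station, the electrojet indices themselves reduce to envelope operations over the station set. The sketch below shows that final step only, assuming a (stations x times) array of corrected H-component disturbances; it is not the MANGO code itself, which is written in IDL.

        import numpy as np

        def auroral_indices(h_disturbance):
            """Compute AU, AL, AE and AO from a (n_stations, n_times) array of H-component
            disturbances, i.e. after secular and diurnal curves have been removed
            station by station, as described above."""
            h = np.asarray(h_disturbance, dtype=float)
            au = np.nanmax(h, axis=0)     # upper envelope: eastward electrojet
            al = np.nanmin(h, axis=0)     # lower envelope: westward electrojet
            ae = au - al
            ao = 0.5 * (au + al)
            return au, al, ae, ao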

  6. Measuring Error Identification and Recovery Skills in Surgical Residents.

    PubMed

    Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L

    2017-02-01

    Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Russian State Time and Earth Rotation Service: Observations, Eop Series, Prediction

    NASA Astrophysics Data System (ADS)

    Kaufman, M.; Pasynok, S.

    2010-01-01

    The Russian State Time, Frequency and Earth Rotation Service provides the official EOP data and time for use in scientific, technical and metrological work in Russia. GLONASS and GPS observations from 30 stations in Russia, together with Russian and worldwide VLBI (35 stations) and SLR (20 stations) observations, are used now. To these three EOP series, the data calculated at two other Russian analysis centers are added: IAA (VLBI, GPS and SLR series) and MCC (SLR). Joint processing of these 7 series is carried out every day (the operational EOP data for the last day and the predicted values for 50 days). The EOP values are refined weekly and the systematic errors of every individual series are corrected. The combined results become accessible on the VNIIFTRI server (ftp.imvp.ru) at approximately 6h UT daily.

  8. The compilation of the instrumental seismic catalogue of Italy: 1975-1984

    NASA Astrophysics Data System (ADS)

    Giardini, D.; Velonà, M. A.; Boschi, E.

    1992-12-01

    We compile a homogeneous and complete catalogue of the seismicity of the Italian region for 1975-1984, the period marking the transition from standard analogue seismometry to the new digital era. The work is developed in three phases: (1) the creation of a uniform digital databank of all seismic station readings, unifying the database available at the Istituto Nazionale di Geofisica with the catalogue of the International Seismological Centre; (2) the preparation of numerical procedures for automatic association of arrival data and for hypocentre location, using arrivals from local, regional and teleseismic stations in a spherical geometry; (3) the introduction of lateral heterogeneity by calibrating regional travel-time curves and station corrections. The first two phases have been completed, providing a new instrumental catalogue obtained using a spherical Earth model; the third phase is presented here in a preliminary stage.

  9. Disaster warning satellite study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The Disaster Warning Satellite System is described. It will provide NOAA with an independent, mass communication system for the purpose of warning the public of impending disaster and issuing bulletins for corrective action to protect lives and property. The system consists of three major segments. The first segment is the network of state or regional offices that communicate with the central ground station; the second segment is the satellite that relays information from ground stations to home receivers; the third segment is composed of the home receivers that receive information from the satellite and provide an audio output to the public. The ground stations required in this system are linked together by two, separate, voice bandwidth communication channels on the Disaster Warning Satellites so that a communications link would be available in the event of disruption of land line service.

  10. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation of diode over-response as a function of the field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and the field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to be strongly dependent on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.
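
    In practice such correction factors are applied by interpolating a tabulated relation against the field width. The sketch below shows the mechanics only; the numbers in the table are placeholders and are not the factors reported for the IBA SFD or PTW Type E diodes, nor the study's fitted relation.

        import numpy as np

        # Placeholder table of field width (mm) vs. diode output correction factor;
        # these values are illustrative only, not published data.
        field_mm = np.array([5.0, 7.5, 10.0, 15.0, 20.0, 30.0])
        k_corr   = np.array([0.95, 0.97, 0.98, 0.99, 1.00, 1.00])

        def corrected_output_ratio(diode_output_ratio, field_width_mm):
            """Multiply a measured diode output ratio by the interpolated correction
            factor to compensate for small-field over-response."""
            k = np.interp(field_width_mm, field_mm, k_corr)
            return diode_output_ratio * k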

  11. Removing the impact of water abstractions on flow duration curves

    NASA Astrophysics Data System (ADS)

    Masoero, Alessandro; Ganora, Daniele; Galeati, Giorgio; Laio, Francesco; Claps, Pierluigi

    2015-04-01

    Changes and interactions between the human system and the water cycle are getting increased attention in the scientific community. Discharge data needed for water resources studies have commonly been collected close to urban or industrial settlements, thus in environments where the interest in surveying was not merely scientific but also socio-economic. Working in non-natural environments, we must take into account human impacts, like those due to water intakes for irrigation or hydropower generation, while assessing the actual water availability and variability in a river. This can become an issue in alpine areas, where hydropower exploitation is heavy and it is common to have a water abstraction upstream of a gauge station. Having a gauge station downstream of a water intake can be useful to survey the environmental flow release and to record the maximum flood values, which should not be affected by the water abstraction. Nevertheless, with this configuration we are unable to properly define the water volumes available in the river, information crucial to assess low flows and investigate drought risk. This situation leads to a substantial difference between observed data (affected by the human impact) and natural data (as they would have been without abstraction). A main issue is how to correct for these impacts and restore the natural streamflow values. The most obvious and reliable solution would be to ask the water users for abstraction data, but these data are hard to collect; usually they are not available, because they are not public or not even recorded by the water exploiters. A solution could be to develop a rainfall-runoff model of the basin upstream of the gauge station, but this approach needs a great number of data and parameters. Working in a regional framework and not on single case studies, our goal is to provide a consistent estimate of the non-impacted statistics of the river (i.e., mean value, L-moments of variation and skewness). We propose a parsimonious method, based on a few easy-access parameters, for correcting the water abstraction impact. The model, based on an exponential form of the river flow duration curve (FDC), allows completely analytical solutions, hence the method can be applied extensively. This is particularly relevant when working on a general outlook on water resources (regional or basin scale), given the high number of water abstractions that should be considered. The correction method developed is based on only two hard data that can be easily found: i) the design maximum discharge of the water intake and ii) the number of days of operation per year. Following the same correction hypothesis, the statistics of the abstracted discharge have also been reconstructed analytically and combined with the statistics of the receiving reach, which can be different from the original one. This information can be useful when assessing water availability in a river network interconnected by derivation channels. The quality of the proposed correction method is demonstrated by its application to a case study in north-west Italy, along a second-order tributary of the Po River. Flow values recorded at the river gauge station were significantly affected by the presence of a 5 MW hydropower plant. Knowing the amount of water abstracted daily by the power plant, we are able to reconstruct, empirically, the natural discharge in the river and compare its main statistics with the ones computed analytically using the proposed correction model. An extremely low difference between the empirically and analytically reconstructed mean discharge and L-moment of variation was found, and the importance of the days-of-operation information was highlighted. The correction proposed in this work gives a correct indication of the characteristics of the non-impacted natural streamflows, especially in alpine regions where the impact of water abstraction is a main issue.
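
    A crude numerical illustration of the naturalisation step is sketched below, under the strong assumption that the intake withdraws its full design discharge on each of its days of operation; the first two sample L-moments (giving the mean and the L-moment of variation l2/l1) are then computed from the reconstructed series. This mirrors the spirit of the method, not its analytical exponential-FDC formulation.

        import numpy as np

        def lmoments(x):
            """First two sample L-moments (l1, l2) via probability-weighted moments."""
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            b0 = x.mean()
            b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
            return b0, 2.0 * b1 - b0

        def naturalise(observed_q, operating_days_mask, q_design):
            """Illustrative reconstruction of natural flows: on the days the intake operates,
            add back the design discharge (assumption: the plant withdraws its full design
            discharge whenever it runs)."""
            q = np.asarray(observed_q, dtype=float).copy()
            q[operating_days_mask] += q_design
            return q

        # observed_q: daily flows at the gauge; mask: True on the intake's days of operation.
        # l1 and l2/l1 of naturalise(observed_q, mask, q_design) approximate the mean and
        # L-moment of variation of the reconstructed natural flow duration curve.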

  12. A calibrated, high-resolution goes satellite solar insolation product for a climatology of Florida evapotranspiration

    USGS Publications Warehouse

    Paech, S.J.; Mecikalski, J.R.; Sumner, D.M.; Pathak, C.S.; Wu, Q.; Islam, S.; Sangoyomi, T.

    2009-01-01

    Estimates of incoming solar radiation (insolation) from Geostationary Operational Environmental Satellite observations have been produced for the state of Florida over a 10-year period (1995-2004). These insolation estimates were developed into well-calibrated half-hourly and daily integrated solar insolation fields over the state at 2 km resolution, in addition to a 2-week running minimum surface albedo product. Model results of the daily integrated insolation were compared with ground-based pyranometers, and as a result, the entire dataset was calibrated. This calibration was accomplished through a three-step process: (1) comparison with ground-based pyranometer measurements on clear (noncloudy) reference days, (2) correcting for a bias related to cloudiness, and (3) deriving a monthly bias correction factor. Precalibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m-2/day (13%). Calibration reduced errors to 1.7 MJ m-2/day (10%), and also removed temporal-related, seasonal-related, and satellite sensor-related biases. The calibrated insolation dataset will subsequently be used by state of Florida Water Management Districts to produce statewide, 2-km resolution maps of estimated daily reference and potential evapotranspiration for water management-related activities. © 2009 American Water Resources Association.
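
    The three-step calibration can be pictured with the following sketch, which reproduces the structure of the procedure (clear-day scaling, a cloudiness-dependent bias fit, and monthly residual-bias removal) but none of the operational coefficients; all argument names are assumptions.

        import numpy as np

        def calibrate_insolation(sat, ground, clear_mask, cloud_index, month):
            """Illustrative three-step calibration of satellite daily insolation against
            pyranometer data (structure only; the operational coefficients differ).
            sat, ground: daily insolation; clear_mask: boolean clear reference days;
            cloud_index: 0 (clear) to 1 (overcast); month: month number (1-12) per day."""
            sat = np.asarray(sat, dtype=float).copy()
            ground = np.asarray(ground, dtype=float)
            cloud_index = np.asarray(cloud_index, dtype=float)
            month = np.asarray(month)
            # 1) scale satellite values to match the ground truth on clear reference days
            sat *= ground[clear_mask].mean() / sat[clear_mask].mean()
            # 2) remove a bias that grows with cloudiness (linear fit of the residual)
            slope, intercept = np.polyfit(cloud_index, sat - ground, 1)
            sat -= slope * cloud_index + intercept
            # 3) remove the remaining monthly mean bias
            for m in range(1, 13):
                sel = month == m
                if sel.any():
                    sat[sel] -= (sat[sel] - ground[sel]).mean()
            return sat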

  13. South Vietnamese Rural Mothers' Knowledge, Attitude, and Practice in Child Health Care

    PubMed Central

    Thac, Dinh; Pedersen, Freddy Karup; Thuong, Tang Chi; Lien, Le Bich; Ngoc Anh, Nguyen Thi; Phuc, Nguyen Ngoc

    2016-01-01

    A study of 600 rural under-five mothers' knowledge, attitude, and practice (KAP) in child care was performed in 4 southern provinces of Vietnam. The mothers were randomly selected and interviewed about sociodemographic factors, health seeking behaviour, and practice of home care of children and neonates. 93.2% of the mothers were literate and well-educated, which has been shown to be important for child health care. 98.5% were married suggesting a stable family, which is also of importance for child health. Only 17.3% had more than 2 children in their family. The mother was the main caretaker in 77.7% of the families. Only 1% would use quacks as their first health contact, but 25.2% would use a private clinic, which therefore eases the burden on the government system. Nearly 69% had given birth in a hospital, 27% in a commune health station, and only 2.7% at home without qualified assistance. 89% were giving exclusive breast feeding at 6 months, much more frequent than in the cities. The majority of the mothers could follow IMCI guideline for home care, although 25.2% did not deal correctly with cough and 38.7% did not deal correctly with diarrhoea. Standard information about Integrated Management of Childhood Illnesses (IMCI) based home care is still needed. PMID:26881233

  14. Reliability of telecommunications systems following a major disaster: survey of secondary and tertiary emergency institutions in Miyagi Prefecture during the acute phase of the 2011 Great East Japan Earthquake.

    PubMed

    Kudo, Daisuke; Furukawa, Hajime; Nakagawa, Atsuhiro; Abe, Yoshiko; Washio, Toshikatsu; Arafune, Tatsuhiko; Sato, Dai; Yamanouchi, Satoshi; Ochi, Sae; Tominaga, Teiji; Kushimoto, Shigeki

    2014-04-01

    Telecommunication systems are important for sharing information among health institutions to successfully provide medical response following disasters. The aim of this study was to clarify the problems associated with telecommunication systems in the acute phase of the Great East Japan Earthquake (March 11, 2011). All 72 of the secondary and tertiary emergency hospitals in Miyagi Prefecture were surveyed to evaluate the telecommunication systems in use during the 2011 Great East Japan Earthquake, including satellite mobile phones, multi-channel access (MCA) wireless systems, mobile phones, Personal Handy-phone Systems (PHS), fixed-line phones, and the Internet. Hospitals were asked whether the telecommunication systems functioned correctly during the first four days after the earthquake, and, if not, to identify the cause of the malfunction. Each telecommunication system was considered to function correctly if the hospital staff could communicate at least once in every three calls. Valid responses were received from 53 hospitals (73.6%). Satellite mobile phones functioned correctly at the highest proportion of the equipped hospitals, 71.4%, even on Day 0. The MCA wireless system functioned correctly at the second highest proportion of the equipped hospitals. The systems functioned correctly at 72.0% on Day 0 and at 64.0% during Day 1 through Day 3. The main cause of malfunction of the MCA wireless systems was damage to the base station or communication lines (66.7%). Ordinary (personal or general communication systems) mobile phones did not function correctly at any hospital until Day 2, and PHS, fixed-line phones, and the Internet did not function correctly at any hospitals in areas that were severely damaged by the tsunami. Even in mildly damaged areas, these systems functioned correctly at <40% of the hospitals during the first three days. The main causes of malfunction were a lack of electricity (mobile phones, 25.6%; the Internet, 54.8%) and damage to the base stations or communication lines (the Internet, 38.1%; mobile phones, 56.4%). Results suggest that satellite mobile phones and MCA wireless systems are relatively reliable and ordinary systems are less reliable in the acute period of a major disaster. It is important to distribute reliable disaster communication equipment to hospitals and plan for situations in which hospital telecommunications systems do not function.

  15. Space Station Human Factors Research Review. Volume 4: Inhouse Advanced Development and Research

    NASA Technical Reports Server (NTRS)

    Tanner, Trieve (Editor); Clearwater, Yvonne A. (Editor); Cohen, Marc M. (Editor)

    1988-01-01

    A variety of human factors studies related to space station design are presented. Subjects include proximity operations and window design, spatial perceptual issues regarding displays, image management, workload research, spatial cognition, virtual interface, fault diagnosis in orbital refueling, and error tolerance and procedure aids.

  16. Fuzzy Control/Space Station automation

    NASA Technical Reports Server (NTRS)

    Gersh, Mark

    1990-01-01

    Viewgraphs on fuzzy control/space station automation are presented. Topics covered include: Space Station Freedom (SSF); SSF evolution; factors pointing to automation & robotics (A&R); astronaut office inputs concerning A&R; flight system automation and ground operations applications; transition definition program; and advanced automation software tools.

  17. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    The profile count method for estimating cell number in sectioned tissue applies a correction factor for double counts (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The study was performed at the Neurotology and House Histological Temporal Bone Laboratory at the University of California, Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for the nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
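
    For comparison, the most widely quoted analytical double-count correction for profile counts (the Abercrombie formula) is

        N_{\mathrm{true}} = N_{\mathrm{counted}} \cdot \frac{T}{T + D}

    where T is the section thickness and D the mean diameter of the count unit, so smaller count units (nucleoli) are penalised less than larger ones (nuclei). The empirically derived factors above additionally fold in the undercount element discussed by the authors, so they are not reproduced by this formula alone.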

  18. Gravity data of Nevada

    USGS Publications Warehouse

    Ponce, David A.

    1997-01-01

    Gravity data for the entire state of Nevada and adjacent parts of California, Utah, and Arizona are available on this CD-ROM. About 80,000 gravity stations were compiled primarily from the National Geophysical Data Center and the U.S. Geological Survey. Gravity data was reduced to the Geodetic Reference System of 1967 and adjusted to the Gravity Standardization Net 1971 gravity datum. Data were processed to complete Bouguer and isostatic gravity anomalies by applying standard gravity corrections including terrain and isostatic corrections. Selected principal fact references and a list of sources for data from the National Geophysical Data Center are included.
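
    As context for the "standard gravity corrections" mentioned above, a simple Bouguer anomaly (terrain and isostatic corrections left out) can be computed with the conventional constants. The sketch below is generic and is not the processing code used for this compilation.

        def simple_bouguer_anomaly(g_obs_mgal, g_theoretical_mgal, elev_m, density_g_cm3=2.67):
            """Simple Bouguer anomaly in mGal.
            Free-air gradient: 0.3086 mGal per metre of elevation.
            Infinite-slab (Bouguer) correction: 2*pi*G*rho*h = 0.04193 * rho * h mGal
            for rho in g/cm^3 and h in metres. Terrain and isostatic terms omitted."""
            free_air = 0.3086 * elev_m
            bouguer_slab = 0.04193 * density_g_cm3 * elev_m
            return g_obs_mgal - g_theoretical_mgal + free_air - bouguer_slab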

  19. Space station architectural elements model study

    NASA Technical Reports Server (NTRS)

    Taylor, T. C.; Spencer, J. S.; Rocha, C. J.; Kahn, E.; Cliffton, E.; Carr, C.

    1987-01-01

    The worksphere, a user controlled computer workstation enclosure, was expanded in scope to an engineering workstation suitable for use on the Space Station as a crewmember desk in orbit. The concept was also explored as a module control station capable of enclosing enough equipment to control the station from each module. The concept has commercial potential for the Space Station and surface workstation applications. The central triangular beam interior configuration was expanded and refined to seven different beam configurations. These included triangular on center, triangular off center, square, hexagonal small, hexagonal medium, hexagonal large and the H beam. Each was explored with some considerations as to the utilities and a suggested evaluation factor methodology was presented. Scale models of each concept were made. The models were helpful in researching the seven beam configurations and determining the negative residual (unused) volume of each configuration. A flexible hardware evaluation factor concept is proposed which could be helpful in evaluating interior space volumes from a human factors point of view. A magnetic version with all the graphics is available from the author or the technical monitor.

  20. Causes of the sharp increase in the time series of surface solar radiation in China between 1990 and 1993

    NASA Astrophysics Data System (ADS)

    Wang, Yawen; Wild, Martin

    2017-02-01

    During 1990-1993, a nation-wide replacement of the instruments measuring surface solar radiation (SSR) and a restructuring of SSR stations took place in China. Meanwhile, a sudden upward jump was noted in published composite time series of observed SSR records in this period. This study clarifies that about 1/3 of the magnitude of the SSR jump in China was accidentally caused by the abandonment/establishment of 51 stations (~39% of the total) during the period of 1990-1993. The remaining 2/3 of the SSR jump was caused by only 22 stations, detected by the methods of the accumulated deviation curve and the Mann-Whitney U test. Out of these 22 stations, about 1/4 of the SSR jump was caused by 6 stations due to natural factors, as similar variations were recorded by sunshine duration. The other 3/4 was caused by the remaining 16 stations as a result of artificial factors such as instrument replacement, changes in the classification or location of stations, or potential operational errors.
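
    The two detection tools named above are straightforward to reproduce for a single station series; the sketch below computes an accumulated-deviation curve and a Mann-Whitney U test around a candidate split year. The split index is a user choice here, whereas the study scans for change points across the record.

        import numpy as np
        from scipy import stats

        def detect_jump(annual_ssr, split_index):
            """Accumulated-deviation curve plus a Mann-Whitney U test comparing the
            sub-series before and after `split_index` (illustrative, single station)."""
            x = np.asarray(annual_ssr, dtype=float)
            accumulated = np.cumsum(x - x.mean())   # an extremum marks a candidate change point
            u, p = stats.mannwhitneyu(x[:split_index], x[split_index:], alternative="two-sided")
            return accumulated, u, p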

  1. Spatial distribution of aerosol hygroscopicity and its effect on PM2.5 retrieval in East China

    NASA Astrophysics Data System (ADS)

    He, Qianshan; Zhou, Guangqiang; Geng, Fuhai; Gao, Wei; Yu, Wei

    2016-03-01

    The hygroscopic properties of aerosol particles have a strong impact on climate as well as on visibility in polluted areas. Understanding the scattering enhancement due to water uptake is of great importance in linking dry aerosol measurements with the relevant ambient measurements, especially for satellite retrievals. In this study, an observation-based algorithm combining meteorological data with particulate matter (PM) measurements was introduced to estimate the spatial distribution of indicators describing the integrated humidity effect in East China, and the main factors affecting the hygroscopicity were explored. Investigation of one year of data indicates that the larger mass extinction efficiency αext values (> 9.0 m2/g) were located in middle and northern Jiangsu Province, which might be caused by particulate organic material (POM) and sulfate aerosol from industry and human activities. The high level of POM in Jiangsu Province might also be responsible for the lower growth coefficient γ value in this region. For the inland junction region of Jiangsu and Anhui provinces, a considerably higher hygroscopic growth than elsewhere in East China might be attributed to more hygroscopic particles, mainly comprised of inorganic salts (e.g., sulfates and nitrates), from several large-scale industrial districts distributed in this region. Validation shows good agreement of the calculated PM2.5 mass concentrations with in situ measurements at most stations, with correlation coefficients of over 0.85, although a few stations perform worse owing to station location or seasonal variation of aerosol properties. This algorithm can be used for more accurate surface-level PM2.5 retrieval from satellite-based aerosol optical depth (AOD) in combination with a vertical correction for the aerosol profile.
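
    A schematic version of the humidity-corrected retrieval is shown below. It assumes a well-mixed boundary layer and the common (1 - RH)^(-γ) form of the scattering enhancement factor, which is consistent with the growth coefficient γ discussed above but is not necessarily the exact parameterisation used in the study.

        def pm25_from_aod(aod, pbl_height_m, rh_percent, alpha_ext_m2_per_g, gamma):
            """Rough surface PM2.5 estimate (ug/m3) from columnar AOD.
            Assumes aerosol is well mixed within the boundary layer and that ambient
            scattering is enhanced by f(RH) = (1 - RH)^(-gamma) relative to dry conditions."""
            f_rh = (1.0 - rh_percent / 100.0) ** (-gamma)
            dry_extinction = aod / pbl_height_m / f_rh        # m^-1 at dry conditions
            return dry_extinction / alpha_ext_m2_per_g * 1e6  # g/m3 -> ug/m3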

  2. H2FIRST Reference Station Design Task: Project Deliverable 2-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Joseph; Terlip, Danny; Ainscough, Chris

    2015-04-20

    This report presents near-term station cost results and discusses cost trends of different station types. It compares various vehicle rollout scenarios and projects realistic near-term station utilization values using the station infrastructure rollout in California as an example. It describes near-term market demands and matches those to cost-effective station concepts. Finally, the report contains detailed designs for five selected stations, which include piping and instrumentation diagrams, bills of materials, and several site-specific layout studies that incorporate the setbacks required by NFPA 2, the National Fire Protection Association Hydrogen Technologies Code. This work identified those setbacks as a significant factor affecting the ability to site a hydrogen station, particularly liquid stations at existing gasoline stations. For all station types, utilization has a large influence on the financial viability of the station.

  3. 75 FR 61219 - Entergy Operations, Inc.; River Bend Station, Unit 1; Environmental Assessment and Finding of No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-04

    ... Emergencies,'' for repair and corrective actions states that two individuals, one Mechanical Maintenance... actions will be taken to ensure basic electrical/I&C tasks can be performed by Mechanical Maintenance personnel. Mechanical Maintenance personnel will receive training in basic electrical and I&C tasks to...

  4. Correct county areas with sidebars for Virginia

    Treesearch

    Joseph M. McCollum; Dale Gormanson; John Coulston

    2009-01-01

    Historically, Forest Inventory and Analysis (FIA) has processed field inventory data at the county level and county estimates of land area were constrained to equal those reported by the Census Bureau. Currently, the Southern Research Station FIA unit processes field inventory data at the survey unit level (groups of counties with similar ecological characteristics)....

  5. Corrections and clarifications.

    PubMed

    1994-07-15

    The News & Comment article "NSF eyes new South Pole station" by Jeffrey Mervis (24 June, p. 1836) mentioned the principal investigators of three of the four teams making up the Center for Astrophysical Research in Antarctica. The fourth is Mark Hereld, senior research associate at the University of Chicago, who is responsible for the South Pole Infrared Explorer telescope.

  6. Biogeochemical sensor performance in the SOCCOM profiling float array

    NASA Astrophysics Data System (ADS)

    Johnson, Kenneth S.; Plant, Joshua N.; Coletti, Luke J.; Jannasch, Hans W.; Sakamoto, Carole M.; Riser, Stephen C.; Swift, Dana D.; Williams, Nancy L.; Boss, Emmanuel; Haëntjens, Nils; Talley, Lynne D.; Sarmiento, Jorge L.

    2017-08-01

    The Southern Ocean Carbon and Climate Observations and Modeling (SOCCOM) program has begun deploying a large array of biogeochemical sensors on profiling floats in the Southern Ocean. As of February 2016, 86 floats have been deployed. Here the focus is on 56 floats with quality-controlled and adjusted data that have been in the water at least 6 months. The floats carry oxygen, nitrate, pH, chlorophyll fluorescence, and optical backscatter sensors. The raw data generated by these sensors can suffer from inaccurate initial calibrations and from sensor drift over time. Procedures to correct the data are defined. The initial accuracy of the adjusted concentrations is assessed by comparing the corrected data to laboratory measurements made on samples collected by a hydrographic cast with a rosette sampler at the float deployment station. The long-term accuracy of the corrected data is compared to the GLODAPv2 data set whenever a float made a profile within 20 km of a GLODAPv2 station. Based on these assessments, the fleet average oxygen data are accurate to 1 ± 1%, nitrate to within 0.5 ± 0.5 µmol kg-1, and pH to 0.005 ± 0.007, where the error limit is 1 standard deviation of the fleet data. The bio-optical measurements of chlorophyll fluorescence and optical backscatter are used to estimate chlorophyll a and particulate organic carbon concentration. The particulate organic carbon concentrations inferred from optical backscatter appear accurate to with 35 mg C m-3 or 20%, whichever is larger. Factors affecting the accuracy of the estimated chlorophyll a concentrations are evaluated.Plain Language SummaryThe ocean science community must move toward greater use of autonomous platforms and sensors if we are to extend our knowledge of the effects of climate driven change within the ocean. Essential to this shift in observing strategies is an understanding of the performance that can be obtained from biogeochemical sensors on platforms deployed for years and the procedures used to process data. This is the subject of the manuscript. We show the performance of oxygen, nitrate, pH, and bio-optical sensors that have been deployed on robotic profiling floats in the Southern Ocean for time periods up to 32 months.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EGUGA..12.4676T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EGUGA..12.4676T"><span>Use of RTIGS data streams for validating the performance of the IGS Ultra-Rapid products</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Thaler, Gottfried; Weber, Robert</p> <p>2010-05-01</p> <p>The IGS (International GNSS Service) Real-Time Working Group (RTIGS) disseminates for several years raw observation data of a globally distributed steady growing station network in real-time via the internet. This observation data can be used for validating the performance of the IGS predicted orbits and clocks (Ultra-Rapid (IGU)). Therefore, based on pre-processed ITRF- station coordinates, clock corrections w.r.t GPS-Time for GPS-satellites and site-receivers as well as satellite orbits are calculated in quasi real-time and compared to the IGU solutions. 

  367. Use of RTIGS data streams for validating the performance of the IGS Ultra-Rapid products

    NASA Astrophysics Data System (ADS)

    Thaler, Gottfried; Weber, Robert

    2010-05-01

    The IGS (International GNSS Service) Real-Time Working Group (RTIGS) has for several years disseminated raw observation data from a globally distributed, steadily growing station network in real time via the internet. These observation data can be used for validating the performance of the IGS predicted orbits and clocks (Ultra-Rapid, IGU). Therefore, based on pre-processed ITRF station coordinates, clock corrections w.r.t. GPS time for GPS satellites and site receivers, as well as satellite orbits, are calculated in quasi real-time and compared to the IGU solutions. The Institute of Geodesy and Geophysics of the Technical University of Vienna has developed the software RTIGU-Control, based on the RTIGS Multicast Receive (RTIGSMR) software provided by Natural Resources Canada (NRCan). Using code-smoothed observations, RTIGU-Control calculates in a first step, by means of a linear Kalman filter and based on the orbit information of the IGUs, real-time clock corrections and clock drifts w.r.t. GPS time for the GPS satellites and stations. A second, extended Kalman filter (kinematic approach) again uses the code-smoothed observations, corrected for the clock corrections of step 1, to calculate the positions and velocities of the satellites. The calculation interval is set to 30 seconds. The results and comparisons to IGU products are displayed online but are also stored as clock-RINEX and SP3 files on the ftp server of the institute, e.g. for validation of the performance of the IGU predicted products. A comparison to the more precise but delayed IGS Rapid products (IGR) also allows the performance of RTIGU-Control to be validated. To carry out these comparisons the MatLab routine RTIGU-Analyse was established. This routine is, for example, able to import and process standard clock-RINEX files from several sources and delivers a variety of comparisons in both graphical and numerical form. Results will become part of this presentation. Another way to analyse the quality and consistency of the RTIGU-Control products is to use them for positioning in post-processing mode. Preliminary results are already available and will also be presented. Further investigations will deal with upgrading RTIGU-Control to become independent of the IGU products. This means initializing the Kalman filter process using the orbits (and also clocks) from the IGU, but using the institute's own established orbits for all further calculation steps. This procedure results in totally independent satellite orbit and clock corrections which could be used, for example, instead of the broadcast ephemerides in a large number of real-time PPP applications.
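
    The two-step filtering described above can be illustrated with a toy clock filter. The sketch below (Python) is a minimal two-state (clock offset and drift) linear Kalman filter, not the RTIGU-Control implementation; the noise settings and observations are illustrative assumptions.

      # Minimal two-state (clock offset, clock drift) linear Kalman filter sketch.
      # All noise settings and measurements are illustrative assumptions.
      import numpy as np

      dt = 30.0                                   # update interval, seconds
      F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-drift state transition
      H = np.array([[1.0, 0.0]])                  # only the clock offset is observed
      Q = np.diag([1e-9, 1e-12])                  # process noise (assumed)
      R = np.array([[1e-6]])                      # measurement noise (assumed)

      def kf_step(x, P, z):
          """One predict/update cycle for a scalar clock-offset measurement z."""
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          y = z - H @ x_pred                      # innovation
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
          x_new = x_pred + K @ y
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      x, P = np.zeros((2, 1)), np.eye(2)          # initial offset/drift and covariance
      for z in (2.1e-5, 2.4e-5, 2.6e-5):          # hypothetical offset observations, s
          x, P = kf_step(x, P, np.array([[z]]))
      print("offset, drift:", x.ravel())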

  368. A Student Assessment Tool for Standardized Patient Simulations (SAT-SPS): Psychometric analysis.

    PubMed

    Castro-Yuste, Cristina; García-Cabanillas, María José; Rodríguez-Cornejo, María Jesús; Carnicer-Fuentes, Concepción; Paloma-Castro, Olga; Moreno-Corral, Luis Javier

    2018-05-01

    The evaluation of the level of clinical competence acquired by the student is a complex process that must meet various requirements to ensure its quality. The psychometric analysis of the data collected by the assessment tools used is a fundamental aspect to guarantee the student's competence level. The aim was to conduct a psychometric analysis of an instrument which assesses clinical competence in nursing students at simulation stations with standardized patients in OSCE-format tests. The construct of clinical competence was operationalized as a set of observable and measurable behaviors, measured by the newly created Student Assessment Tool for Standardized Patient Simulations (SAT-SPS), which was comprised of 27 items. The categories assigned to the items were 'incorrect or not performed' (0), 'acceptable' (1), and 'correct' (2). Participants were 499 nursing students. Data were collected by two independent observers during the assessment of the students' performance at a four-station OSCE with standardized patients. Descriptive statistics were used to summarize the variables. The difficulty levels and floor and ceiling effects were determined for each item. Reliability was analyzed using internal consistency and inter-observer reliability. The validity analysis was performed considering face validity, content and construct validity (through exploratory factor analysis), and criterion validity. Internal reliability and inter-observer reliability were higher than 0.80. The construct validity analysis suggested a three-factor model accounting for 37.1% of the variance. These three factors were named 'Nursing process', 'Communication skills', and 'Safe practice'. A significant correlation was found between the scores obtained and the students' grades in general, as well as with the grades obtained in subjects with clinical content. The assessment tool has proven to be sufficiently reliable and valid for the assessment of the clinical competence of nursing students using standardized patients. This tool has three main components: the nursing process, communication skills, and safety management. Copyright © 2018 Elsevier Ltd. All rights reserved.
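
    Internal consistency of item scores such as the 0/1/2 ratings above is commonly summarized with Cronbach's alpha. The sketch below (Python) applies the standard formula to a hypothetical score matrix; it is not the analysis code used in the study.

      # Minimal sketch: Cronbach's alpha for a students-by-items score matrix.
      # The random 0/1/2 matrix only demonstrates the call; real scores would
      # come from the observers' rating sheets.
      import numpy as np

      def cronbach_alpha(scores):
          """scores: 2-D array, rows = students, columns = items."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                          # number of items
          item_vars = scores.var(axis=0, ddof=1)       # per-item variances
          total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      demo = np.random.default_rng(0).integers(0, 3, size=(499, 27))  # hypothetical
      print(round(cronbach_alpha(demo), 3))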

  369. Remote sensing of exposure to NO2: Satellite versus ground-based measurement in a large urban area

    NASA Astrophysics Data System (ADS)

    Bechle, Matthew J.; Millet, Dylan B.; Marshall, Julian D.

    2013-04-01

    Remote sensing may be a useful tool for exploring spatial variability of air pollution exposure within an urban area. To evaluate the extent to which satellite data from the Ozone Monitoring Instrument (OMI) can resolve urban-scale gradients in ground-level nitrogen dioxide (NO2) within a large urban area, we compared estimates of surface NO2 concentrations derived from OMI measurements and US EPA ambient monitoring stations. OMI, aboard NASA's Aura satellite, provides daily afternoon (˜13:30 local time) measurements of NO2 tropospheric column abundance. We used scaling factors (surface-to-column ratios) to relate satellite column measurements to ground-level concentrations. We compared 4138 sets of paired data for 25 monitoring stations in the South Coast Air Basin of California for all of 2005. OMI measurements include more data gaps than the ground monitors (60% versus 5% of available data, respectively), owing to cloud contamination and imposed limits on pixel size. The spatial correlation between OMI columns and corrected in situ measurements is strong (r = 0.93 for annual average data), indicating that the within-urban spatial signature of surface NO2 is well resolved by the satellite sensor. Satellite-based surface estimates employing scaling factors from an urban model provide a reliable measure (annual mean bias: -13%; seasonal mean bias: <1% [spring] to -22% [fall]) of fine-scale surface NO2. We also find that OMI provides good spatial density in the study region (average area [km2] per measurement: 730 for the satellite sensor vs. 1100 for the monitors). Our findings indicate that satellite observations of NO2 from the OMI sensor provide a reliable measure of spatial variability in ground-level NO2 exposure for a large urban area.
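
    The scaling-factor approach above amounts to multiplying a tropospheric column abundance by a modeled surface-to-column ratio. A minimal sketch (Python) follows; the column value, ratio, and units are illustrative assumptions, not values from the study.

      # Minimal sketch: estimate ground-level NO2 from a column abundance using
      # a modeled surface-to-column ratio. Numbers are hypothetical placeholders.
      def surface_no2(column_molec_cm2, surface_to_column_ratio):
          """Return an estimated surface concentration from a column abundance."""
          return column_molec_cm2 * surface_to_column_ratio

      column = 6.0e15   # tropospheric NO2 column, molecules/cm^2 (hypothetical)
      ratio = 2.5e-15   # surface-to-column ratio, ppb per (molecules/cm^2) (hypothetical)
      print(round(surface_no2(column, ratio), 1), "ppb")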

  370. Metal and physico-chemical variations at a hydroelectric reservoir analyzed by Multivariate Analyses and Artificial Neural Networks: environmental management and policy/decision-making tools.

    PubMed

    Cavalcante, Y L; Hauser-Davis, R A; Saraiva, A C F; Brandão, I L S; Oliveira, T F; Silveira, A M

    2013-01-01

    This paper compared and evaluated seasonal variations in physico-chemical parameters and metals at a hydroelectric power station reservoir by applying Multivariate Analyses and Artificial Neural Networks (ANN) statistical techniques. A Factor Analysis was used to reduce the number of variables: the first factor was composed of the elements Ca, K, Mg and Na, and the second by Chemical Oxygen Demand. The ANN showed 100% correct classifications in training and validation samples. Physico-chemical analyses showed that water pH values were not statistically different between the dry and rainy seasons, while temperature, conductivity, alkalinity, ammonia and DO were higher in the dry period. TSS, hardness and COD, on the other hand, were higher during the rainy season. The statistical analyses showed that Ca, K, Mg and Na are directly connected to the Chemical Oxygen Demand, which indicates a possibility of their input into the reservoir system by domestic sewage and agricultural run-offs. These statistical applications, thus, are also relevant in cases of environmental management and policy decision-making processes, to identify which factors should be further studied and/or modified to recover degraded or contaminated water bodies. Copyright © 2012 Elsevier B.V. All rights reserved.

  371. Space Station Habitability Research

    NASA Technical Reports Server (NTRS)

    Clearwater, Yvonne A.

    1988-01-01

    The purpose and scope of the Habitability Research Group within the Space Human Factors Office at the NASA/Ames Research Center is described. Both near-term and long-term research objectives in the space human factors program pertaining to the U.S. manned Space Station are introduced. The concept of habitability and its relevancy to the U.S. space program is defined within a historical context. The relationship of habitability research to the optimization of environmental and operational determinants of productivity is discussed. Ongoing habitability research efforts pertaining to living and working on the Space Station are described.

  372. Space Station habitability research

    NASA Technical Reports Server (NTRS)

    Clearwater, Y. A.

    1986-01-01

    The purpose and scope of the Habitability Research Group within the Space Human Factors Office at the NASA/Ames Research Center is described. Both near-term and long-term research objectives in the space human factors program pertaining to the U.S. manned Space Station are introduced. The concept of habitability and its relevancy to the U.S. space program is defined within a historical context. The relationship of habitability research to the optimization of environmental and operational determinants of productivity is discussed. Ongoing habitability research efforts pertaining to living and working on the Space Station are described.

  373. Space Station habitability research.

    PubMed

    Clearwater, Y A

    1988-02-01

    The purpose and scope of the Habitability Research Group within the Space Human Factors Office at the NASA/Ames Research Center is described. Both near-term and long-term research objectives in the space human factors program pertaining to the U.S. manned Space Station are introduced. The concept of habitability and its relevancy to the U.S. space program is defined within a historical context. The relationship of habitability research to the optimization of environmental and operational determinants of productivity is discussed. Ongoing habitability research efforts pertaining to living and working on the Space Station are described.

  374. Global Application of TaiWan Ionospheric Model to Single-Frequency GPS Positioning

    NASA Astrophysics Data System (ADS)

    Macalalad, E.; Tsai, L. C.; Wu, J.

    2012-04-01

    Ionospheric delay is one of the major sources of error in GPS positioning and navigation. This error in both pseudorange and phase ranges varies depending on the location of observation, local time, season, solar cycle and geomagnetic activity. For single-frequency receivers, this delay is usually removed using ionospheric models. Two of them are the Klobuchar, or broadcast, model and the global ionosphere map (GIM) provided by the International GNSS Service (IGS).
    In this paper, a three-dimensional ionospheric electron (ne) density model derived from FormoSat3/COSMIC GPS radio occultation measurements, called the TaiWan Ionosphere Model (TWIM), is used. It was used to calculate the slant total electron content (STEC) between the receiver and GPS satellites to correct single-frequency pseudorange observations. The corrected pseudorange for every epoch was used to determine a more accurate position of the receiver. Observations were made on July 2, 2011 (Kp index = 0-2) at five randomly selected sites across the globe, four of which are IGS stations (station IDs: cnmr, coso, irkj and morp), while the other is a low-cost single-frequency receiver located in Chungli City, Taiwan (ID: isls). It was illustrated that TEC maps generated using TWIM exhibited a detailed structure of the ionosphere, whereas Klobuchar and GIM only provided the basic diurnal and geographic features of the ionosphere. Also, it was shown that for single-frequency static point positioning TWIM provides more accurate and more precise positioning than the Klobuchar and GIM models for all stations. The average percent error of the corrections made by Klobuchar, GIM and TWIM in DRMS are 3.88%, 0.78% and 17.45%, respectively, while the average percent error in VRMS for Klobuchar, GIM and TWIM are 53.55%, 62.09% and 66.02%, respectively. This shows the capability of TWIM to provide a good global 3-dimensional ionospheric model.
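
    The pseudorange correction described above can be sketched with the standard first-order ionospheric group delay, 40.3 · STEC / f². The example below (Python) uses hypothetical STEC and pseudorange values and is not the TWIM processing itself.

      # Minimal sketch: remove the first-order ionospheric group delay from a
      # measured pseudorange. STEC and the raw pseudorange are hypothetical.
      def iono_delay_m(stec_el_per_m2, freq_hz):
          """First-order ionospheric group delay, in meters."""
          return 40.3 * stec_el_per_m2 / freq_hz ** 2

      F_L1 = 1575.42e6                  # GPS L1 frequency, Hz
      stec = 30.0 * 1e16                # 30 TECU expressed in electrons/m^2
      raw_pseudorange = 21_500_000.0    # meters (hypothetical)
      corrected = raw_pseudorange - iono_delay_m(stec, F_L1)
      print(round(raw_pseudorange - corrected, 2), "m of ionospheric delay removed")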

  375. Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections

    NASA Astrophysics Data System (ADS)

    Steigenberger, P.; Schmid, R.; Rothacher, M.

    2004-12-01

    The current IGS processing strategy disregards satellite antenna phase center variations (pcvs) depending on the nadir angle and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna pcvs. Moreover, studies of several groups have shown that the offsets are not homogeneous within a satellite block. Manufacturer specifications seem to confirm this assumption. In order to get the best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific pcvs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets demonstrates amplitudes on the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with regard to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the sun. The time series of the pcvs mainly point at the correlation with the global terrestrial scale. Solutions with relative and absolute phase center corrections, and with block- and satellite-specific satellite antenna corrections, demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the geocenter position or the tropospheric delays. Thus, deeper insight into the so-called 'Bermuda triangle' of several highly correlated parameters is given.

  376. Monitoring of stability of ASG-EUPOS network coordinates

    NASA Astrophysics Data System (ADS)

    Figurski, M.; Szafranek, K.; Wrona, M.

    2009-04-01

    ASG-EUPOS (Active Geodetic Network - European Position Determination System) is the national system of precise satellite positioning in Poland, which increases the density of regional and global GNSS networks and is widely used by public administration, national institutions, entrepreneurs and citizens (especially surveyors). In the near future ASG-EUPOS is to take the role of the main national network. Control of the proper activity of stations and of the realization of ETRS'89 is a necessity. Users of the system need to be sure that observation quality and coordinate accuracy are high enough. Coordinates of IGS (International GNSS Service) and EPN (European Permanent Network) stations are precisely determined and any changes are monitored all the time. Observations are verified before they are archived in regional and global databases. The same applies to ASG-EUPOS. This paper concerns the standardization of GNSS observations from different stations (uniform adjustment), examination of the correctness of solutions according to IGS and EPN standards, and the stability of solutions and site activity.

  377. Influence of seasonal environmental variables on the distribution of presumptive fecal Coliforms around an Antarctic research station.

    PubMed

    Hughes, Kevin A

    2003-08-01

    Factors affecting fecal microorganism survival and distribution in the Antarctic marine environment include solar radiation, water salinity, temperature, sea ice conditions, and fecal input by humans and local wildlife populations. This study assessed the influence of these factors on the distribution of presumptive fecal coliforms around Rothera Point, Adelaide Island, Antarctic Peninsula during the austral summer and winter of February 1999 to September 1999. Each factor had a different degree of influence depending on the time of year. In summer (February), although the station population was high, presumptive fecal coliform concentrations were low, probably due to the biologically damaging effects of solar radiation.
    However, summer algal blooms reduced penetration of solar radiation into the water column. By early winter (April), fecal coliform concentrations were high, due to increased fecal input by migrant wildlife, while solar radiation doses were low. By late winter (September), fecal coliform concentrations were high near the station sewage outfall, as sea ice formation limited solar radiation penetration into the sea and prevented wind-driven water circulation near the outfall. During this study, environmental factors masked the effect of station population numbers on sewage plume size. If sewage production increases throughout the Antarctic, environmental factors may become less significant and effective sewage waste management will become increasingly important. These findings highlight the need for year-round monitoring of fecal coliform distribution in Antarctic waters near research stations to produce realistic evaluations of sewage pollution persistence and dispersal.

  378. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta's Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated by using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB, for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) and CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3, respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data.
    The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw / (CFIVB * Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
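
    A worked example of the corrected correspondence factor may help. The sketch below (Python) applies CFrev = Rapp,raw / (CFIVB * Rcal,raw); the raw well-chamber readings are hypothetical, and only the Leipzig H1 factor of 1.209 is taken from the abstract above.

      # Minimal sketch of the corrected correspondence factor described above.
      # Raw readings are hypothetical; CFIVB for the Leipzig H1 is from the abstract.
      def corrected_cfrev(r_app_raw, r_cal_raw, cf_ivb):
          """Correspondence factor referenced back to the SI HDR 1000 Plus data."""
          return r_app_raw / (cf_ivb * r_cal_raw)

      r_cal_raw = 5.00e-9      # reading, source calibration setup (hypothetical)
      r_app_raw = 1.45e-10     # reading with applicator mounted (hypothetical)
      cf_ivb_h1 = 1.209        # Leipzig H1 correction factor (from the abstract)
      print(round(corrected_cfrev(r_app_raw, r_cal_raw, cf_ivb_h1), 4))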

  379. Locating seismicity on the Arctic plate boundary using multiple-event techniques and empirical signal processing

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Harris, D. B.; Dahl-Jensen, T.; Kværna, T.; Larsen, T. B.; Paulsen, B.; Voss, P. H.

    2017-12-01

    The oceanic boundary separating the Eurasian and North American plates between 70° and 84° north hosts large earthquakes which are well recorded teleseismically, and many more seismic events at far lower magnitudes that are well recorded only at regional distances. Existing seismic bulletins have considerable spread and bias resulting from limited station coverage and deficiencies in the velocity models applied. This is particularly acute for the lower magnitude events which may only be constrained by a small number of Pn and Sn arrivals. Over the past two decades there has been a significant improvement in the seismic network in the Arctic: a difficult region to instrument due to the harsh climate, a sparsity of accessible sites (particularly at significant distances from the sea), and the expense and difficult logistics of deploying and maintaining stations. New deployments and upgrades to stations on Greenland, Svalbard, Jan Mayen, Hopen, and Bjørnøya have resulted in a sparse but stable regional seismic network which results in events down to magnitudes below 3 generating high-quality Pn and Sn signals on multiple stations. A catalogue of several hundred events in the region since 1998 has been generated using many new phase readings on stations on both sides of the spreading ridge in addition to teleseismic P phases. A Bayesian multiple-event relocation has resulted in a significant reduction in the spread of hypocentre estimates for both large and small events. Whereas single-event location algorithms minimize vectors of time residuals on an event-by-event basis, the Bayesloc program finds a joint probability distribution of origins, hypocentres, and corrections to traveltime predictions for large numbers of events. The solutions obtained favour those event hypotheses resulting in time residuals which are most consistent over a given source region. The relocations have been performed with different 1-D velocity models applicable to the Arctic region, and hypocentres obtained using Bayesloc have been shown to be relatively insensitive to the specified velocity structure in the crust and upper mantle, even for events only constrained by regional phases. The patterns of time residuals resulting from the multiple-event location procedure provide well-constrained time correction surfaces for single-event location estimates and are sufficiently stable to identify a number of picking errors and instrumental timing anomalies. This allows for subsequent quality control of the input data and further improvement in the location estimates. We use the relocated events to form narrowband empirical steering vectors for wave fronts arriving at the SPITS array on Svalbard for azimuth and apparent velocity estimation. We demonstrate that empirical matched field parameter estimation determined by source region is a viable supplement to plane-wave f-k analysis, mitigating bias and obviating the need for Slowness and Azimuth Station Corrections. A database of reference events and phase arrivals is provided to facilitate further refinement of event locations and the construction of empirical signal detectors.

  380. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically, explosions have been discriminated from natural earthquakes using regional amplitude ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining if there is any relationship between the observed P/S and the point-source variability revealed by longer period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions.
    We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
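
    The P/S amplitude-ratio discriminant referred to above can be sketched in a few lines. The example below (Python) is illustrative only: the amplitudes, frequency band and threshold are assumed, and it omits the MDAC path, magnitude and site corrections that the abstract is actually about.

      # Minimal sketch of a band-limited P/S amplitude-ratio discriminant:
      # events with a high P/S ratio look more explosion-like. All numbers
      # below are hypothetical placeholders.
      import math

      def ps_ratio_db(p_amplitude, s_amplitude):
          """Log-scaled P/S amplitude ratio, in dB, for one station and band."""
          return 20.0 * math.log10(p_amplitude / s_amplitude)

      events = {
          "event_a": (3.2e-7, 1.1e-7),    # (Pn, Sn) amplitudes in a 6-8 Hz band
          "event_b": (1.4e-7, 2.9e-7),
      }
      threshold_db = 3.0                   # assumed separation line
      for name, (p, s) in events.items():
          ratio = ps_ratio_db(p, s)
          label = "explosion-like" if ratio > threshold_db else "earthquake-like"
          print(name, round(ratio, 1), "dB ->", label)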

  381. Improved water-level forecasting for the Northwest European Shelf and North Sea through direct modelling of tide, surge and non-linear interaction

    NASA Astrophysics Data System (ADS)

    Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman

    2013-07-01

    In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g. because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both the tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new-generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, straightforward direct model forecasting of total water levels is better. The methodology applied to improve both the tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf-wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.

  382. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown if bias corrections to model output will be valid in a future climate.
    Future work is warranted to identify the causes of (and remove) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
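
    The bias correction mentioned above is, in its simplest form, a mean-matching adjustment. The sketch below (Python) shows a multiplicative correction for precipitation and an additive one for temperature on hypothetical series; it is not the scheme used in the study.

      # Minimal sketch: mean-matching bias correction of downscaled output
      # against station observations. The short series are hypothetical.
      import numpy as np

      def bias_correct_precip(model, obs):
          """Multiplicative correction preserving the model's day-to-day pattern."""
          return model * (obs.mean() / model.mean())

      def bias_correct_temp(model, obs):
          """Additive correction of the mean temperature bias."""
          return model + (obs.mean() - model.mean())

      model_p = np.array([1.0, 0.0, 4.0, 2.0])    # mm/day, downscaled output
      obs_p = np.array([2.0, 0.0, 6.0, 4.0])      # mm/day, station data
      model_t = np.array([-3.0, -1.0, 2.0, 4.0])  # deg C, downscaled output
      obs_t = np.array([-1.5, 0.5, 3.5, 5.5])     # deg C, station data
      print(bias_correct_precip(model_p, obs_p))
      print(bias_correct_temp(model_t, obs_t))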

  383. Regional and seasonal estimates of fractional storm coverage based on station precipitation observations

    NASA Technical Reports Server (NTRS)

    Gong, Gavin; Entekhabi, Dara; Salvucci, Guido D.

    1994-01-01

    Simulated climates using numerical atmospheric general circulation models (GCMs) have been shown to be highly sensitive to the fraction of GCM grid area assumed to be wetted during rain events. The model hydrologic cycle and land-surface water and energy balance are influenced by the parameter bar-kappa, which is the dimensionless fractional wetted area for GCM grids. Hourly precipitation records for over 1700 precipitation stations within the contiguous United States are used to obtain observation-based estimates of fractional wetting that exhibit regional and seasonal variations. The spatial parameter bar-kappa is estimated from the temporal raingauge data using conditional probability relations. Monthly bar-kappa values are estimated for rectangular grid areas over the contiguous United States as defined by the Goddard Institute for Space Studies 4 deg x 5 deg GCM. A bias in the estimates is evident due to the unavoidably sparse raingauge network density, which causes some storms to go undetected by the network. This bias is corrected by deriving the probability of a storm escaping detection by the network. A Monte Carlo simulation study is also conducted that consists of synthetically generated storm arrivals over an artificial grid area. It is used to confirm the bar-kappa estimation procedure and to test the nature of the bias and its correction. These monthly fractional wetting estimates, based on the analysis of station precipitation data, provide an observational basis for assigning the influential parameter bar-kappa in GCM land-surface hydrology parameterizations.

  384. Investigation of Performance of Axial-Flow Compressor of XT-46 Turbine-Propeller Engine. I - Preliminary Investigation at 50-, 70-, and 100-Percent Design Equivalent Speed

    NASA Technical Reports Server (NTRS)

    Creagh, John W.R.; Sandercrock, Donald M.

    1950-01-01

    An investigation is being conducted to determine the performance of the 12-stage axial-flow compressor of the XT-46 turbine-propeller engine. This compressor was designed to produce a pressure ratio of 9 at an adiabatic efficiency of 0.86. The design pressure ratios per stage were considerably greater than any employed in current aircraft gas-turbine engines using this type of compressor. The compressor performance was evaluated at two stations. The station near the entrance section of the combustors indicated a peak pressure ratio of 6.3 at an adiabatic efficiency of 0.63 for a corrected weight flow of 23.1 pounds per second. The other, located one blade-chord downstream of the last stator row, indicated a peak pressure ratio of 6.97 at an adiabatic efficiency of 0.81 for a corrected weight flow of 30.4 pounds per second. The difference in performance obtained at the two stations is attributed to shock waves in the vicinity of the last stator row. These shock waves and the accompanying flow choking, together with interstage circulatory flows, shift the compressor operating curves into the region where surge would normally occur. The inability of the compressor to meet the design pressure ratio is probably due to boundary-layer buildup in the last stages, which causes axial velocities greater than design values that, in turn, adversely affect the angles of attack and turning angles in these blade rows.

  385. Time synchronization via lunar radar.

    NASA Technical Reports Server (NTRS)

    Higa, W. H.

    1972-01-01

    The advent of round-trip radar measurements has permitted the determination of the ranges to the nearby planets with greater precision than was previously possible. When the distances to the planets are known with high precision, the propagation delay for electromagnetic waves reflected by the planets may be calculated and used to synchronize remotely located clocks. Details basic to the operation of a lunar radar indicate a capability for clock synchronization to plus or minus 20 microsec. One of the design goals for this system was to achieve a simple semiautomatic receiver for remotely located tracking stations. The lunar radar system is in operational use for deep space tracking at the Jet Propulsion Laboratory and synchronizes five world-wide tracking stations with a master clock at Goldstone, Calif.
    Computers are programmed to correct the Goldstone transmissions for transit time delay and Doppler shifts so as to be received on time at the tracking stations; this dictates that only one station can be synchronized at a given time period and that the moon must be simultaneously visible to both the transmitter and receiver for a minimum time of 10 min.

  386. Object tracking with robotic total stations: Current technologies and improvements based on image data

    NASA Astrophysics Data System (ADS)

    Ehrhart, Matthias; Lienhart, Werner

    2017-09-01

    The importance of automated prism tracking is increasingly driven by the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search a prism, to precisely aim at a prism, and to identify whether the correct prism is tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera which is available for many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach for the coarse target search is 4 to 10 times faster than conventional approaches.

  387. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  388. FluxSuite: a New Scientific Tool for Advanced Network Management and Cross-Sharing of Next-Generation Flux Stations

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Johnson, D.; Velgersdyk, M.; Beaty, K.; Forgione, A.; Begashaw, I.; Allyn, D.

    2015-12-01

    Significant increases in data generation and computing power in recent years have greatly improved spatial and temporal flux data coverage on multiple scales, from a single station to continental flux networks. At the same time, operating budgets for flux teams and station infrastructure are getting ever more difficult to acquire and sustain. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are needed to effectively and efficiently handle the entire process. This would help maximize time dedicated to answering research questions, and minimize time and expenses spent on data processing, quality control and station management. Cross-sharing the stations with external institutions may also help leverage available funding, increase scientific collaboration, and promote data analyses and publications. FluxSuite, a new advanced tool combining hardware, software and web-service, was developed to address these specific demands. It automates key stages of the flux workflow, minimizes day-to-day site management, and modernizes the handling of data flows:

    - Each next-generation station measures all parameters needed for flux computations.
    - The field microcomputer calculates final fully corrected flux rates in real time, including computation-intensive Fourier transforms, spectra, co-spectra, multiple rotations, stationarity, footprint, etc.
    - Final fluxes, radiation, weather and soil data are merged into a single quality-controlled file.
    - Multiple flux stations are linked into an automated time-synchronized network.
    - The flux network manager, or PI, can see all stations in real time, including fluxes, supporting data, automated reports, and email alerts.
    - The PI can assign rights, and allow or restrict access to stations and data: selected stations can be shared via rights-managed access internally or with external institutions.
    - Researchers without stations could form "virtual networks" for specific projects by collaborating with PIs from different actual networks.

    This presentation provides detailed examples of FluxSuite currently utilized by two large flux networks in China (National Academy of Sciences & Agricultural Academy of Sciences), and smaller networks with stations in the USA, Germany, Ireland, Malaysia and other locations around the globe.

  389. Validation of crowdsourced automatic rain gauge measurements in Amsterdam

    NASA Astrophysics Data System (ADS)

    de Vos, Lotte; Leijnse, Hidde; Overeem, Aart; Uijlenhoet, Remko

    2016-04-01

    The increasing number of privately owned weather stations, and the facilitating role of the internet in making these data publicly available, have led to several online platforms that collect and visualize crowdsourced weather data. This has resulted in ever-increasing freely available datasets of weather measurements generated by amateur weather enthusiasts. Because of the lack of quality control and the frequent absence of metadata, these measurements are often considered unreliable.
    Given the often large variability of weather variables in space and time, and the generally low number of official weather stations, this growing quantity of crowdsourced data may become an important additional source of information. Amateur weather observations have become more frequent over the past decade due to weather stations becoming more user-friendly and affordable. The variables measured by these weather stations are temperature, pressure and dew point, and in some cases wind and rainfall. Meteorological data from crowdsourced automatic weather stations in cities have primarily been used to examine the urban heat island effect. Thus far, these studies have focused on the comparison of the crowdsourced station temperature measurements with a nearby WMO-standard weather station, which is often located in a rural area or the outskirts of a city, generally not being representative of the city center. Instead of temperature, the rainfall measurements by the stations are examined here. This research focuses on the combined ability of a large number of privately owned weather stations in an urban setting to correctly monitor rainfall. A set of 64 automatic weather stations distributed over Amsterdam (The Netherlands) that have at least 3 months of precipitation measurements during one year is evaluated. Precipitation measurements from stations are compared to a merged radar-gauge precipitation product. Disregarding sudden jumps in station-measured precipitation, the accumulated rainfall over time at most stations showed an underestimation of rainfall compared to the accumulated values found in the corresponding radar pixel of the reference. Special consideration is given to the identification of faulty measurements without the need to obtain additional metadata, such as setup and surroundings. This validation will show the potential of crowdsourced automatic weather stations for future urban rainfall monitoring.
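
    The accumulation comparison described above reduces to comparing each station's rainfall total with the total in the corresponding radar pixel. A minimal sketch (Python) follows; the rainfall series are hypothetical placeholders.

      # Minimal sketch: relative bias of a station's accumulated rainfall against
      # the matching radar-gauge reference pixel. Series are hypothetical.
      import numpy as np

      def relative_bias(station_mm, radar_mm):
          """(station total - radar total) / radar total; negative = underestimate."""
          return (station_mm.sum() - radar_mm.sum()) / radar_mm.sum()

      station = np.array([0.0, 1.2, 3.4, 0.0, 5.1])   # mm per interval
      radar = np.array([0.1, 1.5, 4.0, 0.2, 5.6])     # reference product, same intervals
      print(f"relative bias: {relative_bias(station, radar):+.1%}")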

  390. Small Field Correction Factors for the microDiamond Detector in the Gamma Knife Model C Derived Using Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.
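
    The Alfonso-style correction referred to above scales a measured reading ratio by a factor k so that it matches the Monte Carlo dose ratio. The sketch below (Python) is illustrative: the dose and reading ratios are hypothetical, and only the roughly 3.2% over-response (k ≈ 0.969) for the 4 mm helmet comes from the abstract above.

      # Minimal sketch of an output correction factor in the spirit of the
      # Alfonso formalism: k = (MC dose ratio) / (measured reading ratio).
      # Ratios below are hypothetical; only the ~3.2% over-response is from the abstract.
      def k_correction(dose_ratio_mc, reading_ratio_meas):
          """Correction factor applied to the measured output factor."""
          return dose_ratio_mc / reading_ratio_meas

      dose_ratio_4mm = 0.805      # MC dose ratio, 4 mm vs 18 mm helmet (hypothetical)
      reading_ratio_4mm = 0.831   # microDiamond reading ratio (hypothetical, ~3.2% high)
      k_4mm = k_correction(dose_ratio_4mm, reading_ratio_4mm)
      corrected_output_factor = reading_ratio_4mm * k_4mm
      print(round(k_4mm, 3), round(corrected_output_factor, 3))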
    The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in its fast zone: combined high-enriched (90%) and medium-enriched (36%) uranium fuel, medium-enriched (36%) fuel only, or low-enriched (21%) fuel.

Determinants of Rotavirus Transmission: A Lag Nonlinear Time Series Analysis

    PubMed

    van Gaalen, Rolina D; van de Kassteele, Jan; Hahné, Susan J M; Bruijning-Verhagen, Patricia; Wallinga, Jacco

    2017-07-01

    Rotavirus is a common viral infection among young children. As in many countries, the infection dynamics of rotavirus in the Netherlands are characterized by an annual winter peak, which was notably low in 2014. Previous studies suggested an association between weather factors and both rotavirus transmission and incidence. From epidemic theory, we know that the proportion of susceptible individuals can affect disease transmission. We investigated how these factors are associated with rotavirus transmission in the Netherlands, and their impact on rotavirus transmission in 2014. We used available data on birth rates and rotavirus laboratory reports to estimate rotavirus transmission and the proportion of individuals susceptible to primary infection. Weather data were directly available from a central meteorological station. We developed an approach for detecting determinants of seasonal rotavirus transmission by assessing nonlinear, delayed associations between each factor and rotavirus transmission. We explored these relationships by applying a distributed lag nonlinear regression model with seasonal terms and corrected for residual serial correlation using autoregressive moving average errors. We inferred the relationship between the different factors and the effective reproduction number from the most parsimonious model with low residual autocorrelation. Higher proportions of susceptible individuals and lower temperatures were associated with increases in rotavirus transmission. For 2014, our findings suggest that relatively mild temperatures combined with a low proportion of susceptible individuals contributed to lower rotavirus transmission in the Netherlands. However, our model, which overestimated the magnitude of the peak, suggested that other factors were likely instrumental in reducing incidence that year.

An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array uses a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes.
    However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system because the virtual inclinometer is incompatible with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without taking magnetic field effects into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and to test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analyses were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors, utilizing beam information from the ViewRay TPS, was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single beam at a time and were measured with the ArcCHECK. The current correction factors were applied using both the new and the current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. The ArcCHECK was found to systematically report doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate for correcting measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

Preliminary Results of P & S-wave Teleseismic Tomography of the Superior Region

    NASA Astrophysics Data System (ADS)

    Bollmann, T. A.; van der Lee, S.; Frederiksen, A. W.

    2013-12-01

    In continental North America, the Midcontinent Rift System (MRS) is the most prominent feature in gravity and magnetic anomaly maps. These anomalies are associated with the large amount of igneous material deposited there around 1.1 Ga. Preliminary evidence from ambient seismic noise analysis of the area indicates that the crustal structure of the MRS has low velocities along its axis. A major question remains as to whether any structural evidence of the MRS' rifting episodes, or of its failure, was retained in the lithospheric mantle beneath it. To this end, we measured teleseismic P and S travel times at EarthScope seismic stations from the SPREE Flexible Array, the Transportable Array, and several US ANSS Backbone stations. These measurements constitute a major resource for upgrading an existing teleseismic, pre-SPREE tomography model of the region (Frederiksen et al., 2013), as well as longer-wavelength regional models such as NA07 (Bedle and Van der Lee, 2007).
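    As an aside, a teleseismic delay time of the kind described here is simply an observed arrival time minus the prediction of a 1-D reference model. A minimal sketch of that bookkeeping (not the authors' processing code; the event depth, distance, and observed travel time below are hypothetical) using ObsPy's TauP interface:

        from obspy.taup import TauPyModel

        # 1-D reference Earth model used to predict the teleseismic P travel time
        model = TauPyModel(model="iasp91")

        def p_delay_time(observed_travel_time_s, source_depth_km, distance_deg):
            """Observed minus predicted P travel time in seconds (positive = late arrival)."""
            arrivals = model.get_travel_times(source_depth_in_km=source_depth_km,
                                              distance_in_degree=distance_deg,
                                              phase_list=["P"])
            predicted = arrivals[0].time  # first-arriving P; assumes P exists at this distance
            return observed_travel_time_s - predicted

        # hypothetical example: event at 35 km depth observed 60 degrees away
        print(p_delay_time(observed_travel_time_s=600.4, source_depth_km=35.0, distance_deg=60.0))

    Station-side delays such as those discussed next are then obtained by averaging or inverting many such residuals.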
    We measured the delay times of about 25 thousand teleseismic P arrivals from over a hundred events with magnitudes of 5.5 and greater, and about half as many S arrivals. Nearly half of these teleseismic events lie to the NNW (Alaska-Japan) and about one third are from Central and South America to the SSE. We inverted the P delays for common station-side delays and common event-side delays. Station-side P delays vary by about 1.5 s over the region, with the Archean Superior Craton recording earlier arrivals than the Proterozoic terrains in Wisconsin. SPREE stations closer to the rift axis show later arrivals than stations farther from the rift, but a correlation with the large rift-related gravity anomaly is not obvious. To examine whether the mantle retains any rift-related structures, for example from melt depletion, we are measuring delay times from additional events recorded and recovered during the spring SPREE service run, applying corrections to the delay times for topography and crustal structure, and will invert the corrected delay times for 3D mantle structure. [Figure: the average P-wave delay time per station, interpolated to a surface; gray lines are accretionary provinces from Whitmeyer & Karlstrom (2007).]

Improved navigation by combining VOR/DME information with air or inertial data

    NASA Technical Reports Server (NTRS)

    Bobick, J. C.; Bryson, A. E., Jr.

    1972-01-01

    The improvement in navigational accuracy obtainable by combining VOR/DME information (from one or two stations) with air data (airspeed and heading) or with data from an inertial navigation system (INS) by means of a maximum-likelihood filter was determined. It was found that adding air data to the information from one VOR/DME station reduces the RMS position error by a factor of about 2, whereas adding inertial data from a low-quality INS reduces the RMS position error by a factor of about 3. The use of information from two VOR/DME stations with air or inertial data yields large improvements in RMS position accuracy over the use of a single VOR/DME station: roughly a factor of 15 to 20 for the air-data case and 25 to 35 for the inertial-data case. As far as position accuracy is concerned, at most one VOR station need be used.
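    As a simplified illustration of why fusing an additional independent data source reduces RMS error (a textbook relation, not the report's actual filter), the maximum-likelihood combination of two independent, unbiased position estimates x_1 and x_2 with variances \sigma_1^2 and \sigma_2^2 is

        \hat{x} = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
        \qquad
        \operatorname{Var}(\hat{x}) = \left(\sigma_1^{-2} + \sigma_2^{-2}\right)^{-1} \le \min\left(\sigma_1^2, \sigma_2^2\right),

    so the combined estimate is never worse than the better of the two sources.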
    When continuously updating an INS with VOR/DME information, the use of a high-quality INS (0.01 deg/hr gyro drift) instead of a low-quality INS (1.0 deg/hr gyro drift) does not substantially improve position accuracy.

Adaptation of Flux-Corrected Transport Algorithms for Modeling Dusty Flows

    DTIC Science & Technology

    1983-12-20

78 FR 60804 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-02

    ... detailed and eddy current inspections to detect cracking of the frame web around the cutout for the doorstop intercostal strap at the aft side of the station (STA) 291.5 frame at stringer 16R, and corrective ... various locations of the STA 277 to STA 291.5 frames and intercostals, including webs, chords, clips, and ...

Use of Source-Region-Station Time Corrections at NTS for Depth Estimation

    DTIC Science & Technology

    1975-07-15

    Tracings of P or PKP arrivals from NTS at RKON, BUL, and PRE; core-phase travel times from Qamar (1973).

Numerical Sedimentation Study of Shoaling on the Ohio River near Mound City, Illinois

    DTIC Science & Technology

    2015-08-01

    ... from Lock and Dam 53 to just south of Cairo, IL. The water surface profile data on the Ohio River were collected using an Applanix POS_MV system ... User Service (OPUS).
    The Applanix software package “POSPAC” was used to generate solution files by applying corrections from the base station data.

40 CFR 49.24 - Federal Implementation Plan Provisions for Navajo Generating Station, Navajo Nation

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... observations, and any corrective actions taken shall be noted in a log. (f) Reporting and recordkeeping ... Environmental Protection Agency, P.O. Box 339, Window Rock, Arizona 86515, (928) 871-7692, (928) 871-7996 ... Protection Agency, by mail to: P.O. Box 339, Window Rock, Arizona 86515, or by facsimile to: (928) 871-7996 ...

KASCADE-Grande energy reconstruction based on the lateral density distribution using the QGSJet-II.04 interaction model

    NASA Astrophysics Data System (ADS)

    Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertania, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.;
    Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.

    2017-06-01

    The charged particle densities obtained from CORSIKA-simulated EAS, using the QGSJet-II.04 hadronic interaction model, are used for primary energy reconstruction. Simulated data are reconstructed using Lateral Energy Correction Functions computed with a new, realistic model of the Grande stations implemented in Geant4.10.

Regional Attenuation at PIDC Stations and the Transportability of the S/P Discriminant

    DTIC Science & Technology

    1998-03-31

    ... amplitude ratios using a variety of frequency bands, subject to the constraint that the P-wave frequency is greater than or equal to the S-wave frequency ... First, we discuss our distance and source corrections and show that we were able to remove these dependencies in the data ...

75 FR 8465 - Airworthiness Directives; McDonnell Douglas Corporation Model MD-90-30 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ... Administration (FAA), DOT. ACTION: Final rule. SUMMARY: We are adopting a new airworthiness directive (AD) for ... frames at stations 883, 902, 924, 943, and 962, left and right sides, and corrective actions if necessary ... Department of Transportation, Docket Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New ...

The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

    In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity (electron fluence) correction factor, P_cav, the displacement correction factor, P_dis, the wall correction factor, P_wall, the stem correction factor, P_stem, the central electrode correction factor, P_cel, and the overall perturbation correction factor, P_Q. The calculated P_dis values for PTW30010/30013 chambers were 0.9967 ± 0.0017, 0.9983 ± 0.0019, and 0.9980 ± 0.0019 for 60Co, 4 MV, and 10 MV photon beams, respectively. The value for the 60Co beam is about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol.
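    For reference, such chamber perturbation factors are conventionally treated as independent and multiplicative (as in TRS-398-style cavity-chamber dosimetry), so the overall factor reported in this record is, schematically,

        P_Q \approx P_\mathrm{cav}\, P_\mathrm{dis}\, P_\mathrm{wall}\, P_\mathrm{stem}\, P_\mathrm{cel},

    with each factor accounting for one departure of the real chamber from an ideal water-filled cavity.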
    The P_dis values show a substantial discrepancy with those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P_wall values ranged from 0.9994 ± 0.0020 to 1.0031 ± 0.0020 for PTW30010 and from 0.9961 ± 0.0018 to 0.9991 ± 0.0017 for PTW30011/30012 over the range 60Co-10 MV. The P_wall values for PTW30011/30012 were around 0.3% lower than those of IAEA TRS-398. Also, the chamber response with and without a 1 mm PMMA waterproofing sleeve agreed within the combined uncertainty. The calculated P_stem values ranged from 0.9945 ± 0.0014 to 0.9965 ± 0.0014; these are not considered in current dosimetry protocols and showed no significant dependence on beam quality. P_cel for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those of IAEA TRS-398.

Measuring market share of petrol stations using conditional probability approach

    NASA Astrophysics Data System (ADS)

    Sharif, Shamshuritawati; Lwee, Xue Yin

    2017-05-01

    Oil and gas production has been a pillar of Malaysia's growth over the past decades and is one of the most strategic economic sectors in the world. Because the oil industry is essential for a country's economic growth, yet is a very risky business in which few ventures become established, a dealer must have some information in hand before setting up a new business plan; understanding the current business situation is an important strategy for avoiding risky ventures. The aim of this study is to deliver a simple but essential way to identify market share based on customers' choice factors, presented so that non-statisticians can easily use it to help their business performance. The results show that the most important factors differ from one station to another: the customers' choice factors for BHPetrol, Caltex, PETRON, PETRONAS and SHELL are site location, service quality, service quality, size of the petrol station, and brand image, respectively.

Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis.
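    As a generic illustration of the resampling machinery involved (not the article's EFA-specific procedure), a percentile bootstrap interval resamples cases with replacement, recomputes the statistic of interest on each resample, and takes empirical quantiles of the resulting distribution. A minimal sketch with a hypothetical statistic:

        import numpy as np

        def bootstrap_percentile_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
            """Percentile bootstrap CI for statistic(data); data is an (n, p) array of cases."""
            rng = np.random.default_rng(seed)
            n = data.shape[0]
            stats = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, size=n)   # resample rows (cases) with replacement
                stats[b] = statistic(data[idx])
            return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

        # hypothetical use: CI for the correlation between the first two columns
        corr12 = lambda x: np.corrcoef(x[:, 0], x[:, 1])[0, 1]
        sample = np.random.default_rng(1).normal(size=(200, 2))
        print(bootstrap_percentile_ci(sample, corr12))

    Bias-corrected and accelerated variants adjust the quantile levels that are read off rather than the resampling itself.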
    Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile …

Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small-field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_{Qclin}^{fclin}/M_{Qmsr}^{fmsr} with Monte Carlo (MC) based field output factors Ω_{Qclin,Qmsr}^{fclin,fmsr} determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator; PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC-simulated reference values from the manufacturer. The PTW microDiamond correction factors for CyberKnife field sizes of 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of LGK Perfexion, respectively. Finally, the PTW microDiamond M_{Qclin}^{fclin}/M_{Qmsr}^{fmsr} for the linear accelerator varied from MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm2 field size, with a 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm2 small field in a lung heterogenous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors arising from the perturbation caused by TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm3, 2.64 g/cm3) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT).
    A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm3, HU: -743 ± 11) sandwiched between 4 cm thick slabs of Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung-substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with a 2 × 2 cm2 field size was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and, independently, from DOSXYZnrc/EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison with the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm2 small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. The correction factors obtained by EGSnrc are expected to be more accurate, since EGSnrc can simulate the actual phantom material compositions, whereas AXB has a limited material library and only approximates the compositions of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: The correction factors obtained by EGSnrc are expected to be more accurate. Further studies will investigate the correction factors at higher energies, where the perturbation may be more pronounced.

Using GPS RO L1 data for calibration of the atmospheric path delay model for data reduction of satellite altimetry observations

    NASA Astrophysics Data System (ADS)

    Petrov, L.

    2017-12-01

    Processing satellite altimetry data requires the computation of the path delay in the neutral atmosphere, which is used for correcting the ranges. The path delay is computed using numerical weather models, so the accuracy of its computation depends on the accuracy of those models, which is not well known over Antarctica and Greenland, where the network of ground stations is very sparse. I used a dataset of GPS radio occultation (RO) L1 data, computed the predicted path delay for the RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These corrected profiles were then used to compute corrections to the a priori zenith path delay.
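    For context, refractivity and path delay are linked by a standard relation (generic, not specific to this processing): the excess path is the integral of the refractivity N along the ray, and the zenith path delay is the corresponding vertical integral,

        \Delta L = 10^{-6} \int_{\mathrm{path}} N \,\mathrm{d}s ,
        \qquad
        \mathrm{ZPD} = 10^{-6} \int_{z_0}^{\infty} N(z)\,\mathrm{d}z ,

    so a correction to the refractivity profile propagates directly into a correction to the zenith delay used in altimetry data reduction.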
    The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling of the atmosphere.

A multivariate study for characterizing particulate matter (PM10, PM2.5, and PM1) in Seoul metropolitan subway stations, Korea

    PubMed

    Kwon, Soon-Bark; Jeong, Wootae; Park, Duckshin; Kim, Ki-Tae; Cho, Kyung Hwa

    2015-10-30

    Given that around eight million commuters use the Seoul Metropolitan Subway (SMS) each day, the indoor air quality (IAQ) of its stations has attracted much public attention. We monitored the concentrations of particulate matter (PMx, i.e., PM10, PM2.5, and PM1) in six major transfer stations every minute for three weeks during the summer, autumn, and winter of 2014 and 2015. The data were analyzed to investigate the relationship between PMx concentration and multivariate environmental factors using statistical methods. The average PM concentration observed was approximately two to three times higher than the outdoor PM10 concentration, showing similar temporal patterns at concourses and platforms. This implies that outdoor PM10 is the most significant factor controlling indoor PM concentration. In addition, station depth and the number of trains passing through a station were found to be additional influences on PMx. Principal component analysis (PCA) and self-organizing maps (SOM) were employed, through which we found that the number of trains influences PM concentration only in the vicinity of platforms, and PMx hotspots were identified. This study identifies the external and internal factors affecting PMx characteristics in six SMS stations, which can assist in the development of effective IAQ management plans to improve public health. Copyright © 2015 Elsevier B.V. All rights reserved.

Factors Associated With Early Loss of Hallux Valgus Correction

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, the amount of hallux valgus correction, and the postoperative alignment of the corrected hallux valgus for associations with recurrence.
    After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was approximately 50% and 60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with the physical basis. An improved standard error agrees better with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces the station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.

Characterizing the scientific potential of satellite sensors. [San Francisco, California]

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Analytical and programming support is to be provided to characterize the potential of LANDSAT Thematic Mapper (TM) digital imagery for scientific investigations in the Earth sciences and in terrestrial physics. In addition, technical support is to be provided to define lower-atmosphere and terrestrial-surface experiments for the space station, and to assist the Research Optical Sensor (ROS) study scientist with advanced studies in remote sensing. Eleven radiometric calibration and correction programs are described. Coherent noise and bright-target saturation corrections are discussed, along with image processing on the LAS/VAX and HP-300/IDIMS. An image of San Francisco, California from TM band 2 is presented.

Assimilation of Satellite Data in Regional Air Quality Models

    NASA Technical Reports Server (NTRS)

    Mcnider, Richard T.; Norris, William B.; Casey, Daniel; Pleim, Jonathan E.; Roselle, Shawn J.; Lapenta, William M.

    1997-01-01

    Among the important uncertainties in regional-scale air-pollution models, probably none ranks higher than the current ability to specify clouds and soil moisture on the regional scale.
    Because clouds in models are highly parameterized, the ability of models to predict the correct spatial and radiative characteristics is highly suspect and subject to large error. The poor representation of cloud fields by point measurements at National Weather Service stations, and the almost total absence of surface moisture availability observations, have made assimilation of these variables difficult to impossible. Yet the correct inclusion of clouds and surface moisture is of first-order importance in regional-scale photochemistry.

Local Earthquake Tomography in the Eifel Region, Middle Europe

    NASA Astrophysics Data System (ADS)

    Gaensicke, H.

    2001-12-01

    The aim of the Eifel Plume project is to verify the existence of an assumed mantle plume responsible for the Tertiary and Quaternary volcanism in the Eifel region of midwest Germany. During a large passive and semi-active seismological experiment (November 1997 - June 1998), about 160 mobile broadband and short-period stations were operated in addition to about 100 permanent stations in the area of interest. The stations recorded teleseismic and local events. Local events are used to obtain a three-dimensional tomographic model of seismic velocities in the crust. Since local earthquake tomography requires a large set of crustal travel paths, seismograms of local events recorded from July 1998 to June 2001 by permanent stations were added to the Eifel Plume data set. In addition to providing travel-time corrections for the teleseismic tomography of the upper mantle, the new 3D velocity model should improve the precision of locations of local events. From a total of 832 local seismic events, 172 were identified as tectonic earthquakes; the others were either quarry blasts or shallow mine-induced seismic events. The locations of 60 quarry blasts are known, and for 30 of them the firing time was measured during the field experiment. Since the origin time and location of these events are known with high precision, they are used to validate the inverted velocity models. Station corrections from a simultaneous 1D inversion of local earthquake travel times and hypocenters are in good agreement with travel-time residuals calculated from teleseismic rays. A strong azimuthal dependence of the travel-time residuals from a 1D velocity model was found for quarry blasts with hypocenters in the volcanic field in the center of the Eifel. Simultaneous 3D inversion calculations show strong heterogeneities in the upper crust and a negative P-wave velocity anomaly in the lower crust; the latter could indicate either a low-velocity zone close to the Moho or subsidence of the Moho.
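    As a schematic of how station corrections enter such travel-time inversions (an illustration of the general formulation, not this study's exact parameterization), each observed travel time can be written as

        t_{ij}^{\mathrm{obs}} = T_{ij}(\mathbf{m}, \mathbf{x}_i, \tau_i) + s_j + \epsilon_{ij} ,

    where T_{ij} is the travel time predicted by the velocity model \mathbf{m} for event i (hypocenter \mathbf{x}_i, origin time \tau_i) at station j, s_j is the station correction absorbing unmodeled near-station structure, and \epsilon_{ij} is the residual; hypocenters, velocity structure, and station terms are estimated simultaneously.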
    We present preliminary results obtained by simultaneous inversion of earthquake and velocity parameters, constrained by known geological parameters and by the controlled-source information from the calibrated quarry blasts.

Contribution of Starlette, Stella, and AJISAI to the SLR-derived global reference frame

    NASA Astrophysics Data System (ADS)

    Sośnica, Krzysztof; Jäggi, Adrian; Thaller, Daniela; Beutler, Gerhard; Dach, Rolf

    2014-08-01

    The contribution of Starlette, Stella, and AJISAI is currently neglected when defining the International Terrestrial Reference Frame, despite a long time series of precise SLR observations and a huge amount of available data. The inferior accuracy of the orbits of low orbiting geodetic satellites is the main reason for this neglect. The Analysis Centers of the International Laser Ranging Service (ILRS ACs) do, however, consider including low orbiting geodetic satellites in the derivation of the standard ILRS products currently based on the LAGEOS and Etalon satellites, instead of the sparsely observed, and thus virtually negligible, Etalons. We process ten years of SLR observations to Starlette, Stella, AJISAI, and LAGEOS and assess the impact of these low Earth orbiting (LEO) SLR satellites on the SLR-derived parameters. We study different orbit parameterizations, in particular different arc lengths and the impact of pseudo-stochastic pulses and dynamical orbit parameters on the quality of the solutions. We find that the repeatability of the East and North components of station coordinates, the quality of polar coordinates, and the scale estimates of the reference frame are improved when combining LAGEOS with the low orbiting SLR satellites. In the multi-satellite SLR solutions, the scale and the geocenter coordinates are less affected by deficiencies in solar radiation pressure modeling than in the LAGEOS-1/2 solutions, due to substantially reduced correlations between the geocenter coordinates and the empirical orbit parameters. Finally, we find that the standard values of the center-of-mass corrections (CoM) for geodetic LEO satellites are not valid for the currently operating SLR systems. The variations of station-dependent differential range biases reach 52 and 25 mm for AJISAI and Starlette/Stella, respectively, which is why estimating station-dependent range biases or using station-dependent CoM values, instead of one value for all SLR stations, is strongly recommended. This clearly indicates that the ILRS effort to produce CoM corrections for each satellite, which are site-specific and depend on the system characteristics at the time of tracking, is very important and needs to be implemented in the SLR data analysis.

Space station needs, attributes, and architectural options: Technology development

    NASA Technical Reports Server (NTRS)

    Robert, A. C.
    1983-01-01

    The technology development of the space station is examined as it relates to space station growth and equipment requirements for future missions. Future mission topics are refined and used to establish a systems data base. Technology for human factors engineering, space maintenance, satellite design, and laser communications and tracking is discussed.

Space stations: Living in zero gravity, developmental task for psychologists and space environmental experts

    NASA Technical Reports Server (NTRS)

    Ludwig, E.

    1984-01-01

    Recent advances in the psychological aspects of space station design are discussed, including the growing awareness, among both the general public and space environmental experts, of the importance of psychological factors in designing space stations and training astronauts.

Investigation of Sunspot Area Varying with Sunspot Number

    NASA Astrophysics Data System (ADS)

    Li, K. J.; Li, F. Y.; Zhang, J.; Feng, W.

    2016-11-01

    The statistical relationship between sunspot area (SA) and sunspot number (SN) is investigated through analysis of their daily observation records from May 1874 to April 2015. For a total of 1607 days, representing 3% of the interval considered, either SA or SN had a value of zero while the other parameter did not. These occurrences most likely reflect the report of short-lived spots by a single observatory and the subsequent averaging of zero values over multiple stations. The main results are as follows: i) The number of spotless days around the minimum of a solar cycle is statistically negatively correlated with the maximum strength of solar activity of that cycle. ii) The probability distribution of SA generally decreases monotonically with SA, whereas the distribution of SN generally increases first and then decreases. The different probability distributions of SA and SN should strengthen their nonlinear relation, and the correction factor k in the definition of SN may be one of the factors causing the nonlinearity. iii) The nonlinear relation between SA and SN does exist statistically, and it is clearer during the maximum epoch of a solar cycle.

An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.;
    Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and of the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled with the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggests that the proposed method corrects the daily bias in rainfall more effectively than monthly factors do. Methods that adjust the wet-day frequencies, such as local intensity scaling, modified power transformation and distribution mapping, performed better than the methods that do not. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (from the distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those driven by the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and, consequently, on the simulated streamflow.
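    As an illustrative sketch of the sliding-window idea (the half-window length and the simple multiplicative scaling below are assumptions for illustration, not the paper's exact implementation), a correction factor can be derived for each calendar day from observations and model output pooled over a window centred on that day, instead of from a single monthly statistic:

        import numpy as np
        import pandas as pd

        def daily_scaling_factors(obs, mod, half_window=15):
            """Multiplicative day-of-year correction factors from a +/- half_window day
            window; obs and mod are daily pd.Series with a DatetimeIndex over a common
            baseline period."""
            doy_obs = obs.index.dayofyear.to_numpy()
            doy_mod = mod.index.dayofyear.to_numpy()
            factors = {}
            for doy in range(1, 367):
                # circular day-of-year distance so the window wraps around the year end
                in_obs = np.minimum(np.abs(doy_obs - doy), 366 - np.abs(doy_obs - doy)) <= half_window
                in_mod = np.minimum(np.abs(doy_mod - doy), 366 - np.abs(doy_mod - doy)) <= half_window
                mean_mod = mod[in_mod].mean()
                factors[doy] = obs[in_obs].mean() / mean_mod if mean_mod > 0 else 1.0
            return pd.Series(factors)

        def apply_daily_factors(mod, factors):
            """Scale each modeled daily value by the factor of its calendar day."""
            return mod * factors.reindex(mod.index.dayofyear).to_numpy()

    Distribution mapping replaces this mean ratio with a quantile-by-quantile transfer function derived from the same windows.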
    The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.

Permanent change of station: The NASA employee's guide to an easier move

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This guide is for the NASA employee preparing to make a permanent change of station. Whether a transferee or a new appointee, the employee will find information here that will help a Government-authorized move go more smoothly from start to finish. The guide outlines the allowances and expense reimbursements one is entitled to under the Federal Travel Regulations (FTR) and provides samples of the forms one may need to fill out to start the transfer and to claim reimbursements. However, this guide is not a copy of the FTR; the information in the FTR and the NASA Travel Regulations, FMM 9760, is far more detailed and is always updated and correct.

Earth observation taken by the Expedition 43 crew

    NASA Image and Video Library

    2015-05-15

    ISS043E194350 (05/15/2015) --- NASA astronaut Scott Kelly on the International Space Station tweeted this Earth observation image as part of his Space Geo trivia contest. Scott tweeted this comment and clue: "#SpaceGeo Four international borders in one photo from the International @Space_Station. Name them!" Two winners! Congrats to @TeacherWithTuba & @PC101! The correct answer is: #SpaceGeo A: #Denmark #Norway #Sweden #Germany & #Poland.
    The winners will receive an autographed copy of this image when Scott returns to Earth in March 2016. Learn more about #SpaceGeo and play along every Wednesday for your chance to win: www.nasa.gov/feature/where-over-the-world-is-astronaut-sc...

Simultaneous Laser Ranging and Communication from an Earth-Based Satellite Laser Ranging Station to the Lunar Reconnaissance Orbiter in Lunar Orbit

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Neumann, Gregory A.; McIntire, Leva; Zellar, Ronald S.; Davidson, Frederic M.; Fong, Wai H.

    2013-01-01

    We report a free-space laser communication experiment from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit through the onboard one-way Laser Ranging (LR) receiver. Pseudorandom data and sample image files were transmitted to LRO using a 4096-ary pulse position modulation (PPM) signal format. Reed-Solomon forward error correction codes were used to achieve error-free data transmission at a moderate coding overhead rate. The signal fading due to atmospheric effects was measured, and the coding gain could be estimated.

Optical processing for future computer networks

    NASA Technical Reports Server (NTRS)

    Husain, A.; Haugen, P. R.; Hutcheson, L. D.; Warrior, J.; Murray, N.; Beatty, M.

    1986-01-01

    In the development of future data management systems, such as that of the NASA Space Station, a major problem is the design and implementation of a high-performance communication network that is self-correcting and self-repairing, flexible, and evolvable. Achieving this goal will require incorporating distributed adaptive network control techniques. This paper outlines the functional and communication network requirements for the Space Station data management system. Attention is given to the mathematical representation of the operations carried out to provide the required functionality at each layer of the communication protocol in the model.
    The possible implementation of specific communication functions in optics is also considered.

RESEARCH INTO PSYCHOLOGICAL EVALUATION METHOD OF UNDERGROUND SPACE - CENTERING ON THE TOKYO METRO -

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Naomi; Wake, Tenji; Mita, Takeshi; Wake, Hiromi

    This research is concerned with developing evaluation methods that can be useful for environmental design from the psychological perspective of QOL, that is, comfort, in underground space. Eight stations on the Tokyo Metro, including the Fukutoshin line, were selected, and two types of questionnaires were administered after respondents had walked through the station areas and the walkways connecting the stations. From the results of the first questionnaire, four factors were extracted: comfort/convenience, insecurity, visibility/noticeability, and brightness/ease of walking. From the results of the second questionnaire, three factors were extracted: visibility of noticeboards, overall atmosphere of the underground space, and visibility of the fare chart/subway map. There was a strong correlation between the comfort/convenience and insecurity factors extracted from the first questionnaire and the overall-atmosphere factor extracted from the second questionnaire. For visibility/noticeability, there was a strong correlation with the notices, fare chart, and subway map.

Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications such as verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge of the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in the calculated travel times may shift the computed epicenters far from the real locations, by distances even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may affect the triggering of a possible On-Site Inspection (OSI). The Treaty establishes that an OSI area cannot be larger than 1000 km2 and that its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event-screening criterion.

  427. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple, fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots contains 10,000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: The fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters such as the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  428. The Development of Human Factor Guidelines for Unmanned Aircraft System Control Stations

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan

    2014-01-01

    Despite being referred to as unmanned, some of the major challenges confronting unmanned aircraft systems (UAS) relate to human factors. NASA is conducting research to address the human factors relevant to UAS access to non-segregated airspace. This work covers the issues of pilot performance, interaction with ATC, and control station design. A major outcome of this research will be recommendations for human factors design guidelines for UAS control stations to support routine beyond-line-of-sight operations in the US national airspace system (NAS). To be effective, guidelines must be relevant to a wide range of systems, must not be overly prescriptive, and must not impose premature standardization on evolving technologies. In developing guidelines, we recognize that existing regulatory and guidance material may already provide adequate coverage of certain issues. In other cases suitable guidelines may be found in existing military or industry human factors standards. In cases where appropriate existing standards cannot be identified, original guidelines will be proposed.

  429. Simple statistical bias correction techniques greatly improve moderate resolution air quality forecast at station level

    NASA Astrophysics Data System (ADS)

    Curci, Gabriele; Falasca, Serena

    2017-04-01

    Deterministic air quality forecasts are routinely carried out by many local environmental agencies in Europe and throughout the world by means of Eulerian chemistry-transport models. The skill of these models in predicting ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the quality level required for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecasts at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observations from the European Environment Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from the FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; and (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate the differences and the advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
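
    A minimal sketch of the first two techniques listed above, assuming daily station time series: a ratio-based multiplicative factor and a simple empirical quantile mapping. The window length, array names and climatology samples are placeholders, not the settings used in the WRF-CHIMERE post-processing.

        import numpy as np

        def multiplicative_correction(forecast_today, past_obs, past_fcst):
            # Technique (1): scale today's forecast by the observed-to-modelled
            # ratio averaged over the previous days.
            ratio = np.mean(past_obs) / np.mean(past_fcst)
            return forecast_today * ratio

        def quantile_mapping(forecast, obs_sample, fcst_sample):
            # Technique (2): map forecast values onto the observed distribution
            # by matching empirical quantiles.
            q = np.interp(forecast, np.sort(fcst_sample),
                          np.linspace(0.0, 1.0, len(fcst_sample)))
            return np.quantile(obs_sample, q)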

  430. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  431. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as on the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation with a priori information about aerosols and NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude between boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when the cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. Combining the aerosol and shape factor corrections results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape-factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
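
    For orientation, the sketch below shows a common way an AMF is assembled from layer scattering weights and an a priori profile, so that the vertical column follows from the measured slant column; the layer values and names are illustrative and do not reproduce the LIDORT/GEOS-Chem configuration used in the study.

        import numpy as np

        def air_mass_factor(scattering_weights, partial_columns):
            # AMF = sum_l w_l * x_l / sum_l x_l, with x_l the a priori NO2
            # partial column of layer l and w_l its scattering weight.
            w = np.asarray(scattering_weights, dtype=float)
            x = np.asarray(partial_columns, dtype=float)
            return np.sum(w * x) / np.sum(x)

        slant_column = 4.0e15  # molecules/cm^2, illustrative value
        amf = air_mass_factor([0.4, 0.7, 1.1, 1.3],
                              [2.0e15, 1.0e15, 0.5e15, 0.2e15])
        vertical_column = slant_column / amf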

  432. Evaluation of reanalysis near-surface winds over northern Africa in Boreal summer

    NASA Astrophysics Data System (ADS)

    Engelstaedter, Sebastian; Washington, Richard

    2014-05-01

    The emission of dust from desert surfaces depends on the combined effects of surface properties such as surface roughness, soil moisture, soil texture and particle size (erodibility) and of wind speed (erosivity). In order for dust cycle models to simulate dust emissions realistically and for the right reasons, it is essential that the factors controlling erosivity and erodibility are represented correctly. The focus has been on improving dust emission schemes and input fields of soil distribution and texture, even though it has been shown that using wind fields from different reanalysis datasets to drive the same model can result in significant differences in dust emissions. Here we evaluate the representation of near-surface wind speed in three reanalysis datasets (ERA-Interim, CFSR and MERRA) over the North African domain. Reanalysis 10 m wind speeds are compared with observations from SYNOP and METAR reports available from the UK Meteorological Office Integrated Data Archive System (MIDAS) Land and Marine Surface Stations Dataset. We compare 6-hourly observations of 10 m wind speed between 1 January 1989 and 31 December 2009 from more than 500 surface stations with the corresponding reanalysis values. A station-based mean wind speed climatology for North Africa is presented. Overall, the representation of 10 m winds is relatively poor in all three reanalysis datasets, with stations in the northern parts of the Sahara better simulated (correlation coefficients ~0.5) than stations in the Sahel (correlation coefficients <0.3), which points to the reanalyses not being able to realistically capture the Sahel's dynamical systems. All three reanalyses have a systematic bias towards overestimating wind speeds below 3-4 m/s and underestimating wind speeds above 4 m/s. This bias becomes larger with increasing wind speed but is independent of the time of day. For instance, observed wind speeds of 14 m/s are underestimated on average by 6 m/s in the ERA-Interim reanalysis. Given the cubic relationship between wind speed and dust emission, this large underestimation is expected to significantly impact the simulation of dust emissions. A negative relationship between observed and ERA-Interim wind speed is found for winds above 14 m/s, indicating that the processes generating high wind speeds are not well (if at all) represented in the model.
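
    A minimal sketch of the kind of station-by-station comparison described above: bias and a Pearson correlation between co-located observed and reanalysis 10 m wind speeds. Array names are placeholders, and no MIDAS or reanalysis data handling is shown.

        import numpy as np

        def evaluate_station(obs, model):
            # obs, model: co-located 6-hourly 10 m wind speeds for one station
            obs = np.asarray(obs, dtype=float)
            model = np.asarray(model, dtype=float)
            bias = np.mean(model - obs)           # positive = reanalysis too windy
            corr = np.corrcoef(obs, model)[0, 1]  # correlation coefficient
            return bias, corr

        # Illustration of the cubic sensitivity noted above: an 8 m/s reanalysis
        # value against a 14 m/s observation retains only (8/14)**3, about 19%,
        # of the observed u^3 used by emission-type formulations.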

  433. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

    peak shaving, conducting power factor correction, matching critical load to most efficient distributed resource, and islanding a system during...photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...

  434. Pilot Critical Incident Reports as a Means to Identify Human Factors of Remotely Piloted Aircraft

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Cardoza, Colleen; Null, Cynthia

    2016-01-01

    It has been estimated that aviation accidents are typically preceded by numerous minor incidents arising from the same causal factors that ultimately produce the accident. Accident databases provide in-depth information on a relatively small number of occurrences; however, incident databases have the potential to provide insights into the human factors of Remotely Piloted Aircraft System (RPAS) operations based on a larger volume of less detailed reports. Currently, there is a lack of incident data dealing with the human factors of unmanned aircraft systems. An exploratory study is being conducted to examine the feasibility of collecting voluntary critical incident reports from RPAS pilots. Twenty-three experienced RPAS pilots volunteered to participate in focus groups in which they described critical incidents from their own experience. Participants were asked to recall (1) incidents that revealed a system flaw, or (2) incidents that highlighted a case where the human operator contributed to system resilience or mission success. Participants were asked to report only incidents that could be included in a public document. During each focus group session, a note taker produced a de-identified written record of the incident narratives. At the end of the session, participants reviewed each written incident report and made edits and corrections as necessary. The incidents were later analyzed to identify contributing factors, with a focus on design issues that either hindered or assisted the pilot during the events. A total of 90 incidents were reported. Human factors issues included the impact of reduced sensory cues, traffic separation in the absence of an out-the-window view, control latencies, vigilance during monotonous and ultra-long-endurance flights, control station design considerations, transfer of control between control stations, the management of lost-link procedures, and decision making during emergencies. Pilots participated willingly and enthusiastically in the study, and generally had little difficulty recalling critical incidents. The results suggest that pilot interviews can be a productive method of gathering information on incidents that might not otherwise be reported. Some of the issues described in the reports have received significant attention in the literature, or are analogous to human factors of manned aircraft. In other cases, incident reports involved human factors that are poorly understood and have not yet been the subject of extensive study. Although many of the reported incidents were related to pilot error, the participants also provided examples of the positive contribution that humans make to the operation of highly automated systems.

  435. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses three free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.
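
    As a generic illustration of how chamber-specific factors of this kind enter the measurement equation, the sketch below folds a set of multiplicative corrections into a free-air-chamber air-kerma estimate of the textbook form K = (Q/m)(W/e)/(1-g) * product(k_i); the numerical values are placeholders, not the NIST Ritz chamber corrections.

        from math import prod

        def air_kerma(charge_C, air_mass_kg, corrections, W_over_e=33.97, g=0.0):
            # W/e in J/C; g is the mean radiative-loss fraction (small at these
            # energies); corrections is the list of chamber-specific factors k_i.
            return (charge_C / air_mass_kg) * W_over_e / (1.0 - g) * prod(corrections)

        k_air = air_kerma(
            charge_C=2.5e-10,
            air_mass_kg=1.2e-4,
            corrections=[1.0007,   # diaphragm (placeholder)
                         0.9995,   # scatter (placeholder)
                         1.0012,   # electron loss (placeholder)
                         0.9998,   # fluorescence (placeholder)
                         0.9999])  # bremsstrahlung (placeholder)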

  436. Evaluation of Integration Degree of the ASG-EUPOS Polish Reference Networks With Ukrainian GeoTerrace Network Stations in the Border Area

    NASA Astrophysics Data System (ADS)

    Siejka, Zbigniew

    2017-09-01

    GNSS systems are currently the basic tools for determining station coordinates with the highest precision (e.g. basic control network stations or stations used in networks for geodynamic studies), as well as for land, maritime and air navigation. All of these tasks are carried out using active, large-scale satellite geodetic networks, which are complex, intelligent teleinformatic systems offering post-processing services along with corrections delivered in real time for kinematic measurements. Many countries in the world, including in Europe, have built their own multifunctional networks and enhance them with their own GNSS augmentation systems. Nowadays, however, in the era of international integration, there is a need to consider collective action to build a unified system covering, for example, the whole of Europe or at least some of its regions. Such actions have already been undertaken in many regions of the world. In Europe one such example is the development of EUPOS, which consists of active national networks built in central and eastern European countries. Experience and research so far show that the critical areas for connecting these networks are the border areas, in which positioning accuracy decreases (Krzeszowski and Bosy, 2011). This study attempts to evaluate the border-area compatibility of Polish ASG-EUPOS (European Position Determination System) reference stations and Ukrainian GeoTerrace system reference stations in the context of their future incorporation into EUPOS. The two networks analyzed in this work feature similar hardware parameters. In the ASG-EUPOS reference station network, during the analyzed period, two stations (WLDW and CHEL) used only one system (GPS), while in the GeoTerrace network all the stations were equipped with both GPS and GLONASS receivers. The average completeness of the ASG-EUPOS reference station network (95.6%) was greater by about 6% than that of the GeoTerrace network (89.8%).

  437. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behavior of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during follow-up and on the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24-105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with negative or no correction loss during follow-up. Thoracic curves in group B deteriorated after spontaneous correction, with a positive correction loss. Univariate and multivariate analyses were used to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately, a spontaneous correction of 48.5%. At final follow-up it was 14°, a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in the 23 cases in group A, while 22 cases in group B showed thoracic curve progression. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and a better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during follow-up, and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4. PMID:27831989

  438. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons are made between correction factors computed with constant properties and with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant

  439. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses the particle size at formation as the reference state rather than the dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity and are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and of relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.
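
    A minimal sketch of how a polynomial RH-dependent correction factor of this kind is typically applied to a reference particle diameter. The coefficients below are placeholders for illustration only, not the fitted values from the study; the RH validity check mirrors the stated 0.45-0.99 range.

        import numpy as np

        # Placeholder polynomial coefficients (highest order first) for a size
        # correction factor f(RH); NOT the published parameterization.
        COEFFS = [0.35, -0.90, 1.10, 0.45]

        def corrected_diameter(d_reference, rh):
            if not 0.45 <= rh <= 0.99:
                raise ValueError("correction defined for 0.45 <= RH <= 0.99")
            return d_reference * np.polyval(COEFFS, rh)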

  440. Hydropower assessment of Bolivia—A multisource satellite data and hydrologic modeling approach

    USGS Publications Warehouse

    Velpuri, Naga Manohar; Pervez, Shahriar; Cushing, W. Matthew

    2016-11-28

    This study produced a geospatial database for use in a decision support system by the Bolivian authorities to investigate further development and investment potential in sustainable hydropower in Bolivia. The study assessed the theoretical hydropower of all 1-kilometer (km) stream segments in the country using multisource satellite data and a hydrologic modeling approach. With the assessment covering the 2 million square kilometer (km2) region influencing Bolivia's drainage network, the potential hydropower figures are based on theoretical yield, assuming that the systems generating the power are 100 percent efficient. There are several factors to consider when determining the real-world or technical power potential of a hydropower system, and these factors can vary depending on local conditions. Since this assessment covers a large area, it was necessary to reduce these variables to the two that can be modeled consistently throughout the region: streamflow (discharge) and elevation drop (head). First, the Shuttle Radar Topography Mission high-resolution 30-meter (m) digital elevation model was used to identify stream segments with greater than 10 km2 of upstream drainage. We applied several preconditioning processes to the 30-m digital elevation model to reduce errors and improve the accuracy of stream delineation and head height estimation. A total of 316,500 1-km stream segments were identified and used in this study to assess the total theoretical hydropower potential of Bolivia. Precipitation observations from a total of 463 stations, obtained from the Bolivian Servicio Nacional de Meteorología e Hidrología (Bolivian National Meteorology and Hydrology Service) and the Brazilian Agência Nacional de Águas (Brazilian National Water Agency), were used to validate six different gridded precipitation estimates for Bolivia obtained from various sources. Validation results indicated that gridded precipitation estimates from the Tropical Rainfall Measuring Mission (TRMM) reanalysis product (3B43) had the highest accuracy. The coarse-resolution (25-km) TRMM data were disaggregated to 5-km pixels using climatology information obtained from the Climate Hazards Group Infrared Precipitation with Stations dataset. About a 17-percent bias was observed in the disaggregated TRMM estimates, which was corrected using the station observations. The bias-corrected, disaggregated TRMM precipitation estimates were used to compute stream discharge using a regionalization approach, in which the required homogeneous regions for Bolivia were derived from precipitation patterns and topographic characteristics using k-means clustering. Using the discharge and head height estimates for each 1-km stream segment, we computed the hydropower potential of 316,490 stream segments within Bolivia and along its shared borders. The total theoretical hydropower potential (TTHP) of these stream segments was found to be 212 gigawatts (GW). Of this total, 77.4 GW lies within protected areas where hydropower projects cannot be developed; hence, the remaining total theoretical hydropower in Bolivia (outside the protected areas) was estimated at 135 GW. Nearly 1,000 1-km stream segments, however, were within the boundaries of existing hydropower projects. The TTHP of these stream segments was nearly 1.4 GW, so the residual TTHP of the streams in Bolivia was estimated at 133 GW. Care should be exercised in interpreting the TTHP identified in this study, because the stream segments assessed here cannot all be harnessed to their full capacity; furthermore, factors such as required environmental flows, efficiency, economics, and feasibility need to be considered to identify a more realistic hydropower potential. If environmental flow requirements of 20-40 percent are considered, the total theoretical power available is reduced by 60-80 percent. In addition, a 0.72 efficiency factor further reduces the estimate by another 28 percent. This study provides the baseline theoretical hydropower potential for Bolivia; the next step is to identify optimal hydropower plant locations and factor in the principles needed to appraise a real-world power potential in Bolivia.
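
    For reference, the theoretical potential of a single segment follows the standard hydropower relation P = rho * g * Q * H. The sketch below applies it per segment and then a simple environmental-flow and efficiency reduction; the sample values are illustrative, and the linear flow scaling is only a rough stand-in for the study's 60-80 percent reduction, which comes from a more detailed accounting.

        RHO_WATER = 1000.0   # kg/m^3
        G = 9.81             # m/s^2

        def theoretical_power_w(discharge_m3s, head_m):
            # 100 percent efficiency, i.e. the theoretical yield used for TTHP
            return RHO_WATER * G * discharge_m3s * head_m

        def adjusted_power_w(discharge_m3s, head_m,
                             env_flow_fraction=0.3, efficiency=0.72):
            # Reserve part of the flow for environmental requirements, then
            # apply the plant efficiency factor cited in the study.
            usable_q = discharge_m3s * (1.0 - env_flow_fraction)
            return theoretical_power_w(usable_q, head_m) * efficiency

        p_theoretical = theoretical_power_w(discharge_m3s=12.0, head_m=25.0)  # ~2.9 MW
        p_adjusted = adjusted_power_w(discharge_m3s=12.0, head_m=25.0)        # ~1.5 MW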

  441. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system, and the propagation characteristics of the AIS signal transmitted at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of actual environmental factors, the ASF correction system corrects the original measurements of the positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  442. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require measurement of the air kerma strength of the seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P_TP, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically used seeds (IsoAid ADVANTAGE 103Pd and Nucletron selectSeed 125I), for which empirical altitude correction factors do not yet exist in the literature, when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pumped up or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P_TP, were acquired for each seed at these pressures and normalized to the reading at 'standard' pressure (1013.25 mbar). Results: Measurements in this study have shown that use of P_TP can overcompensate the corrected current reading by up to 20% and 17% for the IsoAid 103Pd and the Nucletron 125I seed, respectively. Compared with literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium, respectively, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE 103Pd and Nucletron selectSeed 125I) with the HDR 1000 Plus well chamber.
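
    For context, the standard temperature-pressure correction referred to above has the familiar form P_TP = (273.2 + T) / (273.2 + T_ref) x (P_ref / P). The sketch below computes it and then applies a seed- and altitude-specific multiplicative factor; the reference conditions are the usual 22 degrees C and 101.325 kPa, and the factor of 1.0 is a placeholder rather than one of the values reported in the abstract.

        def p_tp(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.325):
            # Standard air-density correction for vented ionization chambers.
            return ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

        def corrected_reading(raw_current, temp_c, pressure_kpa, seed_altitude_factor=1.0):
            # seed_altitude_factor stands in for the empirical, seed-specific
            # altitude factor discussed above; 1.0 is a placeholder value.
            return raw_current * p_tp(temp_c, pressure_kpa) * seed_altitude_factor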

  443. New device for monitoring the colors of the night

    NASA Astrophysics Data System (ADS)

    Spoelstra, Henk

    2014-05-01

    The introduction of LED lighting in the outdoor environment may increase the amount of blue light in the night sky color spectrum. This can cause more light pollution due to Rayleigh scattering of the shorter wavelengths. Blue light may also have an impact on the circadian rhythm of humans due to the suppression of melatonin. At present no long-term data sets of the color spectrum of the night sky are available. In order to facilitate the monitoring of levels and variations in the night sky spectrum, a low-cost multi-filter instrument has been developed. Design considerations are described, as well as the choice of suitable filters, which are critical, especially in the green wavelength band from 500 to 600 nm. Filters from the optical industry were chosen for this band because available astronomical filters exclude some or all of the low- and high-pressure sodium lines from lamps, which are important in light pollution research. Correction factors are calculated to correct for the detector response and filter transmissions. Results at a suburban monitoring station showed that the light levels between 500 and 600 nm are dominant during clear and cloudy skies. The relative contribution of blue light increases with a clear moonless night sky. The change in the color spectrum of the night sky under moonlit skies is more complex and is still under study.

  444. Space Station in the 21st century - A social perspective

    NASA Technical Reports Server (NTRS)

    Bluth, B. J.

    1986-01-01

    A human factors and sociological consideration of Space Station crew facilities and interactions is presented which attempts to place the experiences of astronaut communities in the larger context of late 20th century industrial, economic, and cultural trends. Attention is given to the relationship of Space Station communities to 'Information Society'-related historical developments.

  445. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1983 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) CORRECT FIRE/SAFETY DEFICIENCIES, BUILDING NO. 1, SECTIONS AND DETAILS, SHEET 3 OF 3 - U.S. Naval Air Station, Ship Carpenter's Workshop, 368 South Avenue, Pensacola, Escambia County, FL

  446. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1983 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) CORRECT FIRE/SAFETY DEFICIENCIES, BUILDING NO. 1, FIRE PROTECTION CEILING PLANS, SHEET 2 OF 3 - U.S. Naval Air Station, Ship Carpenter's Workshop, 368 South Avenue, Pensacola, Escambia County, FL

  447. 75 FR 6316 - Revisions to Rules Authorizing the Operation of Low Power Auxiliary Stations in the 698-806 MHz...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-09

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 2 [WT Docket No. 08-166, 08-167; ET Docket No. 10-24... Communications Commission. ACTION: Correcting amendments. SUMMARY: On January 15, 2010, the Commission released a... Communications Commission published a document amending part 2 in the Federal Register of January 22, 2010 (75 FR...

  448. 76 FR 40288 - Airworthiness Directives; The Boeing Company Model MD-90-30 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-08

    ... rulemaking (NPRM). SUMMARY: We propose to adopt a new airworthiness directive (AD) for all Model MD-90-30... cracking on the aft side of the left and right wing rear spar lower caps at station Xrs = 164.000, further.... We are proposing this AD to detect and correct cracking of the left and right rear spar lower caps...

  449. Designing a handwashing station for infrastructure-restricted communities in Bangladesh using the integrated behavioural model for water, sanitation and hygiene interventions (IBM-WASH)

    PubMed Central

    2013-01-01

    Background: In Bangladesh diarrhoeal disease and respiratory infections contribute significantly to morbidity and mortality. Handwashing with soap reduces the risk of infection; however, handwashing rates in infrastructure-restricted settings remain low. Handwashing stations – a dedicated, convenient location where both soap and water are available for handwashing – are associated with improved handwashing practices. Our aim was to identify a locally feasible and acceptable handwashing station that enabled frequent handwashing for two subsequent randomized trials testing the health effects of this behaviour. Methods: We conducted formative research in the form of household trials of improved practices in urban and rural Bangladesh. Seven candidate handwashing technologies were tested by nine to ten households each during two iterative phases. We conducted interviews with participants during an introductory visit and during two to five follow-up visits over two to six weeks, depending on the phase. We used the Integrated Behavioural Model for Water, Sanitation and Hygiene (IBM-WASH) to guide the selection of candidate handwashing stations and the data analysis. Factors presented in the IBM-WASH informed thematic coding of interview transcripts and contextualized the feasibility and acceptability of specific handwashing station designs. Results: Factors that influenced the selection of candidate designs were the market availability of low-cost, durable materials that were easy to replace or replenish in an infrastructure-restricted and shared environment. Water storage capacity, ease of use and maintenance, and quality of materials determined the acceptability and feasibility of specific handwashing station designs. After examining technology, psychosocial and contextual factors, we selected a handwashing system with two different water storage capacities, each with a tap, stand, basin, soapy water bottle and detergent powder, for pilot testing in preparation for the subsequent randomized trials. Conclusions: A number of contextual, psychosocial and technological factors influence the use of handwashing stations at five aggregate levels, from habitual to societal. In interventions that require a handwashing station to facilitate frequent handwashing with soap, elements of the technology, such as capacity, durability and location(s) within the household, are key to high feasibility and acceptability. More than one handwashing station per household may be required. IBM-WASH helped guide the research, and the research in turn helped validate the framework. PMID:24060247

  450. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through analytic computation, the next-to-leading-order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections lead to a large correction to the form factors, which makes the branching ratios of the decay channels B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  451. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  452. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for the production of a particle with small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section for production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At leading order in Nc, the power corrections are expressed in terms of leading-power TMDs by means of the QCD equations of motion.
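
    For orientation only, a schematic of the leading-power TMD factorization formula for the Z/Drell-Yan transverse-momentum spectrum is sketched below in LaTeX; flavour sums, hard and evolution factors are suppressed, and the notation is generic rather than that of the paper, which addresses the O(q_T^2/Q^2) terms omitted here.

        \frac{d\sigma}{dQ^{2}\,dy\,d^{2}q_{\perp}}
          \;\simeq\; \sigma_{0}
          \int d^{2}b_{\perp}\, e^{i\, q_{\perp}\cdot b_{\perp}}\,
          \tilde f_{q/A}(x_{A}, b_{\perp};\mu)\,
          \tilde f_{\bar q/B}(x_{B}, b_{\perp};\mu)
          \;+\; \mathcal{O}\!\left(\frac{q_{\perp}^{2}}{Q^{2}}\right)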

  453. Post-Remediation Biomonitoring of Pesticides and Other Contaminants in Marine Waters and Sediment Near the United Heckathorn Superfund Site, Richmond, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LD Antrim; NP Kohn

    This report, PNNL-13059 Rev. 1, was published in July 2000 and replaces PNNL-13059, which is dated October 1999. The revision corrects tissue concentration units that were reported as dry weight but were actually wet weight, and updates conclusions based on the correct reporting units. Marine sediment remediation at the United Heckathorn Superfund Site was completed in April 1997. Water and mussel tissues were sampled in February 1999 from four stations near Lauritzen Canal in Richmond, California, for Year 2 of post-remediation monitoring of marine areas near the United Heckathorn Site. Dieldrin and dichlorodiphenyl trichloroethane (DDT) were analyzed in water samples, tissue samples from resident mussels, and tissue samples from transplanted mussels deployed for 4 months. Concentrations of dieldrin and total DDT in water and total DDT in tissue were compared with Year 1 post-remediation monitoring results and with pre-remediation data from the California State Mussel Watch program (tissues) and the Ecological Risk Assessment for the United Heckathorn Superfund Site (tissues and water). Mussel tissues were also analyzed for polychlorinated biphenyls (PCBs), which were detected in sediment samples. Chlorinated pesticide concentrations in water samples were similar to pre-remediation levels and did not meet remediation goals. Mean dieldrin concentrations in water ranged from 0.62 ng/L to 12.5 ng/L and were higher than the remediation goal (0.14 ng/L) at all stations. Mean total DDT concentrations in water ranged from 14.4 ng/L to 62.3 ng/L and exceeded the remediation goal (0.59 ng/L) at all stations. The highest concentrations of both DDT and dieldrin were found at the Lauritzen Canal/End station. Despite exceeding the remediation goals, chlorinated pesticide concentrations in Lauritzen Canal water samples were notably lower in 1999 than in 1998. PCBs were not detected in water samples in 1999.

  454. One-way coupling of an atmospheric and a hydrologic model in Colorado

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.

    2006-01-01

    This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model, the Precipitation Runoff Modeling System (PRMS), and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to (1) polygon HRUs and (2) 20-, 5-, and 1.7-km gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of the MM5 simulations was close to that of the station data for the high-resolution nests. Further work is warranted to identify the causes of the biases in MM5 local climate simulations and to develop methods to remove them. © 2006 American Meteorological Society.
  456. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of the different weighting structures adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
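The linear fits described above (an offset plus an atmospheric velocity, with station position discontinuities taken into account) amount to an ordinary least-squares problem with a step function added at each discontinuity epoch. The snippet below is a minimal sketch of that kind of fit; the break epochs and the displacement series are synthetic placeholders, not data or weighting choices from the study.

```python
import numpy as np

def fit_offset_velocity(t, disp, breaks=()):
    """Least-squares fit of offset + velocity to a displacement series,
    with one extra step (offset) parameter at each discontinuity epoch."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t), t - t.mean()]              # offset, velocity
    cols += [(t >= tb).astype(float) for tb in breaks]  # step at each break
    A = np.column_stack(cols)
    params, *_ = np.linalg.lstsq(A, np.asarray(disp, dtype=float), rcond=None)
    return params  # [offset, velocity, step_1, step_2, ...]

# Synthetic example: ~10 years of weekly NTAL-like vertical displacements (mm)
t = np.arange(0.0, 10.0, 7.0 / 365.25)
rng = np.random.default_rng(1)
disp = 0.4 * t + 2.0 * (t >= 4.0) + rng.normal(0.0, 1.0, t.size)
params = fit_offset_velocity(t, disp, breaks=[4.0])
print("velocity (mm/yr):", params[1], " step (mm):", params[2])
```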
  457. Evaluation of Heterogeneity Corrections Made by RayStation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, M; Bartolac, S; Rezaee, M

    Purpose: To examine the agreement between absorbed doses calculated by the RayStation treatment planning algorithm and those measured with gafchromic film and ion chamber when the photon beam is perturbed by attenuation or lateral scatter of lung material. Methods: A gafchromic EBT2 film was placed in the center of a 30×30×20 cm³ solid water phantom with a 5 cm lung slab placed at 10 cm depth. The film was irradiated at SSD = 100 cm with a 6 MV photon beam, 10×10 and 5×5 cm² field sizes, and with the beam parallel to the film and lung slab. A CT was performed on the phantom arrangement for RayStation dose calculation. The films were scanned in an Epson 10000X flatbed scanner and analyzed using the red channel, 16 bits, 76 dpi. PDD curves at the central axis and profiles at dmax were also measured in water using a CC13 (0.13 cc) ion chamber. Measurements and calculations of PDD curves at the central axis and profiles at dmax and 20 cm depth were compared using the criteria suggested by AAPM Task Group #53. Results: The PDD curves measured with gafchromic film and those measured in water with the ion chamber agree with the ones calculated by RayStation within the uncertainty of the measurements, which is within 3%. The passing rate values of the measured and calculated profiles for the 2 field sizes are within 94% both at dmax and at 20 cm depth. Conclusion: The RayStation dose calculation engine models inhomogeneity corrections. Differences between the calculated PDD curves and profiles and those measured with gafchromic film are within the uncertainty of the measurements and inside the agreement tolerance suggested by TG-53. Therefore, RayStation treatment planning has an acceptable algorithm to correct dose delivered by photon beams perturbed by lung tissue.

  458. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries were investigated on the rheological properties of various complex fluids, e.g., single-phase (polymer melts and solutions) and multiphase systems (polymer blend and nanocomposite, and suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by shear rate correction applicable to highly shear-thinning fluids. Distinguished corrections were observed for the FT rheology-based nonlinear parameters. I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.
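FT rheology, as used in the entry above, quantifies nonlinearity through the relative intensities of the odd harmonics of the stress response; I3/1, for example, is the third-to-first harmonic amplitude ratio. Below is a minimal sketch of that computation on a synthetic steady-state stress signal (the test waveform and sampling choices are illustrative assumptions, not the authors' data).

```python
import numpy as np

def harmonic_ratios(stress, omega, fs):
    """Return I3/1 and I5/1 from a steady-state oscillatory stress signal.

    stress: 1-D array sampled at rate fs (Hz); omega: excitation angular frequency (rad/s).
    """
    n = stress.size
    spec = np.abs(np.fft.rfft(stress * np.hanning(n)))   # windowed amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f1 = omega / (2.0 * np.pi)
    amp = lambda k: spec[np.argmin(np.abs(freqs - k * f1))]
    return amp(3) / amp(1), amp(5) / amp(1)

# Synthetic LAOS-like stress with weak third and fifth harmonics
fs, omega = 500.0, 2.0 * np.pi * 1.0      # 1 Hz excitation, 500 Hz sampling
t = np.arange(0, 60, 1.0 / fs)
stress = np.sin(omega * t) + 0.02 * np.sin(3 * omega * t) + 0.005 * np.sin(5 * omega * t)
I31, I51 = harmonic_ratios(stress, omega, fs)
print(f"I3/1 = {I31:.3f}, I5/1 = {I51:.3f}")
```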
  459. Human Factors and the International Space Station

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Rajulu, Sudhakar; Novak, Jennifer; Rathjen, Thomas; Whitmore, Mihriban; Maida, James; Woolford, Barbara

    2001-01-01

    The purposes of this panel are to inform the human factors community regarding the challenges of designing the International Space Station (ISS) and to stimulate the broader human factors community into participating in the various basic and applied research opportunities associated with the ISS. This panel describes the variety of techniques used to plan and evaluate human factors for living and working in space. The panel members have contributed to many different aspects of the ISS design and operations. Architecture, equipment, and human physical performance requirements for various tasks have all been tailored to the requirements of operating in microgravity.
  460. Impact and Implementation of Higher-Order Ionospheric Effects on Precise GNSS Applications

    NASA Astrophysics Data System (ADS)

    Hadas, T.; Krypiak-Gregorczyk, A.; Hernández-Pajares, M.; Kaplon, J.; Paziewski, J.; Wielgosz, P.; Garcia-Rigo, A.; Kazmierski, K.; Sosnica, K.; Kwasniak, D.; Sierny, J.; Bosy, J.; Pucilowski, M.; Szyszko, R.; Portasiak, K.; Olivares-Pulido, G.; Gulyaeva, T.; Orus-Perez, R.

    2017-11-01

    High precision Global Navigation Satellite Systems (GNSS) positioning and time transfer require correcting signal delays, in particular higher-order ionospheric (I2+) terms. We present a consolidated model to correct second- and third-order terms, geometric bending and differential STEC bending effects in GNSS data. The model has been implemented in an online service correcting observations from submitted RINEX files for I2+ effects. We performed GNSS data processing with and without including I2+ corrections, in order to investigate the impact of I2+ corrections on GNSS products. We selected three time periods representing different ionospheric conditions. We used GPS and GLONASS observations from a global network and two regional networks in Poland and Brazil. We estimated satellite orbits, satellite clock corrections, Earth rotation parameters, troposphere delays, horizontal gradients, and receiver positions using global GNSS solution, Real-Time Kinematic (RTK), and Precise Point Positioning (PPP) techniques. The satellite-related products captured most of the impact of I2+ corrections, with a magnitude of up to 2 cm for clock corrections, 1 cm for the along- and cross-track orbit components, and below 5 mm for the radial component. The impact of I2+ on troposphere products turned out to be insignificant in general. I2+ corrections had limited influence on the performance of ambiguity resolution and the reliability of RTK positioning. Finally, we found that I2+ corrections caused a systematic shift in the coordinate domain that was time- and region-dependent and reached up to -11 mm for the north component of the Brazilian stations during the most active ionospheric conditions.
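For context, the first-order ionospheric group delay that standard dual-frequency processing removes scales with the slant total electron content (STEC) as 1/f², while the higher-order (I2+) terms addressed in the entry above fall off as 1/f³ and 1/f⁴. The expression below is the standard textbook first-order term, quoted here as background rather than taken from the record.

```latex
% First-order ionospheric group delay in meters (f in Hz, STEC in electrons/m^2);
% the higher-order terms corrected by I2+ models scale as 1/f^3 and 1/f^4.
\begin{equation}
I_{1} \;=\; \frac{40.3}{f^{2}}\,\mathrm{STEC},
\qquad
I_{2} \propto \frac{1}{f^{3}}, \qquad I_{3} \propto \frac{1}{f^{4}}.
\end{equation}
```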
  461. Modeled and Empirical Approaches for Retrieving Columnar Water Vapor from Solar Transmittance Measurements in the 0.72, 0.82, and 0.94 Micrometer Absorption Bands

    NASA Technical Reports Server (NTRS)

    Ingold, T.; Schmid, B.; Maetzler, C.; Demoulin, P.; Kaempfer, N.

    2000-01-01

    A Sun photometer (18 channels between 300 and 1024 nm) has been used for measuring the columnar content of atmospheric water vapor (CWV) by solar transmittance measurements in absorption bands with channels centered at 719, 817, and 946 nm. The observable is the band-weighted transmittance function defined by the spectral absorption of water vapor and the spectral features of solar irradiance and system response. The transmittance function is approximated by a three-parameter model. Its parameters are determined from MODTRAN and LBLRTM simulations or empirical approaches using CWV data of a dual-channel microwave radiometer (MWR) or a Fourier transform spectrometer (FTS). Data acquired over a 2-year period during 1996-1998 at two different sites in Switzerland, Bern (560 m above sea level (asl)) and Jungfraujoch (3580 m asl), were compared to MWR, radiosonde (RS), and FTS retrievals. At the low-altitude station with an average CWV amount of 15 mm, the LBLRTM approach (based on recently corrected line intensities) leads to negligible biases at 719 and 946 nm if compared to an average of MWR, RS, and GPS retrievals. However, at 817 nm an overestimate of 2.7 to 4.3 mm (18-29%) remains. At the high-altitude station with an average CWV amount of 1.4 mm, the LBLRTM approaches overestimate the CWV by 1.0, 1.4, and 0.1 mm (58, 76, and 3%) at 719, 817, and 946 nm, compared to the FTS instrument. At the low-altitude station, CWV estimates based on empirical approaches agree with the MWR within 0.4 mm (2.5% of the mean); at the high-altitude site, with a factor of 10 less water vapor, the agreement of the sun photometers (SPM) with the FTS is 0.0 to 0.2 mm (1 to 9% of the mean CWV there). Sensitivity analyses show that for the conditions met at the two stations, with CWV ranging from 0.2 to 30 mm, the retrieval errors are smallest if the 946 nm channel is used.
  462. Global daily reference evapotranspiration modeling and evaluation

    USGS Publications Warehouse

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used in a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ∼100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis and more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world. While the study revealed the potential of GDAS ETo for large-scale hydrological applications, site-specific use of GDAS ETo in complex hydro-climatic regions such as coastal areas and rugged terrain may require the application of bias correction and/or disaggregation of the GDAS ETo using downscaling techniques.
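The comparison reported above (daily correlation, and higher correlation for multi-day aggregates) is straightforward to reproduce in outline. The sketch below shows that kind of evaluation on synthetic stand-ins for the CIMIS and GDAS ETo series; the series, seasonality, and aggregation window are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

def compare_eto(station, gridded, agg_days=10):
    """Correlation between station and gridded ETo at daily and aggregated scales."""
    df = pd.DataFrame({"station": station, "gridded": gridded}).dropna()
    daily_r = df["station"].corr(df["gridded"])
    agg = df.resample(f"{agg_days}D").sum()
    agg_r = agg["station"].corr(agg["gridded"])
    return daily_r, agg_r

# Synthetic 5-year daily ETo series (mm/day) with a shared seasonal cycle
idx = pd.date_range("2002-01-01", "2006-12-31", freq="D")
rng = np.random.default_rng(2)
season = 4.0 + 3.0 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
station = pd.Series(season + rng.normal(0, 0.6, len(idx)), index=idx)
gridded = pd.Series(season + rng.normal(0, 0.6, len(idx)), index=idx)
print(compare_eto(station, gridded))   # aggregation raises the correlation
```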
  463. Improvement of Tidal Analysis Results by a Priori Rain Fall Modelling at the Vienna and Membach stations

    NASA Astrophysics Data System (ADS)

    Meurers, B.; van Camp, M.; Petermans, T.

    2005-12-01

    We investigate how far tidal analysis results can be improved when a rain fall admittance model is applied to the superconducting gravity (SG) data. For that purpose both Vienna and Membach data have been analysed with and without a priori rain fall correction. In Membach the residual drop for most events (80%) can be explained by the rain water load, while in Vienna only 50% of all events fit the model in detail. In the other cases the Newtonian effect of vertical air mass redistribution (vertical density variation without air pressure change), predominantly connected with high vertical convection activity, e.g. thunderstorms, plays an essential role: short-term atmospheric signals show up as steep gravity residual decreases of a few nm/s² within 10-60 min, well correlated with outdoor air temperature in most cases. However, even in those cases the water load model is able to explain the dominating part of the residual drop, especially during heavy rain fall. In Vienna more than 110 events have been detected over 10 years. 84% of them are associated with heavy rain starting at or up to 10 min later than the residual drop, while the rest (16%) shows no or only little rainfall. The magnitude of the gravity drop depends on the total amount of rainfall accumulated during the meteorological event. Step-like signals deteriorate the frequency spectrum estimates. This even holds for tidal analysis. As the drops are of physical origin, they should not be eliminated blindly but corrected using water load modeling constrained by high temporal resolution (1 min) rain data. 3D modeling of the water mass load due to a rain event is based on the following assumptions: (1) Rain water intrudes into the uppermost soil layer (close to the topography surface) and remains there at least until rain has stopped; this is justified for a period of some hours after the rainfall as evapotranspiration is not yet effective. (2) No run-off except from sealed areas or building roofs, where water cannot intrude into the soil but will drain off into the sewage water system instead. (3) Rainfall is equal everywhere in the station surroundings. (4) No surface deformation due to the water mass load. Correcting for rain fall effects reduces by about 10% the standard deviation of the residuals after tidal parameter adjustment. Amplitude factor changes are on the order of 10^-3 or less, and phase lags change by 10^-3 to 10^-2: statistically, these variations are not significant as they lie within the error bars. However, it is worth noting that the amplitude factors of tidal constituents with high amplitude (O1, P1, K1) and even Ψ1 and Φ1 show similar variations in Vienna and Membach. Generally the tidal parameter variation is less in the SD than in the D band.
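As a rough order-of-magnitude check on the kind of rain-water loading modelled above, a thin layer of water spread over a wide flat area attracts like a Bouguer slab, about 0.04 µGal (0.4 nm/s²) per millimetre of water; whether the gravimeter sees an increase or a decrease depends on whether the layer lies above or below the sensor, and the full 3D modelling in the entry goes well beyond this. The sketch below is my own back-of-envelope illustration, not the authors' model.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0     # kg m^-3

def bouguer_slab_gravity(rain_mm):
    """Newtonian attraction of an 'infinite' thin water slab of thickness rain_mm.

    Returns the magnitude in microGal (1 Gal = 1 cm/s^2); the sign at the sensor
    depends on whether the water layer is above or below it.
    """
    thickness_m = np.asarray(rain_mm, dtype=float) * 1e-3
    delta_g = 2.0 * np.pi * G * RHO_WATER * thickness_m   # m/s^2
    return delta_g * 1e8                                  # convert to microGal

print(bouguer_slab_gravity([1, 10, 30]))   # ~0.04, 0.4, 1.3 microGal
```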
  464. A review of the effects of station placement and observer bias in detections of marbled murrelets in forest stands

    Treesearch

    Brian P. O'Donnel

    1995-01-01

    A variety of factors influence the results of surveys conducted for Marbled Murrelets (Brachyramphus marmoratus) in the forest. In this paper we examine observer variability and survey station placement as factors influencing murrelet survey data. A training and evaluation protocol (Ralph and others 1993) was developed to insure high field abilities and comparability...

  465. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS - IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend; and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved.
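The correction scheme described above boils down to comparing a spectrum-folded reference dose (fluence times fluence-to-dose conversion coefficients) with the dose the instrument would report from its calibrated response; their ratio is the spectral correction factor. The sketch below shows that folding for a toy two-group spectrum; the response values, conversion coefficients, and calibration constant are placeholders of my own, not values from the paper or from IAEA-TRS-403.

```python
import numpy as np

def spectral_correction_factor(fluence, h_phi, response, calibration):
    """Spectral correction factor for a moderated neutron dose meter.

    fluence     : group fluences of the workplace spectrum (cm^-2)
    h_phi       : fluence-to-dose-equivalent conversion coefficients (pSv cm^2)
    response    : detector counts per unit fluence in each group
    calibration : counts per pSv established with the calibration source (e.g. 252Cf)
    """
    reference_dose = np.sum(fluence * h_phi)            # dose from conversion coefficients
    reported_dose = np.sum(fluence * response) / calibration
    return reference_dose / reported_dose

# Toy two-group example: a "conventional" group plus a >10 MeV group where
# moderated detectors typically under-respond (all numbers illustrative only).
fluence = np.array([1.0e4, 2.0e3])
h_phi = np.array([400.0, 550.0])       # pSv cm^2
response = np.array([3.0e-2, 1.5e-2])  # counts per unit fluence
calibration = 3.0e-2 / 400.0           # counts per pSv from the calibration field
print(spectral_correction_factor(fluence, h_phi, response, calibration))  # > 1
```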
  466. Atmospheric pressure loading parameters from very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Macmillan, D. S.; Gipson, John M.

    1994-01-01

    Atmospheric mass loading produces a primarily vertical displacement of the Earth's crust. This displacement is correlated with surface pressure and is large enough to be detected by very long baseline interferometry (VLBI) measurements. Using the measured surface pressure at VLBI stations, we have estimated the atmospheric loading term for each station location directly from VLBI data acquired from 1979 to 1992. Our estimates of the vertical sensitivity to change in pressure range from 0 to -0.6 mm/mbar depending on the station. These estimates agree with inverted barometer model calculations (Manabe et al., 1991; van Dam and Herring, 1994) of the vertical displacement sensitivity computed by convolving actual pressure distributions with loading Green's functions. The pressure sensitivity tends to be smaller for stations near the coast, which is consistent with the inverted barometer hypothesis. Applying this estimated pressure loading correction in standard VLBI geodetic analysis improves the repeatability of estimated lengths of 25 out of 37 baselines that were measured at least 50 times. In a root-sum-square (rss) sense, the improvement generally increases with baseline length at a rate of about 0.3 to 0.6 ppb depending on whether the baseline stations are close to the coast. For the 5998-km baseline from Westford, Massachusetts, to Wettzell, Germany, the rss improvement is about 3.6 mm out of 11.0 mm. The average rss reduction of the vertical scatter for inland stations ranges from 2.7 to 5.4 mm.
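Estimating the loading term as described above amounts to regressing a station's vertical position series against the local surface pressure, which yields the mm/mbar sensitivity quoted in the entry. A minimal sketch with synthetic numbers follows; the -0.4 mm/mbar value used to generate the fake data is arbitrary.

```python
import numpy as np

def pressure_admittance(height_mm, pressure_mbar):
    """Least-squares sensitivity of station height to surface pressure (mm/mbar)."""
    p = np.asarray(pressure_mbar, dtype=float)
    h = np.asarray(height_mm, dtype=float)
    slope, intercept = np.polyfit(p - p.mean(), h, 1)
    return slope

# Synthetic example: a station with a -0.4 mm/mbar response plus measurement noise
rng = np.random.default_rng(3)
pressure = 1013.0 + rng.normal(0.0, 8.0, 500)
height = -0.4 * (pressure - 1013.0) + rng.normal(0.0, 3.0, 500)
print(pressure_admittance(height, pressure))   # close to -0.4
```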
  467. Rawhide Energy Station, Fort Collins, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peltier, R.

    2008-10-15

    The staff of Platte River Power Authority's Rawhide Energy Station have been racking up operating stats and an environmental performance record that are the envy of other plant managers. In the past decade Rawhide has enjoyed an equivalent availability factor in the mid to high 90s and an average capacity factor approaching 90%. Still not content with this performance, Rawhide invested in new technology and equipment upgrades to further optimise performance, reduce emissions, and keep costs competitive. The Energy Station includes four GE Frame 7EA natural gas-fired turbines totalling 260 MW and a 274 MW coal-fired unit located in northeastern Colorado. 7 figs.

  468. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically "hard." The corrected sound level generated by the motor vehicle...

  469. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically "hard." The corrected sound level generated by the motor vehicle...
  470. Regional Body-Wave Attenuation Using a Coda Source Normalization Method: Application to MEDNET Records of Earthquakes in Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Mayeda, K; Malagnini, L

    2007-02-01

    We develop a new methodology to determine apparent attenuation for the regional seismic phases Pn, Pg, Sn, and Lg using coda-derived source spectra. The local-to-regional coda methodology (Mayeda, 1993; Mayeda and Walter, 1996; Mayeda et al., 2003) is a very stable way to obtain source spectra from sparse networks using as few as one station, even if direct waves are clipped. We develop a two-step process to isolate the frequency-dependent Q. First, we correct the observed direct wave amplitudes for an assumed geometrical spreading. Next, an apparent Q, combining path and site attenuation, is determined from the difference between the spreading-corrected amplitude and the independently determined source spectra derived from the coda methodology. We apply the technique to 50 earthquakes with magnitudes greater than 4.0 in central Italy as recorded by MEDNET broadband stations around the Mediterranean at local-to-regional distances. This is an ideal test region due to its high attenuation, complex propagation, and availability of many moderate sized earthquakes. We find that a power law attenuation of the form Q(f) = Q0·f^γ fits all the phases quite well over the 0.5 to 8 Hz band. At most stations, the measured apparent Q values are quite repeatable from event to event. Finding the attenuation function in this manner guarantees a close match between inferred source spectra from direct waves and coda techniques. This is important if coda and direct wave amplitudes are to produce consistent seismic results.
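In a two-step scheme of this kind, the log ratio of the spreading-corrected direct-wave amplitude to the coda-derived source spectrum decays as -π f t / Q(f), so fitting that ratio over frequency recovers Q0 and the power-law exponent. The snippet below is a generic illustration with synthetic amplitudes and an assumed travel time, not the MEDNET processing itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_amp_ratio(f, q0, gamma, travel_time):
    """ln(corrected amplitude / coda source spectrum) for Q(f) = q0 * f**gamma."""
    return -np.pi * f * travel_time / (q0 * f**gamma)

# Synthetic data: Q0 = 150, gamma = 0.6, 60 s of path travel time
f = np.linspace(0.5, 8.0, 40)
t_travel = 60.0
rng = np.random.default_rng(4)
obs = log_amp_ratio(f, 150.0, 0.6, t_travel) + rng.normal(0, 0.05, f.size)

popt, _ = curve_fit(lambda ff, q0, g: log_amp_ratio(ff, q0, g, t_travel),
                    f, obs, p0=(100.0, 0.5))
print("Q0 = %.1f, gamma = %.2f" % tuple(popt))
```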
  471. Satellite laser ranging to low Earth orbiters: orbit and network validation

    NASA Astrophysics Data System (ADS)

    Arnold, Daniel; Montenbruck, Oliver; Hackel, Stefan; Sośnica, Krzysztof

    2018-04-01

    Satellite laser ranging (SLR) to low Earth orbiters (LEOs) provides optical distance measurements with mm-to-cm-level precision. SLR residuals, i.e., differences between measured and modeled ranges, serve as a common figure of merit for the quality assessment of orbits derived by radiometric tracking techniques. We discuss relevant processing standards for the modeling of SLR observations and highlight the importance of line-of-sight-dependent range corrections for the various types of laser retroreflector arrays. A 1-3 cm consistency of SLR observations and GPS-based precise orbits is demonstrated for a wide range of past and present LEO missions supported by the International Laser Ranging Service (ILRS). A parameter estimation approach is presented to investigate systematic orbit errors, and it is shown that SLR validation of LEO satellites is not only able to detect radial but also along-track and cross-track offsets. SLR residual statistics clearly depend on the employed precise orbit determination technique (kinematic vs. reduced-dynamic, float vs. fixed ambiguities) but also reveal pronounced differences in the ILRS station performance. Using the residual-based parameter estimation approach, corrections to ILRS station coordinates, range biases, and timing offsets are derived. As a result, root-mean-square residuals of 5-10 mm have been achieved over a 1-year data arc in 2016 using observations from a subset of high-performance stations and ambiguity-fixed orbits of four LEO missions. As a final contribution, we demonstrate that SLR can not only validate single-satellite orbit solutions but also precise baseline solutions of formation flying missions such as GRACE, TanDEM-X, and Swarm.

  472. InSAR tropospheric delay mitigation by GPS observations: A case study in Tokyo area

    NASA Astrophysics Data System (ADS)

    Xu, Caijun; Wang, Hua; Ge, Linlin; Yonezawa, Chinatsu; Cheng, Pu

    2006-03-01

    Like other space geodetic techniques, interferometric synthetic aperture radar (InSAR) is limited by the variations of tropospheric delay noise. In this paper, we analyze the double-difference (DD) feature of tropospheric delay noise in a SAR interferogram. By processing the ERS-2 radar pair, we find some tropospheric delay fringes which have similar patterns to the GMS-5 visible-channel images acquired at almost the same epoch. Thirty-five continuous GPS (CGPS) stations are distributed in the radar scene. We analyze the GPS data with the GIPSY-OASIS (II) software and extract the wet zenith delay (WZD) parameters at each station at the same epoch as the master and the slave image, respectively. A cosine mapping function is applied to transform the WZD to wet slant delay (WSD) in the line-of-sight direction. Based on the DD WSD parameters, we establish a two-dimensional (2D) semi-variogram model, with the parameters 35.2, 3.6 and 0.88. Then we predict the DD WSD parameters by the kriging algorithm for each pixel of the interferogram, and subtract them from the unwrapped phase. Comparisons between CGPS and InSAR range changes in the LOS direction show that the root of mean squares (RMS) decreased from 1.33 cm before correction to 0.87 cm after correction. From the result, we can conclude that GPS WZD parameters can be effectively used to identify and mitigate the large-scale InSAR tropospheric delay noise if the spatial resolution of GPS stations is dense enough.
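The cosine mapping used in the entry above is compact enough to state directly: the wet zenith delay (WZD) at each GPS station is projected to the radar line of sight with the incidence angle, and the interferogram is sensitive to the change between the two acquisition epochs (the spatial differencing between points that completes the double difference is handled analogously). The notation below is mine, chosen to match the entry's abbreviations.

```latex
% Zenith-to-slant mapping with incidence angle theta, and the epoch difference
% that enters the interferometric (double-difference) tropospheric signal.
\begin{equation}
\mathrm{WSD} \;=\; \frac{\mathrm{WZD}}{\cos\theta},
\qquad
\Delta\phi_{\mathrm{tropo}} \;\propto\; \mathrm{WSD}(t_{\mathrm{slave}}) - \mathrm{WSD}(t_{\mathrm{master}}).
\end{equation}
```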
  473. Bio-Medical Factors and External Hazards in Space Station Design

    NASA Technical Reports Server (NTRS)

    Olling, E. H.

    1966-01-01

    The design of space-station configurations is influenced by many factors. Probably the most demanding and critical are the biomedical and external hazards requirements imposed to provide the proper environment and supporting facilities for the crew and the adequate protective measures necessary to provide a configuration in which the crew can live and work efficiently in relative comfort and safety. The major biomedical factors, such as physiology, psychology, nutrition, personal hygiene, waste management, and recreation, all impose their own peculiar requirements. The commonality and integration of these requirements demand that the utmost ingenuity and inventiveness be exercised in order to achieve effective configuration compliance. The relationship of biomedical factors to the internal space-station environment will be explored with respect to internal atmospheric constituency, atmospheric pressure levels, oxygen positive pressure, temperature, humidity, CO2 concentration, and atmospheric contamination. The range of these various parameters and the recommended levels for design use will be analyzed. Requirements and criteria for specific problem areas such as zero and artificial gravity and crew private quarters will be reviewed, and the impact on the design of representative solutions will be presented. In the areas of external hazards, the impact of factors such as meteoroids, radiation, vacuum, temperature extremes, and cycling on station design will be evaluated. Considerations with respect to operational effectiveness and crew safety will be discussed. The impact of such factors on spacecraft design to achieve acceptable launch and reentry g levels, crew rotation intervals, etc., will be reviewed.

  474. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.
  475. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water-to-graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, k_fl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity, and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons, the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths, predominantly due to the contributions from alpha particles, and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by k_fl = 0.9964 + 0.0024·z_w-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by k_fl = 0.9947 + 0.0024·z_w-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons but, given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions, the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms as well as for soft tissues.
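The conversion this entry describes can be written compactly: the water dose follows from the measured graphite dose via the water-to-graphite stopping-power ratio and the depth-dependent fluence correction factor, for which the entry quotes a linear fit in the water-equivalent depth. The first relation below is the generic conversion implied by the abstract; the numerical line simply restates the fit given above (with z_w-eq in the depth units used there).

```latex
\begin{equation}
D_{w}\!\left(z_{w\text{-}eq}\right) \;=\; D_{g}\!\left(z_{g}\right)\, s_{w,g}\, k_{\mathrm{fl}}\!\left(z_{w\text{-}eq}\right),
\qquad
k_{\mathrm{fl}} \;\approx\; 0.9964 \;+\; 0.0024\, z_{w\text{-}eq}.
\end{equation}
```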
  476. Comments on baseline correction of digital strong-motion data: Examples from the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Boore, D.M.; Stephens, C.D.; Joyner, W.B.

    2002-01-01

    Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of zeroth-order-corrected acceleration and then seeing if the residual displacements are stable when the various parameters in the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacements are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily from digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.
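A bare-bones version of the "simple baseline correction and low-cut filtering" mentioned above: remove the pre-event mean, integrate the acceleration, detrend the resulting velocity, high-pass filter, and integrate again to displacement. This is a generic sketch on a synthetic record (corner frequency and pre-event window are assumed values), not the processing applied in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def simple_baseline_correction(acc, dt, corner_hz=0.05, pre_event_samples=200):
    """Zeroth-order baseline correction plus low-cut filtering of an accelerogram."""
    acc = np.asarray(acc, dtype=float)
    acc = acc - acc[:pre_event_samples].mean()          # remove pre-event offset
    vel = np.cumsum(acc) * dt                           # integrate to velocity
    vel = detrend(vel)                                  # remove linear baseline drift
    b, a = butter(4, corner_hz, btype="highpass", fs=1.0 / dt)
    vel = filtfilt(b, a, vel)                           # low-cut (high-pass) filter
    disp = np.cumsum(vel) * dt                          # integrate to displacement
    return vel, disp

# Synthetic accelerogram: noise plus a pulse, sampled at 100 Hz
dt = 0.01
t = np.arange(0, 60, dt)
rng = np.random.default_rng(5)
acc = 0.02 * rng.normal(size=t.size) + np.exp(-((t - 20) / 2.0) ** 2) * np.sin(2 * np.pi * 0.5 * t)
vel, disp = simple_baseline_correction(acc, dt)
```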
  477. The variability of atmospheric equivalent temperature for radar altimeter range correction

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Mock, Donald

    1990-01-01

    Two sets of data were used to test the validity of the presently used approximation for radar altimeter range correction due to atmospheric water vapor. The approximation includes an assumption of constant atmospheric equivalent temperature. The first data set includes monthly, three-dimensional, gridded temperature and humidity fields over the global oceans for a 10-year period, and the second is comprised of daily or semidaily rawinsonde data at 17 island stations for a 7-year period. It is found that the standard method underestimates the variability of the equivalent temperature, and the approximation could introduce errors of 2 cm for monthly means. The equivalent temperature is found to have a strong meridional gradient, and the highest temporal variabilities are found over western boundary currents. The study affirms that atmospheric water vapor is a good predictor for both the equivalent temperature and the range correction. A relation is proposed to reduce the error.

  478. Specificity of Atmospheric Correction of Satellite Data on Ocean Color in the Far East

    NASA Astrophysics Data System (ADS)

    Aleksanin, A. I.; Kachur, V. A.

    2017-12-01

    Calculation errors in ocean-brightness coefficients in the Far Eastern region are analyzed for two atmospheric correction algorithms (NIR and MUMM). The daylight measurements in different water types show that the main error component is systematic and has a simple dependence on the magnitudes of the coefficients. The causes of the error behavior are considered. The most probable explanation for the large errors in ocean-color parameters in the Far East is a high concentration of continental aerosol absorbing light. A comparison between satellite and in situ measurements at AERONET stations in the United States and South Korea has been made. It is shown that the errors in these two regions differ by up to 10 times for comparable water turbidity and relatively high aerosol optical-depth computation precision when the NIR correction of the atmospheric effect is used.

  479. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) was produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process was aimed at reducing the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values were high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using station-observed precipitation. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated for different seasons and different elevation levels. The experiment results revealed that the scheme robustly reduced the bias in variance (around a 100% reduction) and led to reductions of the first and second quantile biases. However, the bias in the third quantile was only reduced during dry months. Across elevation levels, the performance of the bias correction process differed significantly only for the skewness indicator.
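The nonlinear power transformation used in the entry above is commonly of the form y = a·x^b, with the two parameters tuned so that the corrected series reproduces the observed moments; any evolutionary optimizer can do the tuning. The sketch below uses scipy's differential evolution as a stand-in for the paper's genetic algorithm, and the objective (matching mean and standard deviation) and synthetic data are my own illustration, not the study's setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

def power_transform(x, a, b):
    """Nonlinear power transformation often used for precipitation bias correction."""
    return a * np.power(x, b)

def fit_power_transform(model, observed):
    """Find (a, b) so the transformed model series matches the observed mean and std."""
    def objective(params):
        a, b = params
        corrected = power_transform(model, a, b)
        return (corrected.mean() - observed.mean()) ** 2 + (corrected.std() - observed.std()) ** 2
    result = differential_evolution(objective, bounds=[(0.1, 5.0), (0.1, 3.0)], seed=0)
    return result.x

# Synthetic daily rainfall: a "satellite" series with damped variability vs. a "gauge" series
rng = np.random.default_rng(6)
gauge = rng.gamma(0.8, 12.0, 3000)
satellite = 2.0 * gauge ** 0.7 + rng.normal(0, 0.5, gauge.size).clip(min=0.0)
a, b = fit_power_transform(satellite, gauge)
print(f"a = {a:.2f}, b = {b:.2f}")
```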
  480. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I've calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. D_max dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second-check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and the appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ~(0.8-1.0) and ~(0.9-1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ~(3-14)% to ~(0-2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels, < 5%. While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.

  481. Factors affecting morbidity and mortality on-farm and on-station in the Ethiopian highland sheep.

    PubMed

    Bekele, T; Woldeab, T; Lahlou-Kassi, A; Sherington, J

    1992-12-01

    Factors affecting morbidity and mortality of the Ethiopian highland sheep were studied both on-farm and on-station at Debre Berhan between 1989 and 1990.
    Primary causes of infectious origin resulted in high proportional morbidity (88.4% on-farm) and mortality (72.9% on-farm and 71.8% on-station) rates. Nutritional and managemental factors were also responsible for mortalities in lambs. The most frequent secondary causes of morbidity and/or mortality were ectoparasites and nasal myiasis. Health management interventions on-station were not intensive enough to produce performance improvements above the on-farm levels. However, the occurrence of gastrointestinal parasites differed significantly (P < 0.05) between the two management systems. The frequency of some of the major causes of morbidity and mortality, such as pneumonia, fasciolasis and enteritis, was significantly (P < 0.01) affected by season and age of the animal. In order to alleviate the major health constraints identified in this study, a proper health management intervention involving vaccination, strategic anthelmintic treatment and feeding management is suggested.

  482. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(')→ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(')→γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η-η' mixing.

  483. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein S-100B based on concomitant creatine kinase (CK) levels. The CK-S-100B relationship in non-head-injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT.
    Using a cutoff that maximizes sensitivity (≥ 90%), corrected S-100B improved the prediction of the initial head CT scan (negative predictive value from 75% [95% CI: 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.
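    The functional form of the CK-based correction is not given in this abstract; the sketch below is a minimal, hypothetical illustration that fits a linear S-100B-versus-CK relationship on a non-head-injured reference cohort and subtracts the CK-predicted extracranial contribution. All names and numbers are invented for illustration.

        # Hypothetical sketch of a CK-based correction for extracranial S-100B
        # release; the paper's actual formula is not reproduced here.
        from statistics import mean

        def fit_ck_relationship(ck_ref, s100b_ref):
            """Least-squares slope and intercept of S-100B on CK (reference cohort)."""
            mc, ms = mean(ck_ref), mean(s100b_ref)
            slope = (sum((c - mc) * (s - ms) for c, s in zip(ck_ref, s100b_ref))
                     / sum((c - mc) ** 2 for c in ck_ref))
            return slope, ms - slope * mc

        def corrected_s100b(s100b, ck, slope, intercept):
            """Subtract the CK-predicted extracranial contribution (floored at 0)."""
            return max(0.0, s100b - (slope * ck + intercept))

        # Invented example values (CK in U/L, S-100B in ug/L):
        slope, intercept = fit_ck_relationship([200, 800, 1500, 3000],
                                               [0.10, 0.25, 0.40, 0.70])
        print(round(corrected_s100b(0.55, 1200, slope, intercept), 3))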
484. Air Monitoring Network at Tonopah Test Range: Network Description, Capabilities, and Analytical Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartwell, William T.; Daniels, Jeffrey; Nikolich, George

    2012-01-01

    During the period April to June 2008, at the behest of the Department of Energy (DOE), National Nuclear Security Administration, Nevada Site Office (NNSA/NSO), the Desert Research Institute (DRI) constructed and deployed two portable environmental monitoring stations at the Tonopah Test Range (TTR) as part of the Environmental Restoration Project Soils Activity. DRI has operated these stations since that time. A third station was deployed in the period May to September 2011. The TTR is located within the northwest corner of the Nevada Test and Training Range (NTTR), and covers an area of approximately 725.20 km2 (280 mi2). The primary objective of the monitoring stations is to evaluate whether and under what conditions there is wind transport of radiological contaminants from Soils Corrective Action Units (CAUs) associated with Operation Roller Coaster on TTR. Operation Roller Coaster was a series of tests, conducted in 1963, designed to examine the stability and dispersal of plutonium in storage and transportation accidents. These tests did not result in any nuclear explosive yield. However, the tests did result in the dispersal of plutonium and contamination of surface soils in the surrounding area.

485. Russian system of countermeasures on board of the International Space Station (ISS): the first results

    NASA Astrophysics Data System (ADS)

    Kozlovskaya, Inessa B.; Grigoriev, Anatoly I.

    2004-08-01

    The system of countermeasures used by Russian cosmonauts in space flights on board the International Space Station (ISS) was based on the system developed and tested in flights on board the Russian space stations. It included as primary components: physical methods aimed at maintaining the distribution of fluids at levels close to those experienced on Earth; physical exercises and loading suits aimed at loading the musculoskeletal and cardiovascular systems; measures that prevent the loss of fluids, mainly water-salt additives, which help to maintain orthostatic tolerance and endurance to gravitational overloads during the return to Earth; and a well-balanced diet and medications directed at correcting possible negative reactions of the body to weightlessness. Fulfillment of the countermeasure protocols in flight was thoroughly controlled. The efficacy of the countermeasures used was assessed both in- and postflight. The results of these studies showed that the degree of alteration recorded in different physiological systems of Russian cosmonauts after ISS space flights was significantly higher than that recorded after flights on the Russian space stations. This was caused by the failure of the ISS crews to execute fully the prescribed countermeasure protocols, which was as a rule excused by technical shortcomings of the exercise facilities, particularly the treadmill TVIS.

486. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

    This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method and a procedure for a large GPS network like the state level HRAN project. The research also developed an efficient and accurate methodology to join soil coverages in GIS ARC/INFO. A total of 181 GPS stations has been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations having a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter level of 10-20 centimeters to 5-10 centimeters.
    The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.

487. [Development of innovative methods of electromagnetic field evaluation for portable radio-station].

    PubMed

    Rubtsova, N B; Perov, S Iu; Bogacheva, E V; Kuster, N

    2013-01-01

    Measurements of the electromagnetic fields (EMF) emitted by the portable radio station "Radiy-301" and evaluation of specific absorption rate data have shown that workers' EMF exposure levels may exceed hygienic norms and can therefore be a health risk factor. A possible way to enhance portable radio-station EMF dosimetry through the harmonization of domestic and international approaches is considered.

488. Revisiting Pearson's climate and forest type studies on the Fort Valley Experimental Forest

    Treesearch

    Joseph E. Crouse; Margaret M. Moore; Peter Fule

    2008-01-01

    Five weather station sites were established in 1916 by Fort Valley personnel along an elevational gradient from the Experimental Station to near the top of the San Francisco Peaks to investigate the factors that controlled and limited forest types. The stations were located in the ponderosa pine, Douglas-fir, limber pine, Engelmann spruce, and Engelmann spruce/...

489. Revisiting Pearson's climate and forest type studies on the Fort Valley Experimental Forest (P-53)

    Treesearch

    Joseph E. Crouse; Margaret M. Moore; Peter Z. Fule

    2008-01-01

    Five weather station sites were established in 1916 by Fort Valley personnel along an elevational gradient from the Experimental Station to near the top of the San Francisco Peaks to investigate the factors that controlled and limited forest types.
    The stations were located in the ponderosa pine, Douglas-fir, limber pine, Engelmann spruce, and Engelmann spruce/...

490. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of the simulated season lengths and the effectiveness of the correction factors varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
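    The abstract does not state how the regional-scale correction factors were constructed; purely as an illustration, the sketch below uses one simple option, an additive bias correction estimated over a historical period, with invented numbers.

        # Illustrative sketch only: an additive regional bias correction for
        # simulated season lengths (the paper's actual factors may differ).
        def regional_correction_factor(observed_days, simulated_days):
            """Mean historical bias: observed minus simulated season length (days)."""
            pairs = list(zip(observed_days, simulated_days))
            return sum(o - s for o, s in pairs) / len(pairs)

        def apply_correction(simulated_days, factor):
            """Shift simulated season lengths by the regional historical bias."""
            return [s + factor for s in simulated_days]

        obs = [182, 175, 190, 185]                    # invented observations (days)
        sim = [170, 168, 181, 177]                    # invented AOGCM output (days)
        cf = regional_correction_factor(obs, sim)     # +9.0 days for this region
        print(apply_correction([172, 176, 188], cf))  # [181.0, 185.0, 197.0]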
491. A hierarchically distributed architecture for fault isolation expert systems on the space station

    NASA Technical Reports Server (NTRS)

    Miksell, Steve; Coffer, Sue

    1987-01-01

    The Space Station Axiomatic Fault Isolating Expert Systems (SAFTIES) system deals with the hierarchical distribution of control and knowledge among independent expert systems performing fault isolation and scheduling of Space Station subsystems. At the lower level, fault isolation is performed on individual subsystems. These fault isolation expert systems contain knowledge about the performance requirements of their particular subsystem and corrective procedures which may be invoked in response to certain performance errors. They can control the functions of equipment in their system and coordinate system task schedules. At the higher level, the Executive contains knowledge of all resources, task schedules for all systems, and the relative priority of all resources and tasks. The Executive can override any subsystem task schedule in order to resolve use conflicts or resolve errors that require resources from multiple subsystems. Interprocessor communication is implemented using the SAFTIES Communications Interface (SCI). The SCI is an application-layer protocol which supports the SAFTIES distributed multi-level architecture.
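    To make the two-level split concrete, the sketch below mimics the division of knowledge described above: subsystem-level expert systems that check performance requirements and hold local corrective procedures, and an Executive that knows global priorities and arbitrates shared resources. Class names, data structures, and values are hypothetical, not taken from the report.

        # Hypothetical sketch of the two-level SAFTIES-style split described above.
        class SubsystemExpert:
            def __init__(self, name, limits, procedures):
                self.name = name
                self.limits = limits            # parameter -> (low, high) requirement
                self.procedures = procedures    # parameter -> local corrective action

            def isolate_faults(self, telemetry):
                """Flag parameters that violate their performance requirements."""
                return [p for p, v in telemetry.items()
                        if not (self.limits[p][0] <= v <= self.limits[p][1])]

            def correct(self, fault):
                return self.procedures.get(fault, "escalate-to-executive")

        class Executive:
            def __init__(self, priorities):
                self.priorities = priorities    # subsystem -> relative priority

            def resolve(self, requesters):
                """Grant a contested resource to the highest-priority subsystem."""
                return max(requesters, key=lambda name: self.priorities[name])

        power = SubsystemExpert("power", {"bus_voltage": (26.0, 30.0)},
                                {"bus_voltage": "switch-to-backup-bus"})
        print([power.correct(f) for f in power.isolate_faults({"bus_voltage": 24.5})])
        executive = Executive({"life_support": 3, "power": 2, "payload": 1})
        print(executive.resolve(["payload", "life_support"]))   # 'life_support'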
492. The impact of using different modern climate data sets in pollen-based paleoclimate reconstructions of North America

    NASA Astrophysics Data System (ADS)

    Ladd, M.; Way, R. G.; Viau, A. E.

    2015-03-01

    The use of different modern climate data sets is shown to impact a continental-scale pollen-based reconstruction of mean July temperature (TJUL) over the last 2000 years for North America. Data from climate stations, data physically modeled from climate stations, and reanalysis products are used to calibrate the reconstructions. Results show that the use of reanalysis products produces warmer and/or smoother reconstructions compared to the use of station-based data sets. The reconstructions during the period 1050-1550 CE are shown to be more variable because of a high-latitude cold bias in the modern TJUL data. The ultra-high-resolution WorldClim gridded data may only be useful if the modern pollen sites have at least the same spatial precision as the gridded dataset. Hence we justify the use of the lapse-rate-corrected University of East Anglia Climate Research Unit (CRU) based Whitmore modern climate data set for North American pollen-based climate reconstructions.

493. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator-system interface (OSI), establish the technologies needed to meet these requirements, and forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

494. Local magnitude scale for Valle Medio del Magdalena region, Colombia

    NASA Astrophysics Data System (ADS)

    Londoño, John Makario; Romero, Jaime A.

    2017-12-01

    A local magnitude (ML) scale for the Valle Medio del Magdalena (VMM) region was defined using 514 high-quality earthquakes located in the VMM area and inversion of 2797 amplitude values from the horizontal components of 17 broadband seismic stations, simulated on a Wood-Anderson seismograph. The derived local magnitude scale for the VMM region was: ML = log(A) + 1.3744 * log(r) + 0.0014776 * r - 2.397 + S, where A is the zero-to-peak amplitude in nm on the horizontal components, r is the hypocentral distance in km, and S is the station correction. Higher values of ML were obtained for the VMM region compared with those obtained with the current formula used for ML determination, and with the California formula. With this new scale, ML values are adjusted to local conditions beneath the VMM region, leading to more realistic ML values. Moreover, with this new ML scale the seismicity caused by tectonic or fracking activity in the VMM region can be monitored more accurately.
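    The relation above translates directly into code; in the sketch below, log is taken as base 10 (standard for local magnitude scales), and the example amplitude, distance, and station term are invented for illustration.

        import math

        def ml_vmm(amplitude_nm, hypocentral_distance_km, station_correction=0.0):
            """VMM local magnitude from the relation quoted above.

            amplitude_nm: zero-to-peak Wood-Anderson-simulated amplitude (nm),
            horizontal component; hypocentral_distance_km: r in km;
            station_correction: station term S.
            """
            return (math.log10(amplitude_nm)
                    + 1.3744 * math.log10(hypocentral_distance_km)
                    + 0.0014776 * hypocentral_distance_km
                    - 2.397
                    + station_correction)

        # Invented example: 5000 nm at r = 60 km with S = 0.1 gives ML ~ 3.93.
        print(round(ml_vmm(5000.0, 60.0, 0.1), 2))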
495. Bio-Medical Factors and External Hazards in Space Station Design

    NASA Technical Reports Server (NTRS)

    Olling, Edward H.

    1966-01-01

    The design of space-station configurations is influenced by many factors. Probably the most demanding and critical are the biomedical and external-hazards requirements imposed to provide the proper environment and supporting facilities for the crew and the adequate protective measures necessary to provide a configuration in which the crew can live and work efficiently in relative comfort and safety. The major biomedical factors, such as physiology, psychology, nutrition, personal hygiene, waste management, and recreation, all impose their own peculiar requirements. The commonality and integration of these requirements demand that the utmost ingenuity and inventiveness be exercised in order to achieve effective configuration compliance. The relationship of biomedical factors to the internal space-station environment will be explored with respect to internal atmospheric constituency, atmospheric pressure levels, oxygen partial pressure, temperature, humidity, CO2 concentration, and atmospheric contamination. The range of these various parameters and the recommended levels for design use will be analyzed. Requirements and criteria for specific problem areas such as zero and artificial gravity and crew private quarters will be reviewed, and the impact on the design of representative solutions will be presented. In the area of external hazards, the impact of factors such as meteoroids, radiation, vacuum, temperature extremes, and cycling on station design will be evaluated. Considerations with respect to operational effectiveness and crew safety will be discussed. The impact of such factors on spacecraft design to achieve acceptable launch and reentry g levels, crew rotation intervals, etc., will be reviewed. Examples of configurations, subsystems, and internal arrangements and installations that comply with such biomedical factor requirements will be presented. The effects of solutions to certain biomedical factors on configuration weight, operational convenience, and program costs will be compared.

496. GEOS-2 refraction program summary document. [ionospheric and tropospheric propagation errors in satellite tracking instruments]

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.

    1977-01-01

    Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practically achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS-2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.

497. Invariants for correcting field polarisation effect in MT-VLF resistivity mapping

    NASA Astrophysics Data System (ADS)

    Guérin, Roger; Tabbagh, Alain; Benderitter, Yves; Andrieux, Pierre

    1994-12-01

    MT-VLF resistivity mapping is well suited to hydrology and environmental studies. However, the apparent anisotropy generated by the polarisation of the primary field requires the use of two transmitters at a right angle to each other in order to prevent errors in interpretation. We propose a processing technique that uses approximate invariants derived from classical developments in tensor magnetotellurics. They consist of the calculation at each station of ?. Both synthetic and field cases show that they give identical results and correct perfectly for the apparent anisotropy generated by the polarisation of the transmitted field.
    They should be preferred to verticalization of the electric field, which remains of interest when only transmitter data are available.

498. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors involved when freshmen determine the speed of sound in an undergraduate physics laboratory. The freshmen's results are compared with the speed of sound determined by a senior student. Both used the same instrument, namely a resonance tube apparatus. The speed of sound obtained by the senior student was 333.38 m/s, deviating from the theoretical value by about 3.98%. The freshmen's results were categorised into three groups: accurate values (52.63%), middle values (31.58%) and lower values (15.79%). Based on the analysis, several correction factors were suggested: human error in determining the first and second harmonics, the end correction associated with the tube diameter, and other environmental factors such as temperature, humidity, density, and pressure.
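    The end correction mentioned above can be made concrete with the standard closed-tube resonance relations; the 0.3 x diameter coefficient below is the common textbook value and the example numbers are invented, so this is an illustration rather than the authors' procedure.

        # Sketch of the textbook end-correction treatment for a resonance-tube
        # (closed-pipe) speed-of-sound measurement.
        def v_first_resonance(frequency_hz, l1_m, tube_diameter_m):
            """v = 4 f (L1 + 0.3 d): first resonance of a tube closed at one end."""
            return 4.0 * frequency_hz * (l1_m + 0.3 * tube_diameter_m)

        def v_two_resonances(frequency_hz, l1_m, l2_m):
            """v = 2 f (L2 - L1): the end correction cancels between resonances."""
            return 2.0 * frequency_hz * (l2_m - l1_m)

        # Invented example at 512 Hz with a 4 cm diameter tube:
        print(round(v_first_resonance(512.0, 0.155, 0.04), 1))   # ~342.0 m/s
        print(round(v_two_resonances(512.0, 0.155, 0.490), 1))   # ~343.0 m/s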
499. First Light for USNO 1.3-meter Telescope

    NASA Astrophysics Data System (ADS)

    Monet, A. K. B.; Harris, F. H.; Harris, H. C.; Monet, D. G.; Stone, R. C.

    2001-11-01

    The US Naval Observatory Flagstaff Station has recently achieved first light with its newest telescope, a 1.3-meter, f/4 modified Ritchey-Chretien located on the grounds of the station. The instrument was designed to produce a well-corrected field 1.7 degrees in diameter, and is expected to provide wide-field imaging with excellent astrometric properties. A number of test images have been obtained using a temporary CCD camera in both drift and stare mode, and the results have been quite encouraging. Several astrometric projects are planned for this instrument, which will be operated in fully automated fashion. This paper will describe the telescope and its planned large-format mosaic CCD camera, and will preview some of the research for which it will be employed.

500. Channel coding in the space station data system network

    NASA Technical Reports Server (NTRS)

    Healy, T.

    1982-01-01

    A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.