Sample records for complete error budget

  1. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
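    The gap between the 16.9% (missing record included) and 8.5% (complete record) figures can be read as an implied missing-record contribution, if one assumes, purely for illustration, that independent error components combine in quadrature; the report itself does not state this decomposition:

```python
import math

# Reported average standard errors (percent) from the abstract.
se_with_missing = 16.9   # with missing record included
se_complete = 8.5        # if complete streamflow records could be obtained

# Illustrative assumption: independent components combine as root-sum-square,
# so the missing-record contribution is the quadrature difference.
se_missing_record = math.sqrt(se_with_missing**2 - se_complete**2)
print(f"Implied missing-record component: {se_missing_record:.1f}%")  # ~14.6%
```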

  2. Using sediment 'fingerprints' to assess sediment-budget errors, north Halawa Valley, Oahu, Hawaii, 1991-92

    USGS Publications Warehouse

    Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.

    1998-01-01

    Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 µm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget; hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.

  3. Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system

    NASA Astrophysics Data System (ADS)

    Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong

    2010-05-01

    We present the wavefront error budget and optical manufacturing tolerance analysis for a 1.8 m telescope. The error budget accounts for aberrations induced by optical design residuals, manufacturing errors, mounting effects, and misalignments. The initial error budget has been generated from the top down. There will also be an ongoing effort to track the errors from the bottom up, which will aid in identifying critical areas of concern. Resolving conflicts will involve a continual process of review and comparison of the top-down and bottom-up approaches, modifying both as needed to meet the top-level requirements in the end. The adaptive optics system will correct for some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two kinds of error budgets are presented: a non-AO top-down error budget and a with-AO system error budget. The main advantage of the method is that it simultaneously describes the final performance of the telescope and gives the optical manufacturer the maximum freedom to define, and possibly modify, its own manufacturing error budget.

  4. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
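    The "rational" combination of independent error sources described above is conventionally a root-sum-square (RSS). A minimal sketch, with made-up per-axis error values (the report's actual error list is not reproduced here, so these numbers are purely illustrative):

```python
import math

def rss(errors):
    """Combine independent 1-sigma error sources by root-sum-square."""
    return math.sqrt(sum(e**2 for e in errors))

# Hypothetical per-axis error sources (arc seconds) for a star tracker
# alignment; values are illustrative only, not taken from the report.
star_tracker_sources = [40.0, 35.0, 30.0, 25.0, 20.0]
total = rss(star_tracker_sources)
print(f"Combined per-axis alignment error: {total:.1f} arcsec")
```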

  5. Meteorological Error Budget Using Open Source Data

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7831 ● SEP 2016 ● US Army Research Laboratory. Meteorological Error Budget Using Open-Source Data, by J Cogan, J Smith, and P Haines.

  6. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  7. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  8. Predicting the thermal/structural performance of the atmospheric trace molecules spectroscopy (ATMOS) Fourier transform spectrometer

    NASA Technical Reports Server (NTRS)

    Miller, J. M.

    1980-01-01

    ATMOS is a Fourier transform spectrometer that measures atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of the optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (made using operational thermal and vibration environments and computer models) are subsequently compared with the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.

  9. Calibration/Validation Error Budgets, Uncertainties, Traceability and Their Importance to Imaging Spectrometry

    NASA Technical Reports Server (NTRS)

    Thome, K.

    2016-01-01

    Knowledge of uncertainties and errors is essential for comparisons of remote sensing data across time, space, and spectral domains. Vicarious radiometric calibration is used to demonstrate the need for uncertainty knowledge and to provide an example error budget. The sample error budget serves as an example of the questions and issues that need to be addressed by the calibration/validation community as accuracy requirements for imaging spectroscopy data continue to become more stringent. Error budgets will also be critical for ensuring consistency between the range of imaging spectrometers expected to be launched in the next five years.

  10. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for launch in 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient data necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, and mass and momentum constraints. The algorithm is evaluated using simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm: the discharge error budget is almost completely dominated by unknown bathymetry and roughness, with 81% of the error variance explained by uncertainties in these two parameters. Second, we show how errors in the water-surface elevation, slope, and width observations influence the accuracy of discharge estimates. There is significant sensitivity to these measurement errors because the bathymetry and roughness estimates are themselves sensitive to them. Increasing water-surface error above 10 cm leads to a correspondingly sharper increase in bathymetry and roughness errors. Increasing slope error above 1.5 cm/km leads to significant degradation through direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The two experiments above are performed for AirSWOT scenarios; in addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
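    The Bayesian Markov chain Monte Carlo step described above can be sketched with a minimal Metropolis sampler. Everything in this example, including the one-parameter linear model, the noise level, and the flat prior, is a simplified stand-in for the river-hydraulics model and constraints actually used in the discharge algorithm:

```python
import math
import random

random.seed(42)

# Synthetic observations: a one-parameter model y = a * x plus Gaussian noise.
# The model and noise level are illustrative stand-ins, not the SWOT algorithm.
true_a = 0.03
xs = [10.0 * (i + 1) for i in range(20)]
ys = [true_a * x + random.gauss(0.0, 0.05) for x in xs]

def log_likelihood(a, sigma=0.05):
    """Gaussian log-likelihood of the observations given parameter a."""
    return -sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / (2.0 * sigma**2)

# Metropolis sampler: flat prior, Gaussian random-walk proposal.
a, samples = 0.01, []
for step in range(20000):
    proposal = a + random.gauss(0.0, 0.0005)
    delta = log_likelihood(proposal) - log_likelihood(a)
    if delta >= 0 or random.random() < math.exp(delta):
        a = proposal                # accept the proposed move
    if step >= 5000:                # keep samples only after burn-in
        samples.append(a)

est = sum(samples) / len(samples)
print(f"posterior mean of a: {est:.4f} (true value {true_a})")
```

With dense, low-noise data the posterior mean lands very close to the true parameter; the interesting regime in the SWOT study is when measurement errors grow and the inferred bathymetry/roughness parameters degrade accordingly.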

  11. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of freedom...
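    The propagation step described in this abstract can be illustrated for a budget of the form net = inputs − outputs, where, assuming independent terms, the variances of all terms add. The terms and standard errors below are hypothetical, not values from the EPA study:

```python
import math

# A hypothetical nutrient budget: each term is (estimate, standard error).
inputs  = [(120.0, 15.0), (45.0, 8.0)]
outputs = [(90.0, 12.0), (30.0, 5.0)]

net = sum(v for v, _ in inputs) - sum(v for v, _ in outputs)
# For sums and differences of independent terms, variances add,
# so the standard error of the net is the root-sum-square of the term SEs.
se_net = math.sqrt(sum(se**2 for _, se in inputs + outputs))
print(f"net = {net:.1f} +/- {se_net:.1f}")
```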

  12. Evaluating the design of satellite scanning radiometers for earth radiation budget measurements with system simulations. Part 1: Instantaneous estimates

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Ardanuy, Philip; Hucek, Richard; Abel, Peter; Jacobowitz, Herbert

    1991-01-01

    A set of system simulations was performed to evaluate candidate scanner configurations to fly as a part of the Earth Radiation Budget Instrument (ERBI) on the polar platforms during the 1990s. The simulation considered instantaneous sampling (without diurnal averaging) of the longwave and shortwave fluxes at the top of the atmosphere (TOA). After measurement and subsequent inversion to the TOA, the measured fluxes were compared to the reference fluxes for 2.5 deg lat/long resolution targets. The reference fluxes at this resolution are obtained by integrating over the 25 x 25 = 625 grid elements in each target. The differences (errors) between these two spatially averaged sets of target measurements are then statistically summarized. Five instruments are considered: (1) the Conically Scanning Radiometer (CSR); (2) the ERBE Cross Track Scanner; (3) the Nimbus-7 Biaxial Scanner; (4) the Clouds and Earth's Radiant Energy System Instrument (CERES-1); and (5) the Active Cavity Array (ACA). Identical studies of instantaneous error were completed for many days, two seasons, and several satellite equator crossing longitudes. The longwave flux errors were found to have the same space and time characteristics as the shortwave flux errors, but are only about 25% of their magnitude.

  13. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    NASA Technical Reports Server (NTRS)

    Halverson, Samuel; Terrien, Ryan; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen; et al.

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.

  14. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  15. X-band uplink ground systems development: Part 2

    NASA Technical Reports Server (NTRS)

    Johns, C. E.

    1987-01-01

    The prototype X-band exciter testing has been completed. Stability and single-sideband phase noise measurements have been made on the X-band exciter signal (7.145-7.235 GHz) and on the coherent X- and S-band receiver test signals (8.4-8.5 GHz and 2.29-2.3 GHz) generated within the exciter equipment. Outputs are well within error budgets.

  16. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

    Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data-collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  17. TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS

    EPA Science Inventory

    Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...

  18. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.

  19. Evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas

    USGS Publications Warehouse

    Medina, K.D.; Geiger, C.O.

    1984-01-01

    The results of an evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas are documented. Data uses and funding sources were identified for the 140 complete-record streamflow-gaging stations operated in Kansas during 1983 with a budget of $793,780. As a result of the evaluation of the needs and uses of data from the stream-gaging program, it was found that all 140 gaging stations were needed to meet these data requirements. The average standard error of estimation of streamflow records was 20.8 percent, assuming the 1983 budget and operating schedule of 6-week-interval visitations and based on 85 of the 140 stations. It was shown that this overall level of accuracy could be improved to 18.9 percent by altering the 1983 schedule of station visitations. A minimum budget of $760,000, with a corresponding average error of estimation of 24.9 percent, is required to operate the 1983 program. None of the stations investigated were suitable for the application of alternative methods for simulating discharge records. Improved instrumentation can have a very positive impact on streamflow uncertainties by decreasing lost record. (USGS)

  20. Developing Performance Estimates for High Precision Astrometry with TMT

    NASA Astrophysics Data System (ADS)

    Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana

    2013-12-01

    Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arcseconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences, and data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into five categories: reference-source and catalog errors, atmospheric-refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal-plane measurement errors. Results are developed in parametric form whenever possible. However, almost every term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, and what the time scales of interest are. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arcseconds for certain science cases. While no showstoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.

  21. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of the associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits under the same budget could optimally reduce the standard error by 1.6 percentage points, to 11.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6% to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  22. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications

    DOE PAGES

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.

    2015-12-02

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  23. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.

    PubMed

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P

    2015-12-14

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  24. First-order error budgeting for LUVOIR mission

    NASA Astrophysics Data System (ADS)

    Lightsey, Paul A.; Knight, J. Scott; Feinberg, Lee D.; Bolcar, Matthew R.; Shaklan, Stuart B.

    2017-09-01

    Future large astronomical telescopes in space will have architectures with complex and demanding requirements to meet the science goals. The Large UV/Optical/IR Surveyor (LUVOIR) mission concept being assessed by the NASA/Goddard Space Flight Center is expected to be 9 to 15 meters in diameter, have a segmented primary mirror, and be diffraction limited at a wavelength of 500 nanometers. The optical stability is expected to be in the picometer range for minutes to hours. Architecture studies to support the NASA Science and Technology Definition Teams (STDTs) are underway to evaluate systems performance improvements to meet the science goals. To help define the technology needs and assess performance, a first-order error budget has been developed. Like the JWST error budget, it includes the active, adaptive, and passive elements in the spatial and temporal domains. JWST performance is scaled using first-order approximations where appropriate and includes technical advances in telescope control.

  25. Geometric error characterization and error budgets. [thematic mapper

    NASA Technical Reports Server (NTRS)

    Beyer, E.

    1982-01-01

    Procedures used in characterizing geometric error sources for a spaceborne imaging system are described, using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90% confidence level.

  26. Cost effectiveness of the US Geological Survey's stream-gaging program in New York

    USGS Publications Warehouse

    Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.

    1986-01-01

    The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost-effective means of obtaining streamflow data. This report describes the stream-gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for the 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)

  7. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  8. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
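
    Accuracy and precision figures like those quoted above are conventionally computed from repeated measurements of a nominal dimension: accuracy as the magnitude of the mean error (systematic offset) and precision as the spread of the errors (repeatability). A minimal sketch under those standard definitions; the measurement values below are illustrative, not the study's data:

```python
import numpy as np

def accuracy_and_precision(measured, nominal):
    """Accuracy: magnitude of the mean error (systematic offset).
    Precision: sample standard deviation of the errors (repeatability)."""
    errors = np.asarray(measured) - nominal
    accuracy = abs(errors.mean())
    precision = errors.std(ddof=1)
    return accuracy, precision

# Illustrative repeated measurements of a 10.00 mm feature along one axis
measured_y = [10.02, 10.05, 9.98, 10.04, 10.01]
acc, prec = accuracy_and_precision(measured_y, 10.00)
print(f"accuracy = {acc:.3f} mm, precision = {prec:.3f} mm")
```

    Evaluating such statistics at several positions along each axis would reproduce the kind of per-axis, position-dependent error characterization reported in the study.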

  9. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)

  10. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
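
    The idea of propagating the "statistics" of the design rather than individual trajectories can be sketched with the basic linear covariance recursion P ← F P Fᵀ + Q: one matrix pass yields the error ellipse that Monte Carlo would need thousands of runs to estimate. This is a generic two-state illustration, not G-CAT's actual filter formulation or state ordering:

```python
import numpy as np

def propagate_covariance(P, F, Q, steps):
    """Linear covariance propagation: P_{k+1} = F P_k F^T + Q.
    A single pass produces the predicted error statistics, no sampling."""
    for _ in range(steps):
        P = F @ P @ F.T + Q
    return P

# Toy 1-D position/velocity state with dt = 1 s steps (illustrative numbers)
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # constant-velocity dynamics
Q = np.diag([0.0, 1e-4])     # small velocity process noise per step
P0 = np.diag([1.0, 0.01])    # initial position/velocity variances
P = propagate_covariance(P0, F, Q, steps=100)
print("1-sigma position error after 100 s:", np.sqrt(P[0, 0]))
```

    The same recursion, scaled to a full 6-DOF state with IMU, gravity, and radar error states, is what makes single-run covariance analysis practical for error budgeting.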

  11. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.

  12. The Terrestrial Planet Finder coronagraph dynamics error budget

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Green, Joseph J.; Lay, Oliver P.

    2005-01-01

    The Terrestrial Planet Finder Coronagraph (TPF-C) demands extreme wave front control and stability to achieve its goal of detecting earth-like planets around nearby stars. We describe the performance models and error budget used to evaluate image plane contrast and derive engineering requirements for this challenging optical system.

  13. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
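
    The recurring finding in these analyses, that the same network-average standard error can be achieved at lower cost by redistributing effort among gages, can be illustrated with a toy model in which each station's standard error falls as 1/sqrt(visits) and the budget buys visits. All numbers and the error model below are hypothetical; this is not the Kalman-filter-based analysis the Survey actually used:

```python
import math

def average_se(visits, base_se):
    """Network-average standard error, assuming se_i = base_i / sqrt(visits_i)."""
    return sum(b / math.sqrt(v) for b, v in zip(base_se, visits)) / len(visits)

def allocate(total_visits, base_se):
    """Greedily assign visits one at a time to the station whose error
    reduction is largest (every station keeps at least one visit)."""
    visits = [1] * len(base_se)
    for _ in range(total_visits - len(base_se)):
        def gain(i):
            return base_se[i] / math.sqrt(visits[i]) - base_se[i] / math.sqrt(visits[i] + 1)
        i = max(range(len(base_se)), key=gain)
        visits[i] += 1
    return visits

base_se = [40.0, 25.0, 10.0]   # hypothetical per-station error at one visit/yr, %
uniform = [8, 8, 8]            # the same 24 visits spread evenly
optimized = allocate(24, base_se)
print(average_se(uniform, base_se), "->", average_se(optimized, base_se))
```

    Because the marginal benefit of a visit is decreasing, the greedy allocation shifts visits toward the noisiest stations, lowering the average error at identical total cost, which is the qualitative effect the reallocation analyses exploit.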

  14. Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont

    USGS Publications Warehouse

    Smath, J.A.; Blackey, F.E.

    1986-01-01

    Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that, with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)

  15. A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.; Shaklan, Stuart B.

    2009-01-01

    This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.

  16. Wavefront error budget development for the Thirty Meter Telescope laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent

    2008-07-01

    This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
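
    Top-level terms in a wavefront error budget like this are conventionally combined in quadrature (root-sum-square), and contingency against a requirement is the quadrature difference. Using the on-axis numbers quoted above, this standard combination reproduces the stated contingency:

```python
import math

def rss(terms):
    """Root-sum-square combination of independent error terms."""
    return math.sqrt(sum(t * t for t in terms))

terms_nm = [120, 84, 74, 61]   # turbulence, implementation, AO components, TT
total = rss(terms_nm)          # combined on-axis wavefront error
requirement = 187              # ORD total on-axis requirement, nm
contingency = math.sqrt(requirement**2 - total**2)
print(f"total = {total:.0f} nm, contingency = {contingency:.0f} nm")
# -> total = 175 nm, contingency = 66 nm
```

    The result is consistent with the "about 66 nm" contingency quoted in the abstract, which suggests the four top-level terms are indeed combined in quadrature.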

  17. Balancing the books - a statistical theory of prospective budgets in Earth System science

    NASA Astrophysics Data System (ADS)

    O'Kane, J. Philip

    An honest declaration of the error in a mass, momentum or energy balance, ɛ, simply raises the question of its acceptability: "At what value of ɛ is the attempted balance to be rejected?" Answering this question requires a reference quantity against which to compare ɛ. This quantity must be a mathematical function of all the data used in making the balance. To deliver this function, a theory grounded in a workable definition of acceptability is essential. A distinction must be drawn between a retrospective balance and a prospective budget in relation to any natural space-filling body. Balances look to the past; budgets look to the future. The theory is built on the application of classical sampling theory to the measurement and closure of a prospective budget. It satisfies R.A. Fisher's "vital requirement that the actual and physical conduct of experiments should govern the statistical procedure of their interpretation". It provides a test, which rejects, or fails to reject, the hypothesis that the closing error on the budget, when realised, was due to sampling error only. By increasing the number of measurements, the discrimination of the test can be improved, controlling both the precision and accuracy of the budget and its components. The cost-effective design of such measurement campaigns is discussed briefly. This analysis may also show when campaigns to close a budget on a particular space-filling body are not worth the effort for either scientific or economic reasons. Other approaches, such as those based on stochastic processes, lack this finality, because they fail to distinguish between different types of error in the mismatch between a set of realisations of the process and the measured data.
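
    The acceptability test described above can be sketched as a simple significance test: reject closure only if the realized imbalance ε exceeds what sampling error alone would plausibly produce, given the standard errors of the measured budget components. A minimal illustration under an assumed independent, normal sampling model; this is a sketch of the idea, not the paper's exact formulation:

```python
import math

def closure_test(epsilon, component_sds, z_crit=1.96):
    """Test H0: the budget imbalance epsilon is due to sampling error only.
    component_sds: standard errors of the independent budget terms."""
    sd_eps = math.sqrt(sum(s * s for s in component_sds))  # sd of the imbalance
    z = epsilon / sd_eps
    return abs(z) <= z_crit, z

# Inflow, outflow, and storage-change standard errors (illustrative units)
ok, z = closure_test(epsilon=4.0, component_sds=[1.5, 2.0, 1.0])
print("fail to reject H0" if ok else "reject H0", f"(z = {z:.2f})")
```

    Adding measurements shrinks the component standard errors, which tightens the test, the mechanism by which a campaign improves both the precision and the discrimination of the budget.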

  18. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert

    2011-01-01

    The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, analog-to-digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high-fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the pressure difference between the midbay compartment and the free-stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.

  19. Earth radiation budget measurement from a spinning satellite: Conceptual design of detectors

    NASA Technical Reports Server (NTRS)

    Sromovsky, L. A.; Revercomb, H. E.; Suomi, V. E.

    1975-01-01

    The conceptual design, sensor characteristics, sensor performance and accuracy, and spacecraft and orbital requirements for a spinning wide-field-of-view earth energy budget detector were investigated. The scientific requirements for measurement of the earth's radiative energy budget are presented. Other topics discussed include the observing system concept, solar constant radiometer design, plane flux wide FOV sensor design, fast active cavity theory, fast active cavity design and error analysis, thermopile detectors as an alternative, pre-flight and in-flight calibration plane, system error summary, and interface requirements.

  20. Cost effectiveness of the stream-gaging program in Ohio

    USGS Publications Warehouse

    Shindel, H.L.; Bartlett, W.P.

    1986-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Ohio. Data uses and funding sources were identified for 107 continuous stream gages currently being operated by the U.S. Geological Survey in Ohio with a budget of $682,000; this budget includes field work for other projects and excludes stations jointly operated with the Miami Conservancy District. No stream gages were identified as having insufficient reason to continue their operation, nor were any stations identified as having uses limited to short-term studies. All 107 stations should be maintained in the program for the foreseeable future. The average standard error of estimation of streamflow records is 29.2 percent at the present level of funding. A minimum budget of $679,000 is required to operate the 107-gage program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 31.1 percent. The maximum budget analyzed was $1,282,000, which resulted in an average standard error of 11.1 percent. A need for additional gages has been identified by the other agencies that cooperate in the program. It is suggested that these gages be installed as funds can be made available.

  1. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.

  2. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    NASA Astrophysics Data System (ADS)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely removes dependencies within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
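
    The rate split between data and parity packets can be illustrated with a systematic MDS erasure code: with k data packets and n-k parity packets, the layer decodes fully whenever at most n-k of the n packets are lost. Below is a toy search for the k that maximizes expected decoded data under i.i.d. loss; this all-or-nothing model is a simplification of the paper's allocation, which also weights partial (prefix) decoding of the scalable stream:

```python
from math import comb

def recovery_prob(n, k, p):
    """P(at most n-k of n packets lost), i.i.d. loss probability p (MDS code)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(n - k + 1))

def best_split(n, p):
    """Choose k data packets (out of n total) maximizing expected decoded data."""
    return max(range(1, n + 1), key=lambda k: k * recovery_prob(n, k, p))

n, p = 20, 0.1
k = best_split(n, p)
print(f"k = {k} data packets, {n - k} parity, "
      f"P(full recovery) = {recovery_prob(n, k, p):.3f}")
```

    Raising p shifts the optimum toward more parity; the paper's algorithm performs the analogous trade-off per quality layer, which is what makes the protection unequal.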

  3. Cost-effectiveness of the U.S. Geological Survey's stream-gaging programs in Massachusetts and Rhode Island

    USGS Publications Warehouse

    Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.

    1985-01-01

    The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for the 91 gaging stations being operated. Some stations in Massachusetts are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of the studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)

  4. NFIRAOS in 2015: engineering for future integration of complex subsystems

    NASA Astrophysics Data System (ADS)

    Atwood, Jenny; Andersen, David; Byrnes, Peter; Densmore, Adam; Fitzsimmons, Joeleff; Herriot, Glen; Hill, Alexis

    2016-07-01

    The Narrow Field InfraRed Adaptive Optics System (NFIRAOS) will be the first-light facility Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). NFIRAOS will be able to host three science instruments that can take advantage of this high performance system. NRC Herzberg is leading the design effort for this critical TMT subsystem. As part of the final design phase of NFIRAOS, we have identified multiple subsystems to be sub-contracted to Canadian industry. The scope of work for each subcontract is guided by the NFIRAOS Work Breakdown Structure (WBS) and is divided into two phases: the completion of the final design and the fabrication, assembly and delivery of the final product. Integration of the subsystems at NRC will require a detailed understanding of the interfaces between the subsystems, and this work has begun by defining the interface physical characteristics, stability, local coordinate systems, and alignment features. In order to maintain our stringent performance requirements, the interface parameters for each subsystem are captured in multiple performance budgets, which allow a bottom-up error estimate. In this paper we discuss our approach for defining the interfaces in a consistent manner and present an example error budget that is influenced by multiple subsystems.

  5. Adverse effects in dual-feed interferometry

    NASA Astrophysics Data System (ADS)

    Colavita, M. Mark

    2009-11-01

    Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.

  6. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but it remains to be demonstrated that the advances can be transferred successfully to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  7. Cost effectiveness of the stream-gaging program in Nevada

    USGS Publications Warehouse

    Arteaga, F.E.

    1990-01-01

    The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)

  8. Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana

    USGS Publications Warehouse

    Stewart, J.A.; Miller, R.L.; Butch, G.K.

    1986-01-01

    Analysis of the stream gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate stream flow data for three gaging stations. In addition, flow routing models were used at two of the sites. Daily discharges produced from models did not meet the established accuracy criteria and, therefore, these methods should not replace stream gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream gaging stations were visited according to phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)

  9. How noise affects quantum detector tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Q., E-mail: wang@physics.leidenuniv.nl; Renema, J. J.; van Exter, M. P.

    2015-10-07

    We determine the full photon number response of a NbN superconducting nanowire single photon detector via quantum detector tomography, and the results show the separation of linear, effective absorption efficiency from the internal detection efficiencies. In addition, we demonstrate an error budget for the complete quantum characterization of the detector. We find that for short times, the dominant noise source is shot noise, while laser power fluctuations limit the accuracy for longer timescales. The combined standard uncertainty of the internal detection efficiency derived from our measurements is about 2%.

  10. Astrometry for New Reductions: The ANR method

    NASA Astrophysics Data System (ADS)

    Robert, Vincent; Le Poncin-Lafitte, Christophe

    2018-04-01

    Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog and its complete releases thereafter, the methods of ground-based astrometry have become outdated: their formal accuracy now lags that of the reference catalog used. Systematic and zonal errors of the reference stars are eliminated, and the astrometric reduction process now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system, and define the transformation between theoretical reference data and ground-based astrometric observables.

  11. Performance of the Keck Observatory adaptive-optics system.

    PubMed

    van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A

    2004-10-10

    The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: The AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 µm when a bright guide star is used and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.
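
    The link between a Strehl ratio and an rms wavefront-error budget can be illustrated with the Maréchal approximation, S ≈ exp(-(2πσ/λ)²). The inversion below is an illustrative check of the reported Strehl ratios, not a reconstruction of the paper's budget:

```python
import math

# Maréchal approximation: Strehl ≈ exp(-(2*pi*sigma/lambda)^2), for rms
# wavefront error sigma at wavelength lam. Inverting it gives the total
# wavefront error implied by a measured Strehl ratio.
lam = 1.58e-6  # observing wavelength from the abstract (m)

def wavefront_error(strehl, lam):
    """RMS wavefront error (m) implied by a Strehl ratio."""
    return lam * math.sqrt(-math.log(strehl)) / (2 * math.pi)

for s in (0.37, 0.19):
    print(f"Strehl {s:.2f} -> ~{wavefront_error(s, lam) * 1e9:.0f} nm rms")
```

    An error budget then apportions this total (in quadrature) among measurement noise, bandwidth, fitting, and other terms.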

  12. Cost-effectiveness of the stream-gaging program in North Carolina

    USGS Publications Warehouse

    Mason, R.R.; Jackson, N.M.

    1985-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.

  13. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
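
    A minimal Monte Carlo sketch of the idea: a residual human error slips through the quality system with some small probability and shifts the result, and the standard deviation of that shift is combined with the methodological uncertainty. All probabilities and magnitudes below are invented for illustration; the paper derives them from expert judgments:

```python
import random

# Monte Carlo sketch of a human-error contribution to an uncertainty budget.
# p_error, error_sd, and u_method are invented, not values from the paper.
random.seed(1)

u_method = 0.02          # combined standard uncertainty without human error
p_error = 0.05           # residual probability a human error slips through
error_sd = 0.04          # sd of the result shift when an error occurs

n = 100_000
shifts = [random.gauss(0.0, error_sd) if random.random() < p_error else 0.0
          for _ in range(n)]
mean = sum(shifts) / n
u_human = (sum((s - mean) ** 2 for s in shifts) / (n - 1)) ** 0.5

# Human-error and methodological contributions combine in quadrature.
u_total = (u_method**2 + u_human**2) ** 0.5
print(f"human-error contribution: {u_human:.4f}; total: {u_total:.4f}")
```

    With these numbers the human-error term is noticeable but not dominant, mirroring the paper's qualitative conclusion.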

  14. Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation

    USGS Publications Warehouse

    Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.

    1984-01-01

    This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation. Operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data. It was recommended that this sparsity be remedied as funds become available.

  15. Comparison of direct and heterodyne detection optical intersatellite communication links

    NASA Technical Reports Server (NTRS)

    Chen, C. C.; Gardner, C. S.

    1987-01-01

    The performance of direct and heterodyne detection optical intersatellite communication links is evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For the heterodyne systems, carrier phase noise is another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice; in that case, the heterodyne system has a slightly higher power margin because of its higher receiver sensitivity.
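
    The pointing-jitter power penalty can be illustrated with a standard average-power approximation for a Gaussian beam: with per-axis Gaussian jitter σ and e⁻² half-divergence θ_b, the mean received-power fraction is 1/(1 + 4(σ/θ_b)²). This is a generic link-budget sketch, not the paper's modulation-dependent analysis:

```python
import math

def pointing_penalty_db(sigma, theta_b):
    """Mean power penalty (dB) for a Gaussian beam with e^-2 half-divergence
    theta_b and independent per-axis Gaussian pointing jitter sigma.
    Averages exp(-2*theta^2/theta_b^2) over the jitter distribution."""
    return 10 * math.log10(1 + 4 * (sigma / theta_b) ** 2)

# Penalty grows quickly with the jitter-to-beamwidth ratio.
for ratio in (0.2, 0.3, 0.4):
    print(f"sigma/theta_b = {ratio}: penalty {pointing_penalty_db(ratio, 1.0):.2f} dB")
```

    This mean-power view already shows why jitter pushes designers toward wider (lower-gain) beams; the paper's 1.6-5 dB penalties also fold in the modulation and detection scheme.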

  16. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from the sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.

  17. A Starshade Petal Error Budget for Exo-Earth Detection and Characterization

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy

    2011-01-01

    We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.

  18. Oriented Scintillation Spectrometer Experiment (OSSE). Revision A. Volume 1

    DTIC Science & Technology

    1988-05-19

    [Extracted table-of-contents fragment] System-level environmental tests, including proof model structure tests and the proof model modal survey; alignment error budgets for the field of view and rotation axis (A4); and OSSE proof model static load tests.

  19. Kinetic energy budgets in areas of intense convection

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Berecek, E. M.; Ebel, D. M.; Jedlovec, G. J.

    1980-01-01

    A kinetic energy budget analysis of the AVE-SESAME 1 period which coincided with the deadly Red River Valley tornado outbreak is presented. Horizontal flux convergence was found to be the major kinetic energy source to the region, while cross contour destruction was the major sink. Kinetic energy transformations were dominated by processes related to strong jet intrusion into the severe storm area. A kinetic energy budget of the AVE 6 period also is presented. The effects of inherent rawinsonde data errors on widely used basic kinematic parameters, including velocity divergence, vorticity advection, and kinematic vertical motion are described. In addition, an error analysis was performed in terms of the kinetic energy budget equation. Results obtained from downward integration of the continuity equation to obtain kinematic values of vertical motion are described. This alternate procedure shows promising results in severe storm situations.

  20. A collaborative vendor-buyer production-inventory systems with imperfect quality items, inspection errors, and stochastic demand under budget capacity constraint: a Karush-Kuhn-Tucker conditions approach

    NASA Astrophysics Data System (ADS)

    Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.

    2017-01-01

    In this paper, we develop an integrated inventory model considering imperfect quality items, inspection errors, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and are detected during the screening process, which is subject to two types of inspection error: type I, in which a non-defective item is classified as defective, and type II, in which a defective item is classified as non-defective. The demand during the lead time is unknown and follows the normal distribution. The lead time can be shortened by paying a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing budget. The purposes of this research are to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Karush-Kuhn-Tucker conditions, and to apply the models. The application and sensitivity analysis show that the integrated model attains a lower total inventory cost than the separated model.
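
    The Karush-Kuhn-Tucker logic can be seen in a much smaller toy problem: an EOQ-style cost with a purchasing-budget constraint. This sketch only illustrates the KKT reasoning (complementary slackness and a positive multiplier when the budget binds); it is not the paper's vendor-buyer model, and the numbers are invented:

```python
import math

# Toy problem:  minimize TC(Q) = D*K/Q + h*Q/2   subject to  c*Q <= B.
# D = demand rate, K = order cost, h = holding cost, c = unit purchase
# cost, B = purchasing budget. All values are invented for illustration.
D, K, h, c, B = 1200.0, 50.0, 2.0, 10.0, 1800.0

q_unc = math.sqrt(2 * D * K / h)          # unconstrained stationary point
if c * q_unc <= B:
    q_opt, lam = q_unc, 0.0               # budget slack -> multiplier 0
else:
    q_opt = B / c                          # budget binds -> constraint active
    # KKT stationarity: -D*K/Q^2 + h/2 + lam*c = 0  gives lam > 0
    lam = (D * K / q_opt**2 - h / 2) / c

tc = D * K / q_opt + h * q_opt / 2
print(f"Q* = {q_opt:.1f}, multiplier = {lam:.4f}, cost = {tc:.2f}")
```

    Here the unconstrained lot size (about 245 units) would cost more than the budget allows, so the constraint binds, Q* drops to B/c = 180, and the positive multiplier prices the budget's scarcity.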

  1. Cost effectiveness of the stream-gaging program in Louisiana

    USGS Publications Warehouse

    Herbert, R.A.; Carlson, D.D.

    1985-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Louisiana. Data uses and funding sources were identified for the 68 continuous-record stream gages currently (1984) in operation with a budget of $408,700. Three stream gages have uses specific to a short-term study with no need for continued data collection beyond the study. The remaining 65 stations should be maintained in the program for the foreseeable future. In addition to the current operation of continuous-record stations, a number of wells, flood-profile gages, crest-stage gages, and stage stations are serviced on the continuous-record station routes, thus increasing the current budget to $423,000. The average standard error of estimate for data collected at the stations is 34.6%. Standard errors computed in this study are one measure of streamflow errors, and can be used as guidelines in comparing the effectiveness of alternative networks. By using the routes and number of measurements prescribed by the 'Traveling Hydrographer Program,' the standard error could be reduced to 31.5% with the current budget of $423,000. If the gaging resources are redistributed, the 34.6% overall level of accuracy at the 68 continuous-record sites and the servicing of the additional wells or gages could be maintained with a budget of approximately $410,000. (USGS)

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, M A; Mignant, D L; Macintosh, B A

    In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 µm using a bright guide star and 0.19 for a magnitude 12 star.

  3. Cost-effectiveness of the stream-gaging program in New Jersey

    USGS Publications Warehouse

    Schopp, R.D.; Ulery, R.L.

    1984-01-01

    The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)

  4. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  5. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.

  6. Intuitive Tools for the Design and Analysis of Communication Payloads for Satellites

    NASA Technical Reports Server (NTRS)

    Culver, Michael R.; Soong, Christine; Warner, Joseph D.

    2014-01-01

    In an effort to make future communications satellite payload design more efficient and accessible, two tools were created with intuitive graphical user interfaces (GUIs). The first tool allows payload designers to graphically design their payload by using simple drag and drop of payload components onto a design area within the program. Information about each picked component is pulled from a database of common space-qualified communication components sold by commercial companies. Once a design is completed, various reports can be generated, such as the Master Equipment List. The second tool is a link budget calculator designed specifically for ease of use. Other features of this tool include access to a database of NASA ground based apertures for near Earth and Deep Space communication, the Tracking and Data Relay Satellite System (TDRSS) base apertures, and information about the solar system relevant to link budget calculations. The link budget tool allows for over 50 different combinations of user inputs, eliminating the need for multiple spreadsheets and the user errors associated with using them. Both of the aforementioned tools increase the productivity of space communication systems designers, and are accessible enough to allow non-communication experts to design preliminary communication payloads.

  7. Kinetic energy budget during strong jet stream activity over the eastern United States

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Scoggins, J. R.

    1980-01-01

    Kinetic energy budgets are computed during a cold air outbreak in association with strong jet stream activity over the eastern United States. The period is characterized by large generation of kinetic energy due to cross-contour flow. Horizontal export and dissipation of energy to subgrid scales of motion constitute the important energy sinks. Rawinsonde data at 3 and 6 h intervals during a 36 h period are used in the analysis and reveal that energy fluctuations on a time scale of less than 12 h are generally small even though the overall energy balance does change considerably during the period in conjunction with an upper level trough which moves through the region. An error analysis of the energy budget terms suggests that this major change in the budget is not due to random errors in the input data but is caused by the changing synoptic situation. The study illustrates the need to consider the time and space scales of associated weather phenomena in interpreting energy budgets obtained through use of higher frequency data.

  8. 76 FR 55139 - Order Making Fiscal Year 2012 Annual Adjustments to Registration Fee Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-06

    ... Congressional Budget Office (``CBO'') and Office of Management and Budget (``OMB'') to project the aggregate... given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n-step...

  9. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
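
    The two ECAS statistics named above, mean difference (bias) and standard deviation of error, reduce to elementary computations once retrieved and reference values are paired. The arrays below are synthetic stand-ins for radiance or retrieval quantities, not IASI data:

```python
# ECAS-style error statistics: mean difference (bias) and standard
# deviation of error between retrieved and reference values.
# The values are synthetic, chosen only to exercise the arithmetic.
retrieved = [285.1, 290.4, 278.9, 301.2, 295.0]
reference = [285.0, 290.0, 279.5, 300.8, 295.6]

diffs = [a - b for a, b in zip(retrieved, reference)]
bias = sum(diffs) / len(diffs)
# Sample standard deviation of the differences about the bias.
sd = (sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
print(f"mean difference = {bias:+.3f}, std of error = {sd:.3f}")
```

    In ECAS these statistics are computed in both the spectral-radiance and retrieval domains, linking the two accuracy characterizations.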

  10. Assessing and measuring wetland hydrology

    USGS Publications Warehouse

    Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.

    2013-01-01

    Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.
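
    The cumulative-error idea for a wetland water budget can be sketched as follows: each source and loss term carries its own standard uncertainty, and independent uncertainties combine in root-sum-square to bound the budget residual. All component values and uncertainties below are invented for illustration:

```python
import math

# Toy seasonal water budget (mm): inflows positive, outflows negative.
# Each entry is (value, standard uncertainty); the numbers are invented.
components = {
    "precipitation":       (+450.0, 25.0),
    "surface inflow":      (+120.0, 18.0),
    "groundwater inflow":  (+60.0,  20.0),
    "evapotranspiration":  (-380.0, 40.0),
    "surface outflow":     (-95.0,  15.0),
    "groundwater outflow": (-70.0,  25.0),
}
storage_change = 75.0     # measured change in storage (mm)
u_storage = 10.0          # its standard uncertainty (mm)

residual = sum(v for v, _ in components.values()) - storage_change
u_budget = math.sqrt(sum(u**2 for _, u in components.values()) + u_storage**2)
print(f"residual = {residual:+.0f} mm, cumulative error = {u_budget:.0f} mm")
```

    A residual well inside the cumulative error, as here, cannot be attributed to a missing flow term; only a residual larger than the combined uncertainty signals an unmeasured process.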

  11. Error Budgets for the Exoplanet Starshade (exo-s) Probe-Class Mission Study

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin

    2015-01-01

    Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 m starshade co-launched with a 1.1 m commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 m starshade intended to work with a 2.4 m telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 m starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.

  12. Error budgets for the Exoplanet Starshade (Exo-S) probe-class mission study

    NASA Astrophysics Data System (ADS)

    Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin

    2015-09-01

    Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 m starshade co-launched with a 1.1 m commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 m starshade intended to work with a 2.4 m telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 m starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.

  13. Soil moisture assimilation using a modified ensemble transform Kalman filter with water balance constraint

    NASA Astrophysics Data System (ADS)

    Wu, Guocan; Zheng, Xiaogu; Dan, Bo

    2016-04-01

    Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate the soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experiment results illustrate that adaptive forecast error inflation can reduce the analysis error, and that the proper inflation layer can be selected based on the -2log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual, at a low cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.
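
    A scalar sketch of innovation-based inflation estimation, the idea behind tuning forecast-error inflation against the innovation statistic: for innovation d = y − x_f, E[d²] = λσ_f² + σ_o², so the inflation factor λ can be recovered by moment matching. The variances below are invented, and this is not the paper's ETKF implementation:

```python
import random

# Moment-matching estimate of a forecast-error inflation factor from
# innovation statistics. sigma_f2, sigma_o2, lambda_true are invented.
random.seed(7)
sigma_f2, sigma_o2, lambda_true = 1.0, 0.25, 2.0

# Simulate innovations d = (forecast error) + (observation error), where
# the true forecast-error variance is lambda_true * sigma_f2.
n = 200_000
innovations = [random.gauss(0, (lambda_true * sigma_f2) ** 0.5)
               + random.gauss(0, sigma_o2 ** 0.5) for _ in range(n)]
d2_mean = sum(d * d for d in innovations) / n

# E[d^2] = lambda * sigma_f2 + sigma_o2  ->  solve for lambda.
lambda_hat = (d2_mean - sigma_o2) / sigma_f2
print(f"estimated inflation factor: {lambda_hat:.2f}")
```

    The paper selects where to apply such inflation by minimizing a -2log-likelihood of the innovations, a maximum-likelihood refinement of this moment-matching idea.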

  14. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making such...

  15. Improved Calibration through SMAP RFI Change Detection

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng

    2017-01-01

    Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.

  16. Cost effectiveness of stream-gaging program in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.

    1985-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the stage-discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent.
Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. Costs of these updates need to be considered in decisions concerning the feasibility of flexible servicing schedules.
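The visit-scheduling trade-off these stream-gaging analyses quantify (smaller standard error for more frequent station visits, against a fixed budget) can be sketched as a greedy allocation. This is an illustrative simplification, not the Kalman-filter (K-CERA) procedure used in the reports: the error model err = scale/sqrt(visits) and all numbers are invented.

```python
import heapq

def allocate_visits(stations, budget):
    """Greedily spend a fixed budget on station visits.

    `stations` maps name -> (error_scale, cost_per_visit). Each extra visit
    shrinks a station's standard error as scale/sqrt(visits), so each dollar
    goes to whichever station's next visit buys the largest error reduction."""
    visits = {name: 1 for name in stations}           # at least one visit each
    spent = sum(c for _, c in stations.values())

    def gain(name):
        # error reduction per dollar for one more visit to this station
        s, c = stations[name]
        v = visits[name]
        return (s / v**0.5 - s / (v + 1)**0.5) / c

    heap = [(-gain(n), n) for n in stations]          # max-heap via negation
    heapq.heapify(heap)
    while heap:
        _, n = heapq.heappop(heap)
        c = stations[n][1]
        if spent + c > budget:
            continue                                  # cannot afford this visit
        spent += c
        visits[n] += 1
        heapq.heappush(heap, (-gain(n), n))           # re-rank this station
    avg_err = sum(s / visits[n]**0.5 for n, (s, c) in stations.items()) / len(stations)
    return visits, avg_err
```

Under this toy model, noisy stations with cheap visits soak up most of the budget, mirroring the reports' finding that redistributing visits lowers the network-average standard error at the same cost.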

  17. Budgets of divergent and rotational kinetic energy during two periods of intense convection

    NASA Technical Reports Server (NTRS)

    Buechler, D. E.; Fuelberg, H. E.

    1986-01-01

    The derivations of the energy budget equations for the divergent and rotational components of kinetic energy are provided. The intense convection periods studied are: (1) synoptic-scale data at 3- or 6-hour intervals and (2) mesoalpha-scale data every 3 hours. Composite energies and averaged budgets for the periods are presented, and the effects of random data errors on derived energy parameters are investigated. The divergent and rotational kinetic energy budgets are compared; good correlation of the data is observed. The kinetic energies and budget terms increase with convective development; however, the conversions of divergent and rotational energy are opposite in sign.

  18. 22 CFR 96.33 - Budget, audit, insurance, and risk assessment requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... its governing body, if applicable, for management of its funds. The budget discloses all remuneration (including perquisites) paid to the agency's or person's board of directors, managers, employees, and... determining the type and amount of professional, general, directors' and officers', errors and omissions, and...

  19. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert

    2012-01-01

    The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogue and main parachutes. It is therefore important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction. Conventional error-budget methods therefore could not be applied. Instead, a high-fidelity Monte-Carlo simulation was performed and error bounds were determined from the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. These large errors drove a change to the altitude trigger setpoint for forward bay cover (FBC) jettison.
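The abstract's key point, that correlated and direction-biased error sources defeat conventional independence-assuming budget methods, so bounds must come from Monte-Carlo percentiles, can be illustrated with a toy simulation. All error magnitudes, the 0.8 correlation, and the 12 Pa-per-metre conversion are invented for illustration.

```python
import random

def mc_altitude_error_bounds(n=20000, seed=1):
    """Monte-Carlo 95% bounds on altitude error from correlated pressure errors.

    A shared atmosphere-model error feeds two terms, so the sources are not
    independent; bounds are taken from the simulated distribution rather than
    a root-sum-of-squares of the individual terms."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n):
        atm = rng.gauss(0.0, 50.0)                      # atmosphere model error (Pa)
        sensor = rng.gauss(0.0, 20.0)                   # independent sensor noise (Pa)
        aero = 0.8 * atm + rng.gauss(0.0, 10.0)         # aero term tracking atm error
        dp = atm + sensor + aero                        # total pressure error
        errors.append(dp / 12.0)                        # ~12 Pa per metre near sea level
    errors.sort()
    lo, hi = errors[int(0.025 * n)], errors[int(0.975 * n)]
    return lo, hi
```

The correlated aero term widens the spread well beyond what summing the three variances independently would predict, which is the reason the EFT-1 analysis bounded errors from the simulation directly.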

  20. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  1. Propagation of angular errors in two-axis rotation systems

    NASA Astrophysics Data System (ADS)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
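The root-sum-of-squares combination with sensitivity weighting that the paper tabulates can be sketched directly; the error sources and magnitudes below are invented placeholders, not values from the Sandia procurement.

```python
import math

def rss_budget(terms):
    """Root-sum-of-squares of independent error sources, each scaled by a
    sensitivity weight reflecting how strongly it couples into pointing."""
    return math.sqrt(sum((w * e) ** 2 for _, w, e in terms))

# (source, sensitivity weight, 1-sigma tolerance in arcsec) -- illustrative only
budget = [
    ("axis orthogonality", 1.0, 10.0),
    ("encoder scale",      0.5,  8.0),
    ("mount flatness",     0.7,  6.0),
]
total_1sigma = rss_budget(budget)      # k=1 (67% confidence) system error
total_2sigma = 2.0 * total_1sigma      # k=2 (95% confidence)
```

A fully correctable source would enter with weight zero after zeroing, a partially correctable one with a reduced weight, matching the paper's classification of error sources by goniometer configuration.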

  2. Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.

    ERIC Educational Resources Information Center

    Marcoulides, George A.

    1995-01-01

    A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)

  3. Quantifying uncertainty in forest nutrient budgets

    Treesearch

    Ruth D. Yanai; Carrie R. Levine; Mark B. Green; John L. Campbell

    2012-01-01

    Nutrient budgets for forested ecosystems have rarely included error analysis, in spite of the importance of uncertainty to interpretation and extrapolation of the results. Uncertainty derives from natural spatial and temporal variation and also from knowledge uncertainty in measurement and models. For example, when estimating forest biomass, researchers commonly report...

  4. 77 FR 52035 - Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... Identifier: CMS-10003] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects a technical error in the notice [Document Identifier: CMS...

  5. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors remain high, and thus their contribution to uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades.
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
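The temporally correlated random error that the framework folds into emission estimates behaves very differently from white noise when averaged over a decade. A minimal sketch, assuming an AR(1) error structure (the paper's actual error model may differ):

```python
import math

def decadal_mean_sigma(sigma_yr, rho, n=10):
    """Standard error of an n-year mean when annual estimate errors follow an
    AR(1) process with lag-1 correlation rho.

    Sums the full covariance matrix Cov(e_i, e_j) = sigma^2 * rho^|i-j|;
    with rho = 0 this reduces to the familiar sigma/sqrt(n)."""
    var = 0.0
    for i in range(n):
        for j in range(n):
            var += sigma_yr ** 2 * rho ** abs(i - j)
    return math.sqrt(var) / n
```

With rho near 1 the decadal-mean error barely shrinks below the annual error, which is why persistent national reporting errors keep fossil fuel emission uncertainty high even over long averaging windows.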

  6. Logging-related increases in stream density in a northern California watershed

    Treesearch

    Matthew S. Buffleben

    2012-01-01

    Although many sediment budgets estimate the effects of logging, few have considered the potential impact of timber harvesting on stream density. Failure to consider changes in stream density could lead to large errors in the sediment budget, particularly in the allocation between natural and anthropogenic sources of sediment. This study...

  7. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low-cost consumer-grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors due to instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted on tackling and reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.

  8. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low-cost consumer-grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors due to instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted on tackling and reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment. PMID:22247672
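The quasi-static-field idea, trusting magnetometer measurements only during intervals where the field is locally stable, can be sketched with a sliding-window stability test. This is a hypothetical simplification of the paper's QSF detector; the window length and threshold are invented.

```python
import math

def quasi_static_windows(mag_samples, window=25, tol=0.05):
    """Flag sliding windows where the magnetic field looks quasi-static.

    Stability is judged by the relative spread of the field norm over the
    window; True marks an interval usable for attitude/gyro error estimation."""
    norms = [math.sqrt(x * x + y * y + z * z) for x, y, z in mag_samples]
    flags = []
    for i in range(len(norms) - window + 1):
        w = norms[i:i + window]
        mean = sum(w) / window
        spread = (max(w) - min(w)) / mean     # relative norm variation
        flags.append(spread < tol)
    return flags
```

Gating the EKF's magnetometer updates on such flags is what lets a perturbed field still contribute useful attitude information instead of being discarded wholesale.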

  9. 40 CFR 60.4122 - Information requirements for Hg budget permit applications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Information requirements for Hg budget... requirements for Hg budget permit applications. A complete Hg Budget permit application shall include the following elements concerning the Hg Budget source for which the application is submitted, in a format...

  10. 40 CFR 97.22 - Information requirements for NOX Budget permit applications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Budget permit applications. 97.22 Section 97.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS Permits § 97.22 Information requirements for NOX Budget permit applications. A complete NOX Budget permit...

  11. Ultraspectral sounding retrieval error budget and estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping

    2011-11-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  12. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  13. Overview of the TOPEX/Poseidon Platform Harvest Verification Experiment

    NASA Technical Reports Server (NTRS)

    Morris, Charles S.; DiNardo, Steven J.; Christensen, Edward J.

    1995-01-01

    An overview is given of the in situ measurement system installed on Texaco's Platform Harvest for verification of the sea level measurement from the TOPEX/Poseidon satellite. The prelaunch error budget suggested that the total root mean square (RMS) error due to measurements made at this verification site would be less than 4 cm. The actual error budget for the verification site is within these original specifications. However, evaluation of the sea level data from three measurement systems at the platform has revealed unexpectedly large differences between the systems. Comparison of the sea level measurements from the different tide gauge systems has led to a better understanding of the problems of measuring sea level in the relatively deep ocean. As of May 1994, the Platform Harvest verification site has successfully supported 60 TOPEX/Poseidon overflights.

  14. Extended Kalman filter for attitude estimation of the earth radiation budget satellite

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack Y.

    1989-01-01

    The design and testing of an Extended Kalman Filter (EKF) for ground attitude determination, misalignment estimation, and sensor calibration of the Earth Radiation Budget Satellite (ERBS) are described. Attitude is represented by the quaternion of rotation, and the attitude estimation error is defined as an additive error. Quaternion normalization is used to increase the convergence rate and to minimize the need for filter tuning. The development of the filter dynamic model, the gyro error model, and the measurement models of the Sun sensors, the IR horizon scanner, and the magnetometers which are used to generate vector measurements are also presented. The filter is applied to real data transmitted by ERBS sensors. Results are presented and analyzed, and the EKF advantages as well as sensitivities are discussed. On the whole, the filter achieves the expected synergy, accuracy, and robustness.
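The normalization step that the abstract credits with faster convergence and less tuning can be shown in isolation: after an additive EKF correction, the quaternion estimate drifts off the unit sphere, so it is rescaled before the next propagation. A minimal sketch with invented numbers:

```python
import math

def normalize_quaternion(q):
    """Rescale a quaternion to unit norm. After an additive EKF attitude
    correction the estimate no longer represents a pure rotation; dividing
    by the norm restores the unit-quaternion constraint."""
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

# additive attitude-error correction followed by normalization (toy numbers)
q_est = [0.9, 0.1, 0.0, 0.1]              # estimate after additive update
q_est = normalize_quaternion(q_est)        # back on the unit sphere
```

Without this step the norm error would itself be fed back through the filter, one reason the ERBS work found normalization reduced the need for tuning.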

  15. Sub-basin-scale sea level budgets from satellite altimetry, Argo floats and satellite gravimetry: a case study in the North Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Kleinherenbrink, Marcel; Riva, Riccardo; Sun, Yu

    2016-11-01

    In this study, for the first time, an attempt is made to close the sea level budget on a sub-basin scale in terms of trend and amplitude of the annual cycle. We also compare the residual time series after removing the trend, the semiannual and the annual signals. To obtain errors for altimetry and Argo, full variance-covariance matrices are computed using correlation functions and their errors are fully propagated. For altimetry, we apply a geographically dependent intermission bias [Ablain et al. (2015)], which leads to differences in trends up to 0.8 mm yr-1. Since Argo float measurements are non-homogeneously spaced, steric sea levels are first objectively interpolated onto a grid before averaging. For Gravity Recovery and Climate Experiment (GRACE) gravity fields, full variance-covariance matrices are used to propagate errors and statistically filter the gravity fields. We use four different filtered gravity field solutions and determine which post-processing strategy is best for budget closure. As a reference, the standard 96 degree Dense Decorrelation Kernel-5 (DDK5)-filtered Center for Space Research (CSR) solution is used to compute the mass component (MC). A comparison is made with two anisotropic Wiener-filtered CSR solutions up to degree and order 60 and 96 and a Wiener-filtered 90 degree ITSG solution. Budgets are computed for 10 polygons in the North Atlantic Ocean, defined in a way that the error on the trend of the MC plus steric sea level remains within 1 mm yr-1. Using the anisotropic Wiener filter on CSR gravity fields expanded up to spherical harmonic degree 96, it is possible to close the sea level budget in 9 of 10 sub-basins in terms of trend.
Wiener-filtered Institute of Theoretical Geodesy and Satellite Geodesy (ITSG) and the standard DDK5-filtered CSR solutions also close the trend budget if a glacial isostatic adjustment (GIA) correction error of 10-20% is applied; however, the performance of the DDK5-filtered solution strongly depends on the orientation of the polygon due to residual striping. In 7 of 10 sub-basins, the budget of the annual cycle is closed using the DDK5-filtered CSR or the Wiener-filtered ITSG solutions. The Wiener-filtered 60 and 96 degree CSR solutions, in combination with Argo, lack amplitude and suffer from what appears to be hydrological leakage in the Amazon and Sahel regions. After reducing the trend, the semiannual and the annual signals, 24-53% of the residual variance in altimetry-derived sea level time series is explained by the combination of Argo steric sea levels and the Wiener-filtered ITSG MC. Based on this, we believe that the best overall solution for the MC of the sub-basin-scale budgets is the Wiener-filtered ITSG gravity fields. The interannual variability is primarily a steric signal in the North Atlantic Ocean, so the choice of filter and gravity field solution is not critical for this component.
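The closure test applied to each polygon can be sketched as a simple consistency check: the altimetry trend should equal the steric plus mass trends within the combined error. A minimal illustration assuming independent errors; the numbers are invented, not the paper's:

```python
def budget_closes(alt_trend, steric_trend, mass_trend,
                  alt_sig, steric_sig, mass_sig):
    """Check sea-level-budget closure for one region.

    The budget closes if the residual (altimetry minus steric-plus-mass)
    is within the root-sum-of-squares of the three trend errors."""
    residual = alt_trend - (steric_trend + mass_trend)           # mm/yr
    combined = (alt_sig**2 + steric_sig**2 + mass_sig**2) ** 0.5  # mm/yr
    return abs(residual) <= combined, residual

# illustrative trends and 1-sigma errors in mm/yr
closed, res = budget_closes(2.8, 1.1, 1.5, 0.4, 0.5, 0.6)
```

In the paper the combined error comes from full variance-covariance propagation rather than independent sigmas, but the accept/reject logic per polygon is the same.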

  16. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.

  17. Performance analysis of next-generation lunar laser retroreflectors

    NASA Astrophysics Data System (ADS)

    Ciocci, Emanuele; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Mastrofini, Marco; Currie, Douglas; Delle Monache, Giovanni; Dell'Agnello, Simone

    2017-09-01

    Starting from 1969, Lunar Laser Ranging (LLR) to the Apollo and Lunokhod Cube Corner Retroreflectors (CCRs) provided several tests of General Relativity (GR). When deployed, the Apollo/Lunokhod CCRs design contributed only a negligible fraction of the ranging error budget. Today the improvement over the years in the laser ground stations makes the lunar libration contribution relevant. So the libration now dominates the error budget, limiting the precision of the experimental tests of gravitational theories. The MoonLIGHT-2 project (Moon Laser Instrumentation for General relativity High-accuracy Tests - Phase 2) is a next-generation LLR payload developed by the Satellite/lunar/GNSS laser ranging/altimetry and Cube/microsat Characterization Facilities Laboratory (SCF_Lab) at the INFN-LNF in collaboration with the University of Maryland. With its unique design consisting of a single large CCR unaffected by librations, MoonLIGHT-2 can significantly reduce the error contribution of the reflectors to the measurement of the lunar geodetic precession and other GR tests compared to Apollo/Lunokhod CCRs. This paper treats only this specific next-generation lunar laser retroreflector (MoonLIGHT-2) and is by no means intended to address other contributions to the global LLR error budget. MoonLIGHT-2 is approved to be launched with the Moon Express 1 (MEX-1) mission and will be deployed on the Moon surface in 2018. To validate/optimize MoonLIGHT-2, the SCF_Lab is carrying out a unique experimental test called the SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and simulated space environment. The focus of this paper is to describe the SCF_Lab specialized characterization of the performance of our next-generation LLR payload.
While this payload will improve the contribution of the error budget of the space segment (MoonLIGHT-2) to GR tests and to constraints on new gravitational theories (like non-minimally coupled gravity and spacetime torsion), the description of the associated physics analysis and global LLR error budget is outside the scope of the present paper. We note that, according to Reasenberg et al. (2016), software models used for LLR physics and lunar science cannot process residuals with an accuracy better than a few centimeters and that, in order to process millimeter ranging data (or better) coming from (not only) future reflectors, it is necessary to update and improve the respective models inside the software package. The work presented here on the results of the SCF-Test thermal and optical analysis shows that good performance is expected from MoonLIGHT-2 after its deployment on the Moon. This in turn will stimulate improvements in LLR ground segment hardware and help refine the LLR software code and models. Without a significant improvement of the LLR space segment, the acquisition of improved ground LLR hardware and challenging LLR software refinements may languish for lack of motivation, since the librations of the old-generation LLR payloads largely dominate the global LLR error budget.

  18. School Budget Hold'em Facilitator's Guide

    ERIC Educational Resources Information Center

    Education Resource Strategies, 2012

    2012-01-01

    "School Budget Hold'em" is a game designed to help school districts rethink their budgeting process. It evolved out of Education Resource Strategies' (ERS) experience working with large urban districts around the country. "School Budget Hold'em" offers a completely new approach--one that can turn the budgeting process into a long-term visioning…

  19. Cost-effectiveness of the stream-gaging program in Missouri

    USGS Publications Warehouse

    Waite, L.A.

    1987-01-01

    This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data-use requirements. (Author's abstract)
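
    The budget-versus-accuracy tradeoff reported above can be illustrated with a toy interpolation. The USGS analyses themselves use a statistical uncertainty model; the power-law fit below is only a sketch passed through the two reported endpoints ($203,870 at 17 percent and $418,870 at 14 percent), with the functional form assumed purely for illustration.

```python
import math

def fit_power_law(b1, e1, b2, e2):
    """Fit error = a * budget**(-k) through two (budget, error) points."""
    k = math.log(e1 / e2) / math.log(b2 / b1)
    a = e1 * b1 ** k
    return a, k

def std_error(budget, a, k):
    """Average standard error (%) predicted at a given budget ($)."""
    return a * budget ** (-k)

# Endpoints reported for the Rolla field headquarters:
# minimum budget $203,870 -> 17%; maximum analyzed $418,870 -> 14%.
a, k = fit_power_law(203_870, 17.0, 418_870, 14.0)
for b in (203_870, 300_000, 418_870):
    print(f"${b:,}: {std_error(b, a, k):.1f}%")
```

The fitted curve reproduces both endpoints exactly and gives a plausible intermediate value, but the true error-budget relation depends on how funds are redistributed among stations.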

  20. Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data

    NASA Technical Reports Server (NTRS)

    Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen

    2006-01-01

    This study presents results of comparisons between instrumental radiation data from the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are underestimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau-average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau-average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly-mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the treatment of elevation effects, while the errors in SRB LW were mainly due to significant errors in input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.

  1. Systems engineering analysis of five 'as-manufactured' SXI telescopes

    NASA Astrophysics Data System (ADS)

    Harvey, James E.; Atanassova, Martina; Krywonos, Andrey

    2005-09-01

    Four flight models and a spare of the Solar X-ray Imager (SXI) telescope mirrors have been fabricated. The first of these is scheduled to be launched on the NOAA GOES-N satellite on July 29, 2005. A complete systems engineering analysis of the "as-manufactured" telescope mirrors has been performed that includes diffraction effects, residual design errors (aberrations), surface scatter effects, and all of the miscellaneous errors in the mirror manufacturer's error budget tree. In addition, a rigorous analysis of mosaic detector effects has been included. SXI is a staring telescope providing full solar disc images at X-ray wavelengths. For wide-field applications such as this, a field-weighted-average measure of resolution has been modeled. Our performance predictions have allowed us to use metrology data to model the "as-manufactured" performance of the X-ray telescopes and to adjust the final focal-plane location to optimize the number of spatial resolution elements in a given operational field-of-view (OFOV) for either the aerial image or the detected image. The resulting performance predictions from five separate mirrors allow us to evaluate and quantify the optical fabrication process for producing these very challenging grazing-incidence X-ray optics.

  2. Uncertainty issues in forest monitoring: All you wanted to know about uncertainties and never dared to ask

    Treesearch

    Michael Köhl; Charles Scott; Daniel Plugge

    2013-01-01

    Uncertainties are a composite of errors arising from observations and the appropriateness of models. An error budget approach can be used to identify and accumulate the sources of errors to estimate change in emissions between two points in time. Various forest monitoring approaches can be used to estimate the changes in emissions due to deforestation and forest...

  3. Modeling and analysis of pinhole occulter experiment

    NASA Technical Reports Server (NTRS)

    Ring, J. R.

    1986-01-01

    The objectives were to improve the pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).

  4. Update on the Governor's Proposed 2009-10 Budget. Report 09-01

    ERIC Educational Resources Information Center

    Woolfork, Kevin

    2009-01-01

    On December 31, 2008, Governor Arnold Schwarzenegger released the summary of his proposed budget for fiscal year 2009-10, including proposed changes to the current year (2008-09) budget. The complete budget was released on January 9, 2009. The budget plans assume current-year General Fund revenues of $91 billion--down from September's $102 billion…

  5. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; ...

    2015-04-30

    Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors remain high, so their contribution to global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
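
    The 2σ terms quoted in the abstract can be combined in quadrature to see how the shrinking atmospheric-growth-rate uncertainty trades off against the growing fossil-fuel uncertainty. A minimal sketch, assuming independent error terms; the 0.5 Pg C yr⁻¹ land-use value is an assumed placeholder, since the abstract gives no number for it, so the resulting reduction will not exactly match the paper's ~20%.

```python
import math

def combined_2sigma(*terms):
    """Combine independent 2-sigma uncertainties in quadrature (Pg C / yr)."""
    return math.sqrt(sum(t * t for t in terms))

# Order of terms: atmospheric growth rate, fossil fuel emissions, land use.
# The first two values per decade are quoted in the abstract; the land-use
# term (0.5) is an assumed placeholder.
u_1960s = combined_2sigma(1.2, 0.3, 0.5)
u_2000s = combined_2sigma(0.3, 1.0, 0.5)
print(f"1960s: {u_1960s:.2f} Pg C/yr, 2000s: {u_2000s:.2f} Pg C/yr")
```

Because the atmospheric term shrank faster than the fossil-fuel term grew, the combined uncertainty on net uptake decreases between the two decades.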

  6. CBO’s Revenue Forecasting Record

    DTIC Science & Technology

    2015-11-01

    [Chart: Forecast Errors for CBO's and the Administration's Two-Year Revenue Projections, 1983-2013; CBO's mean forecast error: 1.1%.] Congress of the United States, Congressional Budget Office, "CBO's Revenue Forecasting Record," November 2015.

  7. Near-Surface Meteorology During the Arctic Summer Cloud Ocean Study (ASCOS): Evaluation of Reanalyses and Global Climate Models.

    NASA Technical Reports Server (NTRS)

    De Boer, G.; Shupe, M.D.; Caldwell, P.M.; Bauer, Susanne E.; Persson, O.; Boyle, J.S.; Kelley, M.; Klein, S.A.; Tjernstrom, M.

    2014-01-01

    Atmospheric measurements from the Arctic Summer Cloud Ocean Study (ASCOS) are used to evaluate the performance of three atmospheric reanalyses (European Centre for Medium-Range Weather Forecasting (ECMWF) Interim reanalysis, National Center for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) reanalysis, and NCEP-DOE (Department of Energy) reanalysis) and two global climate models (CAM5 (Community Atmosphere Model 5) and NASA GISS (Goddard Institute for Space Studies) ModelE2) in simulating the high Arctic environment. Quantities analyzed include near-surface meteorological variables such as temperature, pressure, humidity and winds, surface-based estimates of cloud and precipitation properties, the surface energy budget, and lower-atmospheric temperature structure. In general, the models perform well in simulating large-scale dynamical quantities such as pressure and winds. Near-surface temperature and lower-atmospheric stability, along with surface energy budget terms, are not as well represented, due largely to errors in the simulation of cloud occurrence, phase and altitude. Additionally, a development version of CAM5, which features improved handling of cloud macrophysics, was shown to improve the simulation of cloud properties and liquid water amount. The ASCOS period also provides an excellent example of the benefits gained by evaluating individual budget terms rather than simply the net end product: large compensating errors between individual surface energy budget terms can yield a net energy budget that looks deceptively accurate.

  8. THE MOST MASSIVE GALAXIES AT 3.0 {<=} z < 4.0 IN THE NEWFIRM MEDIUM-BAND SURVEY: PROPERTIES AND IMPROVED CONSTRAINTS ON THE STELLAR MASS FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchesini, Danilo; Whitaker, Katherine E.; Brammer, Gabriel

    2010-12-10

    We use the optical to mid-infrared coverage of the NEWFIRM Medium-Band Survey (NMBS) to characterize, for the first time, the properties of a mass-complete sample of 14 galaxies at 3.0 ≤ z < 4.0 with M* > 2.5 × 10^11 M⊙, and to derive significantly more accurate measurements of the high-mass end of the stellar mass function (SMF) of galaxies at 3.0 ≤ z < 4.0. The accurate photometric redshifts and well-sampled spectral energy distributions (SEDs) provided by the NMBS, combined with the large surveyed area, result in significantly reduced contributions from photometric-redshift errors and cosmic variance to the total error budget of the SMF. The typical very massive galaxy at 3.0 ≤ z < 4.0 is red and faint in the observer's optical, with a median r-band magnitude of r_tot = 26.1 and a median rest-frame color of U − V = 1.6. About 60% of the mass-complete sample has optical colors satisfying either the U- or the B-dropout color criteria, although ~50% of these galaxies have r > 25.5. We find that ~30% of the sample has star formation rates (SFRs) from SED modeling consistent with zero, although SFRs of up to ~1-18 M⊙ yr⁻¹ are also allowed within 1σ. However, >80% of the sample is detected at 24 μm, resulting in total infrared luminosities in the range (0.5-4.0) × 10^13 L⊙. This implies the presence of either dust-enshrouded starburst activity (with SFRs of 600-4300 M⊙ yr⁻¹) and/or highly obscured active galactic nuclei (AGNs). The contribution of galaxies with M* > 2.5 × 10^11 M⊙ to the total stellar mass budget at 3.0 ≤ z < 4.0 is ~8 (+13/−3)%. Compared to recent estimates of the stellar mass density in galaxies with M* ≈ 10^9-10^11 M⊙ at z ≈ 5 and z ≈ 6, we find an evolution by a factor of 2-7 and 3-22 from z ≈ 5 and z ≈ 6, respectively, to z = 3.5. The previously found disagreement at the high-mass end between observed and model-predicted SMFs is now significant at the 3σ level when only random uncertainties are considered. However, systematic uncertainties dominate the total error budget, with errors of up to a factor of ~8 in the densities at the high-mass end, bringing the observed SMF into marginal agreement with the predicted SMF. Additional systematic uncertainties on the high-mass end could potentially be introduced by either (1) the intense star formation and/or the very common AGN activity inferred from the MIPS 24 μm detections, and/or (2) contamination by a significant population of massive, old, and dusty galaxies at z ≈ 2.6.

  9. Seasonal variability of stratospheric methane: implications for constraining tropospheric methane budgets using total column observations

    NASA Astrophysics Data System (ADS)

    Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.

    2016-11-01

    Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.

  10. Satellite Sampling and Retrieval Errors in Regional Monthly Rain Estimates from TMI AMSR-E, SSM/I, AMSU-B and the TRMM PR

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Wolff, David B.

    2010-01-01

    Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study was that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
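
    The decomposition described above, separating sampling error from retrieval error, can be sketched by comparing three monthly estimates: the satellite retrieval, a gauge estimate built only from overpass-time samples, and a gauge estimate from the full record. This is an illustrative reading of the idea rather than the authors' exact methodology, and all numbers (and the helper name `decompose_errors`) are hypothetical.

```python
import statistics

def decompose_errors(sat, gauge_sub, gauge_full):
    """Illustrative decomposition of monthly rain-estimate error (mm).

    sampling error  ~ gauge_sub vs gauge_full (same sensor, fewer samples)
    retrieval error ~ sat vs gauge_sub (same sampling, different sensor)
    Returns the population standard deviation of each difference series.
    """
    sampling = [s - f for s, f in zip(gauge_sub, gauge_full)]
    retrieval = [s - g for s, g in zip(sat, gauge_sub)]
    return statistics.pstdev(sampling), statistics.pstdev(retrieval)

# Hypothetical monthly rain totals (mm), for illustration only.
sat        = [102.0, 95.0, 130.0, 88.0]
gauge_sub  = [100.0, 98.0, 125.0, 90.0]
gauge_full = [105.0, 96.0, 128.0, 92.0]
print(decompose_errors(sat, gauge_sub, gauge_full))
```

Matching the gauge sampling to the satellite overpass times is what lets the two error components be disentangled: the first difference isolates sampling noise, the second isolates the retrieval.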

  11. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    DOE PAGES

    Ma, H. -Y.; Klein, S. A.; Xie, S.; ...

    2018-02-27

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave flux at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis shows that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.

  12. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    NASA Astrophysics Data System (ADS)

    Ma, H.-Y.; Klein, S. A.; Xie, S.; Zhang, C.; Tang, S.; Tang, Q.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.; Ahlgrimm, M.; Berg, L. K.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Liu, Y.; Merryfield, W.; Qian, Y.; Roehrig, R.; Wang, Y.-C.

    2018-03-01

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis ascribes that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.
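
    The link between an evaporative-fraction (EF) underestimate and a warm T2m bias can be sketched from surface energy balance closure. Assuming Rnet − G = H + LE and EF = LE/(H + LE) gives H = (1 − EF)(Rnet − G): a model with too-low EF puts more of the available energy into sensible heating of the near-surface air. This is a generic energy-balance sketch, not the paper's derived equation, and the flux values below are illustrative.

```python
def sensible_heat(rnet, g, ef):
    """Sensible heat flux H (W/m^2) from surface energy balance closure.

    Assumes Rnet - G = H + LE and EF = LE / (H + LE),
    so H = (1 - EF) * (Rnet - G).
    """
    return (1.0 - ef) * (rnet - g)

# Illustrative midday fluxes (W/m^2), not values from the paper.
h_obs   = sensible_heat(400.0, 50.0, 0.70)  # observed evaporative fraction
h_model = sensible_heat(400.0, 50.0, 0.50)  # model underestimates EF
print(h_obs, h_model)  # the low-EF model drives more sensible heating
```

The extra sensible heat in the low-EF case warms the near-surface air, which is the mechanism the CAUSES analysis identifies behind the largest T2m biases.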

  13. The positive financial impact of using an Intensive Care Information System in a tertiary Intensive Care Unit.

    PubMed

    Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel

    2013-03-01

    In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods, the period of manual coding (year 2007) and the period of computerized coding (year 2008), covering a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedures) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (a €194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the intensive process of manual coding by shortening the time required and reducing errors, which in turn positively impacts the ICU budget allocation.

  14. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, H. -Y.; Klein, S. A.; Xie, S.

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave flux at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis shows that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.

  15. Advanced CD-SEM solution for edge placement error characterization of BEOL pitch 32nm metal layers

    NASA Astrophysics Data System (ADS)

    Charley, A.; Leray, P.; Lorusso, G.; Sutani, T.; Takemasa, Y.

    2018-03-01

    Metrology plays an important role in edge placement error (EPE) budgeting and control for multi-patterning applications, as new critical distances need to be measured (edge to edge) and requirements become ever tighter in terms of accuracy and precision. In this paper we focus on the imec iN7 BEOL platform, and particularly on the M2 patterning scheme using SAQP + EUV block for a 7.5-track logic design. Being able to characterize block-to-SAQP edge misplacement is important in a budgeting exercise (1) but is also extremely difficult, owing to challenging edge detection with CD-SEM (similar materials, thin layers, short distances, 3D features). In this study we develop an advanced solution to measure block-to-SAQP placement and characterize it in terms of sensitivity, precision, and accuracy through comparison to reference metrology. In a second phase, the methodology is applied to budget local effects, and the results are compared to independent characterizations of the SAQP and block processes.

  16. Performance of the Gemini Planet Imager’s adaptive optics system

    DOE PAGES

    Poyneer, Lisa A.; Palmer, David W.; Macintosh, Bruce; ...

    2016-01-07

    The Gemini Planet Imager's adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. We give a definitive description of the system's algorithms and technologies as built. Ultimately, the error budget indicates that, for all targets and atmospheric conditions, AO bandwidth error is the largest term.

  17. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.

  18. Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma

    NASA Technical Reports Server (NTRS)

    Fisher, Brad L.

    2004-01-01

    The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM-equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
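
    The sampling-error component described above, which arises from extrapolating monthly rainfall from overpass-time snapshots, can be illustrated with a toy sub-sampling experiment. All rain rates and the overpass schedule below are hypothetical, and the helper name `subsample_monthly_mean` is ours, not the study's.

```python
def subsample_monthly_mean(rain_series, overpass_idx):
    """Monthly mean rain rate using only overpass-time samples (illustrative)."""
    samples = [rain_series[i] for i in overpass_idx]
    return sum(samples) / len(samples)

# Hypothetical hourly rain rates (mm/h) over a short "month"; the
# "satellite" sees the column only once every 7 hours.
rain = [0.0, 0.0, 1.2, 0.4, 0.0, 3.1, 0.0, 0.0, 0.6, 0.0, 2.0, 0.0] * 10
overpasses = range(0, len(rain), 7)

true_mean = sum(rain) / len(rain)
sub_mean = subsample_monthly_mean(rain, overpasses)
sampling_error_pct = 100.0 * (sub_mean - true_mean) / true_mean
print(true_mean, sub_mean, sampling_error_pct)
```

Because rain is intermittent, the overpass-time mean can differ substantially from the true mean even with an unbiased retrieval, which is exactly the noise the study's sub-sampled gauge estimates are designed to quantify.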

  19. Measuring changes in the illicit cigarette market using government revenue data: the example of South Africa

    PubMed Central

    van Walbeek, Corné

    2014-01-01

Background The tobacco industry claims that illicit trade in cigarettes has increased sharply since the 1990s and that government has lost substantial tax revenue. Objectives (1) To determine whether cigarette excise tax revenue has been below budget in recent years, compared with previous decades. (2) To determine trends in the size of the illicit market since 1995. Methods For (1), mean percentage errors and root mean square percentage errors were calculated for budget revenue deviation for three products (cigarettes, beer and spirits), for various subperiods. For (2), predicted changes in total consumption, using actual cigarette price and GDP changes and previously published price and income elasticity estimates, were calculated and compared with changes in tax-paid consumption. Results Cigarette excise revenues were 0.7% below budget for 2000–2012 on average, compared with 3.0% below budget for beer and 4.7% below budget for spirits. There is no evidence that illicit trade in cigarettes in South Africa increased between 2002 and 2009. There was a substantial increase in illicit trade in 2010, probably peaking in 2011. In 2012 tax-paid consumption of cigarettes increased by 2.6%, implying that the illicit market share decreased by an estimated 0.6 percentage points. Conclusions Other than in 2010, there is no evidence that illicit trade is significantly undermining government revenue. Claims that illicit trade has consistently increased over the past 15 years, and has continued its sharp increase since 2010, are not supported. PMID:24431121
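The two deviation metrics named in the Methods are standard. A minimal sketch with invented budgeted and actual revenue figures (not the South African data):

```python
import numpy as np

# Hypothetical budgeted vs. actual excise revenues over four years.
budgeted = np.array([100.0, 110.0, 120.0, 130.0])
actual   = np.array([ 99.0, 112.0, 118.0, 131.0])

pct_error = 100.0 * (actual - budgeted) / budgeted  # per-year % deviation
mpe = pct_error.mean()                              # mean percentage error
rmspe = np.sqrt((pct_error ** 2).mean())            # root mean square % error
```

The mean percentage error keeps the sign (systematic over- or under-budgeting), while the root mean square version measures overall deviation regardless of direction; the RMS is always at least as large as the absolute mean.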

  20. Pattern uniformity control in integrated structures

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Biesemans, Serge; Enomoto, Masashi

    2017-03-01

In our previous paper dealing with multi-patterning, we proposed a new indicator to quantify the quality of final wafer pattern transfer, called interactive pattern fidelity error (IPFE). It detects patterning failures resulting from any source of variation in creating integrated patterns. IPFE is a function of overlay and edge placement error (EPE) of all layers comprising the final pattern (i.e. lower and upper layers). In this paper, we extend the use cases with a via case in addition to the bridge case (Block on Spacer). We propose an IPFE budget and CD budget using simple geometric and statistical models with analysis of variance (ANOVA). In addition, we validate the model with experimental data. From the experimental results, improvements in overlay, local CDU (LCDU) of contact hole (CH) or pillar patterns (especially stochastic pattern noise (SPN)) and pitch walking are all critical to meet budget requirements. We also provide a special note about the importance of the line length used in analyzing LWR. We find that the IPFE and CD budget requirements are consistent with the ITRS technical requirements table. Therefore, the IPFE concept can be adopted for a variety of integrated structures comprising digital logic circuits. Finally, we suggest how to use IPFE for yield management and optimization requirements for each process.

  1. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014

  2. Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models

    NASA Astrophysics Data System (ADS)

    Southworth, John

    2010-11-01

    I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. 
An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
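The perturbation analysis used to build these error budgets can be sketched generically (an assumed form of the technique, not the JKTEBOP implementation): shift each input by its one-sigma error, record the resulting shift in the output quantity, and combine the shifts in quadrature. Here `planet_density` and the input values are toy placeholders:

```python
import numpy as np

def planet_density(mass, radius):
    """Toy output quantity: mean density scales as mass / radius**3."""
    return mass / radius**3

# (value, 1-sigma error) for each input, in arbitrary units.
inputs = {"mass": (1.0, 0.05), "radius": (1.0, 0.02)}

nominal = planet_density(*(v for v, _ in inputs.values()))

# Perturb one input at a time and record the output shift it causes.
contributions = {}
for name, (value, sigma) in inputs.items():
    perturbed = {k: v for k, (v, _) in inputs.items()}
    perturbed[name] = value + sigma
    contributions[name] = planet_density(**perturbed) - nominal

# Quadrature sum of the individual contributions gives the total error.
total_error = np.sqrt(sum(c**2 for c in contributions.values()))
```

A side benefit, exploited in the abstract, is that the per-input `contributions` form a complete error budget: they show directly which additional measurement (photometric or spectroscopic) would most reduce the total error.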

  3. Lens-mount stability trade-off: a survey exemplified for DUV wafer inspection objectives

    NASA Astrophysics Data System (ADS)

    Bouazzam, Achmed; Erbe, Torsten; Fahr, Stephan; Werschnik, Jan

    2015-09-01

The position stability of optical elements is an essential part of the tolerance budget of an optical system, because compensating for it would require an alignment step after the lens has left the factory. In order to achieve a given as-built performance, the stability error contribution needs to be known and accounted for. For a high-end lens at the edge of current technology, not knowing this contribution, or under- or overestimating it, becomes a serious cost and risk factor. If it is overestimated, the remaining parts of the budget need to be tighter. If it is underestimated, the entire project might fail. For many mounting principles the stability benchmark is based on previous systems or on information gathered by elaborate testing of complete optical systems. This makes the development of a new system a risky endeavour, because these experiences are not sufficiently precise and tend not to be transferable when scaling of the optical elements is intended. This contribution discusses the influences of different optical mounting concepts on position stability, using the example of high numerical aperture (HNA) inspection lenses working in the deep ultraviolet (DUV) spectrum. A method to investigate the positional stability is presented for selected mounting examples typical for inspection lenses.

  4. Deuterium target data for precision neutrino-nucleus cross sections

    DOE PAGES

    Meyer, Aaron S.; Betancourt, Minerba; Gran, Richard; ...

    2016-06-23

Amplitudes derived from scattering data on elementary targets are basic inputs to neutrino-nucleus cross section predictions. A prominent example is the isovector axial nucleon form factor, F_A(q²), which controls charged current signal processes at accelerator-based neutrino oscillation experiments. Previous extractions of F_A from neutrino-deuteron scattering data rely on a dipole shape assumption that introduces an unquantified error. A new analysis of world data for neutrino-deuteron scattering is performed using a model-independent, and systematically improvable, representation of F_A. A complete error budget for the nucleon isovector axial radius leads to r_A² = 0.46(22) fm², with a much larger uncertainty than determined in the original analyses. The quasielastic neutrino-neutron cross section is determined as σ(ν_μ n → μ⁻p)|_{E_ν = 1 GeV} = 10.1(0.9) × 10⁻³⁹ cm². The propagation of nucleon-level constraints and uncertainties to nuclear cross sections is illustrated using MINERvA data and the GENIE event generator. Furthermore, these techniques can be readily extended to other amplitudes and processes.

  5. Ready-to-use pre-filled syringes of atropine for anaesthesia care in French hospitals - a budget impact analysis.

    PubMed

    Benhamou, Dan; Piriou, Vincent; De Vaumas, Cyrille; Albaladejo, Pierre; Malinovsky, Jean-Marc; Doz, Marianne; Lafuma, Antoine; Bouaziz, Hervé

    2017-04-01

Patient safety is improved by the use of labelled, ready-to-use, pre-filled syringes (PFS) when compared to conventional methods of syringe preparation (CMP) of the same product from an ampoule. However, the PFS presentation costs more than the CMP presentation. To estimate the budget impact for French hospitals of switching from atropine in ampoules to atropine PFS for anaesthesia care. A model was constructed to simulate the financial consequences of the use of atropine PFS in operating theatres, taking into account wastage and medication errors. The model tested different scenarios and a sensitivity analysis was performed. In a reference scenario, the systematic use of atropine PFS rather than atropine CMP yielded a net one-year budget saving of €5,255,304. Medication errors outweighed other cost factors relating to the use of atropine CMP (€9,425,448). Avoidance of wastage in the case of atropine CMP (prepared and unused) was a major source of savings (€1,167,323). Significant savings were made by means of other scenarios examined. The sensitivity analysis suggests that the results obtained are robust and stable for a range of parameter estimates and assumptions. The financial model was based on data obtained from the literature and expert opinions. The budget impact analysis shows that even though atropine PFS is more expensive than atropine CMP, its use would lead to significant cost savings. Savings would mainly be due to fewer medication errors and their associated consequences and the absence of wastage when atropine syringes are prepared in advance.
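The structure of such a budget impact model can be sketched with placeholder numbers; none of the rates or prices below come from the study, they only show how wastage and medication-error costs can outweigh a higher unit price:

```python
# All figures are illustrative assumptions, not the French study's inputs.
n_uses = 1_000_000                 # annual atropine administrations modelled
price_pfs, price_cmp = 3.0, 1.0    # cost per syringe (EUR)
wastage_rate_cmp = 0.40            # share of CMP syringes prepared but unused
error_rate_pfs, error_rate_cmp = 0.001, 0.005  # medication error rates
cost_per_error = 2000.0            # assumed mean cost of one medication error (EUR)

def annual_cost(price, wastage_rate, error_rate):
    # Syringes prepared must cover uses plus the wasted fraction.
    prepared = n_uses / (1.0 - wastage_rate)
    return prepared * price + n_uses * error_rate * cost_per_error

budget_impact = annual_cost(price_pfs, 0.0, error_rate_pfs) \
              - annual_cost(price_cmp, wastage_rate_cmp, error_rate_cmp)
# A negative budget_impact means the dearer PFS presentation still saves money.
```

With these placeholder inputs the error-cost term dominates, reproducing the qualitative conclusion of the abstract; a sensitivity analysis would vary each rate over a plausible range and check the sign of `budget_impact`.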

  6. Determination of the carbon budget of a pasture: effect of system boundaries and flux uncertainties

    NASA Astrophysics Data System (ADS)

    Felber, R.; Bretscher, D.; Münger, A.; Neftel, A.; Ammann, C.

    2015-12-01

Carbon (C) sequestration in the soil is considered a potentially important mechanism to mitigate greenhouse gas (GHG) emissions of the agricultural sector. It can be quantified by the net ecosystem carbon budget (NECB) describing the change of soil C as the sum of all relevant import and export fluxes. NECB was investigated here in detail for an intensively grazed dairy pasture in Switzerland. Two budget approaches with different system boundaries were applied: NECBtot for system boundaries including the grazing cows and NECBpast for system boundaries excluding the cows. CO2 and CH4 exchange induced by soil/vegetation processes as well as direct emissions by the animals were derived from eddy covariance measurements. Other C fluxes were either measured (milk yield, concentrate feeding) or derived based on animal performance data (intake, excreta). For the investigated year, both approaches resulted in a small non-significant C loss: NECBtot = −13 ± 61 g C m⁻² yr⁻¹ and NECBpast = −17 ± 81 g C m⁻² yr⁻¹. The considerable uncertainties, depending on the approach, were mainly due to errors in the CO2 exchange or in the animal-related fluxes. The associated GHG budget revealed CH4 emissions from the cows to be the major contributor, but with much lower uncertainty compared to NECB. Although only one year of data limits the representativeness of the carbon budget results, they demonstrated the important contribution of the non-CO2 fluxes depending on the chosen system boundaries and the effect of their propagated uncertainty in an exemplary way. The simultaneous application and comparison of both NECB approaches provides a useful consistency check for the carbon budget determination and can help to identify and eliminate systematic errors.

  7. Determination of the carbon budget of a pasture: effect of system boundaries and flux uncertainties

    NASA Astrophysics Data System (ADS)

    Felber, Raphael; Bretscher, Daniel; Münger, Andreas; Neftel, Albrecht; Ammann, Christof

    2016-05-01

Carbon (C) sequestration in the soil is considered a potentially important mechanism to mitigate greenhouse gas (GHG) emissions of the agricultural sector. It can be quantified by the net ecosystem carbon budget (NECB) describing the change of soil C as the sum of all relevant import and export fluxes. NECB was investigated here in detail for an intensively grazed dairy pasture in Switzerland. Two budget approaches with different system boundaries were applied: NECBtot for system boundaries including the grazing cows and NECBpast for system boundaries excluding the cows. CO2 and CH4 exchange induced by soil/vegetation processes as well as direct emissions by the animals were derived from eddy covariance measurements. Other C fluxes were either measured (milk yield, concentrate feeding) or derived based on animal performance data (intake, excreta). For the investigated year, both approaches resulted in a small near-neutral C budget: NECBtot = −27 ± 62 g C m⁻² yr⁻¹ and NECBpast = 23 ± 76 g C m⁻² yr⁻¹. The considerable uncertainties, depending on the approach, were mainly due to errors in the CO2 exchange or in the animal-related fluxes. The comparison of the NECB results with the annual exchange of other GHG revealed CH4 emissions from the cows to be the major contributor in terms of CO2 equivalents, but with much lower uncertainty compared to NECB. Although only 1 year of data limits the representativeness of the carbon budget results, they demonstrate the important contribution of the non-CO2 fluxes depending on the chosen system boundaries and the effect of their propagated uncertainty in an exemplary way. The simultaneous application and comparison of both NECB approaches provides a useful consistency check for the carbon budget determination and can help to identify and eliminate systematic errors.
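The NECB bookkeeping amounts to summing signed fluxes and propagating their independent uncertainties in quadrature. A sketch with illustrative values and an assumed sign convention (imports to the soil positive, exports negative); the numbers are not the paper's:

```python
import math

# (flux, 1-sigma uncertainty) in g C m-2 yr-1; illustrative values only.
fluxes = {
    "net CO2 exchange": (-60.0, 60.0),
    "CH4 emission":     (-5.0,   1.0),
    "concentrate feed": (40.0,   8.0),
    "milk export":      (-30.0,  6.0),
    "excreta return":   (28.0,  40.0),
}

necb = sum(v for v, _ in fluxes.values())
# Assuming the flux errors are independent, they combine in quadrature:
necb_err = math.sqrt(sum(s**2 for _, s in fluxes.values()))
```

The sketch also shows why the budget uncertainty is dominated by the largest individual terms (here the CO2 exchange and the animal-derived excreta flux), exactly the pattern the abstract reports.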

  8. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers that have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.

  9. Science support for the Earth radiation budget experiment

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    1994-01-01

    The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: The development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions. The comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADM's) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: the misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types. For regions containing mixtures of scene types, the errors were typically less than 5 percent, and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view zenith angle dependent bias in the radiative fluxes.

  10. Treatment of temporal aliasing effects in the context of next generation satellite gravimetry missions

    NASA Astrophysics Data System (ADS)

    Daras, Ilias; Pail, Roland

    2017-09-01

    Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.

  11. Earth radiation budget measurements from satellites and their interpretation for climate modeling and studies

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.; Stephens, G. L.; Campbell, G. G.

    1980-01-01

The annual and seasonal averaged Earth atmosphere radiation budgets derived from the most complete set of satellite observations available are presented. The budgets were derived from a composite of 48 monthly mean radiation budget maps. Annually and seasonally averaged radiation budgets are presented as global averages and zonal averages. The geographic distribution of the various radiation budget quantities is described. The annual cycle of the radiation budget was analyzed and the annual variability of net flux was shown to be largely dominated by the regular semiannual and annual cycles forced by external Earth-Sun geometry variations. Radiative transfer calculations were compared to the observed budget quantities and surface budgets were additionally computed with particular emphasis on discrepancies that exist between the present computations and previous surface budget estimates.

  12. Trueness verification of actual creatinine assays in the European market demonstrates a disappointing variability that needs substantial improvement. An international study in the framework of the EC4 creatinine standardization working group.

    PubMed

    Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro

    2008-01-01

The European In Vitro Diagnostics (IVD) directive requires traceability to reference methods and materials of analytes. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested for compliance with a set of three criteria: trueness, i.e., no significant bias relative to the target value; between-laboratory variation; and within-laboratory variation relative to the maximum allowable error. For the lower and intermediate level, values differed significantly from the target value in the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. The total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. Best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing needs for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. The calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.
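The compliance test implied by these criteria can be sketched with the common total-error model TE = |bias| + 1.65 × CV (an assumed form; the abstract does not state the exact formula used), which makes explicit how a large bias "consumes" the error budget even when imprecision is good:

```python
def passes_budget(bias_pct, cv_pct, allowable_total_error_pct):
    """Check a method against an allowable total error budget.

    Assumes the common one-sided model TE = |bias| + 1.65 * CV,
    with all quantities expressed as percentages.
    """
    total_error = abs(bias_pct) + 1.65 * cv_pct
    return total_error <= allowable_total_error_pct

# A method whose bias consumes most of a 15% budget fails despite a decent CV:
print(passes_budget(bias_pct=10.0, cv_pct=4.0, allowable_total_error_pct=15.0))
# → False
```

With the same CV but a 2% bias the method passes, which is the abstract's point: recalibration (reducing bias), not better precision, is what most laboratories needed.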

  13. Linking morphodynamic response with sediment mass balance on the Colorado River in Marble Canyon: issues of scale, geomorphic setting, and sampling design

    USGS Publications Warehouse

    Grams, Paul E.; Topping, David J.; Schmidt, John C.; Hazel, Joseph E.; Kaplinski, Matt

    2013-01-01

    Measurements of morphologic change are often used to infer sediment mass balance. Such measurements may, however, result in gross errors when morphologic changes over short reaches are extrapolated to predict changes in sediment mass balance for long river segments. This issue is investigated by examination of morphologic change and sediment influx and efflux for a 100 km segment of the Colorado River in Grand Canyon, Arizona. For each of four monitoring intervals within a 7 year study period, the direction of sand-storage response within short morphologic monitoring reaches was consistent with the flux-based sand mass balance. Both budgeting methods indicate that sand storage was stable or increased during the 7 year period. Extrapolation of the morphologic measurements outside the monitoring reaches does not, however, provide a reasonable estimate of the magnitude of sand-storage change for the 100 km study area. Extrapolation results in large errors, because there is large local variation in site behavior driven by interactions between the flow and local bed topography. During the same flow regime and reach-average sediment supply, some locations accumulate sand while others evacuate sand. The interaction of local hydraulics with local channel geometry exerts more control on local morphodynamic response than sand supply over an encompassing river segment. Changes in the upstream supply of sand modify bed responses but typically do not completely offset the effect of local hydraulics. Thus, accurate sediment budgets for long river segments inferred from reach-scale morphologic measurements must incorporate the effect of local hydraulics in a sampling design or avoid extrapolation altogether.

  14. Towards the 1 mm/y stability of the radial orbit error at regional scales

    NASA Astrophysics Data System (ADS)

    Couhert, Alexandre; Cerri, Luca; Legeais, Jean-François; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel

    2015-01-01

    An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular this study reviews orbit errors dependent on the tracking technique, with an aim to monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on Jason orbit is assessed. We also examine the impact of analysis method on the inference of Geographically Correlated Errors as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in the Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West “order-1” pattern at the 2 mm/y level (secular) and 4 mm level (seasonal), over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales and the challenge of evaluating such an improvement using in situ independent data is discussed.

  15. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, H. -Y.; Klein, S. A.; Xie, S.

Many weather forecasting and climate models simulate a warm surface air temperature (T2m) bias over mid-latitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multi-model intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to T2m bias using a short-term hindcast approach with observations mainly from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site during the period of April to August 2011. The present study examines the contribution of surface energy budget errors to the bias. All participating models simulate higher net shortwave and longwave radiative fluxes at the surface but there is no consistency on signs of biases in latent and sensible heat fluxes over the Central U.S. and ARM SGP. Nevertheless, biases in net shortwave and downward longwave fluxes, as well as surface evaporative fraction (EF) are the main contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF is affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis suggests that radiation errors are always an important source of T2m error for long-term climate runs with EF errors either of equal or lesser importance. However, for the short-term hindcasts, EF errors are more important provided a model has a substantial EF bias.

  16. Towards the 1 mm/y Stability of the Radial Orbit Error at Regional Scales

    NASA Technical Reports Server (NTRS)

    Couhert, Alexandre; Cerri, Luca; Legeais, Jean-Francois; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel

    2015-01-01

    An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular this study reviews orbit errors dependent on the tracking technique, with an aim to monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on Jason orbit is assessed. We also examine the impact of analysis method on the inference of Geographically Correlated Errors as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in the Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West "order-1" pattern at the 2 mm/y level (secular) and 4 mm level (seasonal), over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales and the challenge of evaluating such an improvement using in situ independent data is discussed.

  17. Towards the 1 mm/y Stability of the Radial Orbit Error at Regional Scales

    NASA Technical Reports Server (NTRS)

    Couhert, Alexandre; Cerri, Luca; Legeais, Jean-Francois; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel

    2014-01-01

    An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular, this study reviews orbit errors dependent on the tracking technique, with the aim of monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on the Jason orbits are assessed. We also examine the impact of the analysis method on the inference of Geographically Correlated Errors, as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West "order-1" pattern at the 2 mm/y level (secular) and 4 mm level (seasonal) over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales, and the challenge of evaluating such an improvement using independent in situ data, are discussed.

  18. Generalized sediment budgets of the Lower Missouri River, 1968–2014

    USGS Publications Warehouse

    Heimann, David C.

    2016-09-13

    Sediment budgets of the Lower Missouri River were developed in a study led by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers. The scope of the study included the development of a long-term (post-impoundment, 1968–2014) average annual sediment budget and selected annual, monthly, and daily sediment budgets for reaches and periods for which adequate data were available. Included in the analyses were 31 main-stem and tributary stations of the Lower Missouri River and two Mississippi River stations: the Mississippi River below Grafton, Illinois, and the Mississippi River at St. Louis, Missouri.
    Long-term average annual suspended-sediment loads of Missouri River main-stem stations ranged from 0.33 million tons at the Missouri River at Yankton, South Dakota, station to 71.2 million tons at the Missouri River at Hermann, Mo., station. Gaged tributary gains accounted for 9–36 percent of the local reach budgets, and cumulative gaged tributary contributions accounted for 84 percent of the long-term average suspended-sediment load of the Missouri River at Hermann, Mo., station. Although the sediment budgets for seven defined main-stem reaches generally were incomplete (missing bedload, reach storage, and ungaged tributary contributions), the budget residuals (the net result of sediment inputs and outputs) for six of the seven reaches ranged from -7.0 to 1.7 million tons, or from -9.2 to 4.0 percent of the reach output suspended-sediment load, and were within the 10-percent reported measurement error of annual suspended-sediment loads for large rivers. The remaining reach, downstream from Gavins Point Dam, extended from Yankton, S. Dak., to Sioux City, Iowa, and had a budget residual of -9.8 million tons, which was -88 percent of the suspended-sediment load at Sioux City.
    The Lower Missouri River reach from Omaha, Nebraska, to Nebraska City, Nebr., had periods of concurrent sediment data for each primary budget component with which to determine a suspended-sediment budget for selected annual, monthly, and daily time increments. The temporal changes in the cumulative annual budget residuals were poorly correlated with the comparatively steady 1968–2011 annual stage trends at the Missouri River at Nebraska City, Nebr., station. An accurate total sediment budget requires concurrent data for all primary suspended-sediment and bedload components for a reach of interest throughout a period. Such a complete budget, with concurrent records for the suspended-sediment load and bedload components, is unavailable for any reach and period in the Lower Missouri River. The primary data gaps are in bedload data, and also in suspended-sediment gains and losses, including ungaged tributary inputs and sediment storage. Bedload data gaps in the Missouri River Basin are much more prevalent than suspended-sediment data gaps, and the first step in the development of reach bedload budgets is the establishment of a standardized bedload monitoring program at main-stem stations.
    The temporal changes in flow-adjusted suspended-sediment concentrations analyzed at main-stem Missouri River stations indicated an overall downward change in concentrations between 1968 and 2014. Temporary declines in flow-adjusted suspended-sediment concentrations during and following large floods were evident but generally returned to near pre-flood values within about 6 months.
    Data uncertainties associated with the development of a sediment budget include those associated with the collection of suspended-sediment and bedload data and the computation of suspended-sediment loads. These uncertainties vary depending on the frequency of data collection, the variability of the conditions represented by the discrete samples, and the statistical approach to suspended-sediment load computations. The coefficient of variation of suspended-sediment loads for 1968–2014 was greater at Missouri River tributary stations (75.0 percent) than at main-stem stations (47.1 percent). The lower coefficient of variation at main-stem stations is primarily the result of the lower variability in streamflow and sediment discharge at those stations. To obtain similar accuracy in suspended-sediment loads at main-stem and tributary stations, a longer period of record is required for the tributary stations. During 1968–2014, however, the Missouri River main-stem station record was much more complete (87 percent) than the tributary station record (28 percent).
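    The reach-budget bookkeeping described above can be sketched in a few lines: the residual is the net of sediment inputs and outputs, judged against the roughly 10-percent measurement error reported for annual suspended-sediment loads on large rivers. The station loads below are hypothetical, not the study's values.

```python
# Sketch of a reach sediment-budget residual (hypothetical loads, in tons/yr).
def reach_residual(upstream_in, tributary_in, downstream_out):
    """Residual (inputs minus output) and residual as percent of the output load."""
    residual = upstream_in + tributary_in - downstream_out
    return residual, 100.0 * residual / downstream_out

residual, pct = reach_residual(upstream_in=40.0e6, tributary_in=28.0e6,
                               downstream_out=71.2e6)
# Compare against the ~10 percent measurement error of annual loads on large rivers.
within_measurement_error = abs(pct) <= 10.0
```

    A residual inside the measurement-error band, as here, cannot be distinguished from sampling noise; the -88 percent residual below Gavins Point Dam clearly can.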

  19. Evaluating the design of an earth radiation budget instrument with system simulations. Part 2: Minimization of instantaneous sampling errors for CERES-I

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert

    1994-01-01

    Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and the Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number of observations lying on grid-area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target-area fluxes. Three retrieval procedures using data from both CERES-I scanners (cross-track and rotating azimuth plane) are compared. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation procedure estimates instantaneous ADMs from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.

  20. Y-House: Your Match Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oristaglio, Michael L.

    2016-03-16

    Y-House is a project in Solar Decathlon 2015. The design objective of Y-House is to re-envision the micro-home through a focus on merging efficiency with spaciousness, personalized form, and openness to the natural environment. The main project objective during Budget Period 1 was to complete the design of Y-House, including the research needed to fully specify its mechanical and electrical systems. The team also had an objective to finalize most of the construction planning before entering Budget Period 2. The main project objective in Budget Period 2 was to complete construction of Y-House for participation in the SD 2015 competition event in Irvine, California, in October 2015. During both budget periods, the team continuously sought sponsors to fund its mission.

  1. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

    The swing arm profilometer (SAP) has played a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment error leads to low-order errors, such as aspherical aberration and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between the alignment parameters and the test results for axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm-length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors, except power, with an accuracy of 0.1 μm root-mean-square.

  2. The Planning, Programming and Budgeting System (PPBS). A Primer

    DTIC Science & Technology

    1987-01-01

    Commands input through the JCS as well. The DRB reviews and resolves major issues, as required, prior to final DG publication. The DG is designed to... Air Staff action officer, and takes you through a complete PPBS cycle as an aid to better understanding the overall process. You will find that it... (JPAM) -- ISSUES -- THE PROGRAM DECISION MEMORANDUM (PDM) -- THE BUDGET ESTIMATE SUBMISSION (BES) -- BUDGETING -- BUDGET REVIEW - PROGRAM

  3. Performance-Driven Budgeting: The Example of New York City's Schools. ERIC Digest.

    ERIC Educational Resources Information Center

    Siegel, Dorothy

    This digest examines a completed pilot program in performance-driven budgeting (PDB) in the New York City public-school system. PDB links school-level budgeting and school planning; that is, decisions about resources must be aligned with school-developed instructional-improvement plans. The digest highlights how PDB came about; its primary goal;…

  4. System Engineering the Space Infrared Interferometric Telescope (SPIRIT)

    NASA Technical Reports Server (NTRS)

    Hyde, Tristram T.; Leisawitz, David T.; Rinehart, Stephen

    2007-01-01

    The Space Infrared Interferometric Telescope (SPIRIT) was designed to accomplish three scientific objectives: (1) learn how planetary systems form from protostellar disks and how they acquire their inhomogeneous chemical composition; (2) characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different types form; and (3) learn how high-redshift galaxies formed and merged to form the present-day population of galaxies. SPIRIT will accomplish these objectives through infrared observations with a two-aperture interferometric instrument. This paper gives an overview of the SPIRIT design and operation, and of how the three-design-cycle concept study was completed. The error budget for several key performance values allocates tolerances to all contributing factors, and a performance model of the spacecraft-plus-instrument system demonstrates that those allocations are met with margin.
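    A generic sketch of the kind of allocation check described above (not SPIRIT's actual budget): contributor tolerances are commonly combined by root-sum-square, and the combined value must sit under the top-level allocation with margin. The numbers below are hypothetical.

```python
import math

# Root-sum-square roll-up of contributor tolerances against a top-level
# allocation (hypothetical values; not SPIRIT's actual error budget).
def rss_margin(contributors, top_level):
    """Return (combined RSS, fractional margin against the top-level allocation)."""
    rss = math.sqrt(sum(c ** 2 for c in contributors))
    return rss, (top_level - rss) / top_level

combined, margin = rss_margin(contributors=[3.0, 4.0], top_level=10.0)
```

    RSS combination assumes the contributors are independent; correlated error sources must instead be summed linearly before being rolled up.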

  5. Dual view Geostationary Earth Radiation Budget from the Meteosat Second Generation satellites.

    NASA Astrophysics Data System (ADS)

    Dewitte, Steven; Clerbaux, Nicolas; Ipe, Alessandro; Baudrez, Edward; Moreels, Johan

    2017-04-01

    The diurnal cycle of the radiation budget is a key component of the tropical climate. The geostationary Meteosat Second Generation (MSG) satellites, carrying both the broadband Geostationary Earth Radiation Budget (GERB) instrument with a nadir resolution of 50 km and the multispectral Spinning Enhanced Visible and InfraRed Imager (SEVIRI) with a nadir resolution of 3 km, offer a unique opportunity to observe this diurnal cycle. The geostationary orbit has the advantage of good temporal sampling but the disadvantage of fixed viewing angles, which makes the measurements of the broadband Top Of Atmosphere (TOA) radiative fluxes more sensitive to angle-dependent errors. The Meteosat-10 (MSG-3) satellite observes the earth from the standard position at 0° longitude. From October 2016 onwards, the Meteosat-8 (MSG-1) satellite makes observations from a new position at 41.5° East over the Indian Ocean. The dual view from Meteosat-8 and Meteosat-10 allows the assessment and correction of angle-dependent systematic errors in the flux estimates. We demonstrate this capability with the validation of a new method for estimating the clear-sky TOA albedo from the SEVIRI instruments.

  6. Comparisons of the error budgets associated with ground-based FTIR measurements of atmospheric CH4 profiles at Île de la Réunion and Jungfraujoch.

    NASA Astrophysics Data System (ADS)

    Vanhaelewyn, Gauthier; Duchatelet, Pierre; Vigouroux, Corinne; Dils, Bart; Kumps, Nicolas; Hermans, Christian; Demoulin, Philippe; Mahieu, Emmanuel; Sussmann, Ralf; de Mazière, Martine

    2010-05-01

    The Fourier Transform Infrared (FTIR) remote measurements of atmospheric constituents at the observatories at Saint-Denis (20.90°S, 55.48°E, 50 m a.s.l., Île de la Réunion) and Jungfraujoch (46.55°N, 7.98°E, 3580 m a.s.l., Switzerland) are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC). The European NDACC FTIR data for CH4 were improved and homogenized among the stations in the EU project HYMN. One important application of these data is their use for the validation of satellite products, such as SCIAMACHY or IASI CH4 columns. It is therefore very important that the errors and uncertainties associated with the ground-based FTIR CH4 data are well characterized. In this poster we present a comparison of the errors on retrieved vertical concentration profiles of CH4 between Saint-Denis and Jungfraujoch. At both stations, we have used the same retrieval algorithm, namely SFIT2 v3.92, developed jointly at the NASA Langley Research Center, the National Center for Atmospheric Research (NCAR), and the National Institute of Water and Atmospheric Research (NIWA) at Lauder, New Zealand, together with error evaluation tools developed at the Belgian Institute for Space Aeronomy (BIRA-IASB). The error components investigated in this study are: smoothing, noise, temperature, instrumental line shape (ILS) (in particular the modulation amplitude and phase), spectroscopy (in particular the pressure broadening and intensity), interfering species, and solar zenith angle (SZA) errors. We determine whether the characteristics of the sites in terms of altitude, geographic location, and atmospheric conditions produce significant differences in the error budgets for the retrieved CH4 vertical profiles.

  7. Use of Numerical Groundwater Model and Analytical Empirical Orthogonal Function for Calibrating Spatiotemporal pattern of Pumpage, Recharge and Parameter

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.

    2016-12-01

    This study develops a novel methodology for the spatiotemporal calibration of very large sets of groundwater recharge values and parameters by coupling a specialized numerical model with an analytical empirical orthogonal function (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural-network-based response matrix with electrical-consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e., horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF with the simulated error hydrograph of groundwater storage, in order to identify the multiple error sources and quantify the revised volume. The objective function of the optimization model minimizes the root mean square error (RMSE) of the simulated storage-error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the governing equation in the transient state. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulation period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance, and surface-water recharge values among the four aquifers are 126, 96, and 1,080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, because of efficient filtering of the transmission induced by the estimated error and the recharge across the boundary. Moreover, the average simulated error percentage of groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%. 
    This demonstrates that the developed methodology not only can effectively detect the flow tendency and error sources in all aquifers to achieve accurate spatiotemporal calibration, but also can capture the peaks and fluctuations of groundwater level in the shallow aquifer.

  8. Estimating diffusivity from the mixed layer heat and salt balances in the North Pacific

    NASA Astrophysics Data System (ADS)

    Cronin, M. F.; Pelland, N.; Emerson, S. R.; Crawford, W. R.

    2015-12-01

    Data from two National Oceanic and Atmospheric Administration (NOAA) surface moorings in the North Pacific, in combination with data from satellite, Argo floats, and gliders (when available), are used to evaluate the residual diffusive flux of heat across the base of the mixed layer from the surface mixed layer heat budget. The diffusion coefficient (i.e., diffusivity) is then computed by dividing the diffusive flux by the temperature gradient in the 20-m transition layer just below the base of the mixed layer. At Station Papa in the NE Pacific subpolar gyre, this diffusivity is 1×10-4 m2/s during summer, increasing to ~3×10-4 m2/s during fall. During late winter and early spring, diffusivity has large errors. At other times, diffusivity computed from the mixed layer salt budget at Papa correlates with that from the heat budget, giving confidence that the results are robust for all seasons except late winter-early spring and can be used for other tracers. In comparison, at the Kuroshio Extension Observatory (KEO) in the NW Pacific subtropical recirculation gyre, somewhat larger diffusivities are found based upon the mixed layer heat budget: ~3×10-4 m2/s during the warm season and more than an order of magnitude larger during the winter, although again, wintertime errors are large. These larger values at KEO appear to be due to the increased turbulence associated with summertime typhoons and the weaker wintertime stratification.
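    The diffusivity estimate described above can be sketched as kappa = F / (rho * cp * dT/dz): the residual diffusive heat flux across the mixed-layer base divided by the temperature gradient of the ~20-m transition layer. The flux and temperature values below are hypothetical, not the Papa or KEO data.

```python
# kappa = F / (rho * cp * dT/dz); hypothetical values, not Papa/KEO data.
RHO = 1025.0   # seawater density, kg/m^3
CP = 3985.0    # seawater specific heat, J/(kg K)

def diffusivity(flux_residual, temp_top, temp_bottom, layer_thickness=20.0):
    """Diffusivity (m^2/s) from a residual heat flux (W/m^2) across a
    transition layer of the given thickness (m)."""
    dT_dz = (temp_top - temp_bottom) / layer_thickness  # K/m, warm above cold
    return flux_residual / (RHO * CP * dT_dz)

# 40 W/m^2 of residual flux across a 2 K step over 20 m gives kappa near 1e-4 m^2/s
kappa = diffusivity(flux_residual=40.0, temp_top=10.0, temp_bottom=8.0)
```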

  9. Estimating diffusivity from the mixed layer heat and salt balances in the North Pacific

    NASA Astrophysics Data System (ADS)

    Cronin, Meghan F.; Pelland, Noel A.; Emerson, Steven R.; Crawford, William R.

    2015-11-01

    Data from two National Oceanic and Atmospheric Administration (NOAA) surface moorings in the North Pacific, in combination with data from satellite, Argo floats, and gliders (when available), are used to evaluate the residual diffusive flux of heat across the base of the mixed layer from the surface mixed layer heat budget. The diffusion coefficient (i.e., diffusivity) is then computed by dividing the diffusive flux by the temperature gradient in the 20 m transition layer just below the base of the mixed layer. At Station Papa in the NE Pacific subpolar gyre, this diffusivity is 1 × 10-4 m2/s during summer, increasing to ˜3 × 10-4 m2/s during fall. During late winter and early spring, diffusivity has large errors. At other times, diffusivity computed from the mixed layer salt budget at Papa correlates with that from the heat budget, giving confidence that the results are robust for all seasons except late winter-early spring and can be used for other tracers. In comparison, at the Kuroshio Extension Observatory (KEO) in the NW Pacific subtropical recirculation gyre, somewhat larger diffusivities are found based upon the mixed layer heat budget: ˜ 3 × 10-4 m2/s during the warm season and more than an order of magnitude larger during the winter, although again, wintertime errors are large. These larger values at KEO appear to be due to the increased turbulence associated with summertime typhoons and the weaker wintertime stratification.

  10. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of the Earth radiation budget from satellite measurements requires identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels, and the results are sorted into scene-identification probability matrices. These scene intercomparisons show that there generally is a tendency for the ERBE results to underestimate cloudiness over ocean at high cloud amounts relative to the AVHRR results, e.g., mostly cloudy instead of overcast, partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate a high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.
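    How a scene-identification probability matrix propagates into an exitance error can be sketched as follows, assuming a uniform prior over true scene types; the matrix, scene fluxes, and function name are hypothetical, not the study's values.

```python
# Expected shortwave exitance error from scene misidentification, weighting each
# misidentification's flux error by its probability (uniform prior over true scenes).
# The probability matrix and scene fluxes below are hypothetical.
def expected_exitance_error(prob_matrix, flux_by_scene):
    """prob_matrix[i][j] = P(identified scene j | true scene i)."""
    n = len(prob_matrix)
    err = 0.0
    for i, row in enumerate(prob_matrix):
        for j, p in enumerate(row):
            err += p * abs(flux_by_scene[j] - flux_by_scene[i]) / n
    return err

probs = [[0.9, 0.1, 0.0, 0.0],   # clear
         [0.1, 0.8, 0.1, 0.0],   # partly cloudy
         [0.0, 0.2, 0.7, 0.1],   # mostly cloudy
         [0.0, 0.0, 0.2, 0.8]]   # overcast
fluxes = [100.0, 180.0, 260.0, 340.0]  # SW exitance per scene, W/sq m
sw_error = expected_exitance_error(probs, fluxes)
```

    Because adjacent cloud categories differ strongly in shortwave exitance but only weakly in longwave, the same matrix yields a much smaller longwave error, consistent with the finding above.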

  11. Prototype Development of a Geostationary Synthetic Thinned Aperture Radiometer, GeoSTAR

    NASA Technical Reports Server (NTRS)

    Tanner, Alan B.; Wilson, William J.; Kangaslahti, Pekka P.; Lambrigsten, Bjorn H.; Dinardo, Steven J.; Piepmeier, Jeffrey R.; Ruf, Christopher S.; Rogacki, Steven; Gross, S. M.; Musko, Steve

    2004-01-01

    Preliminary details of a 2-D synthetic aperture radiometer prototype operating from 50 to 58 GHz will be presented. The instrument is being developed as a laboratory testbed, and the goal of this work is to demonstrate the technologies needed to do atmospheric soundings with high spatial resolution from Geostationary orbit. The concept is to deploy a large sparse aperture Y-array from a geostationary satellite, and to use aperture synthesis to obtain images of the earth without the need for a large mechanically scanned antenna. The laboratory prototype consists of a Y-array of 24 horn antennas, MMIC receivers, and a digital cross-correlation sub-system. System studies are discussed, including an error budget which has been derived from numerical simulations. The error budget defines key requirements, such as null offsets, phase calibration, and antenna pattern knowledge. Details of the instrument design are discussed in the context of these requirements.

  12. Intelligence/Electronic Warfare (IEW) direction-finding and fix estimation analysis report. Volume 2: Trailblazer

    NASA Technical Reports Server (NTRS)

    Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce

    1985-01-01

    An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.

  13. WFIRST: Managing Telescope Wavefront Stability to Meet Coronagraph Performance

    NASA Astrophysics Data System (ADS)

    Noecker, Martin; Poberezhskiy, Ilya; Kern, Brian; Krist, John; WFIRST System Engineering Team

    2018-01-01

    The WFIRST coronagraph instrument (CGI) needs a stable telescope and active wavefront control to perform coronagraph science with an expected sensitivity of 8x10-9 in the exoplanet-star flux ratio (SNR=10) at 200 milliarcseconds angular separation. With its subnanometer requirements on the stability of its input wavefront error (WFE), the CGI employs a combination of pointing and wavefront control loops and thermo-mechanical stability to meet budget allocations for beam-walk and low-order WFE, which enable stable starlight speckles on the science detector that can be removed by image subtraction. We describe the control strategy and the budget framework for estimating and budgeting the elements of wavefront stability, and the modeling strategy to evaluate it.

  14. Groundwater discharge to lakes (GDL) - the disregarded component of lake nutrient budgets

    NASA Astrophysics Data System (ADS)

    Lewandowski, J.; Meinikmann, K.; Pöschke, F.; Nützmann, G.

    2012-04-01

    Eutrophication is a major threat to lakes in temperate climatic zones. It is necessary to determine the relevance of different nutrient sources in order to conduct effective management measures, to understand in-lake processes, and to model future scenarios. A prerequisite for such nutrient budgets is a water budget. While most components of the water budget can be determined quite accurately, the quantification of groundwater discharge to lakes (GDL) and of surface-water infiltration into the aquifer is much more difficult. For example, it is quite common to determine the groundwater component as the residual of the water and nutrient budgets, which is extremely problematic since, in that case, all errors of the budget terms are summed in the groundwater term. In total, we identified 10 different reasons for disregarding the groundwater path in nutrient budgets. We investigated the fate of the nutrients nitrogen and phosphorus on their pathway from the catchment through the reactive aquifer-lake interface into the lake. We reviewed the international literature and summarized reported values of GDL of nutrients. Since the literature is quite sparse, we also examined reported values of submarine groundwater discharge (SGD) of nutrients, for which much more literature exists and which, despite some fundamental differences, is in principle comparable to GDL.
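    Why the residual approach is problematic can be sketched with simple error propagation: if GDL is computed as the budget residual, the uncertainty of every other term propagates into it. The sketch below assumes independent errors; all values are hypothetical.

```python
import math

# GDL as a budget residual inherits the uncertainty of every other term
# (independent errors assumed; all values hypothetical, in 10^3 m^3/yr).
def residual_gdl(inflows, outflows, term_uncertainties):
    """Residual groundwater term and its propagated standard error."""
    residual = sum(outflows) - sum(inflows)
    sigma = math.sqrt(sum(u ** 2 for u in term_uncertainties))
    return residual, sigma

gdl, sigma = residual_gdl(inflows=[500.0, 120.0], outflows=[430.0, 260.0],
                          term_uncertainties=[25.0, 12.0, 30.0, 20.0])
# sigma is comparable to the residual itself, so the groundwater estimate is weak.
```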

  15. The measurement of the earth's radiation budget as a problem in information theory - A tool for the rational design of earth observing systems

    NASA Technical Reports Server (NTRS)

    Barkstrom, B. R.

    1983-01-01

    The measurement of the earth's radiation budget has been chosen to illustrate the technique of objective system design. The measurement process is an approximately linear transformation of the original field of radiant exitances, so that linear statistical techniques may be employed. The combination of variability, measurement strategy, and error propagation is presently made with the help of information theory, as suggested by Kondratyev et al. (1975) and Peckham (1974). Covariance matrices furnish the quantitative statement of field variability.

  16. Improving NGDC Track-line Data Quality Control

    NASA Astrophysics Data System (ADS)

    Chandler, M. T.; Wessel, P.

    2004-12-01

    Ship-board gravity, magnetic, and bathymetry data archived at the National Geophysical Data Center (NGDC) represent decades of seagoing research, comprising over 4,500 cruises. Cruise data remain relevant despite the prominence of satellite altimetry-derived global grids because many geologic processes remain resolvable only through oceanographic research. Given the tremendous investment put forth by scientists and taxpayers to compile this vast archive, and the significant errors found within it, additional quality assessment and corrections are warranted. These can best be accomplished by adding to existing quality-control measures at NGDC. We are currently developing open-source software to provide additional quality control. Along with NGDC's current sanity checking, new data at NGDC will also be subjected to an along-track "sniffer" that will detect and flag suspicious data for later graphical inspection using a visual editor. If new data pass these tests, they will undergo further scrutiny using a crossover error (COE) calculator, which will compare new data values to existing values at points of intersection within the archive. Data passing these tests will be deemed "quality data" and suitable for permanent addition to the archive, while data that fail will be returned to the source institution for correction. Crossover errors will be stored, and an online COE database will be available. The COE database will allow users to apply corrections to the NGDC track-line database to produce corrected data files. At no time will the archived data itself be modified. An attempt will also be made to reduce navigational errors for pre-GPS-navigated cruises. Upon completion, these programs will be used to explore and model systematic errors within the archive, to generate correction tables for all cruises, and to quantify the error budget in marine geophysical observations. 
    Software will be released and these procedures will be implemented in cooperation with NGDC staff.
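    The crossover-error (COE) check described above can be sketched as follows: where two cruise tracks intersect, the COE is the difference between the values each cruise reports at the crossing point, interpolated along-track. The function and sample tracks are hypothetical, not NGDC's actual software.

```python
# Crossover error (COE) at a track intersection: difference of the two cruises'
# along-track-interpolated values at the crossing. Hypothetical tracks and values.
def crossover_error(track_a, track_b, dist_a, dist_b):
    """Each track is a list of (along_track_km, value_mgal) sorted by distance."""
    def interp(track, d):
        for (d0, v0), (d1, v1) in zip(track, track[1:]):
            if d0 <= d <= d1:
                w = (d - d0) / (d1 - d0)
                return v0 + w * (v1 - v0)
        raise ValueError("crossing lies outside the track")
    return interp(track_a, dist_a) - interp(track_b, dist_b)

coe = crossover_error(track_a=[(0.0, 12.0), (10.0, 16.0)],
                      track_b=[(0.0, 15.0), (10.0, 11.0)],
                      dist_a=5.0, dist_b=5.0)
```

    Accumulating such differences over all intersections in the archive gives the COE statistics used to build correction tables and quantify the error budget.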

  17. Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas

    USGS Publications Warehouse

    Lindgren, R.J.

    2006-01-01

    A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. 
The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. 
The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
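The head-calibration statistic used throughout this record is easy to reproduce; the well heads and head range below are invented placeholders, not values from the report:

```python
import math

def rmse(simulated, measured):
    """Root-mean-square error between simulated and measured heads (feet)."""
    residuals = [s - m for s, m in zip(simulated, measured)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical heads (feet) at four target wells -- not the report's data.
measured  = [650.0, 712.0, 688.0, 701.0]
simulated = [655.0, 705.0, 690.0, 710.0]

err = rmse(simulated, measured)

# The report expresses each RMSE as a percentage of the total head
# difference across the model area (head_range is hypothetical too).
head_range = 700.0  # feet
pct_of_range = 100.0 * err / head_range
```

The same percent-of-range normalization is applied in the record to springflow fluctuations when comparing spring discharge errors.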

  18. Stability Error Budget for an Aggressive Coronagraph on a 3.8 m Telescope

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Krist, John; Rud, Mayer

    2011-01-01

We evaluate in detail the stability requirements for a band-limited coronagraph with an inner working angle as small as 2 lambda/D coupled to an off-axis, 3.8-m diameter telescope. We have updated our methodologies since presenting a stability error budget for the Terrestrial Planet Finder Coronagraph mission, which worked at 4 lambda/D and employed an 8th-order mask to reduce aberration sensitivities. In the previous work, we determined the tolerances relative to the total light leaking through the coronagraph. Now, we separate the light into a radial component, which is readily separable from a planet signal, and an azimuthal component, which is easily confused with a planet signal. In the current study, throughput considerations require a 4th-order coronagraph. This, combined with the more aggressive working angle, places extraordinarily tight requirements on wavefront stability and opto-mechanical stability. We find that the requirements are driven mainly by coma that leaks around the coronagraph mask and mimics the localized signal of a planet, and by pointing errors that scatter light into the background, decreasing the SNR. We also show how the requirements would be relaxed if a low-order aberration detection system could be employed.

  19. Assessment of the global monthly mean surface insolation estimated from satellite measurements using global energy balance archive data

    NASA Technical Reports Server (NTRS)

    Li, Zhanqing; Whitlock, Charles H.; Charlock, Thomas P.

    1995-01-01

Global surface radiation budget (SRB) datasets have been obtained from satellite programs. These satellite-based estimates need validation with ground-truth observations. This study validates the estimates of monthly mean surface insolation contained in two satellite-based SRB datasets against surface measurements made at worldwide radiation stations from the Global Energy Balance Archive (GEBA). One dataset was developed from the Earth Radiation Budget Experiment (ERBE) using the algorithm of Li et al. (ERBE/SRB), and the other from the International Satellite Cloud Climatology Project (ISCCP) using the algorithm of Pinker and Laszlo and that of Staylor (GEWEX/SRB). Since the ERBE/SRB data contain the surface net solar radiation only, the values of surface insolation were derived by making use of the surface albedo data contained in the GEWEX/SRB product. The resulting surface insolation has a bias error near zero and a root-mean-square error (RMSE) between 8 and 28 W/sq m. The RMSE is mainly associated with poor representation of surface observations within a grid cell. When the number of surface observations is sufficient, the random error is estimated to be about 5 W/sq m with present satellite-based estimates. In addition to demonstrating the strength of the retrieval method, the small random error demonstrates how well the ERBE derives the monthly mean fluxes at the top of the atmosphere (TOA). A larger scatter is found for the comparison of transmissivity than for that of insolation. Month-to-month comparison of insolation reveals a weak seasonal trend in bias error with an amplitude of about 3 W/sq m. As for the insolation data from the GEWEX/SRB, larger bias errors of 5-10 W/sq m are evident, with stronger seasonal trends and almost identical RMSEs.
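The bias and RMSE statistics used in such validations are straightforward to compute; the station values below are hypothetical illustrations, not GEBA data:

```python
import math

def bias_and_rmse(satellite, ground):
    """Mean bias and root-mean-square error of satellite insolation
    estimates against ground measurements (W/sq m)."""
    diffs = [s - g for s, g in zip(satellite, ground)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmse

# Hypothetical monthly mean insolation (W/sq m) at four stations.
ground    = [180.0, 210.0, 250.0, 205.0]
satellite = [185.0, 200.0, 262.0, 198.0]

bias, rmse_val = bias_and_rmse(satellite, ground)
```

Note how errors of opposite sign cancel in the bias but not in the RMSE, which is why a near-zero bias can coexist with an RMSE of several W/sq m.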

  20. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results (Part I): Earths Radiation Budget

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between true and sampled time series also decrease more rapidly for sampling frequencies coarser than every 4 h.
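The sampling experiment described above can be sketched on a synthetic series: subsample an hourly "truth" at several strides and measure the spread of the resulting means across starting points. The series below is invented for illustration, not GEOS-5 output:

```python
import math

# Hypothetical hourly "truth" flux series over 30 days: a mean level,
# a diurnal cycle, and a slow weekly drift.
truth = [240.0 + 60.0 * math.sin(2 * math.pi * h / 24.0)
         + 5.0 * math.sin(2 * math.pi * h / 168.0)
         for h in range(30 * 24)]

def sampled_means(series, step):
    """Long-term means obtained by sampling every `step` hours,
    one mean per possible starting hour."""
    return [sum(series[start::step]) / len(series[start::step])
            for start in range(step)]

# Spread across starting points measures the sampling uncertainty.
spreads = {step: max(m) - min(m)
           for step in (1, 4, 8, 24)
           for m in [sampled_means(truth, step)]}
```

Once-daily (24-h) sampling aliases the diurnal cycle entirely, so its spread is far larger than that of 4-h sampling, mirroring the study's finding that coarser-than-4-h sampling incurs significant error.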

  1. Input-output budgets of inorganic nitrogen for 24 forest watersheds in the northeastern United States: a review

    Treesearch

    John L. Campbell; James W. Hornbeck; Myron J. Mitchell; Mary Beth Adams; Mark S. Castro; Charles T. Driscoll; Jeffrey S. Kahl; James N. Kochenderfer; Gene E. Likens; James A. Lynch; Peter S. Murdoch; Sarah J. Nelson; James B. Shanley

    2004-01-01

    Input-output budgets for dissolved inorganic nitrogen (DIN) are summarized for 24 small watersheds at 15 locations in the northeastern United States. The study watersheds are completely forested, free of recent physical disturbances, and span a geographical region bounded by West Virginia on the south and west, and Maine on the north and east. Total N budgets are not...

  2. Error budgeting single and two qubit gates in a superconducting qubit

    NASA Astrophysics Data System (ADS)

Chen, Z.; Chiaro, B.; Dunsworth, A.; Foxen, B.; Neill, C.; Quintana, C.; Wenner, J.; Martinis, John M.; Google Quantum Hardware Team

Superconducting qubits have shown promise as a platform for both error corrected quantum information processing and demonstrations of quantum supremacy. High fidelity quantum gates are crucial to achieving both of these goals, and superconducting qubits have demonstrated two qubit gates exceeding 99% fidelity. In order to improve gate fidelity further, we must understand the remaining sources of error. In this talk, I will demonstrate techniques for quantifying the contributions of control, decoherence, and leakage to gate error, for both single and two qubit gates. I will also discuss the near term outlook for achieving quantum supremacy using a gate-based approach in superconducting qubits. This work is supported by Google Inc. and by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1605114.
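The bookkeeping behind such an error budget is simple: to first order, small independent error channels add linearly to the gate infidelity. The numbers below are illustrative placeholders, not the talk's measurements:

```python
# Hypothetical first-order error budget for a two-qubit gate.
budget = {
    "control (pulse distortion, crosstalk)": 2.0e-3,
    "decoherence (T1/T2 decay during the gate)": 3.0e-3,
    "leakage (population outside the qubit subspace)": 0.5e-3,
}

# Small independent error channels add approximately linearly.
total_error = sum(budget.values())
fidelity = 1.0 - total_error

# Fractional share of each mechanism, used to prioritize improvements.
shares = {name: err / total_error for name, err in budget.items()}
```

In this toy decomposition decoherence dominates, so lengthening T1/T2 or shortening the gate would be the highest-leverage improvement.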

  3. Estimating sediment budgets at the interface between rivers and estuaries with application to the Sacramento-San Joaquin River Delta

    USGS Publications Warehouse

    Wright, S.A.; Schoellhamer, D.H.

    2005-01-01

Where rivers encounter estuaries, a transition zone develops where riverine and tidal processes both affect sediment transport. One such transition zone is the Sacramento-San Joaquin River Delta, a large, complex system where several rivers meet to form an estuary (San Francisco Bay). Herein we present the results of a detailed sediment budget for this river/estuary transitional system. The primary regional goal of the study was to measure sediment transport rates and pathways in the delta in support of ecosystem restoration efforts. In addition to achieving this regional goal, the study has produced general methods to collect, edit, and analyze (including error analysis) sediment transport data at the interface of rivers and estuaries. Estimating sediment budgets for these systems is difficult because of the mixed nature of riverine versus tidal transport processes, the different timescales of transport in fluvial and tidal environments, and the sheer complexity and size of systems such as the Sacramento-San Joaquin River Delta. Sediment budgets also require error estimates in order to assess whether differences in inflows and outflows, which could be small compared to overall fluxes, are indeed distinguishable from zero. Over the 4-year period of this study, water years 1999-2002, 6.6 ± 0.9 Mt of sediment entered the delta and 2.2 ± 0.7 Mt exited, resulting in 4.4 ± 1.1 Mt (67 ± 17%) of deposition. The estimated deposition rate corresponding to this mass of sediment compares favorably with measured inorganic sediment accumulation on vegetated wetlands in the delta.
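The quoted uncertainty on the deposition term is consistent with propagating the inflow and outflow errors in quadrature, assuming the flux errors are independent; the check below uses the values from the abstract:

```python
import math

# Delta sediment budget, water years 1999-2002 (values from the abstract).
inflow,  inflow_err  = 6.6, 0.9   # Mt
outflow, outflow_err = 2.2, 0.7   # Mt

deposition = inflow - outflow
# Independent errors propagate in quadrature through the difference.
deposition_err = math.sqrt(inflow_err**2 + outflow_err**2)

# Deposition is distinguishable from zero if it exceeds its uncertainty.
significant = deposition > deposition_err
```

Here sqrt(0.9² + 0.7²) ≈ 1.1 Mt, matching the reported 4.4 ± 1.1 Mt, and the deposition estimate clearly exceeds its uncertainty.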

  4. Characterizing biospheric carbon balance using CO2 observations from the OCO-2 satellite

    NASA Astrophysics Data System (ADS)

    Miller, Scot M.; Michalak, Anna M.; Yadav, Vineet; Tadić, Jovan M.

    2018-05-01

NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite launched in summer of 2014. Its observations could allow scientists to constrain CO2 fluxes across regions or continents that were previously difficult to monitor. This study explores an initial step toward that goal; we evaluate the extent to which current OCO-2 observations can detect patterns in biospheric CO2 fluxes and constrain monthly CO2 budgets. Our goal is to guide top-down, inverse modeling studies and identify areas for future improvement. We find that uncertainties and biases in the individual OCO-2 observations are comparable to the atmospheric signal from biospheric fluxes, particularly during Northern Hemisphere winter when biospheric fluxes are small. A series of top-down experiments indicate how these errors affect our ability to constrain monthly biospheric CO2 budgets. We are able to constrain budgets for between two and four global regions using OCO-2 observations, depending on the month, and we can constrain CO2 budgets at the regional level (i.e., smaller than seven global biomes) in only a handful of cases (16% of all regions and months). The potential of the OCO-2 observations, however, is greater than these results might imply. A set of synthetic data experiments suggests that retrieval errors have a salient effect. Advances in retrieval algorithms and, to a lesser extent, atmospheric transport modeling will improve the results. In the interim, top-down studies that use current satellite observations are best equipped to constrain the biospheric carbon balance across only continental or hemispheric regions.

  5. Weighing Rocky Exoplanets with Improved Radial Velocimetry

    NASA Astrophysics Data System (ADS)

    Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium

    2016-01-01

The synergy between Kepler and the ground-based radial velocity (RV) surveys has made numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets that are smaller than 3 Earth radii do not have their masses constrained to better than 20% - limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or Super-Earth planets in the Habitable Zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.

  6. Preventing Marketing Efforts That Bomb.

    ERIC Educational Resources Information Center

    Sevier, Robert A.

    2000-01-01

    In a marketplace overwhelmed with messages, too many institutions waste money on ineffective marketing. Highlights five common marketing errors: limited definition of marketing; unwillingness to address strategic issues; no supporting data; fuzzy goals and directions; and unrealistic expectations, time lines, and budgets. Though trustees are not…

  7. A manual to identify sources of fluvial sediment

    USGS Publications Warehouse

    Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph

    2016-01-01

Sediment is an important pollutant of concern that can degrade and alter aquatic habitat. A sediment budget is an accounting of the sources, storage, and export of sediment over a defined spatial and temporal scale. This manual focuses on field approaches to estimate a sediment budget. We also highlight the sediment fingerprinting approach to attribute sediment to different watershed sources. Determining the sources and sinks of sediment is important in developing strategies to reduce sediment loads to water bodies impaired by sediment. Therefore, this manual can be used when developing a sediment TMDL requiring identification of sediment sources. The manual takes the user through the necessary steps to construct a sediment budget:
    - Decision-making for watershed scale and time period of interest
    - Familiarization with the watershed by conducting a literature review, compiling background information and maps relevant to study questions, and conducting a reconnaissance of the watershed
    - Developing partnerships with landowners and jurisdictions
    - Characterization of the watershed geomorphic setting
    - Development of a sediment budget design
    - Data collection
    - Interpretation and construction of the sediment budget
    - Generating products (maps, reports, and presentations) to communicate findings.
    Sediment budget construction begins with examining the question(s) being asked and whether a sediment budget is necessary to answer these question(s). If undertaking a sediment budget analysis is a viable option, the next step is to define the spatial scale of the watershed and the time scale needed to answer the question(s). Of course, we understand that monetary constraints play a big role in any decision. Early in the sediment budget development process, we suggest getting to know your watershed by conducting a reconnaissance and meeting with local stakeholders. The reconnaissance aids in understanding the geomorphic setting of the watershed and potential sources of sediment.
Identifying the potential sediment sources early in the design of the sediment budget will help later in deciding which tools are necessary to monitor erosion and/or deposition at these sources. Tools can range from rapid inventories that estimate the sediment budget to more rigorous field monitoring that quantifies sediment erosion, deposition, and export. In either approach, data are gathered, erosion and deposition are calculated and compared to the sediment export, and the associated uncertainty is described. Findings are presented to local stakeholders and management officials. Sediment fingerprinting is a technique that apportions the sources of fine-grained sediment in a watershed using tracers or fingerprints. Due to different geologic and anthropogenic histories, the chemical and physical properties of sediment in a watershed may vary and often represent a unique signature (or fingerprint) for each source within the watershed. Fluvial sediment samples (the target sediment) are also collected and exhibit a composite of the source properties that can be apportioned through various statistical techniques. Using an unmixing model and error analysis, the final source apportionment is determined.
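The unmixing idea can be sketched minimally with two hypothetical sources and two tracers; the source names and tracer values below are invented for illustration, and a real unmixing model would minimize a joint objective over many tracers with error analysis:

```python
# Minimal two-source unmixing sketch (hypothetical tracer values).
# Each tracer concentration in the target sediment is modeled as a
# mixture: c_target = f * c_source1 + (1 - f) * c_source2.

sources = {
    "channel banks": {"137Cs": 1.0, "quartz": 12.0},
    "hillslopes":    {"137Cs": 9.0, "quartz": 4.0},
}
target = {"137Cs": 3.0, "quartz": 10.0}

# Solve for the mixing fraction f of "channel banks" tracer by tracer,
# then average the per-tracer estimates.
estimates = []
for tracer, c_t in target.items():
    c1 = sources["channel banks"][tracer]
    c2 = sources["hillslopes"][tracer]
    estimates.append((c_t - c2) / (c1 - c2))

f_banks = sum(estimates) / len(estimates)
```

With these made-up concentrations both tracers agree on a 75% channel-bank contribution; disagreement between tracers is what the error analysis in a full fingerprinting study quantifies.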

  8. Use of hydrologic budgets and hydrochemistry to determine ground-water and surface-water interactions for Rapid Creek, Western South Dakota

    USGS Publications Warehouse

    Anderson, Mark T.

    1995-01-01

The study of ground-water and surface-water interactions often employs streamflow-gaging records and hydrologic budgets to determine ground-water seepage. Because ground-water seepage usually is computed as a residual in the hydrologic budget approach, all uncertainty in the measurement and estimation of budget components is assigned to the ground-water seepage. This uncertainty can exceed the estimate itself, especially when streamflow, and its associated measurement error, is large relative to other budget components. In a study of Rapid Creek in western South Dakota, the hydrologic budget approach was combined with hydrochemistry to determine ground-water seepage. The City of Rapid City obtains most of its municipal water from three infiltration galleries (Jackson Springs, Meadowbrook, and Girl Scout) constructed in the near-stream alluvium along Rapid Creek. The reach of Rapid Creek between Pactola Reservoir and Rapid City, and in particular the two subreaches containing the galleries, was studied intensively to identify the sources of water to each gallery. Jackson Springs Gallery was found to pump predominantly ground water with a minor component of surface water. Meadowbrook and Girl Scout Galleries induce infiltration of surface water from Rapid Creek but also have a significant component of ground water.

  9. |Vub| from B → πℓν decays and (2+1)-flavor lattice QCD

    DOE PAGES

    Bailey, Jon A.; et al.

    2015-07-23

We present a lattice-QCD calculation of the B → πℓν semileptonic form factors and a new determination of the CKM matrix element |Vub|. We use the MILC asqtad (2+1)-flavor lattice configurations at four lattice spacings and light-quark masses down to 1/20 of the physical strange-quark mass. We extrapolate the lattice form factors to the continuum using staggered chiral perturbation theory in the hard-pion and SU(2) limits. We employ a model-independent z parametrization to extrapolate our lattice form factors from large-recoil momentum to the full kinematic range. We introduce a new functional method to propagate information from the chiral-continuum extrapolation to the z expansion. We present our results together with a complete systematic error budget, including a covariance matrix to enable the combination of our form factors with other lattice-QCD and experimental results. To obtain |Vub|, we simultaneously fit the experimental data for the B → πℓν differential decay rate obtained by the BABAR and Belle collaborations together with our lattice form-factor results. We find |Vub| = (3.72 ± 0.16) × 10⁻³, where the error is from the combined fit to lattice plus experiments and includes all sources of uncertainty. Our form-factor results bring the QCD error on |Vub| to the same level as the experimental error. We also provide results for the B → πℓν vector and scalar form factors obtained from the combined lattice and experiment fit, which are more precisely determined than from our lattice-QCD calculation alone. Lastly, these results can be used in other phenomenological applications and to test other approaches to QCD.

  11. Improved predictive ability of climate-human-behaviour interactions with modifications to the COMFA outdoor energy budget model.

    PubMed

    Vanos, J K; Warland, J S; Gillespie, T J; Kenny, N A

    2012-11-01

The purpose of this paper is to implement current and novel research techniques in human energy budget estimations to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimations of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly by -173 and -123 W m⁻², respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement in the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were found with a refined equation accounting for the angle of wind to body movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind-to-body angle moved away from 90°. Under moderate microclimate conditions, and wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m⁻² and 1.7°C higher when using the improved v_r equation. 
These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. Application is possible for other similar energy budget models, and within various urban and rural environments.

  12. Improved predictive ability of climate-human-behaviour interactions with modifications to the COMFA outdoor energy budget model

    NASA Astrophysics Data System (ADS)

    Vanos, J. K.; Warland, J. S.; Gillespie, T. J.; Kenny, N. A.

    2012-11-01

The purpose of this paper is to implement current and novel research techniques in human energy budget estimations to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimations of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly by -173 and -123 W m⁻², respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement in the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were found with a refined equation accounting for the angle of wind to body movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind-to-body angle moved away from 90°. Under moderate microclimate conditions, and wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m⁻² and 1.7°C higher when using the improved v_r equation. These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. 
Application is possible for other similar energy budget models, and within various urban and rural environments.

  13. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  14. 77 FR 55240 - Order Making Fiscal Year 2013 Annual Adjustments to Registration Fee Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-07

... Management and Budget (``OMB'') to project the aggregate offering price for purposes of the fiscal year 2012... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...
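The exp(... + σ²/2) term in this excerpt is the standard lognormal-mean correction: if a forecast of the log of the aggregate offering price has a normally distributed error with standard deviation σ, the unbiased forecast of the level multiplies exp(forecast) by exp(σ²/2). A Monte Carlo sketch with hypothetical μ and σ:

```python
import math
import random

# If X = mu + e with e ~ N(0, sigma^2), then E[exp(X)] = exp(mu + sigma^2/2).
mu, sigma = 2.0, 0.5          # hypothetical log-level forecast and its s.e.
corrected = math.exp(mu + sigma**2 / 2)

# Monte Carlo check of the lognormal-mean identity.
random.seed(0)
draws = [math.exp(mu + random.gauss(0.0, sigma)) for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
```

The naive back-transform exp(mu) would understate the expected level; the half-variance term corrects for the convexity of the exponential.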

  15. Cost effectiveness of the stream-gaging program in Pennsylvania

    USGS Publications Warehouse

    Flippo, H.N.; Behrendt, T.E.

    1985-01-01

This report documents a cost-effectiveness study of the stream-gaging program in Pennsylvania. Data uses and funding were identified for 223 continuous-record stream gages operated in 1983; four are planned for discontinuance at the close of water-year 1985, and two are suggested for conversion, at the beginning of the 1985 water year, to the collection of only continuous stage records. Two of 11 special-purpose short-term gages are recommended for continuation when the supporting project ends; eight of these gages are to be discontinued and the other will be converted to a partial-record type. The cost of operating the 212 stations recommended for continued operation was $1,199,000 per year in 1983. The average standard error of estimation for instantaneous streamflow is 15.2%. An overall average standard error of 9.8% could be attained on a budget of $1,271,000, which is 6% greater than the 1983 budget, by adopting cost-effective stream-gaging operations. (USGS)

  16. Multidisciplinary Analysis of the NEXUS Precursor Space Telescope

    NASA Astrophysics Data System (ADS)

    de Weck, Olivier L.; Miller, David W.; Mosier, Gary E.

    2002-12-01

    A multidisciplinary analysis is demonstrated for the NEXUS space telescope precursor mission. This mission was originally designed as an in-space technology testbed for the Next Generation Space Telescope (NGST). One of the main challenges is to achieve a very tight pointing accuracy with a sub-pixel line-of-sight (LOS) jitter budget and a root-mean-square (RMS) wavefront error smaller than λ/50 despite the presence of electronic and mechanical disturbance sources. The analysis starts with the assessment of the performance for an initial design, which turns out not to meet the requirements. Twenty-five design parameters from structures, optics, dynamics, and controls are then examined in a sensitivity and isoperformance analysis in search of better designs. Isoperformance allows finding an acceptable design that is well "balanced" and does not place undue burden on a single subsystem. An error budget analysis shows the contributions of individual disturbance sources. This paper might be helpful in analyzing similar, innovative space telescope systems in the future.
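    Contributors in an error budget like this are conventionally combined by root-sum-square under an independence assumption, before any isoperformance rebalancing. A minimal sketch with hypothetical contributor values:

    ```python
    import math

    def rss(terms):
        """Root-sum-square combination of independent error contributors."""
        return math.sqrt(sum(t * t for t in terms))

    # Hypothetical wavefront-error contributors, in nm RMS (illustrative only)
    contributors = [10.0, 8.0, 6.0]
    total = rss(contributors)  # sqrt(200) ~ 14.1 nm RMS

    # At lambda = 1 um, a lambda/50 requirement corresponds to 20 nm RMS,
    # so this hypothetical budget would close with margin
    ```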

  17. Compensation of power drops in reflective semiconductor optical amplifier-based passive optical network with upstream data rate adjustment

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Chow, Chi-Wai; Chiang, Ming-Feng; Shih, Fu-Yuan; Pan, Ci-Ling

    2011-09-01

    In a wavelength-division-multiplexed passive optical network (WDM-PON), different fiber lengths and optical components introduce different power budgets for different optical networking units (ONUs). In addition, power decay of the distributed optical carrier from the optical line terminal, owing to aging of the optical transmitter, can also reduce the power injected into the ONU. In this work, we propose and demonstrate a carrier-distributed WDM-PON using a reflective semiconductor optical amplifier-based ONU that can adjust its upstream data rate to accommodate different injected optical powers. The WDM-PON is evaluated at standard reach (25 km) and long reach (100 km). Bit-error rate measurements at different injected optical powers and transmission lengths show that by adjusting the upstream data rate of the system (622 Mb/s, 1.25 Gb/s, or 2.5 Gb/s), error-free (<10^-9) operation can still be achieved when the power budget drops.
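    The rate-for-sensitivity tradeoff rests on a standard relation (not specific to this paper): for on-off keying with Gaussian noise, the bit-error rate is set by the Q-factor, and lowering the data rate raises Q for a given received power. A sketch:

    ```python
    import math

    def ber_from_q(q):
        """BER for on-off keying with Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2))."""
        return 0.5 * math.erfc(q / math.sqrt(2.0))

    # Q = 6 is the textbook threshold for "error-free" operation at BER ~ 1e-9,
    # the criterion used in the paper
    ber = ber_from_q(6.0)
    ```

    When the received power (and hence Q) drops, stepping the rate down from 2.5 Gb/s toward 622 Mb/s restores the energy per bit and pushes the BER back below the threshold.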

  18. Error Budgeting and Tolerancing of Starshades for Exoplanet Detection

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Noecker, M. Charley; Glassman, Tiffany; Lo, Amy S.; Dumont, Philip J.; Kasdin, N. Jeremy; Cady, Eric J.; Vanderbei, Robert; Lawson, Peter R.

    2010-01-01

    A flower-like starshade positioned between a star and a space telescope is an attractive option for blocking the starlight to reveal the faint reflected light of an orbiting Earth-like planet. Planet light passes around the petals and directly enters the telescope where it is seen along with a background of scattered light due to starshade imperfections. We list the major perturbations that are expected to impact the performance of a starshade system and show that independent models at NGAS and JPL yield nearly identical optical sensitivities. We give the major sensitivities in the image plane for a design consisting of a 34-m diameter starshade and a 2-m diameter telescope separated by 39,000 km, operating between 0.25 and 0.55 µm. These sensitivities include individual petal and global shape terms evaluated at the inner working angle. Following a discussion of the combination of individual perturbation terms, we then present an error budget that is consistent with detection of an Earth-like planet 26 magnitudes fainter than its host star.
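    Two of the headline numbers can be reproduced from first principles. The magnitude difference fixes the planet/star flux ratio, and a purely geometric estimate (ignoring diffraction, so only a rough consistency check) gives the angle subtended by the starshade edge:

    ```python
    def flux_ratio(delta_mag):
        """Flux ratio corresponding to a magnitude difference (Pogson's relation)."""
        return 10.0 ** (-delta_mag / 2.5)

    # 26 magnitudes fainter -> planet/star contrast of roughly 4e-11
    contrast = flux_ratio(26.0)

    # Rough geometric inner working angle: starshade radius over separation,
    # converted from radians to arcseconds (206265 arcsec/rad)
    iwa_arcsec = (34.0 / 2.0) / 39_000e3 * 206_265  # ~0.09 arcsec
    ```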

  19. 40 CFR 97.51 - Establishment of accounts.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... a complete account certificate of representation under § 97.13, the Administrator will establish: (1) A compliance account for each NOX Budget unit for which the account certificate of representation... representation was submitted and that has two or more NOX Budget units. (b) General accounts—(1) Application for...

  20. 40 CFR 97.51 - Establishment of accounts.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... a complete account certificate of representation under § 97.13, the Administrator will establish: (1) A compliance account for each NOX Budget unit for which the account certificate of representation... representation was submitted and that has two or more NOX Budget units. (b) General accounts—(1) Application for...

  1. 40 CFR 97.51 - Establishment of accounts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... a complete account certificate of representation under § 97.13, the Administrator will establish: (1) A compliance account for each NOX Budget unit for which the account certificate of representation... representation was submitted and that has two or more NOX Budget units. (b) General accounts—(1) Application for...

  2. 40 CFR 97.51 - Establishment of accounts.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... a complete account certificate of representation under § 97.13, the Administrator will establish: (1) A compliance account for each NOX Budget unit for which the account certificate of representation... representation was submitted and that has two or more NOX Budget units. (b) General accounts—(1) Application for...

  3. 24 CFR 990.280 - Project-based budgeting and accounting.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... all data needed to complete project-based financial statements in accordance with Accounting... accounting. 990.280 Section 990.280 Housing and Urban Development Regulations Relating to Housing and Urban... budgeting and accounting. (a) All PHAs covered by this subpart shall develop and maintain a system of...

  4. 43 CFR 26.7 - Application format and instructions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... must be made using the Office of Management and Budget approved form (SF-424) entitled “Federal...' State Grant Procedures Handbook. General instructions for completing the form by part numbers are: (a... State Grant Procedures Handbook for definitions of cost categories and for budget narrative instructions...

  5. Demonstrating Starshade Performance as Part of NASA's Technology Development for Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M. W.; Walkemeyer, P. E.; Bach, V. M.; Oakes, E.; Cady, E. J.; Martin, S. R.; Marchen, L. F.; Macintosh, B.; Rudd, R.; Mikula, J. A.; Lynch, D. H.

    2012-01-01

    In this poster we describe the results of our project to design, manufacture, and measure a prototype starshade petal as part of the Technology Development for Exoplanet Missions program. An external occulter is a satellite employing a large screen, or starshade, that flies in formation with a spaceborne telescope to provide the starlight suppression needed for detecting and characterizing exoplanets. Among the advantages of using an occulter are the broad bandwidth available for characterization and the removal of starlight before it reaches the observatory, greatly relaxing the requirements on the telescope and instrument. In this first two-year phase we focused on the key requirement of manufacturing a precision petal with the precise tolerances needed to meet the overall error budget. These tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation among these error sources. We show the results of this analysis and a representative error budget. We also present the final manufactured occulter petal and the metrology on its shape that demonstrates it meets requirements. We show that a space occulter built of petals with the same measured shape would achieve better than 1e-9 contrast. We also show our progress in building and testing sample edges with the sharp radius of curvature needed for limiting solar glint. Finally, we describe our plans for the second TDEM phase.

  6. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.
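    As one concrete example of a spatial-frequency-domain budget term, the classical fitting-error scaling can be evaluated and converted to nm RMS. The coefficient alpha below is an assumed round value; the exact coefficient depends on the deformable-mirror influence functions:

    ```python
    import math

    def fitting_error_rad2(pitch_m, r0_m, alpha=0.3):
        """Fitting-error variance (rad^2) for actuator pitch d and Fried parameter r0:
        sigma^2 = alpha * (d / r0)^(5/3). alpha ~ 0.3 is an assumed DM-dependent constant."""
        return alpha * (pitch_m / r0_m) ** (5.0 / 3.0)

    def rad2_to_nm_rms(var_rad2, wavelength_nm):
        """Convert phase variance in rad^2 to wavefront error in nm RMS."""
        return math.sqrt(var_rad2) * wavelength_nm / (2.0 * math.pi)

    # Illustrative case: 0.5 m actuator pitch, r0 = 0.15 m, evaluated at 500 nm
    nm_rms = rad2_to_nm_rms(fitting_error_rad2(0.5, 0.15), 500.0)
    ```

    Servo-lag and aliasing terms enter the same budget as additional variances, which is what the paper's temporally filtered PSD machinery computes rigorously.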

  7. 7 CFR 1291.6 - Completed application.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that is external to the project and that is of direct importance to the intended beneficiaries and/or... Assistance”. (b) SF-424A “Budget Information—Non-Construction Programs” showing the budget for each project... fiscal year's limit as announced in the Federal Register. Provide in sufficient detail information about...

  8. INPUT-OUTPUT BUDGETS OF INORGANIC NITROGEN FOR 24 FOREST WATERSHEDS IN THE NORTHEASTERN UNITED STATES: A REVIEW

    EPA Science Inventory

    Input-output budgets for dissolved inorganic nitrogen (DIN) are summarized for 24 small watersheds at 15 locations in the northeastern United States. The study watersheds are completely forested, free of recent physical disturbances, and span a geographical region bounded by West ...

  9. Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium-class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter, which will provide the remaining energy to send MAP out to a lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper will discuss the results of the analyses on three separate phases of the MAP mission - recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
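    The general shape of such a Monte-Carlo sizing study can be sketched in a few lines: sample execution errors around a nominal burn and take a high percentile of the required correction as the budgeted allowance. The numbers below are hypothetical, not MAP's:

    ```python
    import random

    def dv_budget_99(nominal_dv, sigma_frac, n=100_000, seed=1):
        """Sample proportional maneuver execution errors and return the
        99th-percentile correction delta-v, a common way to size propellant margin."""
        rng = random.Random(seed)
        errors = sorted(abs(rng.gauss(0.0, sigma_frac)) * nominal_dv for _ in range(n))
        return errors[int(0.99 * n)]

    # Hypothetical 20 m/s burn with a 2% (1-sigma) execution error:
    # the 99th percentile of |N(0,1)| is ~2.58, so expect roughly 1 m/s of margin
    budget = dv_budget_99(20.0, 0.02)
    ```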

  10. Worst-error analysis of batch filter and sequential filter in navigation problems. [in spacecraft trajectory estimation

    NASA Technical Reports Server (NTRS)

    Nishimura, T.

    1975-01-01

    This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes result in very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study presents useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.

  11. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels were more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
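    The calculation chain the study examines is simple to state, which is exactly why label-reading and arithmetic slips dominate the error tally. A sketch with hypothetical drug values (not from the study's scenarios):

    ```python
    def pediatric_dose_ml(weight_kg, dose_mg_per_kg, concentration_mg_per_ml):
        """Volume to draw up for a weight-based dose: the two-step chain
        (dose = mg/kg x kg, then volume = dose / concentration) that the
        study found error-prone when performed by hand."""
        dose_mg = dose_mg_per_kg * weight_kg
        return dose_mg / concentration_mg_per_ml

    # Hypothetical example: 0.1 mg/kg drug, 12 kg child, 0.4 mg/mL vial
    volume_ml = pediatric_dose_ml(12.0, 0.1, 0.4)  # 3.0 mL
    ```

    A computer-assisted tool encodes exactly this chain, removing the decimal placement and concentration-reading steps where most of the 97 observed errors occurred.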

  12. Ground-water and surface-water flow and estimated water budget for Lake Seminole, southwestern Georgia and northwestern Florida

    USGS Publications Warehouse

    Dalton, Melinda S.; Aulenbach, Brent T.; Torak, Lynn J.

    2004-01-01

    Lake Seminole is a 37,600-acre impoundment formed at the confluence of the Flint and Chattahoochee Rivers along the Georgia-Florida State line. Outflow from Lake Seminole through Jim Woodruff Lock and Dam provides headwater to the Apalachicola River, which is a major supply of freshwater, nutrients, and detritus to ecosystems downstream. These rivers, together with their tributaries, are hydraulically connected to karst limestone units that constitute most of the Upper Floridan aquifer and to a chemically weathered residuum of undifferentiated overburden. The ground-water flow system near Lake Seminole consists of the Upper Floridan aquifer and undifferentiated overburden. The aquifer is confined below by low-permeability sediments of the Lisbon Formation and, generally, is semiconfined above by undifferentiated overburden. Ground-water flow within the Upper Floridan aquifer is unconfined or semiconfined and discharges at discrete points by springflow or diffuse leakage into streams and other surface-water bodies. The high degree of connectivity between the Upper Floridan aquifer and surface-water bodies is limited to the upper Eocene Ocala Limestone and younger units that are in contact with streams in the Lake Seminole area. The impoundment of Lake Seminole inundated natural stream channels and other low-lying areas near streams and raised the water-level altitude of the Upper Floridan aquifer near the lake to nearly that of the lake, about 77 feet. Surface-water inflow from the Chattahoochee and Flint Rivers and Spring Creek and outflow to the Apalachicola River through Jim Woodruff Lock and Dam dominate the water budget for Lake Seminole. About 81 percent of the total water-budget inflow consists of surface water; about 18 percent is ground water, and the remaining 1 percent is lake precipitation.
    Similarly, lake outflow consists of about 89 percent surface water, as flow to the Apalachicola River through Jim Woodruff Lock and Dam, about 4 percent ground water, and about 2 percent lake evaporation. Measurement error and uncertainty in flux calculations cause a flow imbalance of about 4 percent between inflow and outflow water-budget components. Most of this error can be attributed to errors in estimating ground-water discharge from the lake, which was calculated using a ground-water model calibrated to October 1986 conditions for the entire Apalachicola-Chattahoochee-Flint River Basin and not just the area around Lake Seminole. Evaporation rates were determined using the preferred, but mathematically complex, energy budget and five empirical equations: Priestley-Taylor, Penman, DeBruin-Keijman, Papadakis, and the Priestley-Taylor used by the Georgia Automated Environmental Monitoring Network. Empirical equations require a significant amount of data but are relatively easy to calculate and compare well to long-term average annual (April 2000-March 2001) pan evaporation, which is 65 inches. Calculated annual lake evaporation, for the study period, using the energy-budget method was 67.2 inches, which overestimated long-term average annual pan evaporation by 2.2 inches. The empirical equations did not compare well with the energy-budget method during the 18-month study period, with average differences in computed evaporation using each equation ranging from 8 to 26 percent. The empirical equations also compared poorly with long-term average annual pan evaporation, with average differences in evaporation ranging from 3 to 23 percent. Energy budget and long-term average annual pan evaporation estimates did compare well, with only a 3-percent difference between estimates. Monthly evaporation estimates using all methods ranged from 0.7 to 9.5 inches and were lowest during December 2000 and highest during May 2000.
Although the energy budget is generally the preferred method, the dominance of surface water in the Lake Seminole water budget makes the method inaccurate and difficult to use, because surface water makes up m
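    Of the empirical methods compared, Priestley-Taylor is representative and compact. A sketch with typical warm-season inputs (illustrative values, not the Lake Seminole data):

    ```python
    def priestley_taylor_mm_day(rn, g, delta, gamma=0.066, alpha=1.26, lam=2.45):
        """Priestley-Taylor evaporation (mm/day).
        rn, g: net radiation and heat-storage flux (MJ m-2 day-1);
        delta: slope of the saturation vapor-pressure curve (kPa/degC);
        gamma: psychrometric constant (kPa/degC); alpha: PT coefficient;
        lam: latent heat of vaporization (MJ/kg). Default constants are
        common textbook values, not site-calibrated ones."""
        return alpha * (delta / (delta + gamma)) * (rn - g) / lam

    # Illustrative warm-season day: Rn = 15, G = 1 MJ m-2 day-1, delta at ~25 degC
    evap = priestley_taylor_mm_day(15.0, 1.0, 0.19)  # a few mm/day
    ```

    Its appeal is exactly what the record notes: it needs only radiation and temperature terms, trading the full energy-budget bookkeeping for an empirical coefficient.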

  13. Assessing Hydrological and Energy Budgets in Amazonia through Regional Downscaling, and Comparisons with Global Reanalysis Products

    NASA Astrophysics Data System (ADS)

    Nunes, A.; Ivanov, V. Y.

    2014-12-01

    Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, an effort further hampered by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses that are appropriate for climate assessment at regional scales, a regional spectral model has used a combination of precipitation assimilation and scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.

  14. Simulation of the brightness temperatures observed by the visible infrared imaging radiometer suite instrument

    NASA Astrophysics Data System (ADS)

    Evrard, Rebecca L.; Ding, Yifeng

    2018-01-01

    Clouds play a large role in the Earth's global energy budget, but the impact of cirrus clouds is still widely questioned and researched. Cirrus clouds reside high in the atmosphere and, owing to cold temperatures, are composed of ice crystals. Gaining a better understanding of ice cloud optical properties and the distribution of cirrus clouds provides an explanation for the contribution of cirrus clouds to the global energy budget. Using radiative transfer models (RTMs), accurate simulations of cirrus clouds can enhance the understanding of the global energy budget as well as improve the use of global climate models. A newer, faster RTM such as the visible infrared imaging radiometer suite (VIIRS) fast radiative transfer model (VFRTM) is compared to a rigorous RTM such as the line-by-line radiative transfer model plus the discrete ordinates radiative transfer program. By comparing brightness temperature (BT) simulations from both models, the accuracy of the VFRTM can be obtained. This study shows root-mean-square error <0.2 K for BT difference using reanalysis data for atmospheric profiles and updated ice particle habit information from the moderate-resolution imaging spectroradiometer collection 6. At a higher resolution, the simulated results of the VFRTM are compared to the observations of VIIRS, resulting in a <1.5% error from the VFRTM for all cases. The VFRTM is validated and is an appropriate RTM to use for global cloud retrievals.
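    Comparing models in brightness-temperature space relies on inverting the Planck function at the channel wavelength. A self-contained round-trip sketch (generic physics, not the VFRTM's implementation):

    ```python
    import math

    H = 6.62607015e-34  # Planck constant (J s)
    C = 2.99792458e8    # speed of light (m/s)
    K = 1.380649e-23    # Boltzmann constant (J/K)

    def planck_radiance(wavelength_m, temp_k):
        """Blackbody spectral radiance (W m-2 sr-1 m-1) at the given wavelength."""
        a = 2.0 * H * C**2 / wavelength_m**5
        return a / math.expm1(H * C / (wavelength_m * K * temp_k))

    def brightness_temperature(wavelength_m, radiance):
        """Invert the Planck function: the temperature of a blackbody
        that would emit the observed radiance at this wavelength."""
        a = 2.0 * H * C**2 / wavelength_m**5
        return H * C / (wavelength_m * K * math.log1p(a / radiance))

    # Round trip at an 11 um window channel: 280 K in, 280 K back out
    bt = brightness_temperature(11e-6, planck_radiance(11e-6, 280.0))
    ```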

  15. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment which allows sub-system performance specifications to be analyzed parametrically, and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually.
The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  17. 76 FR 13647 - Proposed Collection; Comment Request-Interactive Diet and Activity Tracking in AARP (iDATA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-14

    ... be submitted to the Office of Management and Budget (OMB) for review and approval. Proposed... size to evaluate the measurement error structure of the diet and physical activity assessment... on cancer research, diagnosis, prevention and treatment. Dietary and physical activity data will be...

  18. Semiannual Report to Congress, No. 49. April 1, 2004-September 30, 2004

    ERIC Educational Resources Information Center

    US Department of Education, 2004

    2004-01-01

    This report highlights significant work of the U.S. Department of Education's Office of Inspector General for the 6-month period ending September 30, 2004. Sections include: Activities and Accomplishments; Elimination of Fraud and Error in Student Aid Programs; Budget and Performance Integration; Financial Management; Expanded Electronic…

  19. Prediction errors in wildland fire situation analyses.

    Treesearch

    Geoffrey H. Donovan; Peter Noordijk

    2005-01-01

    Wildfires consume budgets and put the heat on fire managers to justify and control suppression costs. To determine the appropriate suppression strategy, land managers must conduct a wildland fire situation analysis (WFSA) when: a wildland fire is expected to or does escape initial attack; a wildland fire managed for resource benefits...

  20. Resource-Bounded Information Gathering for Correlation Clustering

    DTIC Science & Technology

    2007-01-01

    5], budgeted learning, [4], and active learning, for example, [3]. 3 Acknowledgments We thank Avrim Blum, Katrina Ligett, Chris Pal, Sridhar...2007 3. N. Roy, A. McCallum, Toward Optimal Active Learning through Sampling Estimation of Error Reduction, Proc. of 18th ICML, 2001 4. A. Kapoor, R

  1. Intelligence/Electronic Warfare (IEW) Direction-Finding and Fix Estimation Analysis Report. Volume 2. Trailblazer

    DTIC Science & Technology

    1985-12-20

    Approved for public dissemination. Keywords: Fix Estimation, Statistical Assumptions, Error Budget, Unmodeled Errors, Coding... The algorithms analyzed are used in other current IEW systems; the report examines the underlying...

  2. 78 FR 4878 - Agency Information Collection Activities: Submission for the Office of Management and Budget (OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-23

    ...: Submission for the Office of Management and Budget (OMB) Review; Comment Request AGENCY: Nuclear Regulatory... the total number of hours needed annually to complete the requirement or request: 183.5. 10. Abstract... regulations and requirements, both technical and quality, in purchase documents. In order to ensure that...

  3. Positive Health and Financial Practices: Does Budgeting Make a Difference?

    ERIC Educational Resources Information Center

    O'Neill, Barbara; Xiao, Jing Jian; Ensle, Karen

    2017-01-01

    This study explored relationships between the practice of following a hand-written or computer-generated budget and the frequency of performance of positive personal health and financial practices. Data were collected from an online quiz completed by 942 adults, providing a simultaneous assessment of individuals' health and financial practices.…

  4. Equity and the "B" Word: Budgeting and Professional Capacity in Student Affairs

    ERIC Educational Resources Information Center

    McCambly, Heather N.; Haley, Karen J.

    2016-01-01

    The dual pressures of the national college completion agenda and diminished public investment in higher education have led to a growing reliance on performance-based policies. Using a policy implementation framework, this qualitative study examines the implementation of a performance-based budgeting model at a broad-access urban research…

  5. 75 FR 66121 - Information Collection Sent to the Office of Management and Budget (OMB) for Approval; National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-27

    ... and memorials in the world. Applicants for USPP officer positions must complete and pass a competitive... DEPARTMENT OF THE INTERIOR National Park Service [OMB Control Number 1024-0245] Information Collection Sent to the Office of Management and Budget (OMB) for Approval; National Park Police Personal...

  6. 75 FR 54907 - Information Collection Sent to the Office of Management and Budget (OMB) for Approval; OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-09

    ... sport fish and wildlife management and restoration, including: Improvement of fish and wildlife habitats... and 91400-9782-Survey-7B] Information Collection Sent to the Office of Management and Budget (OMB) for... of Activity household participant Completion time per Total burden responses responses response hours...

  7. Soil Carbon Budget During Establishment of Short Rotation Woody Crops

    NASA Astrophysics Data System (ADS)

    Coleman, M. D.

    2003-12-01

Carbon budgets were monitored following forest harvest and during re-establishment of short rotation woody crops. Soil CO2 efflux was monitored using infrared gas analyzer methods, fine root production was estimated with minirhizotrons, above-ground litter inputs were trapped, coarse root inputs were estimated with allometric relationships, and soil carbon pools were measured in loblolly pine and cottonwood plantations. Our carbon budget allows evaluation of errors, as well as quantification of pools and fluxes in developing stands during non-steady-state conditions. Soil CO2 efflux was larger than the combined inputs from aboveground litter fall and root production. Fine-root production increased during stand development; however, mortality was not yet equivalent to production, showing that the belowground carbon budget was not yet in equilibrium and root carbon standing crop was accruing. Belowground production was greater in cottonwood than pine, but the level of pine soil CO2 efflux was equal to or greater than that of cottonwood, indicating heterotrophic respiration was higher for pine. Comparison of unaccounted efflux with soil organic carbon changes provides verification of loss or accrual.

  8. Cost-effectiveness of the stream-gaging program in Maryland, Delaware, and the District of Columbia

    USGS Publications Warehouse

    Carpenter, David H.; James, R.W.; Gillen, D.F.

    1987-01-01

This report documents the results of a cost-effectiveness study of the stream-gaging program in Maryland, Delaware, and the District of Columbia. Data uses and funding sources were identified for 99 continuously operated stream gages in Maryland, Delaware, and the District of Columbia. The current operation of the program requires a budget of $465,260/year. The average standard error of estimation of streamflow records is 11.8%. It is shown that this overall level of accuracy at the 99 sites could be maintained with a budget of $461,000, if resources were redistributed among the gages. (USGS)

  9. A hydrological budget (2002-2008) for a large subtropical wetland ecosystem indicates marine groundwater discharge accompanies diminished freshwater flow

    USGS Publications Warehouse

    Saha, Amartya K.; Moses, Christopher S.; Price, Rene M.; Engel, Victor; Smith, Thomas J.; Anderson, Gordon

    2012-01-01

    Water budget parameters are estimated for Shark River Slough (SRS), the main drainage within Everglades National Park (ENP) from 2002 to 2008. Inputs to the water budget include surface water inflows and precipitation while outputs consist of evapotranspiration, discharge to the Gulf of Mexico and seepage losses due to municipal wellfield extraction. The daily change in volume of SRS is equated to the difference between input and outputs yielding a residual term consisting of component errors and net groundwater exchange. Results predict significant net groundwater discharge to the SRS peaking in June and positively correlated with surface water salinity at the mangrove ecotone, lagging by 1 month. Precipitation, the largest input to the SRS, is offset by ET (the largest output); thereby highlighting the importance of increasing fresh water inflows into ENP for maintaining conditions in terrestrial, estuarine, and marine ecosystems of South Florida.
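    The daily bookkeeping described above (storage change equated to inputs minus outputs, leaving a residual of net groundwater exchange plus component errors) can be sketched in a few lines. The function name and example values below are illustrative assumptions, not data from the study.

    ```python
    # Hypothetical sketch of a daily water-budget residual; variable
    # names and sample values are illustrative, not study data.

    def budget_residual(delta_volume, inflow, precip, et, discharge, seepage):
        """Residual = observed storage change minus (inputs - outputs).

        The residual lumps together net groundwater exchange and the
        component measurement errors, as in the abstract above.
        """
        inputs = inflow + precip
        outputs = et + discharge + seepage
        return delta_volume - (inputs - outputs)

    # Example with made-up daily values (consistent volumetric units):
    residual = budget_residual(delta_volume=5.0, inflow=20.0, precip=10.0,
                               et=12.0, discharge=9.0, seepage=1.0)
    print(residual)
    ```

    A persistently positive residual, as the study reports for June, would be read as net groundwater discharge into the slough.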

  10. Observing the earth radiation budget from satellites - Past, present, and a look to the future

    NASA Technical Reports Server (NTRS)

    House, F. B.

    1985-01-01

    Satellite measurements of the radiative exchange between the planet earth and space have been the objective of many experiments since the beginning of the space age in the late 1950's. The on-going mission of the Earth Radiation Budget (ERB) experiments has been and will be to consider flight hardware, data handling and scientific analysis methods in a single design strategy. Research and development on observational data has produced an analysis model of errors associated with ERB measurement systems on polar satellites. Results show that the variability of reflected solar radiation from changing meteorology dominates measurement uncertainties. As an application, model calculations demonstrate that measurement requirements for the verification of climate models may be satisfied with observations from one polar satellite, provided there is information on diurnal variations of the radiation budget from the ERBE mission.

  11. Power Budget Analysis of Colorless Hybrid WDM/TDM-PON Scheme Using Downstream DPSK and Re-modulated Upstream OOK Data Signals

    NASA Astrophysics Data System (ADS)

    Khan, Yousaf; Afridi, Muhammad Idrees; Khan, Ahmed Mudassir; Rehman, Waheed Ur; Khan, Jahanzeb

    2014-09-01

Hybrid wavelength-division multiplexed/time-division multiplexed passive optical access networks (WDM/TDM-PONs) combine the advanced features of both WDM and TDM PONs to provide a cost-effective access-network solution. We demonstrate and analyze the transmission performance and power-budget issues of a colorless hybrid WDM/TDM-PON scheme. A 10-Gb/s downstream differential phase-shift keying (DPSK) signal and a re-modulated upstream on/off keying (OOK) data signal are transmitted over 25 km of standard single-mode fiber. Simulation results show error-free transmission with adequate power margins in both downstream and upstream directions, demonstrating the applicability of the proposed scheme to future passive optical access networks. The power budget constrains both the PON splitting ratio and the distance between the Optical Line Terminal (OLT) and the Optical Network Unit (ONU).
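    As a rough illustration of how a power budget constrains the splitting ratio and OLT-ONU distance, here is a minimal link-margin sketch; the transmit power, receiver sensitivity, and loss figures are assumed placeholders, not values from the paper.

    ```python
    import math

    # Illustrative PON power-budget margin; all numeric inputs below are
    # hypothetical, not taken from the paper.

    def power_margin_db(tx_dbm, rx_sens_dbm, fiber_km, fiber_db_per_km,
                        split_ratio, splitter_excess_db=0.0, penalties_db=0.0):
        """Link margin = (Tx power - Rx sensitivity) - total losses."""
        splitter_db = 10 * math.log10(split_ratio) + splitter_excess_db
        losses = fiber_km * fiber_db_per_km + splitter_db + penalties_db
        return (tx_dbm - rx_sens_dbm) - losses

    # e.g. 25 km of standard single-mode fiber at 0.2 dB/km, 1:32 split:
    margin = power_margin_db(tx_dbm=3.0, rx_sens_dbm=-28.0, fiber_km=25.0,
                             fiber_db_per_km=0.2, split_ratio=32)
    print(round(margin, 2))
    ```

    A negative margin would force either a smaller split ratio or a shorter OLT-ONU distance, which is the trade-off the abstract's last sentence refers to.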

  12. 78 FR 68079 - Information Collection Activities: Oil and Gas Well-Completion Operations; Submitted for Office...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ...: Oil and Gas Well-Completion Operations; Submitted for Office of Management and Budget (OMB) Review... Completion Operations. This notice also provides the public a second opportunity to comment on the revised... Well-Completion Operations. OMB Control Number: 1014-0004. Abstract: The Outer Continental Shelf (OCS...

  13. Los Alamos National Laboratory and Lawrence Livermore National Laboratory Plutonium Sustainment Monthly Program Report September 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaughlin, Anastasia Dawn; Storey, Bradford G.; Bowidowicz, Martin

In March of 2012 the Plutonium Sustainment program at LANL completed or addressed the following high-level activities: (1) Delivered Revision 2 of the Plutonium Sustainment Manufacturing Study, which incorporated changes needed due to the release of the FY2013 President's Budget and the delay in the Chemistry and Metallurgy Research Replacement Nuclear Facility (CMRRNF). (2) W87 pit type development activities completed a detailed process capability review for the flowsheet in preparation for the Engineering Development Unit Build. (3) Completed revising the Laser Beam Welding schedule to address scope and resource changes. (4) Completed machining and inspecting the first set of high-fidelity cold parts on Precitech 2 for Gemini. (5) The Power Supply Assembly Area started floor cutting with a concrete saw and continued legacy equipment decommissioning. There are currently no major issues associated with achieving MRT L2 Milestones 4195-4198 or the relevant PBIs associated with Plutonium Sustainment. There are no budget issues associated with FY12 final budget guidance. Table 1 identifies all Baseline Change Requests (BCRs) that were initiated, in process, or completed during the month. The earned value metrics overall for LANL are within acceptable thresholds, so no high-level recovery plan is required. Each of the 5 major LANL WBS elements is discussed in detail.

  14. Energy budget above a high-elevation subalpine forest in complex topography

    USGS Publications Warehouse

    Turnipseed, A.A.; Blanken, P.D.; Anderson, D.E.; Monson, Russell K.

    2002-01-01

Components of the energy budget were measured above a subalpine coniferous forest over two complete annual cycles. Sensible and latent heat fluxes were measured by eddy covariance. Bowen ratios ranged from 0.7 to 2.5 in the summer (June-September), depending upon the availability of soil water, but were considerably higher (~3-6) during winter (December-March). Energy budget closure averaged better than 84% on a half-hourly basis in both seasons, with slightly greater closure during the winter months. The energy budget showed a dependence on friction velocity (u*), approaching complete closure at u* values greater than 1 m s-1. The dependence of budget closure on u* explained why energy balance was slightly better in the winter as opposed to summer, since numerous periods of high turbulence occur in winter. It also explained the lower degree of energy closure (~10% less) during easterly upslope flow, since these periods were characterized by low wind speeds (U < 4 m s-1) and friction velocities (u* < 0.5 m s-1). Co-spectral analysis suggests a shift of flux density towards higher frequencies under conditions where closure was obtained. It is suggested that low-frequency contributions to the flux and advection were responsible for the lack of daytime energy budget closure. These effects were reduced at the high friction velocities observed at our site. Our ability to close the energy budget at night was also highly dependent on friction velocity, approaching near closure (~90%) at u* values between 0.7 and 1.1 m s-1. Below this range, the airflow within the canopy becomes decoupled from the flow above. Above this range, insufficient temperature resolution of the sonic anemometer obscured the small temperature fluctuations, rendering measurements intractable. © 2002 Elsevier Science B.V. All rights reserved.
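    The two diagnostics discussed in this abstract, the Bowen ratio and energy-balance closure, reduce to simple flux ratios; this sketch uses invented flux values (W m^-2) for illustration.

    ```python
    # Minimal sketch of the two energy-budget diagnostics; the flux
    # values below are invented, not measurements from the study.

    def bowen_ratio(h, le):
        """Ratio of sensible heat flux H to latent heat flux LE."""
        return h / le

    def closure_fraction(h, le, rn, g):
        """Energy-balance closure: turbulent fluxes (H + LE) over
        available energy (net radiation Rn minus soil heat flux G)."""
        return (h + le) / (rn - g)

    h, le, rn, g = 200.0, 150.0, 420.0, 20.0
    print(bowen_ratio(h, le))
    print(closure_fraction(h, le, rn, g))
    ```

    A closure fraction below 1, as reported above, means the eddy-covariance fluxes account for less than the available energy.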

  15. International Conference on Problems Related to the Stratosphere

    NASA Technical Reports Server (NTRS)

    Huntress, W., Jr.

    1977-01-01

The conference focused on four main areas of investigation: laboratory studies and stratospheric chemistry and constituents, sources for and chemical budget of stratospheric halogen compounds, sources for and chemical budget of stratospheric nitrous oxide, and the dynamics of decision making on regulation of potential pollutants of the stratosphere. Abstracts of the scientific sessions of the conference as well as complete transcriptions of the panel discussions on sources for an atmospheric budget of halocarbons and nitrous oxide are included. The political, social, and economic issues involving regulation of potential stratospheric pollutants were examined extensively.

  16. Monitoring the spring-summer surface energy budget transition in the Gobi Desert using AVHRR GAC data. [Global Area Coverage

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Reiter, Elmar R.

    1986-01-01

A research program has been started in which radiance data from operationally available weather satellites are used to reconstruct various properties of the diurnal surface energy budget over sites for which detailed estimates of the complete radiation, heat, and moisture exchange processes are available. In this paper, a preliminary analysis of the 1985 Gobi Desert summer-period results is presented. The findings demonstrate various important relationships concerning the feasibility of retrieving the amplitudes of the diurnal surface energy budget processes for daytime and nighttime conditions.

  17. 78 FR 30952 - Agency Information Collection Activities: Proposed Request and Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-23

    ... packages requiring clearance by the Office of Management and Budget (OMB) in compliance with Public Law 104... following addresses or fax numbers. (OMB), Office of Management and Budget, Attn: Desk Officer for SSA, Fax... completed form a part of the documentary evidence of record by placing it in the official record of the...

  18. Fiscal 1982 Budget highlights R&D

    NASA Astrophysics Data System (ADS)

    Richman, Barbara T.

Geophysical research and development programs show growth beyond inflation in the $739.3 billion budget for fiscal 1982 that Jimmy Carter sent to Congress 5 days before completing his term. Included in the budget are provisions for increased support for the Ocean Margin Drilling Program and funds for an interagency Geological Applications Program, funds for an agriculture and resource surveys program that relies on remote sensing, and funds for the Venus Orbiting Imaging Radar mission. Ronald Reagan is expected to make changes in the budget as early as late February, although in mid-January the heads of the scientific agencies could not characterize possible changes. Some Washingtonians say sharp cuts are inevitable, with basic research a prime candidate. Others, however, contend that the Reagan administration's push for productivity and innovation could prevent severe carving. Eos will track the FY 1982 budget changes through congressional approval.

  19. A Bayesian approach to multisource forest area estimation

    Treesearch

    Andrew O. Finley

    2007-01-01

    In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...

  20. Cost-efficient selection of a marker panel in genetic studies

    Treesearch

    Jamie S. Sanderlin; Nicole Lazar; Michael J. Conroy; Jaxk Reeves

    2012-01-01

Genetic techniques are frequently used to sample and monitor wildlife populations. The goal of these studies is to maximize the ability to distinguish individuals for various genetic inference applications, a process which is often complicated by genotyping error. However, wildlife studies usually have fixed budgets, which limit the number of genetic markers available...

  1. 76 FR 28052 - Submission for OMB Review; Comment Request; Interactive Diet and Activity Tracking in AARP (iDATA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Submission for OMB Review... Office of Management and Budget (OMB) a request to review and approve the information collection listed... measurement error structure of the diet and physical activity assessment instruments and the heterogeneity of...

  2. 78 FR 3433 - Agency Information Collection Activities; Submission for Office of Management and Budget Review...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... and by educating the public, especially young people, about tobacco products and the dangers their use... identified. When FDA receives tobacco-specific adverse event and product problem information, it will use the... quality problem, or product use error occurs. This risk identification process is the first necessary step...

  3. A Practical Solution to Optimizing the Reliability of Teaching Observation Measures under Budget Constraints

    ERIC Educational Resources Information Center

    Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.

    2014-01-01

    Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…

  4. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground-calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.

  5. Precision VUV Spectro-Polarimetry for Solar Chromospheric Magnetic Field Measurements

    NASA Astrophysics Data System (ADS)

    Ishikawa, R.; Bando, T.; Hara, H.; Ishikawa, S.; Kano, R.; Kubo, M.; Katsukawa, Y.; Kobiki, T.; Narukage, N.; Suematsu, Y.; Tsuneta, S.; Aoki, K.; Miyagawa, K.; Ichimoto, K.; Kobayashi, K.; Auchère, F.; Clasp Team

    2014-10-01

The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a VUV spectro-polarimeter optimized for measuring the linear polarization of the Lyman-α line (121.6 nm), to be launched in 2015 on a NASA sounding rocket (Ishikawa et al. 2011; Narukage et al. 2011; Kano et al. 2012; Kobayashi et al. 2012). With this experiment, we aim to (1) observe the scattering polarization in the Lyman-α line, (2) detect the Hanle effect, and (3) assess the magnetic fields in the upper chromosphere and transition region for the first time. The polarization measurement error consists of scale error δa (error in the amplitude of linear polarization), azimuth error Δφ (error in the direction of linear polarization), and spurious polarization ɛ (false linear polarization signals). The error ɛ should be suppressed below 0.1% in the Lyman-α core (121.567 nm ±0.02 nm) and 0.5% in the Lyman-α wing (121.567 nm ±0.05 nm), based on our scientific requirements shown in Table 2 of Kubo et al. (2014). From scientific justification, we adopt Δφ < 2° and δa < 10% as the instrument requirements. The spectro-polarimeter features a continuously rotating MgF2 waveplate (Ishikawa et al. 2013), a dual-beam spectrograph with a spherical grating that also works as a beam splitter, and two polarization analyzers (Bridou et al. 2011), which are mounted at 90 degrees from each other to measure two orthogonal polarizations simultaneously. For the optical layout of the CLASP instrument, see Figure 3 in Kubo et al. (2014). Considering the continuous rotation of the half-waveplate, the modulation efficiency is 0.64 for both Stokes Q and U. All the raw data are returned, and demodulation (successive addition or subtraction of images) is done on the ground. We control the CLASP polarization performance in the following three steps.
First, we evaluate the throughput and polarization properties of each optical component in the Lyman-α line, using the Ultraviolet Synchrotron ORbital Radiation Facility (UVSOR) at the Institute for Molecular Science. The second step is polarization calibration of the spectro-polarimeter after alignment. Since the spurious polarization caused by the axisymmetric telescope is estimated to be negligibly small because of the symmetry (Ishikawa et al. 2014), we do not perform end-to-end polarization calibration. As the final step, before the scientific observation near the limb, we make a short observation at the Sun center and verify the polarization sensitivity, because the scattering polarization is expected to be close to zero at the Sun center due to the symmetric geometry. To clarify whether we will be able to achieve the required polarization sensitivity and accuracy via these steps, we develop a polarization error budget, investigating all possible causes of polarization error and their magnitudes, not all of which can be verified by the polarization calibration. Based on these error budgets, we conclude that a polarization sensitivity of 0.1% in the line core, δa < 10%, and Δφ < 2° can be achieved when combined with the polarization calibration of the spectro-polarimeter and the onboard calibration at the Sun center (see Ishikawa et al. 2014 for details). We are currently conducting verification tests of the flight components and development of the UV light source for the polarization calibration. From spring 2014, we will begin the integration, alignment, and calibration. We will update the error budgets throughout the course of these tests.
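    Instrument error budgets like the one described here are commonly bookkept by root-sum-squaring contributions assumed to be independent; the sketch below illustrates that convention with made-up term values and is not the CLASP team's actual budget.

    ```python
    # Root-sum-square combination of independent error terms; the listed
    # contributor values are hypothetical, not CLASP's real budget.

    def rss(terms):
        """Root-sum-square of a list of independent error contributions."""
        return sum(t * t for t in terms) ** 0.5

    spurious_terms = [0.03, 0.05, 0.04]  # hypothetical contributors, in %
    total = rss(spurious_terms)
    print(total)
    print(total < 0.1)  # compare against the 0.1% line-core requirement
    ```

    The point of the bookkeeping is that the combined total, not any single term, must sit below the requirement.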

  6. Development of a time-stepping sediment budget model for assessing land use impacts in large river basins.

    PubMed

    Wilkinson, S N; Dougall, C; Kinsey-Henderson, A E; Searle, R D; Ellis, R J; Bartley, R

    2014-01-15

The use of river basin modelling to guide mitigation of non-point source pollution of wetlands, estuaries and coastal waters has become widespread. To assess and simulate the impacts of alternate land use or climate scenarios on river washload requires modelling techniques that represent sediment sources and transport at the time scales of system response. Building on the mean-annual SedNet model, we propose a new D-SedNet model which constructs daily budgets of fine sediment sources, transport and deposition for each link in a river network. Erosion rates (hillslope, gully and streambank erosion) and fine sediment sinks (floodplains and reservoirs) are disaggregated from mean annual rates based on daily rainfall and runoff. The model is evaluated in the Burdekin basin in tropical Australia, where policy targets have been set for reducing sediment and nutrient loads to the Great Barrier Reef (GBR) lagoon from grazing and cropping land. D-SedNet predicted annual loads with similar performance to that of a sediment rating curve calibrated to monitored suspended sediment concentrations. Relative to a 22-year reference load time series at the basin outlet derived from a dynamic general additive model based on monitoring data, D-SedNet had a median absolute error of 68% compared with 112% for the rating curve. RMS error was slightly higher for D-SedNet than for the rating curve due to large relative errors on small loads in several drought years. This accuracy is similar to existing agricultural system models used in arable or humid environments. Predicted river loads were sensitive to ground vegetation cover. We conclude that the river network sediment budget model provides some capacity for predicting load time-series independent of monitoring data in ungauged basins, and for evaluating the impact of land management on river sediment load time-series, which is challenging across large regions in data-poor environments. © 2013. Published by Elsevier B.V. All rights reserved.
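    The accuracy metric quoted in this abstract, the median absolute error of predicted loads relative to a reference load series, can be computed as follows; the load values are invented for illustration.

    ```python
    import statistics

    # Median absolute percentage error of predicted vs reference loads;
    # the values below are hypothetical, not the Burdekin data.

    def median_abs_pct_error(predicted, reference):
        """Median of |predicted - reference| as a percentage of reference."""
        errors = [abs(p - r) * 100.0 / r for p, r in zip(predicted, reference)]
        return statistics.median(errors)

    predicted = [120.0, 70.0, 45.0]   # hypothetical annual loads
    reference = [100.0, 100.0, 50.0]  # hypothetical reference series
    print(median_abs_pct_error(predicted, reference))
    ```

    Using the median rather than the mean keeps a few drought-year outliers (large relative errors on small loads, as the abstract notes) from dominating the score.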

  7. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are routinely ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches to drift estimation and the free-air correction depending on the survey setup. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, including Kilauea in Hawaii and Askja in Iceland.

  8. Achievable flatness in a large microwave power transmitting antenna

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1980-01-01

A dual-reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures, and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy, and the resulting line-of-sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly, and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties, particularly part-to-part variation in the coefficient of thermal expansion, are more significant than the actual values.

  9. Effects of acuity-adaptable rooms on flow of patients and delivery of care.

    PubMed

    Hendrich, Ann L; Fay, Joy; Sorrells, Amy K

    2004-01-01

    Delayed transfers of patients between nursing units and lack of available beds are significant problems that increase costs and decrease quality of care and satisfaction among patients and staff. To test whether use of acuity-adaptable rooms helps solve problems with transfers of patients, satisfaction levels, and medical errors. A pre-post method was used to compare the effects of environmental design on various clinical and financial measures. Twelve outcome-based questions were formulated as the basis for inquiry. Two years of baseline data were collected before the unit moved and were compared with 3 years of data collected after the move. Significant improvements in quality and operational cost occurred after the move, including a large reduction in clinician handoffs and transfers; reductions in medication error and patient fall indexes; improvements in predictive indicators of patients' satisfaction; decrease in budgeted nursing hours per patient day and increased available nursing time for direct care without added cost; increase in patient days per bed, with a smaller bed base (number of beds per patient days). Some staff turnover occurred during the first year; turnover stabilized thereafter. Data in 5 key areas (flow of patients and hospital capacity, patients' dissatisfaction, sentinel events, mean length of stay, and allocation of nursing productivity) appear to be sufficient to test the business case for future investment in partial or complete replication of this model with appropriate populations of patients.

  10. Atmospheric energetics as related to cyclogenesis over the eastern United States. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    West, P. W.

    1973-01-01

A method is presented to investigate the atmospheric energy budget as related to cyclogenesis. Energy budget equations are developed that are shown to be advantageous because the individual terms represent basic physical processes which produce changes in atmospheric energy, and the equations provide a means to study the interaction of the cyclone with the larger scales of motion. The work presented represents an extension of previous studies because all of the terms of the energy budget equations were evaluated throughout the development period of the cyclone. Computations are carried out over a limited atmospheric volume which encompasses the cyclone, and boundary fluxes of energy that were ignored in most previous studies are evaluated. Two examples of cyclogenesis over the eastern United States were chosen for study. One of the cases (1-4 November, 1966) represented an example of vigorous development, while the development in the other case (5-8 December, 1969) was more modest. Objectively analyzed data were used in the evaluation of the energy budget terms in order to minimize computational errors, and an objective analysis scheme is described that ensures that all of the resolution contained in the rawinsonde observations is incorporated in the analyses.

  11. Water budgets for selected watersheds in the Delaware River basin, eastern Pennsylvania and western New Jersey

    USGS Publications Warehouse

    Sloto, Ronald A.; Buxton, Debra E.

    2005-01-01

This pilot study, done by the U.S. Geological Survey in cooperation with the Delaware River Basin Commission, developed annual water budgets using available data for five watersheds in the Delaware River Basin with different degrees of urbanization and different geological settings. A basin water budget and a water-use budget were developed for each watershed. The basin water budget describes inputs to the watershed (precipitation and imported water), outputs of water from the watershed (streamflow, exported water, leakage, consumed water, and evapotranspiration), and changes in ground-water and surface-water storage. The water-use budget describes water withdrawals in the watershed (ground-water and surface-water withdrawals), discharges of water in the watershed (discharge to surface water and ground water), and movement of water into and out of the watershed (imports, exports, and consumed water). The water-budget equations developed for this study can be applied to any watershed in the Delaware River Basin. Data used to develop the water budgets were obtained from available long-term meteorological and hydrological data-collection stations and from water-use data collected by regulatory agencies. In the Coastal Plain watersheds, net ground-water loss from unconfined to confined aquifers was determined by using ground-water-flow-model simulations. Error in the water-budget terms is caused by missing data, poor or incomplete measurements, overestimated or underestimated quantities, measurement or reporting errors, and the use of point measurements, such as precipitation and water levels, to estimate an areal quantity, particularly if the watershed is hydrologically or geologically complex or the data-collection station is outside the watershed. The complexity of the water budgets increases with increasing watershed urbanization and interbasin transfer of water.
In the Wissahickon Creek watershed, for example, some ground water is discharged to streams in the watershed, some is exported as wastewater, and some is exported for public supply. In addition, ground water withdrawn outside the watershed is imported for public supply or imported as wastewater for treatment and discharge in the watershed. A GIS analysis was necessary to quantify many of the water-budget components. The 89.9-square mile East Branch Brandywine Creek watershed in Pennsylvania is a rural watershed with reservoir storage that is underlain by fractured rock. Water budgets were developed for 1977-2001. Average annual precipitation, streamflow, and evapotranspiration were 46.89, 21.58, and 25.88 inches, respectively. Some water was imported (average of 0.68 inches) into the watershed for public-water supply and as wastewater for treatment and discharge; these imports resulted in a net gain of water to the watershed. More water was discharged to East Branch Brandywine Creek than was withdrawn from it; the net discharge resulted in an increase in streamflow. Most ground water was withdrawn (average of 0.25 inches) for public-water supply. Surface water was withdrawn (average of 0.58 inches) for public-water and industrial supply. Discharge of water by sewage-treatment plants and industries (average of 1.22 inches) and regulation by Marsh Creek Reservoir caused base flow to appear an average of 7.2 percent higher than it would have been without these additional sources. On average, 67 percent of the difference was caused by sewage-treatment-plant and industrial discharges, and 33 percent was caused by regulation of the Marsh Creek Reservoir. Water imports, withdrawals, and discharges have been increasing as the watershed becomes increasingly urbanized. The 64-square mile Wissahickon Creek watershed in Pennsylvania is an urban watershed underlain by fractured rock. Water budgets were developed for 1987-98. 
Average annual precipitation, streamflow, and evapotranspiration were 47.23, 22.24, and 23.12 inches, respectively. The watershed is highly urbanized.
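    The closure bookkeeping such a basin water budget implies can be sketched as a simple residual check. In the sketch below, the precipitation, streamflow, evapotranspiration, and import figures are the East Branch Brandywine Creek averages quoted above; the remaining terms are illustrative placeholders, not values from the report.

```python
# Hedged sketch of annual basin water-budget closure. All values are
# inches of water over the watershed per year.

def basin_water_budget(precip, imports, streamflow, exports, leakage,
                       consumed, et, storage_change):
    """Return the budget residual: inputs - outputs - storage change.

    A residual near zero indicates approximate closure; a large residual
    points to measurement error or an unquantified term.
    """
    inputs = precip + imports
    outputs = streamflow + exports + leakage + consumed + et
    return inputs - outputs - storage_change

# Quoted averages plus placeholder values chosen so the budget closes:
residual = basin_water_budget(precip=46.89, imports=0.68,
                              streamflow=21.58, exports=0.0, leakage=0.0,
                              consumed=0.11, et=25.88, storage_change=0.0)
print(f"residual: {residual:+.2f} in")  # ~0 when the budget closes
```

A nonzero residual in real data would be absorbed by the error sources listed above (missing data, point-to-area extrapolation, reporting errors).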

  12. Space shuttle post-entry and landing analysis. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Crawford, B. S.; Duiven, E. M.

    1973-01-01

    Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
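    The blending step at the heart of such a filter can be illustrated with a scalar Kalman measurement update. This is the generic textbook form, not the study's 6- to 24-state mechanization, and the altitude numbers are invented for illustration.

```python
# Scalar Kalman measurement update: blend an IMU-propagated estimate
# with a navaid measurement, weighting each by its uncertainty.

def kalman_update(x_pred, p_pred, z, r):
    """Blend prediction (x_pred, variance p_pred) with measurement z (variance r)."""
    k = p_pred / (p_pred + r)          # Kalman gain: how much to trust z
    x_new = x_pred + k * (z - x_pred)  # corrected state
    p_new = (1.0 - k) * p_pred         # reduced uncertainty
    return x_new, p_new

# IMU-propagated altitude of 1000 m (variance 25 m^2) blended with a
# navaid fix of 1010 m (variance 100 m^2): gain 0.2, so the update moves
# the estimate to 1002 m and shrinks the variance to 20 m^2.
x, p = kalman_update(1000.0, 25.0, 1010.0, 100.0)
```

An error budget of the kind described above is built by perturbing each truth-model error source and recording its effect on such filter estimates at touchdown.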

  13. Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.

    1990-01-01

    It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.

  14. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. 
It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have larger impacts relative to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large given the open questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions show that the errors contained in the measurement-error covariance matrix are underestimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.
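    A variational system of this kind minimizes a cost function that balances prior and observational constraints. The sketch below shows the generic form with invented toy matrices; the operational PYVAR system is far more elaborate.

```python
import numpy as np

# Generic variational-inversion cost function:
#   J(x) = (x - xb)^T B^-1 (x - xb) + (Hx - y)^T R^-1 (Hx - y)
# xb: prior fluxes, B: prior-error covariance, H: linearized transport,
# y: observations, R: measurement-error covariance. If R understates the
# true transport and modelling errors, the minimizer overfits those
# errors into the retrieved fluxes.

def cost(x, xb, b_inv, h, y, r_inv):
    dx = x - xb
    dy = h @ x - y
    return float(dx @ b_inv @ dx + dy @ r_inv @ dy)

# Tiny two-flux toy case with identity covariances and observation operator:
j = cost(np.array([0.5, 0.5]), xb=np.zeros(2), b_inv=np.eye(2),
         h=np.eye(2), y=np.ones(2), r_inv=np.eye(2))  # 0.5 + 0.5 = 1.0
```

Inflating R to represent transport error, as the abstract suggests, pulls the minimizer back toward the prior instead of fitting transport artifacts.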

  15. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. Calibration at the 10 cm level is found to be feasible only with altimeter passes at very high elevation relative to a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
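    The one-second optimum smoothing reflects the usual white-noise averaging rule, sketched below. The pulse rate and single-sample noise figures are illustrative assumptions, not values from the report.

```python
import math

# For uncorrelated pulse-to-pulse noise, averaging n samples cuts the
# noise standard deviation by sqrt(n). Longer windows help only until
# real geoid-undulation signal starts being smoothed away, which is what
# bounds the optimum near one second.

def smoothed_noise(sigma_single, rate_hz, window_s):
    n = int(rate_hz * window_s)
    return sigma_single / math.sqrt(n)

# e.g. 10 cm single-sample noise averaged for one second at 10 samples/s:
sigma = smoothed_noise(0.10, rate_hz=10, window_s=1.0)  # ~0.032 m
```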

  16. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_{*}), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction ({F}_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σdyn), disk stellar mass-to-light ratio (Υ^disk_{*}), and disk maximality ({F}_{*,max}^disk≡ V^disk_{*,max}/ V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
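    The step from ~25% errors for individual galaxies to ~10% precision for sample quartiles follows the usual 1/sqrt(N) averaging of independent random errors, sketched below with an assumed round sample size.

```python
import math

# Independent random errors average down as 1/sqrt(N); systematic errors
# do not, which is why the quartile statistics are only "largely devoid
# of systematic errors" rather than free of them.

def sample_precision(sigma_single, n):
    return sigma_single / math.sqrt(n)

# ~25% per-galaxy random errors over an assumed quartile of ~8 galaxies
# gives roughly 9% precision, of the order quoted above:
precision = sample_precision(0.25, 8)
```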

  17. CAUSES: Diagnosis of the Summertime Warm Bias in CMIP5 Climate Models at the ARM Southern Great Plains Site

    NASA Astrophysics Data System (ADS)

    Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon

    2018-03-01

    All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface project show a summertime surface air temperature (T2 m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5, with long-term observations mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budgets and the large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm season bias is characterized by an overestimation of T2 m and underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection, insufficient clear-sky shortwave absorption by water vapor, and an underestimation of surface albedo. The bias in cloud is shown to contribute most to the radiation bias. The surface layer soil moisture impacts T2 m through its control on evaporative fraction. The error in evaporative fraction is another important contributor to T2 m. Similar sources of error are found in hindcasts from other Clouds Above the United States and Errors at the Surface studies. In Atmospheric Model Intercomparison Project simulations, biases in meridional wind velocity associated with the low-level jet and the 500 hPa vertical velocity may also relate to the T2 m bias through their control on the surface energy and water budget.

  18. An audit of the global carbon budget: identifying and reducing sources of uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Tans, P. P.; Marland, G.; Stocker, B. D.

    2012-12-01

    Uncertainties in our carbon accounting practices may limit our ability to objectively verify emission reductions on regional scales. Furthermore, uncertainties in the global C budget must be reduced to benchmark Earth System Models that incorporate carbon-climate interactions. Here we present an audit of the global C budget in which we try to identify sources of uncertainty for major terms in the global C budget. The atmospheric growth rate of CO2 has increased significantly over the last 50 years, while the uncertainty in calculating the global atmospheric growth rate has been reduced from 0.4 ppm/yr to 0.2 ppm/yr (95% confidence). Although we have greatly reduced global CO2 growth rate uncertainties, there remain regions, such as the Southern Hemisphere, Tropics and Arctic, where changes in regional sources/sinks will remain difficult to detect without additional observations. Increases in fossil fuel (FF) emissions are the primary factor driving the increase in global CO2 growth rate; however, our confidence in FF emission estimates has actually gone down. Based on a comparison of multiple estimates, FF emissions have increased from 2.45 ± 0.12 PgC/yr in 1959 to 9.40 ± 0.66 PgC/yr in 2010. Major sources of increasing FF emission uncertainty are increased emissions from emerging economies, such as China and India, as well as subtle differences in accounting practices. Lastly, we evaluate emission estimates from Land Use Change (LUC). Although relative errors in emission estimates from LUC are quite high (2 sigma ~ 50%), LUC emissions have remained fairly constant in recent decades. We evaluate the three commonly used approaches to estimating LUC emissions (bookkeeping, satellite imagery, and model simulations) to identify their main sources of error and their ability to detect net emissions from LUC.
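    The audit's uncertainty bookkeeping can be sketched with the standard quadrature rule for combining independent uncertainties. Only the 2010 FF figure below is from the text; the other sigmas are placeholders.

```python
import math

# Mass-balance form of the global C budget:
#   atmospheric growth = FF + LUC - ocean sink - land sink
# Independent 1-sigma term uncertainties combine in quadrature; the
# least certain term dominates the total.

def combined_sigma(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

# 2010 FF uncertainty quoted above (0.66 PgC/yr, treated as 1 sigma
# here) combined with placeholder LUC and sink uncertainties:
total_sigma = combined_sigma(0.66, 0.5, 0.4, 0.5)  # ~1.05 PgC/yr
```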

  19. The Effect of Detector Nonlinearity on WFIRST PSF Profiles for Weak Gravitational Lensing Measurements

    NASA Astrophysics Data System (ADS)

    Plazas, A. A.; Shapiro, C.; Kannawadi, A.; Mandelbaum, R.; Rhodes, J.; Smith, R.

    2016-10-01

    Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point-spread function (PSF) of the imaging system with extreme accuracy. Future WL missions—such as NASA’s Wide-Field Infrared Survey Telescope (WFIRST)—will use a family of hybrid near-infrared complementary metal-oxide-semiconductor detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as reference stars that are used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size (ΔR/R) and the absolute error in the PSF ellipticity (Δe) as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce an error of ΔR/R = 1 × 10^-2 and Δe_2 = 1.75 × 10^-3 in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias of ΔR/R and Δe in the H158 band to ˜10% of the estimated WFIRST error budget, the quadratic NL model parameter β must be calibrated to ˜1% and ˜2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
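    The quadratic NL model can be sketched as the standard low-order expansion Q_meas = Q - beta*Q^2. The value of beta and the charge levels below are illustrative assumptions, not WFIRST numbers.

```python
# Quadratic conversion-gain nonlinearity and its first-order correction.
# beta is the model parameter whose calibration accuracy (~1-2.4%) the
# text derives requirements for; its value here is illustrative.

def apply_nl(q_true, beta):
    """Detector response: measured charge falls short by beta * Q^2."""
    return q_true - beta * q_true ** 2

def correct_nl(q_meas, beta_est):
    """Invert the response to first order with an estimated beta."""
    return q_meas + beta_est * q_meas ** 2

q_true = 1.0e4                        # electrons in a bright-star pixel (assumed)
q_meas = apply_nl(q_true, beta=1e-6)  # 9900.0, a 1% deficit
q_corr = correct_nl(q_meas, beta_est=1e-6)
# Even with a perfectly known beta, the first-order correction leaves a
# ~2 e- residual; a miscalibrated beta_est leaves a proportionally larger
# one, which is what biases the PSF size and ellipticity of bright stars.
```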

  20. Nuclear Weapons Sustainment: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency

    DTIC Science & Technology

    2015-12-01

    Report to Congressional Committees, December 2015, GAO-16-23, United States Government Accountability Office. NUCLEAR WEAPONS SUSTAINMENT: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency. Why GAO Did This Study: DOD and DOE are...modernization plans and (2) complete, transparent information on the methodologies used to develop those estimates. GAO analyzed the departments

  1. Congressional Budget Action for Fiscal Year 2012 and Its Impact on Education Funding. Issue Brief

    ERIC Educational Resources Information Center

    Delisle, Jason

    2011-01-01

    The fiscal year 2012 budget process has been anything but typical or predictable. While fiscal year 2012 starts in just a few weeks on October 1, 2011, the annual appropriations process is far from complete, and funding for federal education programs has not yet been finalized. Nevertheless, congressional action in the months that have led up to…

  2. Performance Funding in Illinois Higher Education: The Roles of Politics, Budget Environment, and Individual Actors in the Process

    ERIC Educational Resources Information Center

    Blankenberger, Bob; Phillips, Alan

    2016-01-01

    The completion agenda is the dominant theme in higher education policy in the United States today, and one of the primary strategies advocated in the agenda is performance funding in budgeting for public institutions. Illinois is one example of a state that has attempted to implement performance funding as a means of directing the behavior of…

  3. High Resolution Atmospheric Inversion of Urban CO2 Emissions During the Dormant Season of the Indianapolis Flux Experiment (INFLUX)

    NASA Technical Reports Server (NTRS)

    Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; hide

    2016-01-01

    Urban emissions of greenhouse gases (GHG) represent more than 70% of the global fossil fuel GHG emissions. Unless mitigation strategies are successfully implemented, the increase in urban GHG emissions is almost inevitable as large metropolitan areas are projected to grow twice as fast as the world population in the coming 15 years. Monitoring these emissions becomes a critical need as their contribution to the global carbon budget increases rapidly. In this study, we developed the first comprehensive monitoring system of CO2 emissions at high resolution using a dense network of CO2 atmospheric measurements over the city of Indianapolis. The inversion system was evaluated over an 8-month period and showed an increase compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product, with a 20% increase in the total emissions over the area (from 4.5 to 5.7 Metric Megatons of Carbon +/- 0.23 Metric Megatons of Carbon). However, several key parameters of the inverse system need to be addressed to carefully characterize the spatial distribution of the emissions and the aggregated total emissions. We found that spatial structures in prior emission errors, mostly undetermined, significantly affect the spatial pattern in the inverse solution, as well as the carbon budget over the urban area. Several other parameters of the inversion were sufficiently constrained by additional observations, such as the characterization of the GHG boundary inflow and the introduction of hourly transport model errors estimated from the meteorological assimilation system. Finally, we estimated the uncertainties associated with remaining systematic errors and undetermined parameters using an ensemble of inversions. The total CO2 emissions for the Indianapolis urban area based on the ensemble mean and quartiles are 5.26 - 5.91 Metric Megatons of Carbon, i.e. 
a statistically significant difference compared to the prior total emissions of 4.1 to 4.5 Metric Megatons of Carbon. We therefore conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHG over the city, but additional information on prior emissions and their associated error structures is required if we are to determine the spatial structures of urban emissions at high resolution.

  4. Estimating the global terrestrial hydrologic cycle through modeling, remote sensing, and data assimilation

    NASA Astrophysics Data System (ADS)

    Pan, Ming; Troy, Tara; Sahoo, Alok; Sheffield, Justin; Wood, Eric

    2010-05-01

    Documentation of the water cycle and its evolution over time is a primary scientific goal of the Global Energy and Water Cycle Experiment (GEWEX) and fundamental to assessing global change impacts. In developed countries, observation systems that include in-situ, remote sensing and modeled data can provide long-term, consistent and generally high quality datasets of water cycle variables. The export of these technologies to less developed regions has been rare, but it is these regions where information on water availability and change is probably most needed in the face of regional environmental change due to climate, land use and water management. In these data sparse regions, in situ data alone are insufficient to develop a comprehensive picture of how the water cycle is changing, and strategies that merge in-situ, model and satellite observations within a framework that results in consistent water cycle records are essential. Such an approach is envisaged by the Global Earth Observing System of Systems (GEOSS), but has yet to be applied. The goal of this study is to quantify the variation and changes in the global water cycle over the past 50 years. We evaluate the global water cycle using a variety of independent large-scale datasets of hydrologic variables that are used to bridge the gap between sparse in-situ observations, including remote-sensing based retrievals, observation-forced hydrologic modeling, and weather model reanalyses. A data assimilation framework that blends these disparate sources of information together in a consistent fashion with attention to budget closure is applied to make best estimates of the global water cycle and its variation. The framework consists of a constrained Kalman filter applied to the water budget equation. 
With imperfect estimates of the water budget components, the equation additionally has an error residual term that is redistributed across the budget components using error statistics, which are estimated from the uncertainties among data products. The constrained Kalman filter treats the budget closure constraint as a perfect observation within the assimilation framework. Precipitation is estimated using gauge observations, reanalysis products, and remote sensing products for latitudes below 50°N. Evapotranspiration is estimated in a number of ways: from the VIC land surface hydrologic model forced with a hybrid reanalysis-observation global forcing dataset, from remote sensing retrievals based on a suite of energy balance and process based models, and from an atmospheric water budget approach using reanalysis products for the atmospheric convergence and storage terms and our best estimate for precipitation. Terrestrial water storage changes, including surface and subsurface changes, are estimated using both VIC and the GRACE remote sensing retrievals. From these components, discharge can then be calculated as a residual of the water budget and compared with gauge observations to evaluate the closure of the water budget. Through the use of these largely independent data products, we estimate both the mean seasonal cycle of the water budget components and their uncertainties for a set of 20 large river basins across the globe. We particularly focus on three regions of interest in global change studies: the Northern Eurasian region, which is experiencing rapid change in terrestrial processes; the Amazon, which is a central part of the global water, energy and carbon budgets; and Africa, which is predicted to face some of the most critical challenges for water and food security in the coming decades.
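    The closure idea, redistributing the budget residual across components in proportion to their error variances, can be sketched in simplified form. A full constrained Kalman filter also carries cross-covariances; the values below are invented.

```python
import numpy as np

# Simplified budget-closure step for P - ET - Q - dS = 0: remove the
# residual by charging each component in proportion to its error
# variance, so the least certain terms absorb most of the imbalance.
# Valid as written only for signs of +/-1.

def close_budget(components, variances, signs):
    comp = np.asarray(components, dtype=float)
    var = np.asarray(variances, dtype=float)
    s = np.asarray(signs, dtype=float)
    residual = float(s @ comp)                   # imbalance of the raw estimates
    adj = comp - s * var / var.sum() * residual  # chosen so s @ adj == 0
    return adj

# P=100, ET=60, Q=35, dS=0 (mm), with ET the least certain component:
adj = close_budget([100.0, 60.0, 35.0, 0.0],
                   variances=[4.0, 9.0, 1.0, 1.0],
                   signs=[1, -1, -1, -1])
# the 5 mm residual is charged mostly to ET (60 -> 63)
```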

  5. Utilization of satellite cloud information to diagnose the energy state and transformations in extratropical cyclones

    NASA Technical Reports Server (NTRS)

    Smith, P. J.

    1984-01-01

    A study of the contribution of latent heat release to the synoptic scale vertical motions in the Jan. 9-11, 1975 extratropical cyclone case study was completed. Results indicate that early cyclone development was dominated by dry dynamical forcing. However, as the cyclone matured, the influence of latent heating became more significant. This influence appeared to be of two types: (1) the direct impact of heating causing a lowering of surface pressures, and (2) an indirect role in which the heating altered thermal and vorticity gradients and led to subsequent increases in dry dynamical forcing. The kinetic energy budget was completed and extended to include an available potential energy budget. Focusing on the eddy component of the budgets, results indicate that kinetic energy increased throughout the cyclone's development, with the increase being most pronounced after the onset of significant latent heat release. Latent heating played a strong role not only in generating available potential energy, but also in forcing baroclinic release of potential energy.

  6. Estimating the components of the sensible heat budget of a tall forest canopy in complex terrain

    NASA Astrophysics Data System (ADS)

    Moderow, U.; Feigenwinter, C.; Bernhofer, C.

    2007-04-01

    Ultrasonic wind measurements, sonic temperature and air temperature data at two heights in the advection experiment MORE II were used to establish a complete budget of sensible heat including vertical advection, horizontal advection and horizontal turbulent flux divergence. MORE II took place at the long-term Carbo-Europe IP site in Tharandt, Germany. During the growing period of 2003, three additional towers were established to measure all relevant parameters for an estimation of advective fluxes, primarily of CO2. Additionally, in relation to other advection experiments, a calculation of the horizontal turbulent flux divergence is proposed and the relation of this flux to atmospheric stability and friction velocity is discussed. In order to obtain a complete budget, different scaling heights for horizontal advection and horizontal turbulent flux divergence are tested. It is shown that neglecting advective fluxes may lead to incorrect results. If advective fluxes are taken into account, the sensible heat budget based upon vertical turbulent flux and storage change only is reduced by approximately 30%. Additional consideration of horizontal turbulent flux divergence would in turn add 5 to 10% to this sum (i.e., the sum of vertical turbulent flux plus storage change plus horizontal and vertical advection). In comparison with available energy, horizontal advection is important at night, whilst horizontal turbulent flux divergence is rather insignificant. Obviously, advective fluxes typically improve poor nighttime energy budget closure and might change ecosystem respiration fluxes considerably.
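    The budget arithmetic can be sketched as below; the roughly 30% reduction follows the text, but the flux values themselves are invented placeholders.

```python
# Complete sensible-heat budget: vertical turbulent flux + storage change
# + vertical advection + horizontal advection, optionally plus horizontal
# turbulent flux divergence. Flux values are placeholders in W m^-2.

def heat_budget(f_vert, storage, adv_vert=0.0, adv_horiz=0.0, div_horiz=0.0):
    return f_vert + storage + adv_vert + adv_horiz + div_horiz

classic = heat_budget(100.0, 10.0)                 # turbulence + storage only
with_adv = heat_budget(100.0, 10.0, -13.0, -20.0)  # advective terms included
reduction = 1.0 - with_adv / classic               # ~0.30, as reported above
```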

  7. Report by the International Space Station (ISS) Management and Cost Evaluation (IMCE) Task Force

    NASA Technical Reports Server (NTRS)

    Young, A. Thomas; Kellogg, Yvonne (Technical Monitor)

    2001-01-01

    The International Space Station (ISS) Management and Cost Evaluation Task Force (IMCE) was chartered to conduct an independent external review and assessment of the ISS cost, budget, and management. In addition, the Task Force was asked to provide recommendations that could provide maximum benefit to the U.S. taxpayers and the International Partners within the President's budget request. The Task Force has made the following principal findings: (1) The ISS Program's technical achievements to date, as represented by on-orbit capability, are extraordinary; (2) The Existing ISS Program Plan for executing the FY 02-06 budget is not credible; (3) The existing deficiencies in management structure, institutional culture, cost estimating, and program control must be acknowledged and corrected for the Program to move forward in a credible fashion; (4) Additional budget flexibility, from within the Office of Space Flight (OSF) must be provided for a credible core complete program; (5) The research support program is proceeding assuming the budget that was in place before the FY02 budget runout reduction of $1B; (6) There are opportunities to maximize research on the core station program with modest cost impact; (7) The U.S. Core Complete configuration (three person crew) as an end-state will not achieve the unique research potential of the ISS; (8) The cost estimates for the U.S.-funded enhancement options (e.g., permanent seven person crew) are not sufficiently developed to assess credibility. After these findings, the Task Force has formulated several primary recommendations which are published here and include: (1) Major changes must be made in how the ISS program is managed; (2) Additional cost reductions are required within the baseline program; (3) Additional funds must be identified and applied from the Human Space Flight budget; (4) A clearly defined program with a credible end-state, agreed to by all stakeholders, must be developed and implemented.

  8. The Soil Sink for Nitrous Oxide: Trivial Amount but Challenging Question

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.; Savage, K. E.; Sihi, D.

    2015-12-01

    Net uptake of atmospheric nitrous oxide (N2O) has been observed sporadically for many years. Such observations have often been discounted as measurement error or noise, but they were reported frequently enough to gain some acceptance as valid. The advent of fast response field instruments with good sensitivity and precision has permitted confirmation that some soils can be small sinks of N2O. With regard to "closing the global N2O budget" the soil sink is trivial, because it is smaller than the error terms of most other budget components. Although not important from a global budget perspective, the existence of a soil sink for atmospheric N2O presents a fascinating challenge for understanding the physical, chemical, and biological processes that explain the sink. Reduction of N2O by classical biological denitrification requires reducing conditions generally found in wet soil, and yet we have measured the N2O sink in well drained soils, where we also simultaneously measure a sink for atmospheric methane (CH4). Co-occurrence of N2O reduction and CH4 oxidation would require a broad range of microsite conditions within the soil, spanning high and low oxygen concentrations. Abiotic sinks for N2O or other biological processes that consume N2O could exist, but have not yet been identified. We are attempting to simulate processes of diffusion of N2O, CH4, and O2 from the atmosphere and within a soil profile to determine if classical biological N2O reduction and CH4 oxidation at rates consistent with measured fluxes are plausible.

  9. The East Asian Atmospheric Water Cycle and Monsoon Circulation in the Met Office Unified Model

    NASA Astrophysics Data System (ADS)

    Rodríguez, José M.; Milton, Sean F.; Marzin, Charline

    2017-10-01

    In this study the low-level monsoon circulation and observed sources of moisture responsible for the maintenance and seasonal evolution of the East Asian monsoon are examined, studying the detailed water budget components. These observational estimates are contrasted with the Met Office Unified Model (MetUM) climate simulation performance in capturing the circulation and water cycle at a variety of model horizontal resolutions and in fully coupled ocean-atmosphere simulations. We study the role of large-scale circulation in determining the hydrological cycle by analyzing key systematic errors in the model simulations. MetUM climate simulations exhibit robust circulation errors, including a weakening of the summer west Pacific Subtropical High, which leads to an underestimation of the southwesterly monsoon flow over the region. Precipitation and implied diabatic heating biases in the South Asian monsoon and Maritime Continent region are shown, via nudging sensitivity experiments, to have an impact on the East Asian monsoon circulation. By inference, the improvement of these tropical biases with increased model horizontal resolution is hypothesized to be a factor in improvements seen over East Asia with increased resolution. Results from the annual cycle of the hydrological budget components in five domains show a good agreement between MetUM simulations and ERA-Interim reanalysis in northern and Tibetan domains. In simulations, the contribution from moisture convergence is larger than in reanalysis, and they display less precipitation recycling over land. The errors are closely linked to monsoon circulation biases.

  10. 76 FR 28211 - Proposed Information Collection; Comment Request; Survey of Housing Starts, Sales, and Completions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-16

    ... Housing Starts, Sales, and Completions AGENCY: U.S. Census Bureau, Commerce. ACTION: Notice. SUMMARY: The... and Budget (OMB) clearance of the Survey of Housing Starts, Sales and Completions, also known as the... system, etc.), and if applicable, date of sale, sales price, and type of financing. The SOC provides...

  11. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal caused by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  12. Assessment of legibility and completeness of handwritten and electronic prescriptions.

    PubMed

    Albarrak, Ahmed I; Al Rashidi, Eman Abdulrahman; Fatani, Rwaa Kamil; Al Ageel, Shoog Ibrahim; Mohammed, Rafiuddin

    2014-12-01

    Objectives: To assess the legibility and completeness of handwritten prescriptions and compare them with an electronic prescription system for medication errors. Design: Prospective study. Setting: King Khalid University Hospital (KKUH), Riyadh, Saudi Arabia. Subjects and methods: Handwritten prescriptions were received from the clinical units of the Medicine Outpatient Department (MOPD), Primary Care Clinic (PCC) and Surgery Outpatient Department (SOPD), whereas electronic prescriptions were collected from the pediatric ward. The handwritten prescriptions were assessed for completeness using a checklist designed according to the hospital prescription form, and were evaluated for legibility by two pharmacists. The comparison between handwritten and electronic prescription errors was based on a validated checklist adopted from previous studies. Main outcome measures: Legibility and completeness of prescriptions. Results: 398 prescriptions (199 handwritten and 199 e-prescriptions) were assessed. Errors were identified in 71 (35.7%) of the handwritten and 5 (2.5%) of the electronic prescriptions. A statistically significant difference (P < 0.001) was observed between handwritten and e-prescriptions in the omitted dose and omitted route of administration categories of error distribution. The rate of completeness of patient identification in handwritten prescriptions was 80.97% in MOPD, 76.36% in PCC and 85.93% in SOPD. Completeness of the medication prescription was 91.48% in MOPD, 88.48% in PCC, and 89.28% in SOPD. Conclusions: This study revealed a high incidence of prescribing errors in handwritten prescriptions. The use of an e-prescription system showed a significant decline in the incidence of errors. The legibility of handwritten prescriptions was relatively good, whereas the level of completeness was very low.
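
The headline comparison (35.7% vs. 2.5% error rates, P < 0.001) can be reproduced with a standard two-proportion z-test. This is a generic statistical sketch, not the authors' actual analysis code.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two proportions, using the pooled
    standard error; |z| > 3.29 corresponds to P < 0.001 (two-sided)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

rate_hand = 71 / 199 * 100   # handwritten prescriptions with errors, %
rate_elec = 5 / 199 * 100    # electronic prescriptions with errors, %
z = two_proportion_z(71, 199, 5, 199)
```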

  13. Radiometric Spacecraft Tracking for Deep Space Navigation

    NASA Technical Reports Server (NTRS)

    Lanyi, Gabor E.; Border, James S.; Shin, Dong K.

    2008-01-01

    Interplanetary spacecraft navigation relies on three types of terrestrial tracking observables: 1) Ranging measures the distance between the observing site and the probe. 2) The line-of-sight velocity of the probe is inferred from the Doppler shift, by measuring the frequency shift of the received signal with respect to the unshifted frequency. 3) Differential angular coordinates of the probe with respect to natural radio sources are nominally obtained via the differential delay technique of ΔDOR (Delta Differential One-way Ranging). The accuracy of spacecraft coordinate determination depends on the measurement uncertainties associated with each of these three techniques. We evaluate the corresponding sources of error and present a detailed error budget.
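
The Doppler observable in item 2 maps a frequency shift to a line-of-sight velocity. A minimal non-relativistic sketch for a one-way link follows; the X-band frequency and the shift used in the example are illustrative, not from the report.

```python
C = 299_792_458.0  # speed of light, m/s

def los_velocity(f_received, f_transmitted):
    """Line-of-sight velocity from a one-way Doppler shift
    (non-relativistic approximation); positive = receding, i.e. the
    received frequency is lower than the transmitted one."""
    return C * (f_transmitted - f_received) / f_transmitted

# Illustrative X-band downlink: a fractional shift of 1e-5
v = los_velocity(8.4e9 * (1 - 1e-5), 8.4e9)  # m/s, receding
```

Deep-space links are typically two-way and relativistic corrections matter at navigation accuracies, but the first-order relation above is the core of the observable.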

  14. Decoding small surface codes with feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared with previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
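
Decoding-as-classification can be sketched as a single feedforward pass from a syndrome bit vector to scores over correction classes. The layer sizes and (untrained, random) weights below are purely illustrative; they are not the trained decoder of the paper.

```python
import math
import random

def mlp_decode(syndrome, w1, b1, w2, b2):
    """Minimal feedforward decoder: syndrome bits -> probabilities over
    correction classes (here: I, X, Z, Y on a single data qubit)."""
    hidden = [max(0.0, sum(w * s for w, s in zip(row, syndrome)) + b)
              for row, b in zip(w1, b1)]                  # ReLU layer
    logits = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]                  # linear layer
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]              # stable softmax
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
n_syn, n_hid, n_cls = 4, 8, 4        # toy sizes, assumed for illustration
w1 = [[random.uniform(-1, 1) for _ in range(n_syn)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_cls)]
b2 = [0.0] * n_cls

probs = mlp_decode([1, 0, 0, 1], w1, b1, w2, b2)
correction = probs.index(max(probs))  # predicted correction class
```

The appeal for hardware is that this forward pass has a fixed, data-independent cost, unlike matching-based decoders whose runtime varies with the syndrome.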

  15. Research Costs Investigated: A Study Into the Budgets of Dutch Publicly Funded Drug-Related Research.

    PubMed

    van Asselt, Thea; Ramaekers, Bram; Corro Ramos, Isaac; Joore, Manuela; Al, Maiwenn; Lesman-Leegte, Ivonne; Postma, Maarten; Vemer, Pepijn; Feenstra, Talitha

    2018-01-01

    The costs of performing research are an important input in value of information (VOI) analyses but are difficult to assess. The aim of this study was to investigate the costs of research, serving two purposes: (1) estimating research costs for use in VOI analyses; and (2) developing a costing tool to support reviewers of grant proposals in assessing whether the proposed budget is realistic. For granted study proposals from the Netherlands Organization for Health Research and Development (ZonMw), the type of study, potential cost drivers, proposed budget, and general characteristics were extracted. Regression analysis was conducted in an attempt to generate a 'predicted budget' for certain combinations of cost drivers, for implementation in the costing tool. Of 133 drug-related research grant proposals, 74 were included for complete data extraction. Because an association between cost drivers and budgets was not confirmed, we could not generate a predicted budget based on regression analysis, but only historic reference budgets given certain study characteristics. The costing tool was designed accordingly: given a set of selection criteria, the tool returns the range of budgets of comparable studies. This range can be used in VOI analysis to estimate whether the expected net benefit of sampling will be positive, in order to decide on the net value of future research. The absence of an association between study characteristics and budgets may indicate inconsistencies in the budgeting or granting process. Nonetheless, the tool generates useful information on historical budgets, and the option to formally relate VOI to budgets. To our knowledge, this is the first attempt at creating such a tool, which can be complemented with new studies being granted, enlarging the underlying database and keeping estimates up to date.

  16. Summary of Analysis of Sources of Forecasting Errors in BP 1500 Requirements Estimating Process and Description of Compensating Methodology.

    DTIC Science & Technology

    1982-04-25

    the Directorate of Programs (AFLC/XRP), and the Directorate of Logistics Plans and Programs, Aircraft/Missiles Program Division of the Air Staff...(OWRM). The P-18 Exhibit/Budget Estimate Submission (BES), a document developed by AFLC/LOR, is reviewed by AFLC/XRP, and is presented to HQ USAF

  17. Improvements to the design process for a real-time passive millimeter-wave imager to be used for base security and helicopter navigation in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.

    2014-06-01

    This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.

  18. Comparison and testing of extended Kalman filters for attitude estimation of the Earth radiation budget satellite

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack Y.; Rokni, Mohammad

    1990-01-01

    The testing and comparison of two Extended Kalman Filters (EKFs) developed for the Earth Radiation Budget Satellite (ERBS) are described. One EKF updates the attitude quaternion using a four-component additive error quaternion. This technique is compared with that of a second EKF, which uses a multiplicative error quaternion. A brief development of the multiplicative algorithm is included. The mathematical development of the additive EKF was presented in the 1989 Flight Mechanics/Estimation Theory Symposium, along with some preliminary testing results using real spacecraft data. A summary of the additive EKF algorithm is included. The convergence properties, singularity problems, and normalization techniques of the two filters are addressed. Both filters are also compared with the ERBS operational ground support software, which uses a batch differential correction algorithm to estimate attitude and gyro biases. Sensitivity studies are performed on the estimation of sensor calibration states. The potential application of the EKF to real-time and non-real-time ground attitude determination and sensor calibration for future missions such as the Gamma Ray Observatory (GRO) and the Small Explorer Mission (SMEX) is also presented.
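
The multiplicative formulation can be sketched as follows: a small-angle error quaternion is composed with the current estimate by quaternion multiplication, which keeps the estimate on the unit sphere up to rounding. This is a generic textbook sketch, not the ERBS flight software.

```python
import math

def quat_mult(q, p):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def multiplicative_update(q_est, small_angle):
    """Apply a small-angle error quaternion dq = (1, da/2) to the
    estimate multiplicatively, then renormalize; the result is a unit
    quaternion by construction (no 4th independent component to fix)."""
    ax, ay, az = small_angle
    dq = (1.0, ax / 2, ay / 2, az / 2)
    w, x, y, z = quat_mult(dq, q_est)
    n = math.sqrt(w*w + x*x + y*y + z*z)
    return (w / n, x / n, y / n, z / n)

# Example: apply a ~0.01 rad attitude correction to the identity attitude
q_new = multiplicative_update((1.0, 0.0, 0.0, 0.0), (0.01, -0.02, 0.005))
```

An additive formulation instead adds a four-component error to the quaternion, which does not preserve the unit norm and requires an explicit normalization step; that difference is the crux of the comparison in the abstract.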

  19. Determining the Optimal Work Breakdown Structure for Defense Acquisition Contracts

    DTIC Science & Technology

    2016-03-24

    programs. Public utility corresponds with the generally understood concept that having more money is desirable, and having less money is not desirable...From this perspective, program completion on budget provides maximum utility, while being over budget reduces utility as there is less money for other...tree. Utility theory tools were applied using three utility perspectives, and optimal WBSs were identified. Results demonstrated that reporting at WBS

  20. DOD Financial Management: Effect of Continuing Weaknesses on Management and Operations and Status of Key Challenges

    DTIC Science & Technology

    2014-05-13

    the information needed to effectively (1) manage its assets, (2) assess program performance and make budget decisions, (3) make cost-effective ... decision making, including the information needed to effectively (1) manage its assets, (2) assess program performance and make budget decisions, (3...incorporating key elements of a comprehensive management approach, such as a complete analysis of the return on investment, quantitatively defined goals

  1. Budget Reduction in the Navy

    DTIC Science & Technology

    1990-12-01

    process; (4) the degree of budgetary responsiveness in DOD/DON cutback budgeting to criteria developed from two theoretical models of fiscal reduction methodology. ... accompanying reshaping of U.S. forces include a continuation of the positive developments in Eastern Europe and the Soviet Union, completion of

  2. Author Correction: Emission budgets and pathways consistent with limiting warming to 1.5 °C

    NASA Astrophysics Data System (ADS)

    Millar, Richard J.; Fuglestvedt, Jan S.; Friedlingstein, Pierre; Rogelj, Joeri; Grubb, Michael J.; Matthews, H. Damon; Skeie, Ragnhild B.; Forster, Piers M.; Frame, David J.; Allen, Myles R.

    2018-06-01

    In the version of this Article originally published, a coding error resulted in the erroneous inclusion of a subset of RCP4.5 and RCP8.5 simulations in the sets used for RCP2.6 and RCP6, respectively, leading to an incorrect depiction of the data of the latter two sets in Fig. 1b and RCP2.6 in Table 2. This coding error has now been corrected. The graphic and quantitative changes in the corrected Fig. 1b and Table 2 are contrasted with the originally published display items below. The core conclusions of the paper are not affected, but some numerical values and statements have also been updated as a result; these are listed below. All these errors have now been corrected in the online versions of this Article.

  3. Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin

    2018-03-01

    In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images from a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.

  4. Proprioceptive deficit in patients with complete tearing of the anterior cruciate ligament.

    PubMed

    Godinho, Pedro; Nicoliche, Eduardo; Cossich, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

    To investigate the existence of proprioceptive deficits between the injured limb and the uninjured (i.e. contralateral normal) limb, in individuals who suffered complete tearing of the anterior cruciate ligament (ACL), using a strength reproduction test. Sixteen patients with complete tearing of the ACL participated in the study. A voluntary maximum isometric strength test was performed, with reproduction of the muscle strength in the limb with complete tearing of the ACL and in the healthy contralateral limb, with the knee flexed at 60°. A target intensity of 20% of the voluntary maximum isometric strength was used for the reproduction procedure. Proprioceptive performance was determined by means of absolute error, variable error and constant error values. Significant differences were found between the control group and the ACL group for the variables of absolute error (p = 0.05) and constant error (p = 0.01). No difference was found in relation to variable error (p = 0.83). Our data corroborate the hypothesis that there is a proprioceptive deficit in subjects with complete tearing of the ACL in the injured limb, in comparison with the uninjured limb, during evaluation of the sense of strength. This deficit can be explained in terms of partial or total loss of the mechanoreceptors of the ACL.
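
The three error scores are standard motor-control statistics and can be computed directly from repeated reproduction trials. The trial values in the example are invented for illustration.

```python
import math

def error_scores(reproduced, target):
    """Absolute, constant, and variable error for a force-reproduction
    task: AE = mean |error| (overall accuracy), CE = mean signed error
    (directional bias), VE = SD of the errors (trial-to-trial consistency)."""
    errs = [r - target for r in reproduced]
    n = len(errs)
    ce = sum(errs) / n
    ae = sum(abs(e) for e in errs) / n
    ve = math.sqrt(sum((e - ce) ** 2 for e in errs) / n)
    return ae, ce, ve

# Hypothetical trials: reproduce a 20.0 N target force four times
ae, ce, ve = error_scores([21.0, 19.5, 22.5, 20.0], target=20.0)
```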

  5. Performance analysis of a GPS Interferometric attitude determination system for a gravity gradient stabilized spacecraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Stoll, John C.

    1995-01-01

    The performance of an unaided attitude determination system based on GPS interferometry is examined using linear covariance analysis. The modelled system includes four GPS antennae onboard a gravity gradient stabilized spacecraft, specifically the Air Force's RADCAL satellite. The principal error sources are identified and modelled. The optimal system's sensitivities to these error sources are examined through an error budget and by varying system parameters. The effects of two satellite selection algorithms, Geometric and Attitude Dilution of Precision (GDOP and ADOP, respectively) are examined. The attitude performance of two optimal-suboptimal filters is also presented. Based on this analysis, the limiting factors in attitude accuracy are the knowledge of the relative antenna locations, the electrical path lengths from the antennae to the receiver, and the multipath environment. The performance of the system is found to be fairly insensitive to torque errors, orbital inclination, and the two satellite geometry figures-of-merit tested.

  6. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
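
The crossover described here, where a fixed systematic bias overtakes a statistical error that shrinks roughly as 1/SNR, can be sketched with illustrative numbers (not the paper's actual values):

```python
def crossover_snr(sigma_ref, snr_ref, bias):
    """SNR at which a fixed systematic bias equals the statistical error,
    assuming the statistical error scales as 1/SNR:
    sigma(snr) = sigma_ref * snr_ref / snr, solved for sigma(snr) = bias."""
    return sigma_ref * snr_ref / bias

# Hypothetical: 10% statistical error at SNR 10, 2% fixed waveform bias
snr_star = crossover_snr(sigma_ref=0.10, snr_ref=10, bias=0.02)
```

Below the crossover SNR the statistical error dominates the budget; above it, waveform systematics do, which is why the bias matters for loud space-based signals but not for typical ground-based ones.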

  7. Systematic evaluation of NASA precipitation radar estimates using NOAA/NSSL National Mosaic QPE products

    NASA Astrophysics Data System (ADS)

    Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.

    2011-12-01

    Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimates (QPE) is needed for their use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of the land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been made to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details are presented along with preliminary results on PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
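
Contingency-table statistics of the kind mentioned reduce rain/no-rain comparisons against the ground reference to hit, miss, and false-alarm counts. A generic sketch with made-up counts:

```python
def contingency_scores(hits, misses, false_alarms):
    """Standard detection scores from a 2x2 rain/no-rain contingency
    table: probability of detection (POD), false-alarm ratio (FAR),
    and critical success index (CSI)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Hypothetical counts: PR vs. ground-based reference at matched pixels
pod, far, csi = contingency_scores(hits=80, misses=20, false_alarms=40)
```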

  8. Interprovincial has completed the 540-mile Norman Wells line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowland, L.O.

    Interprovincial Pipe Line Ltd. has completed and is now operating the 540-mi, 12-in. crude trunkline from Norman Wells, Northwest Territories, to Zama, Alberta. This is the largest, and the only significant, pipe line to be built north of the provincial boundaries. Construction was completed a month ahead of schedule and comfortably under budget, through two winter seasons.

  9. On the Utilization of Ice Flow Models and Uncertainty Quantification to Interpret the Impact of Surface Radiation Budget Errors on Estimates of Greenland Ice Sheet Surface Mass Balance and Regional Estimates of Mass Balance

    NASA Astrophysics Data System (ADS)

    Schlegel, N.; Larour, E. Y.; Gardner, A. S.; Lang, C.; Miller, C. E.; van den Broeke, M. R.

    2016-12-01

    How Greenland ice flow may respond to future increases in surface runoff and to increases in the frequency of extreme melt events is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification tools run within the framework of the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force distinct components of the surface radiation budget, and subsequently surface mass balance (SMB), in various regions of the ice sheet and 2) determine the dynamic response of Greenland ice flow to variations in components of the net radiation budget. The Glacier Energy and Mass Balance (GEMB) software is a column surface model (1-D) that has recently been embedded as a module within ISSM. Using the ISSM-GEMB framework, we perform sensitivity analyses to determine how perturbations in various components of the surface radiation budget affect model output; these model experiments allow us to predict where and on what spatial scale the ice sheet is likely to respond dynamically to changes in these parameters. Preliminary results suggest that SMB should be forced at a resolution of at least 23 km to properly capture the dynamic ice response. In addition, Monte Carlo style sampling analyses reveal that the areas with the largest uncertainty in mass flux are located near the equilibrium line altitude (ELA), upstream of major outlet glaciers in the north and west of the ice sheet. Sensitivity analysis indicates that these areas are also the most vulnerable on the ice sheet to persistent, far-field shifts in SMB, suggesting that continued warming, and an upstream shift of the ELA, are likely to result in increased velocities and, consequently, SMB-induced thinning upstream of major outlet glaciers.
Here, we extend our investigation to consider various components of the surface radiation budget separately, in order to determine how and where errors in these fields may independently impact ice flow. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere and Interdisciplinary Research in Earth Science Programs.

  10. Evaluating an educational intervention to improve the accuracy of death certification among trainees from various specialties

    PubMed Central

    Villar, Jesús; Pérez-Méndez, Lina

    2007-01-01

    Background: The inaccuracy of death certification can lead to the misallocation of resources in health care programs and research. We evaluated the rate of errors in the completion of death certificates among medical residents from various specialties, before and after an educational intervention designed to improve the accuracy of certification of the cause of death. Methods: A 90-min seminar was delivered to seven mixed groups of medical trainees (n = 166) from several health care institutions in Spain. Physicians were asked to read and anonymously complete the same case scenario of death certification before and after the seminar. We compared the rates of errors before and after the seminar to assess the impact of the educational intervention. Results: A total of 332 death certificates (166 completed before and 166 completed after the intervention) were audited. Death certificates were completed with errors by 71.1% of the physicians before the educational intervention. Following the seminar, the proportion of death certificates with errors decreased to 9% (p < 0.0001). The most common error in the completion of death certificates was the listing of the mechanism of death instead of the cause of death. Before the seminar, 56.8% listed respiratory or cardiac arrest as the immediate cause of death. None of the participants listed any mechanism of death after the educational intervention (p < 0.0001). Conclusion: Major errors in the completion of the correct cause of death on death certificates are common among medical residents. A simple educational intervention can dramatically improve the accuracy in the completion of death certificates by physicians. PMID:18005414

  11. Assessment of legibility and completeness of handwritten and electronic prescriptions

    PubMed Central

    Albarrak, Ahmed I; Al Rashidi, Eman Abdulrahman; Fatani, Rwaa Kamil; Al Ageel, Shoog Ibrahim; Mohammed, Rafiuddin

    2014-01-01

    Objectives To assess the legibility and completeness of handwritten prescriptions and compare with electronic prescription system for medication errors. Design Prospective study. Setting King Khalid University Hospital (KKUH), Riyadh, Saudi Arabia. Subjects and methods Handwritten prescriptions were received from clinical units of Medicine Outpatient Department (MOPD), Primary Care Clinic (PCC) and Surgery Outpatient Department (SOPD) whereas electronic prescriptions were collected from the pediatric ward. The handwritten prescription was assessed for completeness by the checklist designed according to the hospital prescription and evaluated for legibility by two pharmacists. The comparison between handwritten and electronic prescription errors was evaluated based on the validated checklist adopted from previous studies. Main outcome measures Legibility and completeness of prescriptions. Results 398 prescriptions (199 handwritten and 199 e-prescriptions) were assessed. About 71 (35.7%) of handwritten and 5 (2.5%) of electronic prescription errors were identified. A significant statistical difference (P < 0.001) was observed between handwritten and e-prescriptions in omitted dose and omitted route of administration category of error distribution. The rate of completeness in patient identification in handwritten prescriptions was 80.97% in MOPD, 76.36% in PCC and 85.93% in SOPD clinic units. Assessment of medication prescription completeness was 91.48% in MOPD, 88.48% in PCC, and 89.28% in SOPD. Conclusions This study revealed a high incidence of prescribing errors in handwritten prescriptions. The use of e-prescription system showed a significant decline in the incidence of errors. The legibility of handwritten prescriptions was relatively good whereas the level of completeness was very low. PMID:25561864

  12. Global Patterns of Legacy Nitrate Storage in the Vadose Zone

    NASA Astrophysics Data System (ADS)

    Ascott, M.; Gooddy, D.; Wang, L.; Stuart, M.; Lewis, M.; Ward, R.; Binley, A. M.

    2017-12-01

    Global-scale nitrogen (N) budgets have been developed to quantify the impact of man's influence on the nitrogen cycle. However, these budgets often do not consider legacy effects such as accumulation of nitrate in the deep vadose zone. In this presentation we show that the vadose zone is an important store of nitrate which should be considered in future nitrogen budgets for effective policymaking. Using estimates of depth to groundwater and nitrate leaching for 1900-2000, we quantify for the first time the peak global storage of nitrate in the vadose zone, estimated as 605 - 1814 Teragrams (Tg). Estimates of nitrate storage are validated using previous national and basin scale estimates of N storage and observed groundwater nitrate data for North America and Europe. Nitrate accumulation per unit area is greatest in North America, China and Central and Eastern Europe where thick vadose zones are present and there is an extensive history of agriculture. In these areas the long solute travel time in the vadose zone means that the anticipated impact of changes in agricultural practices on groundwater quality may be substantially delayed. We argue that in these areas use of conventional nitrogen budget approaches is inappropriate and their continued use will lead to significant errors.

  13. Environmental cost of using poor decision metrics to prioritize environmental projects.

    PubMed

    Pannell, David J; Gibson, Fiona L

    2016-04-01

    Conservation decision makers commonly use project-scoring metrics that are inconsistent with theory on optimal ranking of projects. As a result, there may often be a loss of environmental benefits. We estimated the magnitudes of these losses for various metrics that deviate from theory in ways that are common in practice. These metrics included cases where relevant variables were omitted from the benefits metric, project costs were omitted, and benefits were calculated using a faulty functional form. We estimated distributions of parameters from 129 environmental projects from Australia, New Zealand, and Italy for which detailed analyses had been completed previously. The cost of using poor prioritization metrics (in terms of lost environmental values) was often high, up to 80% in the scenarios we examined. The cost in percentage terms was greater when the budget was smaller. The most costly errors were omitting information about environmental values (up to 31% loss of environmental values), omitting project costs (up to 35% loss), omitting the effectiveness of management actions (up to 9% loss), and using a weighted-additive decision metric for variables that should be multiplied (up to 23% loss). The latter 3 are errors that occur commonly in real-world decision metrics, in combination often reducing potential benefits from conservation investments by 30-50%. Uncertainty about parameter values also reduced the benefits from investments in conservation projects but often not by as much as faulty prioritization metrics. © 2016 Society for Conservation Biology.
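
The ranking failure of a weighted-additive metric that also omits cost can be shown in a few lines. The projects, weights, and scores below are invented for illustration; they are not data from the study.

```python
def optimal_rank(projects):
    """Theory-consistent ranking: expected benefit (value x effectiveness)
    divided by cost, highest ratio first."""
    return sorted(projects,
                  key=lambda p: p["value"] * p["effect"] / p["cost"],
                  reverse=True)

def additive_rank(projects, w_value=0.5, w_effect=0.5):
    """Faulty metric: weighted sum of variables that should be multiplied,
    with project cost omitted entirely."""
    return sorted(projects,
                  key=lambda p: w_value * p["value"] + w_effect * p["effect"],
                  reverse=True)

# Two hypothetical projects: A has high value but low effectiveness and
# high cost; B delivers more benefit per dollar.
projects = [
    {"name": "A", "value": 9.0, "effect": 0.2, "cost": 5.0},
    {"name": "B", "value": 4.0, "effect": 0.9, "cost": 2.0},
]

best_optimal = optimal_rank(projects)[0]["name"]    # "B"
best_additive = additive_rank(projects)[0]["name"]  # "A"
```

Under a limited budget, funding by the additive score selects project A first and forgoes the larger per-dollar benefit of B, which is exactly the kind of loss the study quantifies.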

  14. Homogeneous Studies of Transiting Extrasolar Planets: Current Status and Future Plans

    NASA Astrophysics Data System (ADS)

    Taylor, John

    2011-09-01

    We now know of over 500 planets orbiting stars other than our Sun. The jewels in the crown are the transiting planets, for these are the only ones whose masses and radii are measurable. They are fundamental for our understanding of the formation, evolution, structure and atmospheric properties of extrasolar planets. However, their characterization is not straightforward, requiring extremely high-precision photometry and spectroscopy as well as input from theoretical stellar models. I summarize the motivation and current status of a project to measure the physical properties of all known transiting planetary systems using homogeneous techniques (Southworth 2008, 2009, 2010, 2011 in preparation). Careful attention is paid to the treatment of limb darkening, contaminating light, correlated noise, numerical integration, orbital eccentricity and orientation, systematic errors from theoretical stellar models, and empirical constraints. Complete error budgets are calculated for each system and can be used to determine which type of observation would be most useful for improving the parameter measurements. Known correlations between the orbital periods, masses, surface gravities, and equilibrium temperatures of transiting planets can be explored more safely due to the homogeneity of the properties. I give a sneak preview of Homogeneous Studies Paper 4, which includes the properties of thirty transiting planetary systems observed by the CoRoT, Kepler and Deep Impact space missions. Future opportunities are discussed, plus remaining problems with our understanding of transiting planets. I acknowledge funding from the UK STFC in the form of an Advanced Fellowship.
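
A complete error budget of this kind typically combines independent 1-sigma contributions in quadrature and flags the dominant term as the best target for new observations. The component names and values below are hypothetical, not taken from the project.

```python
import math

def error_budget(components):
    """Combine independent 1-sigma error components in quadrature and
    report the dominant term, i.e. the observation type whose improvement
    would most reduce the total uncertainty."""
    total = math.sqrt(sum(v ** 2 for v in components.values()))
    dominant = max(components, key=components.get)
    return total, dominant

# Hypothetical fractional uncertainties on a planet radius:
total, dominant = error_budget({
    "photometry": 0.020,       # transit light-curve fit
    "spectroscopy": 0.030,     # radial-velocity amplitude
    "stellar models": 0.050,   # systematic, theory-dependent
})
```

Here the stellar-model term dominates, so improving the light curve alone would barely shrink the total; that is the kind of conclusion a per-system error budget makes explicit.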

  15. Development of a Large Scale, High Speed Wheel Test Facility

    NASA Technical Reports Server (NTRS)

    Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc

    1996-01-01

    Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large-scale, high-speed wheel test facility. This facility was developed to perform experiments and carry out evaluations on levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel at peripheral speeds greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor, mounted to the base of the station, provides a signal of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer-controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests, carried out on a novel electrodynamic suspension (EDS) system developed by MIT as part of this joint effort, are described. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.

  16. The SF3M approach to 3-D photo-reconstruction for non-expert users: application to a gully network

    NASA Astrophysics Data System (ADS)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-04-01

    3-D photo-reconstruction (PR) techniques have been successfully used to produce high resolution elevation models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present in challenging scenarios. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-meters-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~ 49 h (of which only 17% required operator assistance) to obtain a final gully network model of > 17 million points over a gully plan area of 4230 m². We show that a walking itinerary along the gully perimeter using two light-weight automatic cameras (1 s time-lapse mode) and a 6 m-long pole is an efficient method for 3-D monitoring of gullies, at low cost (about EUR 1000 for the field equipment) and with modest time requirements (~ 90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations derived from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  17. Rigorous quantitative elemental microanalysis by scanning electron microscopy/energy dispersive x-ray spectrometry (SEM/EDS) with spectrum processing by NIST DTSA-II

    NASA Astrophysics Data System (ADS)

    Newbury, Dale E.; Ritchie, Nicholas W. M.

    2014-09-01

    Quantitative electron-excited x-ray microanalysis by scanning electron microscopy/silicon drift detector energy dispersive x-ray spectrometry (SEM/SDD-EDS) is capable of achieving high accuracy and high precision equivalent to that of the high spectral resolution wavelength dispersive x-ray spectrometer even when severe peak interference occurs. The throughput of the SDD-EDS enables high count spectra to be measured that are stable in calibration and resolution (peak shape) across the full deadtime range. With this high spectral stability, multiple linear least squares peak fitting is successful for separating overlapping peaks and spectral background. Careful specimen preparation is necessary to remove topography on unknowns and standards. The standards-based matrix correction procedure embedded in the NIST DTSA-II software engine returns quantitative results supported by a complete error budget, including estimates of the uncertainties from measurement statistics and from the physical basis of the matrix corrections. NIST DTSA-II is available free for Java platforms at http://www.cstl.nist.gov/div837/837.02/epq/dtsa2/index.html.

  18. Orbit and sampling requirements: TRMM experience

    NASA Technical Reports Server (NTRS)

    North, Gerald

    1993-01-01

    The Tropical Rainfall Measuring Mission (TRMM) concept originated in 1984. Its overall goal is to produce datasets that can be used in the improvement of general circulation models. A primary objective is a multi-year data stream of monthly averages of rain rate over 500 km boxes over the tropical oceans. Vertical distributions of the hydrometeors, related to latent heat profiles, and the diurnal cycle of rain rates are secondary products believed to be accessible. The mission is sponsored jointly by the U.S. and Japan. TRMM is an approved mission with launch set for 1997. There are many retrieval and ground truth issues still being studied for TRMM, but here we concentrate on sampling, since it is the single largest term in the error budget. The TRMM orbit plane is inclined by 35 degrees to the equator, which leads to a precession of the visits to a given grid box through the local hours of the day, requiring three to six weeks to complete the diurnal cycle, depending on latitude. For sampling studies we can consider the swath width to be about 700 km.
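    The quoted three-to-six-week figure follows from the slow drift of the orbit plane relative to the Sun. A back-of-the-envelope check using the standard J2 nodal-regression approximation (the ~350 km altitude and the Earth radius used here are assumed values, not taken from the abstract):

```python
import math

def nodal_precession_deg_per_day(alt_km, incl_deg, a_earth_km=6378.14):
    """J2 nodal regression rate for a circular orbit (standard approximation)."""
    a = a_earth_km + alt_km
    return -9.964 * (a_earth_km / a) ** 3.5 * math.cos(math.radians(incl_deg))

# TRMM-like orbit: ~350 km altitude, 35 degree inclination (illustrative numbers).
prec = nodal_precession_deg_per_day(350.0, 35.0)
sun_rate = 360.0 / 365.25            # mean solar motion, deg/day
drift = abs(prec - sun_rate)         # local-time drift of the orbit plane
days_full_cycle = 360.0 / drift      # days for the node to cross all 24 local hours
print(round(days_full_cycle, 1))
```

    This gives roughly 46 days for the orbit plane to drift through all 24 local hours; because ascending and descending passes sample local times about 12 hours apart, a grid box sees a full diurnal cycle in roughly half that, consistent with the abstract's three-to-six-week range.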

  19. View-Dependent Simplification of Arbitrary Polygonal Environments

    DTIC Science & Technology

    2006-01-01

    of backfacing nodes are not rendered [ Kumar 96]. 4.3 Triangle-Budget Simplification The screenspace error threshold and silhouette test allow the user...Greg Turk, and Dinesh Manocha for their invaluable guidance and support throughout this project. Funding for this work was provided by DARPA...Proceedings Visualization 95 , IEEE Computer Society Press (Atlanta, GA), 1995, pp. 296-303. [ Kumar 96] Kumar , Subodh, D. Manocha, W. Garrett, M. Lin

  20. LORAN-C LATITUDE-LONGITUDE CONVERSION AT SEA: PROGRAMMING CONSIDERATIONS.

    USGS Publications Warehouse

    McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.

    1985-01-01

    Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to help with new computer program checkout.

  1. Social Security Fraud and Error Prevention Act of 2014

    THOMAS, 113th Congress

    Rep. Becerra, Xavier [D-CA-34]

    2014-02-26

    House - 02/26/2014 Referred to the Committee on Ways and Means, and in addition to the Committee on the Budget, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned.

  2. 75 FR 24757 - Order Making Fiscal Year 2011 Annual Adjustments to the Fee Rates Applicable Under Section 6(b...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ... Management and Budget (``OMB'') to project aggregate offering price for purposes of the fiscal year 2010... methodology it developed in consultation with the CBO and OMB to project dollar volume for purposes of prior... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...

  3. 34 CFR 668.45 - Information on completion or graduation rates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Institutional and Financial Assistance Information for Students § 668.45 Information on completion or graduation rates. (a)(1... Management and Budget under control number 1845-0004) (Authority: 20 U.S.C. 1092) [74 FR 55944, Oct. 29, 2009] ...

  4. 7 CFR 273.21 - Monthly Reporting and Retrospective Budgeting (MRRB).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... recertification interview, the State agency shall provide the household with the following: (1) An oral... questions or to obtain help in completing the monthly report; and (6) Written explanations of this... State agency, a completed monthly report for the month in question shall be submitted by the household...

  5. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of the accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body, which requires a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and their integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, measurement models for flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  6. Simulation of streamflow, evapotranspiration, and groundwater recharge in the lower San Antonio River Watershed, South-Central Texas, 2000-2007

    USGS Publications Warehouse

    Lizarraga, Joy S.; Ockerman, Darwin J.

    2010-01-01

    The U.S. Geological Survey (USGS), in cooperation with the San Antonio River Authority, the Evergreen Underground Water Conservation District, and the Goliad County Groundwater Conservation District, configured, calibrated, and tested a watershed model for a study area consisting of about 2,150 square miles of the lower San Antonio River watershed in Bexar, Guadalupe, Wilson, Karnes, DeWitt, Goliad, Victoria, and Refugio Counties in south-central Texas. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge using rainfall, potential ET, and upstream discharge data obtained from National Weather Service meteorological stations and USGS streamflow-gaging stations. Additional time-series inputs to the model include wastewater treatment-plant discharges, withdrawals for cropland irrigation, and estimated inflows from springs. Model simulations of streamflow, ET, and groundwater recharge were done for 2000-2007. Because of the complexity of the study area, the lower San Antonio River watershed was divided into four subwatersheds; separate HSPF models were developed for each subwatershed. Simulation of the overall study area involved running simulations of the three upstream models, then running the downstream model. The surficial geology was simplified as nine contiguous water-budget zones to meet model computational limitations and also to define zones for which ET, recharge, and other water-budget information would be output by the model. The model was calibrated and tested using streamflow data from 10 streamflow-gaging stations; additionally, simulated ET was compared with measured ET from a meteorological station west of the study area. The model calibration is considered very good; streamflow volumes were calibrated to within 10 percent of measured streamflow volumes. 
During 2000-2007, the estimated annual mean rainfall for the water-budget zones ranged from 33.7 to 38.5 inches per year; the estimated annual mean rainfall for the entire watershed was 34.3 inches. Using the HSPF model it was estimated that for 2000-2007, less than 10 percent of the annual mean rainfall on the study watershed exited the watershed as streamflow, whereas about 82 percent, or an average of 28.2 inches per year, exited the watershed as ET. Estimated annual mean groundwater recharge for the entire study area was 3.0 inches, or about 9 percent of annual mean rainfall. Estimated annual mean recharge was largest in water-budget zone 3, the zone where the Carrizo Sand outcrops. In water-budget zone 3, the estimated annual mean recharge was 5.1 inches or about 15 percent of annual mean rainfall. Estimated annual mean recharge was smallest in water-budget zone 6, about 1.1 inches or about 3 percent of annual mean rainfall. The Cibolo Creek subwatershed and the subwatershed of the San Antonio River upstream from Cibolo Creek had the largest and smallest basin yields, about 4.8 inches and 1.2 inches, respectively. Estimated annual ET and annual recharge generally increased with increasing annual rainfall. Also, ET was larger in zones 8 and 9, the most downstream zones in the watershed. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error.

  7. Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell, M.

    2006-01-01

    Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
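    As a rough illustration of the Gaussian-smoothing basin filters mentioned above, the degree-dependent weights of the Gaussian averaging kernel commonly used in GRACE processing can be generated with Jekeli's three-term recursion (as popularized by Wahr et al.); the 500 km half-width radius and mean Earth radius below are assumed example values:

```python
import math

def gaussian_weights(radius_km, lmax, a_km=6371.0):
    """Spherical-harmonic degree weights for Gaussian smoothing (Jekeli's
    recursion); radius_km is the distance at which the kernel falls to half
    its peak value, a_km the mean Earth radius."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / a_km))
    w = [1.0,
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for l in range(1, lmax):
        # Three-term recursion; it is numerically unstable at high degree,
        # so keep lmax modest in this illustration.
        w.append(w[l - 1] - (2 * l + 1) / b * w[l])
    return w

w = gaussian_weights(500.0, 30)
print([round(x, 3) for x in (w[0], w[10], w[30])])
```

    The weights damp the noisy high-degree coefficients smoothly, which is exactly why leakage error appears: the truncated, smoothed kernel mixes in signal from outside the basin.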

  8. Error decomposition and estimation of inherent optical properties.

    PubMed

    Salama, Mhd Suhyb; Stein, Alfred

    2009-09-10

    We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
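    The bookkeeping behind such a decomposition, assuming the stochastic error sources are independent so their variances add, can be sketched in a few lines; the three relative errors below are made-up placeholders, not values from the paper:

```python
import math

# Illustrative (invented) relative standard errors for one derived IOP,
# mirroring the three sources named in the abstract.
errors = {
    "model_inversion": 0.12,
    "sensor_noise": 0.05,
    "atmospheric_correction": 0.18,
}

# Independent sources: variances add, so the total is the root-sum-square.
total = math.sqrt(sum(e ** 2 for e in errors.values()))

# Fractional contribution of each source to the total error budget (by variance).
contrib = {k: e ** 2 / total ** 2 for k, e in errors.items()}
print(f"total relative error: {total:.3f}")
for k, c in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"  {k}: {100 * c:.0f}% of variance")
```

    With these placeholder numbers the atmospheric correction dominates the budget, qualitatively matching the abstract's finding that atmospheric-induced errors are often the largest component.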

  9. 2 kWe Solar Dynamic Ground Test Demonstration Project. Volume 3; Fabrication and Test Report

    NASA Technical Reports Server (NTRS)

    Alexander, Dennis

    1997-01-01

    The Solar Dynamic Ground Test Demonstration (SDGTD) project has successfully designed and fabricated a complete solar-powered closed Brayton electrical power generation system and tested it in a relevant thermal vacuum facility at NASA Lewis Research Center (LeRC). In addition to completing technical objectives, the project was completed 3-1/2 months early, and under budget.

  10. 50th Anniversary of Radiation Budget Measurements from Satellites

    NASA Astrophysics Data System (ADS)

    Raschke, Ehrhard, Dr.; Kinne, Stefan, Dr.

    2010-05-01

    The "space race" between the USA and the Soviet Union supported rapid development of instruments to measure properties of the atmosphere from satellite platforms. The satellite Explorer 7 (launched on 13 October 1959) was the first to carry sensors sensitive to the fluxes of solar (shortwave) and terrestrial (longwave) radiation leaving the Earth for space. Improved versions of those sensors and more complicated radiometers were flown on various operational and experimental satellites of the Nimbus, ESSA, TIROS, COSMOS, and NOAA series. Their results, although often subject to strong sampling insufficiencies, already provided a general picture of the spatial distribution and seasonal variability of radiation budget components at the top of the atmosphere, which could later be refined with the more recent, more accurate, and more complete data sets of the ERBE, CERES, and ScaRaB experiments. Numerical analyses of climate data complemented such measurements to obtain a complete picture of the radiation budget at various levels within the atmosphere and at the ground. These data are now used to validate the performance of climate models.

  11. Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students

    PubMed Central

    Malone, Amelia S.; Fuchs, Lynn S.

    2016-01-01

    The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most (65% of) errors were due to students misapplying whole number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153

  12. Secondary Forest Age and Tropical Forest Biomass Estimation Using TM

    NASA Technical Reports Server (NTRS)

    Nelson, R. F.; Kimes, D. S.; Salas, W. A.; Routhier, M.

    1999-01-01

    The age of secondary forests in the Amazon will become more critical with respect to the estimation of biomass and carbon budgets as tropical forest conversion continues. Multitemporal Thematic Mapper data were used to develop land cover histories for a 33,000 square km area near Ariquemes, Rondonia, over a 7 year period from 1989-1995. The age of the secondary forest, a surrogate for the amount of biomass (or carbon) stored above-ground, was found to be unimportant in terms of biomass budget error rates in a forested TM scene that had undergone a 20% conversion to nonforest/agricultural cover types. In such a situation, the 80% of the scene still covered by primary forest accounted for over 98% of the scene biomass. The difference between secondary forest biomass estimates developed with and without age information was inconsequential relative to the estimate of biomass for the entire scene. However, in futuristic scenarios where all of the primary forest has been converted to agriculture and secondary forest (55% and 42%, respectively), the ability to age secondary forest becomes critical. Depending on biomass accumulation rate assumptions, scene biomass budget errors on the order of -10% to +30% are likely if the ages of the secondary forests are not taken into account. Single-date TM imagery cannot be used to accurately age secondary forests into single-year classes. A neural network utilizing TM band 2 and three TM spectral-texture measures (bands 3 and 5) predicted secondary forest age over a range of 0-7 years with an RMSE of 1.59 years and an R² (actual vs. predicted) of 0.37. A proposal is made, based on a literature review, to use satellite imagery to identify general secondary forest age groups which, within group, exhibit relatively constant biomass accumulation rates.

  13. Comparison of Surface Ground Temperature from Satellite Observations and the Off-Line Land Surface GEOS Assimilation System

    NASA Technical Reports Server (NTRS)

    Yang, R.; Houser, P.; Joiner, J.

    1998-01-01

    The surface ground temperature (Tg) is an important meteorological variable, because it represents an integrated thermal state of the land surface determined by a complex surface energy budget. Furthermore, Tg affects both the surface sensible and latent heat fluxes. Through these fluxes, the surface budget is coupled with the atmosphere above. Accurate Tg data are useful for estimating the surface radiation budget and fluxes, as well as soil moisture. Tg is not included in conventional synoptic weather station reports. Currently, satellites provide Tg estimates globally. It is necessary to carefully consider appropriate methods of using these satellite data in a data assimilation system. Recently, an Off-line Land surface GEOS Assimilation (OLGA) system was implemented at the Data Assimilation Office at NASA-GSFC. One of the goals of OLGA is to assimilate satellite-derived Tg data. Prior to the Tg assimilation, a thorough investigation of satellite- and model-derived Tg, including error estimates, is required. In this study we examine the Tg from the International Satellite Cloud Climatology Project (ISCCP D1) data and the OLGA simulations. The ISCCP data used here are 3-hourly D1 data (2.5 x 2.5 degree resolution) for the 1992 summer months (June, July, and August) and winter months (January and February). The model Tg for the same periods were generated by OLGA. The forcing data for this OLGA 1992 simulation were generated from the GEOS-1 Data Assimilation System (DAS) at the Data Assimilation Office, NASA-GSFC. We examine the discrepancies between ISCCP and OLGA Tg with a focus on their spatial and temporal characteristics, particularly the diurnal cycle. The error statistics in both data sets, including bias, will be estimated. The impact of surface properties, including vegetation cover and type, topography, etc., on the discrepancies will be addressed.

  14. Application of probabilistic modelling for the uncertainty evaluation of alignment measurements of large accelerator magnets assemblies

    NASA Astrophysics Data System (ADS)

    Doytchinov, I.; Tonnellier, X.; Shore, P.; Nicquevert, B.; Modena, M.; Mainaud Durand, H.

    2018-05-01

    Micrometric assembly and alignment requirements for future particle accelerators, and especially for large assemblies, create the need for accurate uncertainty budgeting of alignment measurements. Measurements and uncertainties have to be accurately stated and traceable to international standards, at the level of tens of µm for metre-long assemblies. Indeed, these hundreds of assemblies will be produced and measured by several suppliers around the world, and will have to be integrated into a single machine. As part of the PACMAN project at CERN, we proposed and studied a practical application of probabilistic modelling of task-specific alignment uncertainty by applying a simulation-by-constraints calibration method. Using this method, we calibrated our measurement model using available data from ISO standardised tests (the 10360 series) for the metrology equipment. We combined this model with reference measurements and analysis of the measured data to quantify the actual specific uncertainty of each alignment measurement procedure. Our methodology was successfully validated against a calibrated and traceable 3D artefact as part of an international inter-laboratory study. The validated models were used to study the expected alignment uncertainty and important sensitivity factors in measuring the shortest and longest of the Compact Linear Collider study assemblies, 0.54 m and 2.1 m long respectively. In both cases, the laboratory alignment uncertainty was within the targeted uncertainty budget of 12 µm (68% confidence level). It was found that the remaining uncertainty budget for any additional alignment error compensations, such as the thermal drift error due to variation in machine operation heat-load conditions, must be within 8.9 µm and 9.8 µm (68% confidence level), respectively.
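    The arithmetic behind a "remaining uncertainty budget" follows from combining independent error sources in quadrature. A minimal sketch; the 8.06 µm laboratory uncertainty below is back-solved for illustration, since the abstract states only the 12 µm total budget and the ~8.9 µm remainder:

```python
import math

def remaining_budget(total_um, consumed_um):
    """Remaining allowance for further independent error sources when the
    total budget and the already-consumed uncertainty combine in quadrature
    (root-sum-square)."""
    if consumed_um > total_um:
        raise ValueError("consumed uncertainty already exceeds the budget")
    return math.sqrt(total_um ** 2 - consumed_um ** 2)

# With a 12 µm total budget (68% confidence), a laboratory alignment
# uncertainty of ~8.06 µm (assumed here) leaves roughly 8.9 µm for
# additional compensations such as thermal drift errors.
print(round(remaining_budget(12.0, 8.06), 1))
```

    The quadratic combination is why a budget is not consumed linearly: spending two thirds of it on the measurement still leaves nearly three quarters of it for other independent sources.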

  15. A comparison of advanced overlay technologies

    NASA Astrophysics Data System (ADS)

    Dasari, Prasad; Smith, Nigel; Goelzer, Gary; Liu, Zhuan; Li, Jie; Tan, Asher; Koh, Chin Hwee

    2010-03-01

    The extension of optical lithography to 22nm and beyond by Double Patterning Technology is often challenged by CDU and overlay control. With reduced overlay measurement error budgets in the sub-nm range, relying on traditional Total Measurement Uncertainty (TMU) estimates alone is no longer sufficient. In this paper we report scatterometry overlay measurement data from a set of twelve test wafers, using four different target designs. The TMU of these measurements is under 0.4nm, within the process control requirements for the 22nm node. Comparing the measurement differences between DBO targets (using empirical and model-based analysis) and against image-based overlay data indicates the presence of systematic and random measurement errors that exceed the TMU estimate.

  16. Evaluation of Water Year 2011 Glen Canyon Dam Flow Release Scenarios on Downstream Sand Storage along the Colorado River in Arizona

    USGS Publications Warehouse

    Wright, Scott A.; Grams, Paul E.

    2010-01-01

    This report describes numerical modeling simulations of sand transport and sand budgets for reaches of the Colorado River below Glen Canyon Dam. Two hypothetical Water Year 2011 annual release volumes were each evaluated with six hypothetical operational scenarios. The six operational scenarios include the current operation, scenarios with modifications to the monthly distribution of releases, and scenarios with modifications to daily flow fluctuations. Uncertainties in model predictions were evaluated by conducting simulations with error estimates for tributary inputs and mainstem transport rates. The modeling results illustrate the dependence of sand transport rates and sand budgets on the annual release volumes as well as the within year operating rules. The six operational scenarios were ranked with respect to the predicted annual sand budgets for Marble Canyon and eastern Grand Canyon reaches. While the actual WY 2011 annual release volume and levels of tributary inputs are unknown, the hypothetical conditions simulated and reported herein provide reasonable comparisons between the operational scenarios, in a relative sense, that may be used by decision makers within the Glen Canyon Dam Adaptive Management Program.

  17. Developing an Earth system Inverse model for the Earth's energy and water budgets.

    NASA Astrophysics Data System (ADS)

    Haines, K.; Thomas, C.; Liu, C.; Allan, R. P.; Carneiro, D. M.

    2017-12-01

    The CONCEPT-Heat project aims at developing a consistent energy budget for the Earth system in order to better understand and quantify global change. We advocate a variational "Earth system inverse" solution as the best methodology to bring the necessary expertise from different disciplines together. L'Ecuyer et al (2015) and Rodell et al (2015) first used a variational approach to adjust multiple satellite data products for air-sea-land vertical fluxes of heat and freshwater, achieving closed budgets on a regional and global scale. However their treatment of horizontal energy and water redistribution and its uncertainties was limited. Following the recent work of Liu et al (2015, 2017) which used atmospheric reanalysis convergences to derive a new total surface heat flux product from top of atmosphere fluxes, we have revisited the variational budget approach introducing a more extensive analysis of the role of horizontal transports of heat and freshwater, using multiple atmospheric and ocean reanalysis products. We find considerable improvements in fluxes in regions such as the North Atlantic and Arctic, for example requiring higher atmospheric heat and water convergences over the Arctic than given by ERA-Interim, thereby allowing lower and more realistic oceanic transports. We explore using the variational uncertainty analysis to produce lower resolution corrections to higher resolution flux products and test these against in situ flux data. We also explore the covariance errors implied between component fluxes that are imposed by the regional budget constraints. Finally we propose this as a valuable methodology for developing consistent observational constraints on the energy and water budgets in climate models. We take a first look at the same regional budget quantities in CMIP5 models and consider the implications of the differences for the processes and biases active in the models. 
Many further avenues of investigation are possible focused on better valuing the uncertainties in observational flux products and setting requirement targets for future observation programs.
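The variational adjustment at the heart of this approach can be sketched, under simplifying assumptions, as a constrained weighted least-squares problem: adjust the a priori flux estimates as little as possible, measured in units of their uncertainties, while forcing the budget to close exactly. The fluxes, uncertainties, and single closure constraint below are purely illustrative.

```python
import numpy as np

def adjust_fluxes(f, sigma, A, b):
    """Minimally adjust flux estimates f (one-sigma uncertainties sigma)
    so that the linear closure constraints A @ x = b hold exactly.
    Terms with larger uncertainty absorb more of the imbalance."""
    C = np.diag(sigma ** 2)                       # diagonal error covariance
    lam = np.linalg.solve(A @ C @ A.T, b - A @ f) # Lagrange multipliers
    return f + C @ A.T @ lam

# Toy example: three regional fluxes whose sum should close to zero.
f = np.array([10.0, -4.0, -3.0])   # a priori fluxes (W m^-2); imbalance = 3
sigma = np.array([1.0, 2.0, 2.0])  # one-sigma uncertainties
A = np.ones((1, 3))                # closure constraint: f1 + f2 + f3 = 0
x = adjust_fluxes(f, sigma, A, np.zeros(1))
print(x, x.sum())                  # adjusted fluxes now sum to ~0
```

The well-constrained first flux moves little while the uncertain ones take up most of the correction, which is the behavior the uncertainty analysis exploits.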

  18. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field-of-view (IFOV) scene-type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
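The pseudo-inverse unfiltering step can be illustrated with a toy model; the 3x2 response matrix below is invented for illustration, not the actual ERBE channel responses. The three filtered channel readings are a near-identity linear mixing of the shortwave and longwave components, and the least-squares pseudo-inverse recovers those components.

```python
import numpy as np

# Hypothetical spectral-response matrix: rows = SW, LW, "total" channels;
# columns = true shortwave and longwave radiance components. Off-diagonal
# entries model the nonideal (leaky) spectral response described above.
A = np.array([[0.95, 0.04],
              [0.03, 0.98],
              [0.99, 1.01]])

def unfilter(measurements):
    """Least-squares (pseudo-inverse) estimate of the SW/LW radiance
    components from the three filtered channel measurements."""
    return np.linalg.pinv(A) @ measurements

true_r = np.array([250.0, 180.0])   # W m^-2 sr^-1, illustrative values
m = A @ true_r                      # noise-free channel readings
print(unfilter(m))                  # recovers ~[250, 180]
```

With three measurements of two unknowns the system is overdetermined, which is what makes the estimate tolerant of modest noise in any single channel.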

  19. Advancing Technology for Starlight Suppression via an External Occulter

    NASA Technical Reports Server (NTRS)

    Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.; hide

    2011-01-01

External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for a coronagraph of equivalent performance. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date in building the prototype petal.
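The suballocation idea can be sketched in a few lines; the error terms and numbers below are invented for illustration, not the mission's actual budget, and the contributions are assumed here to add linearly in intensity (contrast).

```python
# Illustrative suballocation: each error source is allotted a share of the
# allowable contrast degradation, and the shares must fit within the total
# allocation, with whatever remains held back as reserve.
allocation = 1e-10                     # total allowable contrast degradation
terms = {
    "petal manufacturing": 4e-11,      # hypothetical allotments
    "thermal deformation": 2e-11,
    "deployment repeatability": 3e-11,
}
used = sum(terms.values())
reserve = allocation - used
print(f"used={used:.1e}, reserve={reserve:.1e}")
assert used <= allocation              # budget must close with margin
```

Tolerances for each mechanism are then derived by working its allotment backward through the scatter model.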

  20. EUV local CDU healing performance and modeling capability towards 5nm node

    NASA Astrophysics Data System (ADS)

    Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn

    2017-10-01

Both local variability and optical proximity correction (OPC) errors are major contributors to the edge placement error (EPE) budget, which is closely related to device yield. Post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low-dose (30mJ/cm2 dose-to-size) positive tone developed (PTD) resist with throughput relevant to high volume manufacturing (HVM). The total local variability of node 5nm (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. Because the CD healing process has complex etch proximity effects, it is challenging for OPC prediction accuracy to meet the N5 EPE requirements. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using the ASML Tachyon OPC model.

  1. Space shuttle entry and landing navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Crawford, B. S.

    1974-01-01

An aided-inertial navigation system for the entry phase of a Space Shuttle mission, which uses a Kalman filter to mix IMU data with data derived from external navigation aids, is evaluated. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end of blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
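The aided-inertial structure can be illustrated with a deliberately tiny filter (two states rather than the truth model's 101; all noise parameters are invented): IMU acceleration propagates the state, and an external position aid corrects it through the Kalman gain.

```python
import numpy as np

def kalman_step(x, P, accel, z, dt, q=0.1, r=25.0):
    """One cycle of a minimal aided-inertial filter: propagate a
    [position, velocity] state with IMU acceleration, then update
    with an external position fix z. q, r are illustrative noise levels."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel   # IMU propagation
    P = F @ P @ F.T + q * np.eye(2)                   # add process noise
    H = np.array([[1.0, 0.0]])                        # aid measures position
    K = P @ H.T / (H @ P @ H.T + r)                   # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                 # measurement update
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Stationary vehicle at position 50; filter starts ignorant.
x, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(20):
    x, P = kalman_step(x, P, accel=0.0, z=50.0, dt=1.0)
print(x[0])   # estimate converges toward the 50-unit fix
```

An error budget of the kind described above comes from running such a filter against a truth model and attributing the residual error to each perturbed source in turn.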

  2. Cognitive Deficits Underlying Error Behavior on a Naturalistic Task after Severe Traumatic Brain Injury

    PubMed Central

    Hendry, Kathryn; Ownsworth, Tamara; Beadle, Elizabeth; Chevignard, Mathilde P.; Fleming, Jennifer; Griffin, Janelle; Shum, David H. K.

    2016-01-01

    People with severe traumatic brain injury (TBI) often make errors on everyday tasks that compromise their safety and independence. Such errors potentially arise from the breakdown or failure of multiple cognitive processes. This study aimed to investigate cognitive deficits underlying error behavior on a home-based version of the Cooking Task (HBCT) following TBI. Participants included 45 adults (9 females, 36 males) with severe TBI aged 18–64 years (M = 37.91, SD = 13.43). Participants were administered the HBCT in their home kitchens, with audiovisual recordings taken to enable scoring of total errors and error subtypes (Omissions, Additions, Estimations, Substitutions, Commentary/Questions, Dangerous Behavior, Goal Achievement). Participants also completed a battery of neuropsychological tests, including the Trail Making Test, Hopkins Verbal Learning Test-Revised, Digit Span, Zoo Map test, Modified Stroop Test, and Hayling Sentence Completion Test. After controlling for cooking experience, greater Omissions and Estimation errors, lack of goal achievement, and longer completion time were significantly associated with poorer attention, memory, and executive functioning. These findings indicate that errors on naturalistic tasks arise from deficits in multiple cognitive domains. Assessment of error behavior in a real life setting provides insight into individuals' functional abilities which can guide rehabilitation planning and lifestyle support. PMID:27790099

  3. The Effect of an Electronic Checklist on Critical Care Provider Workload, Errors, and Performance.

    PubMed

    Thongprayoon, Charat; Harrison, Andrew M; O'Horo, John C; Berrios, Ronaldo A Sevilla; Pickering, Brian W; Herasevich, Vitaly

    2016-03-01

The strategy used to improve effective checklist use in the intensive care unit (ICU) setting is essential for checklist success. This study aimed to test the hypothesis that an electronic checklist could reduce ICU provider workload, errors, and time to checklist completion, as compared to a paper checklist. This was a simulation-based study conducted at an academic tertiary hospital. All participants completed checklists for 6 ICU patients: 3 using an electronic checklist and 3 using an identical paper checklist. In both scenarios, participants had full access to the existing electronic medical record system. The outcomes measured were workload (defined using the National Aeronautics and Space Administration Task Load Index [NASA-TLX]), the number of checklist errors, and time to checklist completion. Two independent clinician reviewers, blinded to participant results, served as the reference standard for checklist error calculation. Twenty-one ICU providers participated in this study. This resulted in the generation of 63 simulated electronic checklists and 63 simulated paper checklists. The median NASA-TLX score was 39 for the electronic checklist and 50 for the paper checklist (P = .005). The median number of checklist errors for the electronic checklist was 5, while the median number of checklist errors for the paper checklist was 8 (P = .003). The time to checklist completion was not significantly different between the 2 checklist formats (P = .76). The electronic checklist significantly reduced provider workload and errors without any measurable difference in the amount of time required for checklist completion. This demonstrates that electronic checklists are feasible and desirable in the ICU setting. © The Author(s) 2014.

  4. Overlay improvement by exposure map based mask registration optimization

    NASA Astrophysics Data System (ADS)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes like 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process node. [4][5] Mask registration optimization would accordingly improve wafer overlay performance. It was reported that a laser-based registration control (RegC) process could be applied after pattern generation or after pellicle mounting, allowing fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept low, the 3-sigma in-die mask registration residual error stays below 5nm regardless of the blank type and the writer's POSCOR (position correction) file; this indicates that random error induced by material or equipment occupies a relatively fixed share of the mask registration error budget. In production, comparing mask registration across critical layers reveals that the registration residual error of line/space layers with higher pattern density is consistently much larger than that of contact-hole layers with lower pattern density.
Additionally, the mask registration difference between layers with similar pattern density can also be kept below 5nm. We assume that mask registration error, excluding the random component, is mostly induced by charge accumulation during mask writing, which may be calculated from the surrounding exposed pattern density. A multi-loading test shows that with an x-direction writing sequence, mask registration behavior in x is mainly related to the sequence direction, while registration in y is strongly impacted by the pattern density distribution map. This confirms that part of the mask registration error is due to charging from the nearby environment. If the exposure sequence is chip by chip, as in a typical multi-chip layout, mask registration in both x and y is impacted analogously, which has also been confirmed by production data. Therefore, we set up a simple model to predict the mask registration error based on the mask exposure map, and correct it with the given POSCOR file before advanced mask writing if needed.
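A minimal version of such an exposure-map model might look like the following sketch, assuming (as the abstract hypothesizes) that the local shift scales with the surrounding exposed pattern density; the window size and coefficient k are hypothetical and would be fitted to multi-loading test data.

```python
import numpy as np

def density_map(exposure, window=3):
    """Local exposed-pattern density: mean of the 0/1 exposure map over a
    (2*window+1)^2 neighborhood, as a crude proxy for accumulated charge."""
    pad = np.pad(exposure, window, mode="edge")
    h, w = exposure.shape
    out = np.empty_like(exposure, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2*window + 1, j:j + 2*window + 1].mean()
    return out

# Hypothetical linear charging model: registration shift (nm) proportional
# to local density; k would be fitted from measured registration data.
k = 6.0
exposure = np.zeros((20, 20))
exposure[:, :10] = 1.0                 # dense left half, empty right half
shift = k * density_map(exposure)
print(shift[10, 2], shift[10, 17])     # dense region shifts more
```

A correction of this kind would be negated and folded into the POSCOR file before writing.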

  5. The Relationship between Earned Value Management Metrics and Customer Satisfaction

    ERIC Educational Resources Information Center

    Plumer, David R.

    2010-01-01

    Information Technology (IT) products have a high rate of failure. Only 25% of IT projects were completed within budget and schedule, and 15% of completed projects were not operational. Researchers have not investigated the success of project management systems from the perspective of customer satisfaction. In this quantitative study, levels of…

  6. 34 CFR 263.10 - What are the payback reporting requirements?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for payments. (Approved by the Office of Management and Budget under control number 1810-0580... notice of intent to complete a work-related or cash payback, or to continue in a degree program as a full..., but cannot complete, a work-related payback, the payback reverts to a cash payback that is prorated...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Amanda S.; Brosha, Eric

    This is the second progress report on the demonstration of a prototype hydrogen sensor and electronics package. It goes into detail about the five tasks, four of which are already completed as of August 2016, with the final to be completed by January 26, 2017. Then the budget is detailed along with the planned work for May 27, 2016 to July 27, 2016.

  8. 7 CFR 273.21 - Monthly Reporting and Retrospective Budgeting (MRRB).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... interview, the State agency shall provide the household with the following: (1) An oral explanation of the... questions or to obtain help in completing the monthly report; and (6) Written explanations of this... State agency, a completed monthly report for the month in question shall be submitted by the household...

  9. Evaluation of the land surface water budget in NCEP/NCAR and NCEP/DOE reanalyses using an off-line hydrologic model

    NASA Astrophysics Data System (ADS)

    Maurer, Edwin P.; O'Donnell, Greg M.; Lettenmaier, Dennis P.; Roads, John O.

    2001-08-01

The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2) to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model, and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow, and by closure is constrained to balance the other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988-1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent exists. NRA1 is subject to errors in the surface water budget due to nudging of modeled soil moisture to an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of persistence of surface water anomalies in much of the basin.
Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than the coupled models. However, the persistence of the residual evapotranspiration is much less than is predicted by the hydrological model or the climate models.
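The closure logic behind inferring evapotranspiration as a budget residual is simple arithmetic; with illustrative monthly values (not data from the study):

```python
# Monthly surface water budget residual (illustrative numbers, mm/month):
# ET = P - R - dS, i.e. evapotranspiration inferred by closure from
# precipitation P, runoff R, and storage change dS.
P  = [90, 70, 110, 60]     # precipitation
R  = [30, 25, 40, 20]      # streamflow (runoff)
dS = [10, -5, 15, -10]     # soil moisture + snow storage change

ET = [p - r - ds for p, r, ds in zip(P, R, dS)]
print(ET)   # → [50, 50, 55, 50]
```

The same closure can be written from the atmospheric side (ET = moisture flux divergence plus precipitable-water change plus P), which is how the residual estimate compared against VIC is obtained.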

  10. AFGL Atmospheric Constituent Profiles (0.120km)

    DTIC Science & Technology

    1986-05-15

compilations and (d) individual constituents. Each species is followed by the set of journal references which contributed either directly or indirectly to...enced materials; those publications that can be associated with particular molecules are so identified. 3. ERROR ESTIMATES/VARIABILITY The practical...budgets, J. Geophys. Res., 88, 10785-10807. [NO, NO2, HNO3, NO3] Louisnard, N., Fergant, G., Girard, A., Gramont, L., Lado-Bordowsky, O., Laurent, J

  11. Descriptive Summaries of the Research, Development, Test and Evaluation, Army Appropriation. Supporting Data FY 1994, Budget Estimates Submitted to Congress, April 1993

    DTIC Science & Technology

    1993-04-01

determining effective group functioning, leader-group interaction, and decision making; (2) factors that determine effective, low-error human performance...infectious disease and biological defense vaccines and drugs, vision, neurotoxins, neurochemistry, molecular neurobiology, neurodegenerative diseases...Potential Rotor/Comprehensive Analysis Model for Rotor Aerodynamics-Johnson Aeronautics (FPR/CAMRAD-JA) code to predict Blade Vortex Interaction (BVI

  12. JASMINE: Data analysis and simulation

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Sako, Nobutada; Jasmine Working Group

JASMINE will study the structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions, and proper motions of about 10 million stars with a precision of 10 μas at z = 14 mag. In this paper, methods for data analysis and error budgets, on-board data handling such as sampling strategy and data compression, and simulation software for end-to-end simulation are presented.

  13. In Situ Metrology for the Corrective Polishing of Replicating Mandrels

    DTIC Science & Technology

    2010-06-08

distribution is unlimited. 13. SUPPLEMENTARY NOTES Presented at Mirror Technology Days, Boulder, Colorado, USA, 7-9 June 2010. 14. ABSTRACT The International X-ray Observatory (IXO) will require mandrel metrology with extremely tight tolerances on mirrors with up to 1.6 meter radii...ideal. Error budgets for the IXO mirror segments are presented. A potential solution is presented that uses a voice-coil controlled gauging head, air

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Izaurralde, Roberto C.; Manowitz, David H.

Accurate quantification and clear understanding of regional scale cropland carbon (C) cycling is critical for designing effective policies and management practices that can contribute toward stabilizing atmospheric CO2 concentrations. However, extrapolating site-scale observations to regional scales represents a major challenge confronting the agricultural modeling community. This study introduces a novel geospatial agricultural modeling system (GAMS) exploring the integration of the mechanistic Environmental Policy Integrated Climate model, spatially-resolved data, surveyed management data, and supercomputing functions for cropland C budget estimates. This modeling system creates spatially-explicit modeling units at a spatial resolution consistent with remotely-sensed crop identification and assigns cropping systems to each of them by geo-referencing surveyed crop management information at the county or state level. A parallel computing algorithm was also developed to facilitate the computationally intensive model runs and output post-processing and visualization. We evaluated GAMS against National Agricultural Statistics Service (NASS) reported crop yields and inventory estimated county-scale cropland C budgets averaged over 2000–2008. We observed good overall agreement, with spatial correlation of 0.89, 0.90, 0.41, and 0.87, for crop yields, Net Primary Production (NPP), Soil Organic C (SOC) change, and Net Ecosystem Exchange (NEE), respectively. However, we also detected notable differences in the magnitude of NPP and NEE, as well as in the spatial pattern of SOC change. By performing crop-specific annual comparisons, we discuss possible explanations for the discrepancies between GAMS and the inventory method, such as data requirements, representation of agroecosystem processes, completeness and accuracy of crop management data, and accuracy of crop area representation.
Based on these analyses, we further discuss strategies to improve GAMS by updating input data and by designing more efficient parallel computing capability to quantitatively assess errors associated with the simulation of C budget components. The modularized design of the GAMS makes it flexible to be updated and adapted for different agricultural models so long as they require similar input data, and to be linked with socio-economic models to understand the effectiveness and implications of diverse C management practices and policies.

  15. Pupillometry and Saccades as Objective mTBI Biomarker

    DTIC Science & Technology

    2016-10-01

    INTRODUCTION: The DOD reported that 333,169 cases of traumatic brain injury (TBI) were confirmed since 2000, with mild TBI (mTBI) accounting for 82.4...Complete final report: NO INITIATED (A 6-month No Cost Extension was approved to complete data analysis and manuscript writing.) KEY RESEARCH...Staff hiring issues delayed study completion; however a 6-month No Cost extension was approved Budget Expenditure to Date Projected Expenditure

  16. Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument

    NASA Astrophysics Data System (ADS)

    Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory

    2014-10-01

The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua, and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors grow with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from blurring, i.e. the smoothing of features that are the size of the footprint and smaller, and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
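The blurring half of this picture can be sketched with an idealized boxcar footprint, whose transfer function is a sinc; the footprint and wavelength values below are illustrative, not the actual CERES point response.

```python
import numpy as np

def boxcar_mtf(footprint_km, wavelength_km):
    """Transfer function of an ideal boxcar footprint: the Fourier
    transform of the point response is a sinc, so features approaching
    the footprint size are strongly attenuated (blurring)."""
    x = footprint_km / wavelength_km
    return np.abs(np.sinc(x))          # np.sinc(x) = sin(pi*x)/(pi*x)

# A nadir footprint of ~20 km vs. a limb-ward footprint grown to ~100 km,
# both viewing a 200 km wavelength feature:
for fp in (20.0, 100.0):
    print(fp, boxcar_mtf(fp, wavelength_km=200.0))
```

At 200 km wavelength the small footprint passes nearly the full amplitude while the limb-ward footprint passes only about 64%, which is why resolvable feature size grows with nadir angle; energy removed near the sampling limit reappears as aliasing when the data spacing is too coarse.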

  17. Estimation of surface heat and moisture fluxes over a prairie grassland. I - In situ energy budget measurements incorporating a cooled mirror dew point hygrometer

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Crosson, William L.; Tanner, Bertrand D.

    1992-01-01

    Attention is focused on in situ measurements taken during FIFE required to support the development and validation of a biosphere model. Seasonal time series of surface flux measurements obtained from two surface radiation and energy budget stations utilized to support the FIFE surface flux measurement subprogram are examined. Data collection and processing procedures are discussed along with the measurement analysis for the complete 1987 test period.

  18. 24 CFR 905.510 - Submission requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... submitted by this part: Capital fund financing budget, management assessment, fairness opinion, and physical needs assessment. (5) Financing documents. The PHA must submit a complete set of the legal documents...

  19. 24 CFR 905.510 - Submission requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... submitted by this part: Capital fund financing budget, management assessment, fairness opinion, and physical needs assessment. (5) Financing documents. The PHA must submit a complete set of the legal documents...

  20. 24 CFR 905.510 - Submission requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... submitted by this part: Capital fund financing budget, management assessment, fairness opinion, and physical needs assessment. (5) Financing documents. The PHA must submit a complete set of the legal documents...

  1. Modern Era Retrospective-analysis for Research and Applications (MERRA) Global Water and Energy Budgets

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, Franklin R.; Chen, Junye

    2008-01-01

The Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis has produced several years of data, on the way to completing the 1979-present modern satellite era. Here, we present a preliminary evaluation of the years currently available, including comparisons with the existing long reanalyses (ERA40, JRA25, and NCEP I and II) as well as with global data sets for the water and energy cycle. Time series show that the MERRA budgets can change with some of the variations in observing systems. We will present all terms of the budgets in MERRA, including the time rates of change and the analysis increments (the tendency due to the analysis of observations).

  2. The CMIP6 Sea-Ice Model Intercomparison Project (SIMIP): Understanding sea ice through climate-model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Notz, Dirk; Jahn, Alexandra; Holland, Marika

A better understanding of the role of sea ice for the changing climate of our planet is the central aim of the diagnostic Coupled Model Intercomparison Project 6 (CMIP6)-endorsed Sea-Ice Model Intercomparison Project (SIMIP). To reach this aim, SIMIP requests sea-ice-related variables from climate-model simulations that allow for a better understanding and, ultimately, improvement of biases and errors in sea-ice simulations with large-scale climate models. This then allows us to better understand to what degree CMIP6 model simulations relate to reality, thus improving our confidence in answering sea-ice-related questions based on these simulations. Furthermore, the SIMIP protocol provides a standard for sea-ice model output that will streamline and hence simplify the analysis of the simulated sea-ice evolution in research projects independent of CMIP. To reach its aims, SIMIP provides a structured list of model output that allows for an examination of the three main budgets that govern the evolution of sea ice, namely the heat budget, the momentum budget, and the mass budget. Furthermore, we explain the aims of SIMIP in more detail and outline how its design allows us to answer some of the most pressing questions that sea ice still poses to the international climate-research community.

  3. The CMIP6 Sea-Ice Model Intercomparison Project (SIMIP): Understanding sea ice through climate-model simulations

    DOE PAGES

    Notz, Dirk; Jahn, Alexandra; Holland, Marika; ...

    2016-09-23

A better understanding of the role of sea ice for the changing climate of our planet is the central aim of the diagnostic Coupled Model Intercomparison Project 6 (CMIP6)-endorsed Sea-Ice Model Intercomparison Project (SIMIP). To reach this aim, SIMIP requests sea-ice-related variables from climate-model simulations that allow for a better understanding and, ultimately, improvement of biases and errors in sea-ice simulations with large-scale climate models. This then allows us to better understand to what degree CMIP6 model simulations relate to reality, thus improving our confidence in answering sea-ice-related questions based on these simulations. Furthermore, the SIMIP protocol provides a standard for sea-ice model output that will streamline and hence simplify the analysis of the simulated sea-ice evolution in research projects independent of CMIP. To reach its aims, SIMIP provides a structured list of model output that allows for an examination of the three main budgets that govern the evolution of sea ice, namely the heat budget, the momentum budget, and the mass budget. Furthermore, we explain the aims of SIMIP in more detail and outline how its design allows us to answer some of the most pressing questions that sea ice still poses to the international climate-research community.

  4. High-frequency variations in Earth rotation and the planetary momentum budget

    NASA Technical Reports Server (NTRS)

    Rosen, Richard D.

    1995-01-01

The major focus of the subject contract was on helping to resolve one of the more notable discrepancies still existing in the axial momentum budget of the solid Earth-atmosphere system, namely the disappearance of coherence between length-of-day (l.o.d.) and atmospheric angular momentum (AAM) at periods shorter than about a fortnight. Recognizing the importance of identifying the source of the high-frequency momentum budget anomaly, the scientific community organized two special measurement campaigns (SEARCH '92 and CONT '94) to obtain the best possible determinations of l.o.d. and AAM. An additional goal was to analyze newly developed estimates of the torques that transfer momentum between the atmosphere and its underlying surface to determine whether the ocean might be a reservoir of momentum on short time scales. Discrepancies between AAM and l.o.d. at sub-fortnightly periods have been attributed to either measurement errors in these quantities or the need to incorporate oceanic angular momentum into the planetary budget. Results from the SEARCH '92 and CONT '94 campaigns suggest that when special attention is paid to the quality of the measurements, better agreement between l.o.d. and AAM at high frequencies can be obtained. The mechanism most responsible for the high-frequency changes observed in AAM during these campaigns involves a direct coupling to the solid Earth, i.e., the mountain torque, thereby obviating a significant oceanic role.

  5. Development of Ocean Noise "Budgets"

    NASA Astrophysics Data System (ADS)

    D'Spain, G. L.; Miller, J. H.; Frisk, G. V.; Bradley, D. L.

    2003-12-01

    The National Oceanographic Partnership Program recently sponsored the third U.S. National Academy of Sciences study on the potential impact of manmade sound on the marine environment. Several recommendations for future research are made by the 11-member committee in their report titled Ocean Noise and Marine Mammals (National Academies Press, 2003). This presentation will focus on the subset of recommendations related to a "noise budget", i.e., an accounting of the relative contributions of various sources to the ocean noise field. A noise budget is defined in terms of a specific metric of the sound field. The metric, or budget "currency", typically considered is the acoustic pressure spectrum integrated over space and time, which is proportional to the total mechanical energy in the acoustic field. However, this currency may not be the only one of relevance to marine animals. Each of the various ways in which sound can potentially impact these animals, e.g., temporary threshold shift, masking, behavior disruption, etc, probably depends upon a different property, or set of properties, of the sound field. Therefore, a family of noise budgets based on various currencies will be required for complete evaluation of the potential impact of manmade noise on the marine environment. Validation of noise budgets will require sustained, long term measurements of the underwater noise field.

  6. [Use of medical inpatient services by heavy users: a case of hypochondriasis].

    PubMed

    Höfer, Peter; Ossege, Michael; Aigner, Martin

    2012-01-01

    Hypochondriasis is defined in ICD-10 and DSM-IV by a persistent preoccupation with the possibility of having one or more serious and progressive physical disorders. Patients suffering from hypochondriasis can account for high utilization of mental health services. Data have shown that "heavy users" require a disproportionate share of inpatient admissions and mental health budget costs. We assume that a psychotherapeutic approach targeting a cognitive-behavioral model, in combination with neuropsychopharmacological treatment, is useful. In our case report we present the "heavy user" phenomenon based on a patient hospitalized predominantly in neurological inpatient care facilities. From a medical point of view we want, on the one hand, to point out possible treatment errors; on the other hand, we want to raise awareness of the financial and socioeconomic factors that place a massive burden on the global mental health budget.

  7. Improvements in lake water budget computations using Landsat data

    NASA Technical Reports Server (NTRS)

    Gervin, J. C.; Shih, S. F.

    1979-01-01

    A supervised multispectral classification was performed on Landsat data for Lake Okeechobee's extensive littoral zone to provide two types of information. First, the acreage of a given plant species as measured by satellite was combined with a more accurate transpiration rate to give a better estimate of evapotranspiration from the littoral zone. Second, the surface area covered by plant communities was used to develop a better estimate of the water surface as a function of lake stage. Based on this information, more detailed representations of evapotranspiration and total water surface (and hence total lake volume) were provided to the water balance budget model for lake volume predictions. The model results based on information derived from satellite data demonstrated a 94 percent reduction in cumulative lake stage error and a 70 percent reduction in the maximum deviation of the lake stage.
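
    A lake water-balance model of the kind described reduces to a simple bookkeeping step; the sketch below (hypothetical numbers and a hypothetical linear stage-area relation, not Lake Okeechobee data) shows where the Landsat-derived surface area and evapotranspiration estimates would enter such a model.

```python
# Minimal lake water-balance sketch. Refining surface_area() (stage -> water
# surface) and the ET depth is exactly where the Landsat-derived littoral-zone
# information improves a model like this.

def surface_area(stage_m):
    """Hypothetical stage (m) -> water-surface area (m^2) relation."""
    return 1.5e9 + 2.0e8 * stage_m

def step_volume(volume_m3, stage_m, inflow_m3, outflow_m3, precip_m, et_m):
    """One budget step: dV = inflow - outflow + (P - ET) * A(stage)."""
    area = surface_area(stage_m)
    return volume_m3 + inflow_m3 - outflow_m3 + (precip_m - et_m) * area

v0 = 5.0e9  # starting storage, m^3 (illustrative)
v1 = step_volume(v0, stage_m=4.0, inflow_m3=2.0e7,
                 outflow_m3=1.5e7, precip_m=0.004, et_m=0.005)
```

Errors in the stage-area relation or the ET depth accumulate step by step, which is why better satellite-derived inputs translated into the large cumulative stage-error reduction reported above.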

  8. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    Future X-ray observatory missions, such as the International X-ray Observatory, require grazing-incidence replicated optics of extremely large collecting area (3 m2) in combination with an angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell ultimately depends on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, depending upon the surface error profile, a model for localized polishing based on a dwell-time approach is developed. Using the inputs from the mathematical model, a mandrel having conical approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
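
    The dwell-time idea can be illustrated with a small numerical sketch (not the authors' model): in deterministic polishing, material removal is the convolution of a tool influence function (TIF) with the dwell-time map, so dwell times follow from solving the discretized linear system. All names and values below are hypothetical.

```python
import numpy as np

n = 60
x = np.arange(n)
target = 1.0 + 0.3 * np.sin(2 * np.pi * x / n)      # desired removal profile (arbitrary units)

# Hypothetical Gaussian tool influence function: removal rate per unit dwell.
tif = np.exp(-0.5 * (np.arange(-5, 6) / 2.0)**2)

# Discretized convolution matrix: removal[j] = sum_k A[j, k] * dwell[k].
A = np.zeros((n, n))
for j in range(n):
    for k, w in zip(range(j - 5, j + 6), tif):
        if 0 <= k < n:
            A[j, k] = w

dwell, *_ = np.linalg.lstsq(A, target, rcond=None)  # dwell time per position
achieved = A @ dwell
residual = float(np.abs(achieved - target).max())   # tiny for this well-posed toy case
# In practice dwell times must be non-negative, so edge values would need
# clipping or a constrained solver.
```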

  9. Are phonological influences on lexical (mis)selection the result of a monitoring bias?

    PubMed Central

    Ratinckx, Elie; Ferreira, Victor S.; Hartsuiker, Robert J.

    2009-01-01

    A monitoring bias account is often used to explain speech error patterns that seem to be the result of an interactive language production system, like phonological influences on lexical selection errors. A biased monitor is suggested to detect and covertly correct certain errors more often than others. For instance, this account predicts that errors which are phonologically similar to intended words are harder to detect than ones that are phonologically dissimilar. To test this, we tried to elicit phonological errors under the same conditions that show other kinds of lexical selection errors. In five experiments, we presented participants with high cloze probability sentence fragments followed by a picture that was either semantically related, a homophone of a semantically related word, or phonologically related to the (implicit) last word of the sentence. All experiments elicited semantic completions or homophones of semantic completions, but none elicited phonological completions. This finding is hard to reconcile with a monitoring bias account and is better explained with an interactive production system. Additionally, this finding constrains the amount of bottom-up information flow in interactive models. PMID:18942035

  10. Investing in College Completion. The Progress of Education Reform. Volume 11, Number 4

    ERIC Educational Resources Information Center

    Education Commission of the States (NJ1), 2010

    2010-01-01

    States are faced with the difficult challenge of increasing college completion rates at a time of historic budget shortfalls. While most agree that increasing the education level of U.S. citizens is essential to future economic prosperity (and public revenue collection), institutions will need to meet the goal through the more efficient use of…

  11. The Effect of State Financial Aid Policies on College Completion

    ERIC Educational Resources Information Center

    Ragland, Sheri E.

    2016-01-01

    In 2008, state legislatures provided $6 billion in financial aid to 2 million low-income young adults. When low-income young adults receive state financial aid and do not complete college, states lose their investment because fewer people with degrees will contribute to the state's economy. Declining states' budgets have led to (a) the rising cost…

  12. Performance-Based Funding of Higher Education: A Detailed Look at Best Practices in 6 States

    ERIC Educational Resources Information Center

    Miao, Kysie

    2012-01-01

    Performance-based funding is a system based on allocating a portion of a state's higher education budget according to specific performance measures such as course completion, credit attainment, and degree completion, instead of allocating funding based entirely on enrollment. It is a model that provides a fuller picture of how successfully…

  13. 77 FR 67345 - Agency Information Collection Activities; Submission to the Office of Management and Budget for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ... the application cycle. The total application projection for 2013-2014 is based upon two factors... submissions for the last completed or almost completed application cycle. The ABM is also based on the... applicants would result in an increase in burden of 347,945 hours. Accounting for both the increase in total...

  14. 75 FR 65039 - Submission for Review: Program Services Evaluation Surveys, OMB Control No. 3206-NEW

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-21

    ... will be completed in the next 3 years. The time estimate varies from 1 minute to 40 minutes to complete... 75 FR 35092 allowing for a 60-day public comment period. No comments were received for this... Office of Management and Budget is particularly interested in comments that: 1. Evaluate whether the...

  15. Utilizing College Access & Completion Innovation Funds to Improve Postsecondary Attainment in California

    ERIC Educational Resources Information Center

    Jones, Dennis P.; Ewell, Peter T.

    2009-01-01

    The College Access and Completion Innovation Fund proposed by the Obama administration in the FY 2009-10 budget holds considerable promise as a tool to leverage badly needed change in higher education nationally--and especially in California. It is potentially the most flexible tool among those currently available to promote attainment of…

  16. Online patient safety education programme for junior doctors: is it worthwhile?

    PubMed

    McCarthy, S E; O'Boyle, C A; O'Shaughnessy, A; Walsh, G

    2016-02-01

    Increasing demand exists for blended approaches to the development of professionalism. Trainees of the Royal College of Physicians of Ireland participated in an online patient safety programme. Study aims were: (1) to determine whether the programme improved junior doctors' knowledge, attitudes and skills relating to error reporting, open communication and care for the second victim and (2) to establish whether the methodology facilitated participants' learning. 208 junior doctors who completed the programme also completed a pre-programme questionnaire. Measures were "patient safety knowledge and attitudes", "medical safety climate" and "experience of learning". Sixty-two completed the post-questionnaire, representing a 30% matched response rate. Participating in the programme resulted in immediate (p < 0.01) improvement in skills such as knowing when and how to complete incident forms and disclosing errors to patients, in self-rated knowledge (p < 0.01) and in attitudes towards error reporting (p < 0.01). Sixty-three per cent disagreed that doctors routinely report medical errors and 42% disagreed that doctors routinely share information about medical errors and what caused them. Participants rated interactive features as the most positive elements of the programme. An online training programme on medical error improved self-rated knowledge, attitudes and skills in junior doctors and was deemed an effective learning tool. Perceptions of work issues, such as a poor culture of error reporting among doctors, may prevent improved attitudes from being realised in practice. Online patient safety education has a role in practice-based initiatives aimed at developing professionalism and improving safety.

  17. Patterned wafer geometry grouping for improved overlay control

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.

    2017-03-01

    Process-induced overlay errors from outside the litho cell, including those caused by non-uniform wafer stress, have become a significant contributor to the overlay error budget. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. A key challenge of volume semiconductor manufacturing is improving not only the magnitude of these signatures but also the wafer-to-wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process-induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.

  18. Carbon budget over 12 years in a production crop under temperate climate

    NASA Astrophysics Data System (ADS)

    Buysse, Pauline; Bodson, Bernard; Debacq, Alain; De Ligne, Anne; Heinesch, Bernard; Manise, Tanguy; Moureaux, Christine; Aubinet, Marc

    2017-04-01

    Carbon dioxide (CO2) exchanges between crops and the atmosphere are influenced by both climatic and crop management drivers. The investigated crop, situated at the Lonzée Terrestrial Observatory (LTO, candidate ICOS site) in Belgium and managed for more than 70 years using conventional farming practices, was monitored over three complete sugar beet (or maize)/winter wheat/potato/winter wheat rotation cycles from 2004 to 2016. Continuous eddy-covariance measurements and regular biomass samplings were performed in order to obtain the daily and seasonal Net Ecosystem Exchange (NEE), Gross Primary Productivity, Total Ecosystem Respiration, Net Primary Productivity, and Net Biome Production (NBP). Meteorological data and crop management practices were also recorded. The main objectives were to analyze the CO2 flux responses to climatic drivers and to establish the C budget of the cropland. Crop type significantly influenced the measured CO2 fluxes. In addition to crop season duration, which had an obvious impact on cumulated NEE values for each crop type, the CO2 flux response to photosynthetic photon flux density, vapor pressure deficit and temperature differed between crop types, while no significant response to soil water content was observed in any of them. In addition, a significant positive relationship between crop residue amount and ecosystem respiration was observed. Over the 12 years, NEE was negative (-4.34 ± 0.21 kg C m-2) but NBP was positive (1.05 ± 0.30 kg C m-2); i.e., as soon as all lateral carbon fluxes - dominated by carbon exportation - are included in the budget, the site behaves as a carbon source. Intercrop periods played a major role in the carbon budget, largely because of the long time they represented (59 % of the 12-year period). An in-depth analysis of intercrop periods and, more specifically, of growing cover crops (mustard in the case of our study) is developed in a companion poster (ref. abstract EGU2017-12216, session SSS9.14/BG9.46/CL3.13). Although in line with preceding studies, the large C loss rate observed at LTO (NBP = +87 ± 25 g C m-2 yr-1) raises several questions, as it corresponds to 1.8 % of the C stock in the topsoil: is it realistic? Could it be affected by an undetected systematic error? If correct, could soil properties be preserved in the long term? This result at least calls for an extensive C stock inventory for (in)validation.
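
    The headline numbers can be cross-checked with simple arithmetic, which also shows why the annual loss rate is naturally expressed in g C m-2 yr-1 (values taken from the abstract; the topsoil stock is inferred from the quoted 1.8 % figure):

```python
# Sign convention: negative NEE = net CO2 uptake from the atmosphere; positive
# NBP = net carbon source once lateral fluxes (mostly harvest export) are counted.
nee_kg_m2 = -4.34                          # 12-year cumulative NEE (abstract)
nbp_kg_m2 = 1.05                           # 12-year cumulative NBP (abstract)
lateral_kg_m2 = nbp_kg_m2 - nee_kg_m2      # implied lateral C flux: 5.39 kg C m-2

years = 12
nbp_g_m2_yr = 1000.0 * nbp_kg_m2 / years   # ~87.5 g C m-2 yr-1 annual loss rate

# If that annual loss is 1.8 % of the topsoil C stock, the implied stock is:
topsoil_stock_kg_m2 = (nbp_g_m2_yr / 1000.0) / 0.018   # ~4.9 kg C m-2
```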

  19. Sediment and nutrient budgets are inherently dynamic: evidence from a long-term study of two subtropical reservoirs

    NASA Astrophysics Data System (ADS)

    O'Brien, Katherine R.; Weber, Tony R.; Leigh, Catherine; Burford, Michele A.

    2016-12-01

    Accurate reservoir budgets are important for understanding regional fluxes of sediment and nutrients. Here we present a comprehensive budget of sediment (based on total suspended solids, TSS), total nitrogen (TN) and total phosphorus (TP) for two subtropical reservoirs on rivers with highly intermittent flow regimes. The budget covers July 1997 to June 2011 for the Somerset and Wivenhoe reservoirs in southeast Queensland, Australia, and uses a combination of monitoring data and catchment model predictions. A major flood in January 2011 accounted for more than half of the water entering and leaving both reservoirs in that year, and approximately 30 % of water delivered to and released from Wivenhoe over the 14-year study period. The flood accounted for an even larger proportion of total TSS and nutrient loads: in Wivenhoe more than one-third of TSS inputs and two-thirds of TSS outputs between 1997 and 2011 occurred during January 2011. During non-flood years, mean historical concentrations provided reasonable estimates of TSS and nutrient loads leaving the reservoirs. Calculating loads from historical mean TSS and TP concentrations during January 2011, however, would have substantially underestimated outputs over the entire study period, by up to a factor of 10. The results have important implications for sediment and nutrient budgets in catchments with highly episodic flow. First, quantifying inputs and outputs during major floods is essential for producing reliable long-term budgets. Second, sediment and nutrient budgets are dynamic, not static. Characterizing uncertainty and variability is therefore just as important for meaningful reservoir budgets as accurate quantification of loads.
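
    The load-estimation point can be made concrete with a toy calculation: loads are flow-weighted sums, L = sum of C_t * Q_t over time, so replacing flood concentrations with a historical (non-flood) mean badly underestimates the total when a single event carries most of the flow and sediment. All numbers below are invented for illustration.

```python
import numpy as np

# One year of daily data with a single extreme flood day; units are illustrative.
q = np.array([1.0] * 364 + [500.0])              # daily mean flow
c_true = np.array([10.0] * 364 + [200.0])        # concentration, spiking in the flood
c_hist = np.full_like(q, c_true[:364].mean())    # historical non-flood mean (10.0)

load_true = float(np.sum(c_true * q))            # flow-weighted "true" load
load_hist = float(np.sum(c_hist * q))            # load using mean concentration
underestimate_factor = load_true / load_hist     # order-of-magnitude underestimate
```

In this toy year the mean-concentration estimate misses the flood's concentration spike entirely, underestimating the annual load by roughly an order of magnitude, the same failure mode the abstract reports for January 2011.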

  20. Toward the Library-Bookstore

    ERIC Educational Resources Information Center

    Severtson, Susan; Banks, George

    1971-01-01

    Close collaboration between library and bookstore (or a complete merger of the two) could do much to solve the immediate problems faced by many small colleges with small library collections and limited budgets. (MF)

  1. Oversight overload: harried hospitals say the growing number of billing audits they face could actually increase costs.

    PubMed

    Daly, Rich

    2011-11-21

    Providers say the administration's growing emphasis on billing audits is pushing them to the limit and threatens to increase their costs. Many billing problems stem from simple errors, not fraud, they say. "When you get into the nuts and bolts of some of these programs you realize it's not as easy as taking the overpayment line out of the budget," says Michael Regier, of VHA.

  2. A Method for Eliminating Beam Steering Error for the Modulated Absorption-Emission Thermometry Technique

    DTIC Science & Technology

    2015-01-01

    emissivity and the radiative intensity of the gas over a spectral band. The temperature is then calculated from the Planck function. The technique does not...

  3. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1982-01-01

    The improvement in orbit accuracy that could be achieved if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines with GPS geodetic receivers would be only about 1 cm.
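
    The quoted 1 cm baseline contribution follows from a standard geometric rule of thumb: an orbit error maps into a baseline error roughly in the ratio of baseline length to receiver-satellite range (~20,000 km for GPS). A one-line check:

```python
# baseline_error ~ (baseline / range) * orbit_error  (geometric rule of thumb)
orbit_error_m = 0.20         # 20 cm horizontal orbit accuracy (from the abstract)
baseline_m = 1000e3          # 1000 km baseline
range_m = 20000e3            # approximate receiver-satellite range for GPS

baseline_error_m = (baseline_m / range_m) * orbit_error_m   # 0.01 m = 1 cm
```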

  4. Simultaneous orbit determination

    NASA Technical Reports Server (NTRS)

    Wright, J. R.

    1988-01-01

    Simultaneous orbit determination is demonstrated using live range and Doppler data for the NASA/Goddard tracking configuration defined by the White Sands Ground Terminal (WSGT), the Tracking and Data Relay Satellite (TDRS), and the Earth Radiation Budget Satellite (ERBS). A physically connected sequential filter-smoother was developed for this demonstration. Rigorous necessary conditions are used to show that the state error covariance functions are realistic; and this enables the assessment of orbit estimation accuracies for both TDRS and ERBS.

  5. Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing

    DTIC Science & Technology

    2006-09-01

    ...undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant

  6. High-resolution atmospheric inversion of urban CO2 emissions during the dormant season of the Indianapolis Flux Experiment (INFLUX)

    NASA Astrophysics Data System (ADS)

    Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; Song, Yang; Karion, Anna; Oda, Tomohiro; Patarasuk, Risa; Razlivanov, Igor; Sarmiento, Daniel; Shepson, Paul; Sweeney, Colm; Turnbull, Jocelyn; Wu, Kai

    2016-05-01

    Based on a uniquely dense network of surface towers continuously measuring the atmospheric concentrations of greenhouse gases (GHGs), we developed the first comprehensive monitoring system of CO2 emissions at high resolution over the city of Indianapolis. The urban inversion evaluated over the 2012-2013 dormant season showed a statistically significant increase of about 20% (from 4.5 to 5.7 MtC ± 0.23 MtC) compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product. Spatial structures in prior emission errors, mostly undetermined, appeared to affect the spatial pattern in the inverse solution and the total carbon budget over the entire area by up to 15%, while the inverse solution remained fairly insensitive to the CO2 boundary inflow and to the different prior emissions (i.e., ODIAC). Preceding the surface emission optimization, we improved the atmospheric simulations using a meteorological data assimilation system that also informs our Bayesian inversion system through updated observation error variances. Finally, we estimated the uncertainties associated with undetermined parameters using an ensemble of inversions. The total CO2 emissions based on the ensemble mean and quartiles (5.26-5.91 MtC) were statistically different from the prior total emissions (4.1 to 4.5 MtC). Considering the relatively small sensitivity to the different parameters, we conclude that atmospheric inversions can potentially constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHGs over the city, but additional information on prior emission error structures is required to determine the spatial structures of urban emissions at high resolution.
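
    The inversion machinery referenced here is, at its core, a Bayesian update. The toy fragment below (two emission sectors, two towers, with invented footprints and error levels, not INFLUX data) shows how a prior emission error covariance B and observation error variances R combine to pull the prior toward the observations:

```python
import numpy as np

# Minimal Bayesian flux inversion sketch:
#   posterior = prior + B H^T (H B H^T + R)^-1 (y - H prior)
# B encodes prior emission errors (including spatial correlation), R the
# observation error variances supplied by the transport/assimilation system.
x_prior = np.array([2.0, 2.5])                 # prior emissions for two sectors
H = np.array([[0.6, 0.4],                      # footprints: sensitivity of each
              [0.2, 0.8]])                     # tower to each sector's emissions
y = H @ np.array([2.4, 3.1])                   # synthetic obs from "true" fluxes

sigma_b, corr = 0.5, 0.3
B = sigma_b**2 * np.array([[1.0, corr],        # correlated prior errors
                           [corr, 1.0]])
R = 0.05**2 * np.eye(2)                        # observation error variances

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)       # posterior close to the truth
```

The off-diagonal terms of B are the "spatial structures in prior emission errors" the abstract flags: changing `corr` redistributes the correction between sectors even with identical observations.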

  7. Quality of routine health data collected by health workers using smartphone at primary health care in Ethiopia.

    PubMed

    Medhanyie, Araya Abrha; Spigt, Mark; Yebyo, Henock; Little, Alex; Tadesse, Kidane; Dinant, Geert-Jan; Blanco, Roman

    2017-05-01

    Mobile phone-based applications are considered by many as potentially useful for addressing challenges and improving the quality of data collection in developing countries. Yet very little evidence is available supporting or refuting the potential and widely perceived benefits of using electronic forms on smartphones for routine patient data collection by health workers at primary health care facilities. A facility-based cross-sectional study using a structured paper checklist was conducted to assess the completeness and accuracy of 408 electronic records completed and submitted to a central database server using electronic forms on smartphones by 25 health workers. The 408 electronic records were selected randomly out of a total of 1772 maternal health records submitted by the health workers to the central database over a period of six months. Descriptive frequencies and percentages of data completeness and error rates were calculated. When compared to paper records, the use of electronic forms significantly improved data completeness by 209 (8%) entries. Of a total of 2622 entries checked for completeness, 2602 (99.2%) electronic record entries were complete, while 2393 (91.3%) paper record entries were complete. A very small percentage of errors, which were easily identifiable, occurred in both electronic and paper forms, although the error rate in the electronic records was more than double that of the paper records (2.8% vs. 1.1%). More than half of the entry errors in the electronic records related to entering a text value. With minimal training, supervision, and no incentives, health care workers were able to use electronic forms for patient assessment and routine data collection appropriately and accurately, with a very small error rate. Minimizing the number of questions requiring text responses in electronic forms would be helpful in reducing data errors. Copyright © 2017 Elsevier B.V. All rights reserved.
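
    The headline percentages follow directly from the counts reported in the abstract; a quick check:

```python
# Completeness figures reproduced from the reported record counts.
total_entries = 2622
electronic_complete = 2602
paper_complete = 2393

electronic_pct = 100.0 * electronic_complete / total_entries  # 99.2 %
paper_pct = 100.0 * paper_complete / total_entries            # 91.3 %
improvement_entries = electronic_complete - paper_complete    # 209 entries (~8 points)
```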

  8. SF3M software: 3-D photo-reconstruction for non-expert users and its application to a gully network

    NASA Astrophysics Data System (ADS)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-08-01

    Three-dimensional photo-reconstruction (PR) techniques have been successfully used to produce high-resolution surface models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present for field image acquisition in challenging scene geometries. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-metres-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~ 49 h (of which only 17% required operator assistance) to obtain a final gully network model of > 17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two lightweight automatic cameras (1 s time-lapse mode) and a 6 m long pole is an efficient method for 3-D monitoring of gullies, at low cost (~ EUR 1000 for the field equipment) and with modest time requirements (~ 90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations derived from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  9. B -meson decay constants from 2 + 1 -flavor lattice QCD with domain-wall light quarks and relativistic heavy quarks

    DOE PAGES

    Christ, Norman H.; Flynn, Jonathan M.; Izubuchi, Taku; ...

    2015-03-10

    We calculate the B-meson decay constants f_B, f_Bs, and their ratio in unquenched lattice QCD using domain-wall light quarks and relativistic b quarks. We use gauge-field ensembles generated by the RBC and UKQCD collaborations using the domain-wall fermion action and Iwasaki gauge action with three flavors of light dynamical quarks. We analyze data at two lattice spacings of a ≈ 0.11, 0.086 fm with unitary pion masses as light as M_π ≈ 290 MeV; this enables us to control the extrapolation to the physical light-quark masses and continuum. For the b quarks we use the anisotropic clover action with the relativistic heavy-quark interpretation, such that discretization errors from the heavy-quark action are of the same size as from the light-quark sector. We renormalize the lattice heavy-light axial-vector current using a mostly nonperturbative method in which we compute the bulk of the matching factor nonperturbatively, with a small correction, close to unity, in lattice perturbation theory. We also improve the lattice heavy-light current through O(α_s a). We extrapolate our results to the physical light-quark masses and continuum using SU(2) heavy-meson chiral perturbation theory, and provide a complete systematic error budget. We obtain f_B0 = 199.5(12.6) MeV, f_B+ = 195.6(14.9) MeV, f_Bs = 235.4(12.2) MeV, f_Bs/f_B0 = 1.197(50), and f_Bs/f_B+ = 1.223(71), where the errors are statistical and total systematic added in quadrature. Finally, these results are in good agreement with other published results and provide an important independent cross-check of other three-flavor determinations of B-meson decay constants using staggered light quarks.
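
    "Added in quadrature" means the individual error-budget entries combine as the square root of the sum of squares. The sketch below uses a hypothetical breakdown (the per-source systematic entries and the statistical error are invented; only the quadrature rule and the central value come from the abstract):

```python
import math

f_B0 = 199.5                     # MeV, central value quoted in the abstract
stat = 8.0                       # hypothetical statistical error (MeV)
systematics = {                  # hypothetical per-source systematic errors (MeV)
    "chiral/continuum extrapolation": 7.0,
    "heavy-quark discretization": 5.0,
    "current renormalization": 3.0,
}

total_sys = math.sqrt(sum(e**2 for e in systematics.values()))  # ~9.1 MeV
total = math.sqrt(stat**2 + total_sys**2)                       # ~12.1 MeV
# The paper's quoted 12.6 MeV total is likewise "statistical and total
# systematic added in quadrature", just with its actual (different) entries.
```

A useful property of the quadrature rule is that the largest entry dominates: halving a small entry barely moves the total, which is why error budgets focus effort on the leading systematic.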

  10. B-meson decay constants from 2+1-flavor lattice QCD with domain-wall light quarks and relativistic heavy quarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christ, Norman H.; Flynn, Jonathan M.; Izubuchi, Taku

    2015-03-10

    We calculate the B-meson decay constants f_B, f_Bs, and their ratio in unquenched lattice QCD using domain-wall light quarks and relativistic b quarks. We use gauge-field ensembles generated by the RBC and UKQCD collaborations using the domain-wall fermion action and Iwasaki gauge action with three flavors of light dynamical quarks. We analyze data at two lattice spacings of a ≈ 0.11, 0.086 fm with unitary pion masses as light as M_π ≈ 290 MeV; this enables us to control the extrapolation to the physical light-quark masses and continuum. For the b quarks we use the anisotropic clover action with the relativistic heavy-quark interpretation, such that discretization errors from the heavy-quark action are of the same size as from the light-quark sector. We renormalize the lattice heavy-light axial-vector current using a mostly nonperturbative method in which we compute the bulk of the matching factor nonperturbatively, with a small correction, close to unity, in lattice perturbation theory. We also improve the lattice heavy-light current through O(α_s a). We extrapolate our results to the physical light-quark masses and continuum using SU(2) heavy-meson chiral perturbation theory, and provide a complete systematic error budget. We obtain f_B0 = 196.2(15.7) MeV, f_B+ = 195.4(15.8) MeV, f_Bs = 235.4(12.2) MeV, f_Bs/f_B0 = 1.193(59), and f_Bs/f_B+ = 1.220(82), where the errors are statistical and total systematic added in quadrature. In addition, these results are in good agreement with other published results and provide an important independent cross-check of other three-flavor determinations of B-meson decay constants using staggered light quarks.

  11. A preliminary source-to-sink sediment budget for aeolian sands

    NASA Astrophysics Data System (ADS)

    Sebe, Krisztina; Csillag, Gábor; Timár, Gábor; Jámbor, Áron

    2015-04-01

    Source-to-sink sediment budgets are being intensively studied in fluvial systems. In contrast, sediment budget calculations are very rare for wind-transported material. This may be attributed to the fact that the exact delineation of both source and sink areas in aeolian systems can pose difficulties. In the Pannonian Basin, aeolian action by northwesterly to northerly winds exerted a profound impact on landscape evolution during the Quaternary, attested by, among others, yardangs, wind corridors and numerous ventifacts as well as extensive blown-sand fields. Wind erosion has been shown to have been important since at least 1.5 Ma. Considering the sand fraction, the Pleistocene Pannonian Basin seems to be a nearly complete aeolian sedimentary system from source to sink, and thus provides a good opportunity to carry out sediment budget calculations. The largest blown-sand accumulation occupies ~10 000 km2 in the central part of the Pannonian Basin, in the area called Kiskunság, and contains considerable volumes of aeolian sands extending down to the Lower Pleistocene. Its material is traditionally considered to originate from fluvial sediments of the Danube floodplain. However, recent studies on wind erosion and wind direction reconstructions have indicated that a considerable portion of the sand may have had a provenance in the extensive unconsolidated sediments of the Late Miocene Lake Pannon, which cover the uplifting Transdanubian Range and its surroundings. To address this question, we carried out sediment budget calculations to assess whether material volumes of the supposed source and sink areas are comparable. In the source area we reconstructed a paleotopography, practically a bounding envelope surface for the Pliocene/Pleistocene boundary, using existing knowledge e.g. on the typical succession of Lake Pannon sediments and the evolution history of the area. The missing volume down to the present-day surface was then calculated; the removed material consisted dominantly of Upper Miocene sediments and subordinately of older clastics. The final amount of sand possibly eroded by the wind from the area was calculated by reducing this volume through estimating the portion of sand in the lacustrine succession and the ratio of aeolian to fluvial erosion. Aeolian sand volumes of the sink were calculated using borehole data from publications and original borehole documentations. This approach contains several error sources, including uncertainties in the position of the envelope surface, varying quality of borehole documentations, and the distribution of sampling points. As a result, the estimated error margin of the missing-volume computation is up to 50% and the provided value is rather a minimum estimate. A similar margin can be assumed for the sink area. The calculations showed that sand volumes of the source and sink areas are comparable, with the eroded volume about one-third to one-half of the deposited volume (somewhere below 150 km3 and between 300-400 km3, respectively). This result supports the idea that Transdanubia is an important source area of the Kiskunság blown-sand field. The portion of sand in the sink not accounted for by the present estimation can be derived from two sources. Additional blown sand was probably delivered to the sink from areas even farther upwind of the Transdanubian Range (Danube Basin), not included in the calculations. The floodplain of the Danube may have also provided sediments, but mostly only in the Late Pleistocene, when the river had already occupied its modern course upwind of the Kiskunság area. Work has been supported by the OTKA projects K 106197 and NK83400.
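
    The volume comparison reduces to simple arithmetic on the stated estimates (the midpoint of the 300-400 km3 sink range, the ~150 km3 source value, and the ~50 % error margin):

```python
# Order-of-magnitude check of the source/sink comparison in the abstract.
eroded_km3 = 150.0        # upper estimate of wind-eroded sand from the source area
deposited_km3 = 350.0     # midpoint of the 300-400 km3 sink estimate
rel_error = 0.5           # ~50 % stated error margin on the volume computations

ratio = eroded_km3 / deposited_km3                    # ~0.43: "one third to a half"
eroded_range_km3 = (eroded_km3 * (1 - rel_error),
                    eroded_km3 * (1 + rel_error))     # (75, 225) km3
unaccounted_km3 = deposited_km3 - eroded_km3          # sand needing other sources
```

The wide uncertainty range shows why the result supports, rather than proves, the Transdanubian provenance: even at the optimistic end the source covers only part of the sink, leaving room for the Danube Basin and floodplain contributions discussed above.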

  12. Predictors of School Garden Integration: Factors Critical to Gardening Success in New York City.

    PubMed

    Burt, Kate Gardner; Burgermaster, Marissa; Jacquez, Raquel

    2018-03-01

    The purpose of this study was to determine the level of integration of school gardens and identify factors that predict integration. A total of 211 New York City schools completed a survey that collected demographic information and utilized the School Garden Integration Scale. A mean garden integration score was calculated, and multiple regression analysis was conducted to determine independent predictors of integration and to assess relationships between individual integration characteristics and budget. The average integration score was 34.1 (of 57 points) and ranged from 8 to 53. Operating budget had a significant influence on the integration score after controlling for all other factors (p < .0001). Partner organizations, evaluation/feedback, planning of the physical space, and characteristics of the physical space were positively and significantly related to budget. The results of this study indicate that any garden can become well integrated, as budget is a modifiable factor. When adequate funding is secured, a well-integrated garden may be established with proper planning and sound implementation.
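    The "independent predictor" idea in the abstract, the effect of budget with other factors held fixed, can be sketched with ordinary least squares. The data, predictor names, and coefficients below are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np

    # Toy multiple regression in the spirit of the study: predict a
    # garden-integration score from budget and a partner-organization
    # indicator. All numbers are fabricated for illustration.
    rng = np.random.default_rng(0)
    n = 211                                  # same school count as the abstract
    budget = rng.uniform(0, 10, n)           # hypothetical operating budget
    partners = rng.integers(0, 2, n).astype(float)  # hypothetical indicator
    score = 20 + 2.5 * budget + 4.0 * partners + rng.normal(0, 3, n)

    # Design matrix with intercept; solve OLS via least squares.
    X = np.column_stack([np.ones(n), budget, partners])
    coef, *_ = np.linalg.lstsq(X, score, rcond=None)
    print(coef)  # ~ [20, 2.5, 4.0]; coef[1] is budget's adjusted effect
    ```

    The coefficient on `budget` is its effect "controlling for all other factors", which is the quantity the study reports as significant.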

  13. Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance.

    PubMed

    Campoe, Kristi R; Giuliano, Karen K

    2017-12-01

    The purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump. Published data indicate that the severity of medication administration errors increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, few published data exist regarding the impact of the number of interruptions on nurses' clinical performance during PCA programming. Nine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions in which the number of interruptions was two, four, or six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming). Increases in the number of interruptions were associated with significant increases in total task completion time (p = .003). We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant. Complex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.

  14. Lawmakers seek end to budget stalemate

    NASA Astrophysics Data System (ADS)

    Carlowicz, Michael

    On March 15, President Bill Clinton signed into law the tenth short-term spending measure of the now six-month-old 1996 fiscal year (FY'96), funding the federal government through March 22. At the same time, budget negotiators from both the Republican and Democratic parties scurried to put the finishing touches on a $166 billion omnibus appropriations bill, H.R. 3019, to pay the government's bills for the rest of the fiscal year. As of March 21, parts of a dozen federal cabinet-level departments and agencies still did not have a definitive budget allocation for FY'96. Nearly all national science research and development programs, agencies, and departments remained tied up by that budget struggle, though very little of the debate has anything to do with those programs (Eos, February 20). H.R. 3019 is designed to end the string of short-term spending measures and to fund all federal departments for the remaining six months of the fiscal year. Congressional leaders hoped to have the bill passed by March 22, though many representatives anticipated that at least one more short-term spending resolution would have to be passed before budget negotiations with the President could be completed.

  15. Evaluation of the Modern Era Retrospective-Analysis for Research and Applications (MERRA) Global Water and Energy Budgets

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, F. R.; Chen, J.

    2010-01-01

    The Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis has completed 27 years of data and will soon be caught up to the present. Here we present an evaluation of the years currently available, including comparisons with the existing long reanalyses (ERA-40, JRA-25, and NCEP I and II) as well as with global data sets for the water and energy cycle. Time series show that the MERRA budgets can change with some of the variations in observing systems, but the magnitude of the energy imbalance in the system improves with more observations. We will present all terms of the budgets in MERRA, including the time rates of change and analysis increments (the tendency due to the analysis of observations).

  16. 27 CFR 18.34 - Continuing partnerships.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the winding up of the partnership affairs is completed, and the surviving partner has the exclusive... by the Office of Management and Budget under control number 1512-0046) [T.D. ATF-104, 47 FR 23921...

  17. 27 CFR 18.34 - Continuing partnerships.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the winding up of the partnership affairs is completed, and the surviving partner has the exclusive... by the Office of Management and Budget under control number 1512-0046) [T.D. ATF-104, 47 FR 23921...

  18. 27 CFR 18.34 - Continuing partnerships.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the winding up of the partnership affairs is completed, and the surviving partner has the exclusive... by the Office of Management and Budget under control number 1512-0046) [T.D. ATF-104, 47 FR 23921...

  19. 27 CFR 18.34 - Continuing partnerships.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the winding up of the partnership affairs is completed, and the surviving partner has the exclusive... by the Office of Management and Budget under control number 1512-0046) [T.D. ATF-104, 47 FR 23921...

  20. Automatic performance budget: towards a risk reduction

    NASA Astrophysics Data System (ADS)

    Laporte, Philippe; Blake, Simon; Schmoll, Jürgen; Rulten, Cameron; Savoie, Denis

    2014-08-01

    In this paper, we discuss the performance matrix of the SST-GATE telescope, developed to allow us to partition and allocate the important characteristics to the various subsystems, and we describe the process used to verify that the current design will deliver the required performance. Due to the integrated nature of the telescope, a large number of parameters have to be controlled, and effective calculation tools such as an automatic performance budget must be developed. Its main advantages are that it alleviates the system engineer's work when the design changes, avoids errors during any re-allocation process, and automatically recalculates the scientific performance of the instrument. We explain in this paper the method for converting the ensquared energy (EE) and the signal-to-noise ratio (SNR) required by the science cases into the "as designed" instrument. To ensure the successful design, integration, and verification of the next generation of instruments, it is of the utmost importance to have methods to control and manage an instrument's critical performance characteristics at the very early design steps, to limit technical and cost risks in the project development. Such a performance budget is a tool towards this goal.

  1. Water Budget Closure Based on GRACE Measurements and Reconstructed Evapotranspiration Using GLDAS and Water Use Data over the Yellow River and Changjiang River Basins

    NASA Astrophysics Data System (ADS)

    Lv, M.; Ma, Z.; Yuan, X.

    2017-12-01

    It is important to evaluate the water budget closure on the basis of the currently available data, including precipitation, evapotranspiration (ET), runoff, and GRACE-derived terrestrial water storage change (TWSC), before using them to resolve water-related issues. However, it remains challenging to achieve the balance without considering human water use (e.g., inter-basin water diversion and irrigation) in the estimation of other water budget terms such as ET. In this study, the terrestrial water budget closure is tested over the Yellow River Basin (YRB) and Changjiang River Basin (CJB, Yangtze River Basin) of China. First, the actual ET is reconstructed (hereafter, ETrecon) by using the GLDAS-1 land surface models, high-quality observation-based precipitation, naturalized streamflow, and irrigation water use. The ETrecon, evaluated using the mean annual water-balance equation, is of good quality, with absolute relative errors of less than 1.9% over the two studied basins. The total basin discharge (Rtotal) is calculated as the residual of the water budget among the observation-based precipitation, ETrecon, and the GRACE-TWSC. The difference between Rtotal and the observed total basin discharge is used to evaluate the budget closure, with consideration of inter-basin water diversion. After the ET reconstruction, the mean absolute imbalance was reduced from 3.31 cm/year to 1.69 cm/year over the YRB and from 15.40 cm/year to 1.96 cm/year over the CJB. The estimation-to-observation ratios of total basin discharge improved from 180.8% to 86.8% over the YRB, and from 67.0% to 101.1% over the CJB. The proposed ET reconstruction method is applicable to other human-managed river basins to provide an alternative estimation.
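    The residual calculation described above is a one-line water balance. The sketch below illustrates it with made-up numbers (the function names and values are ours, not the study's): discharge is estimated as precipitation minus ET minus storage change, and closure is judged by the difference from observed discharge.

    ```python
    # Sketch of the residual water-budget estimate described above.
    # All terms in cm/year over a basin; values are illustrative only.

    def residual_discharge(p, et, dtwsc):
        """Total basin discharge as the water-budget residual: R = P - ET - dS."""
        return p - et - dtwsc

    def imbalance(r_estimated, r_observed):
        """Budget-closure error: estimated minus observed total discharge."""
        return r_estimated - r_observed

    p, et, dtwsc = 45.0, 30.0, 1.5   # precipitation, reconstructed ET, GRACE dTWSC
    r_obs = 12.0                     # observed total basin discharge

    r_est = residual_discharge(p, et, dtwsc)
    print(r_est)                     # 13.5
    print(imbalance(r_est, r_obs))   # 1.5
    ```

    In the study, improving the ET term (adding irrigation water use) is what shrinks this imbalance; the residual structure itself stays the same.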

  2. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. The lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations, or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning leads to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective in extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties, and surrogate-related errors. Namely, the model error terms are embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
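    The key contrast in the abstract, output-space error terms versus errors embedded in a model component, can be shown with a toy model. Everything below (the model y = a·x², the noise levels) is an illustrative stand-in, not the UQ Toolkit implementation: additive output noise can violate a physical constraint (here, nonnegativity), while perturbing a parameter with a positive multiplier keeps every sample a valid model run.

    ```python
    import numpy as np

    # Toy contrast: output-space error correction vs. embedded model error.
    # Model: y = a * x**2, which is physically nonnegative.
    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 11)
    a = 0.5

    # 1) Additive output error: samples can violate the y >= 0 constraint.
    y_output_err = a * x**2 + rng.normal(0, 0.2, x.size)

    # 2) Embedded error: perturb the parameter with a positive multiplier,
    #    so each sample is still a legitimate model evaluation and y >= 0.
    y_embedded = (a * rng.lognormal(0, 0.3, x.size)) * x**2

    print((y_output_err < 0).any())   # output-space noise may go negative
    print((y_embedded < 0).any())     # False: embedded samples stay physical
    ```

    This is the sense in which embedding "ensures consistency with physical constraints": the uncertainty lives inside the model, so every realization respects the model's structure.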

  3. Basin-scale assessment of the land surface energy budget in the National Centers for Environmental Prediction operational and research NLDAS-2 systems

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Cosgrove, Brian A.; Mitchell, Kenneth E.; Peters-Lidard, Christa D.; Ek, Michael B.; Kumar, Sujay; Mocko, David; Wei, Helin

    2016-01-01

    This paper compares the annual and monthly components of the simulated energy budget from the North American Land Data Assimilation System phase 2 (NLDAS-2) with reference products over the domains of the 12 River Forecast Centers (RFCs) of the continental United States (CONUS). The simulations are calculated from both operational and research versions of NLDAS-2. The reference radiation components are obtained from the National Aeronautics and Space Administration Surface Radiation Budget product. The reference sensible and latent heat fluxes are obtained from a multitree ensemble method applied to gridded FLUXNET data from the Max Planck Institute, Germany. As these references are obtained from different data sources, they cannot fully close the energy budget, although the range of closure error is less than 15% for mean annual results. The analysis here demonstrates the usefulness of basin-scale surface energy budget analysis for evaluating model skill and deficiencies. The operational (i.e., Noah, Mosaic, and VIC) and research (i.e., Noah-I and VIC4.0.5) NLDAS-2 land surface models exhibit similarities and differences in depicting basin-averaged energy components. For example, the energy components of the five models have similar seasonal cycles, but with different magnitudes. Generally, Noah and VIC overestimate (underestimate) sensible (latent) heat flux over several RFCs of the eastern CONUS. In contrast, Mosaic underestimates (overestimates) sensible (latent) heat flux over almost all 12 RFCs. The research Noah-I and VIC4.0.5 versions show moderate-to-large improvements (basin and model dependent) relative to their operational versions, which indicates likely pathways for future improvements in the operational NLDAS-2 system.

  4. Basin-Scale Assessment of the Land Surface Energy Budget in the National Centers for Environmental Prediction Operational and Research NLDAS-2 Systems

    NASA Technical Reports Server (NTRS)

    Xia, Youlong; Peters-Lidard, Christa D.; Cosgrove, Brian A.; Mitchell, Kenneth E.; Ek, Michael B.; Kumar, Sujay V.; Mocko, David M.; Wei, Helin

    2015-01-01

    This paper compares the annual and monthly components of the simulated energy budget from the North American Land Data Assimilation System phase 2 (NLDAS-2) with reference products over the domains of the 12 River Forecast Centers (RFCs) of the continental United States (CONUS). The simulations are calculated from both operational and research versions of NLDAS-2. The reference radiation components are obtained from the National Aeronautics and Space Administration Surface Radiation Budget product. The reference sensible and latent heat fluxes are obtained from a multitree ensemble method applied to gridded FLUXNET data from the Max Planck Institute, Germany. As these references are obtained from different data sources, they cannot fully close the energy budget, although the range of closure error is less than 15% for mean annual results. The analysis here demonstrates the usefulness of basin-scale surface energy budget analysis for evaluating model skill and deficiencies. The operational (i.e., Noah, Mosaic, and VIC) and research (i.e., Noah-I and VIC4.0.5) NLDAS-2 land surface models exhibit similarities and differences in depicting basin-averaged energy components. For example, the energy components of the five models have similar seasonal cycles, but with different magnitudes. Generally, Noah and VIC overestimate (underestimate) sensible (latent) heat flux over several RFCs of the eastern CONUS. In contrast, Mosaic underestimates (overestimates) sensible (latent) heat flux over almost all 12 RFCs. The research Noah-I and VIC4.0.5 versions show moderate-to-large improvements (basin and model dependent) relative to their operational versions, which indicates likely pathways for future improvements in the operational NLDAS-2 system.

  5. Final Technical Report for earmark project "Atmospheric Science Program at the University of Louisville"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowling, Timothy Edward

    2014-02-11

    We have completed a 3-year project to enhance the atmospheric science program at the University of Louisville, KY (est. 2008). The goals were to complete an undergraduate atmospheric science laboratory (Year 1) and to hire and support an assistant professor (Years 2 and 3). Both these goals were met on schedule, and slightly under budget.

  6. Supporting Data for FY 1990/1991 Biennial Budget: Budget Estimates Descriptive Summaries Submitted to Congress January 1989, Research, Development, Test & Evaluation, Navy

    DTIC Science & Technology

    1989-01-01

    necessitate de-emphasizing network interface demonstrations in favor of real-time network interface technologies and slip ICEX demonstration of...Aperture Radar target classification and Fault Diagnosis issues. o Demonstrate a complete, transportable, fully functional software engineering...

  7. Assimilating satellite-based canopy height within an ecosystem model to estimate aboveground forest biomass

    NASA Astrophysics Data System (ADS)

    Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.

    2017-07-01

    Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height, derived from a texture analysis algorithm applied to the high-resolution Pleiades imager, in the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of AGB derived from census data on average across sites, canopy height derived from the Pleiades product was spatially too smooth and thus unable to accurately resolve the large height (and biomass) variations within the sites considered. The error budget was evaluated in detail; systematic errors related to the ORCHIDEE-CAN structure contribute only as a secondary source of error and could be overcome by using improved allometric equations.

  8. 40-Gb/s PAM4 with low-complexity equalizers for next-generation PON systems

    NASA Astrophysics Data System (ADS)

    Tang, Xizi; Zhou, Ji; Guo, Mengqi; Qi, Jia; Hu, Fan; Qiao, Yaojun; Lu, Yueming

    2018-01-01

    In this paper, we demonstrate 40-Gb/s four-level pulse amplitude modulation (PAM4) transmission with 10 GHz devices and low-complexity equalizers for next-generation passive optical network (PON) systems. A simple feed-forward equalizer (FFE) and decision feedback equalizer (DFE) enable 20 km fiber transmission, while a high-complexity Volterra algorithm in combination with the FFE and DFE can extend the transmission distance to 40 km. A simplified Volterra algorithm is proposed to reduce computational complexity. Simulation results show that the simplified Volterra algorithm reduces computational complexity by up to ∼75% at a relatively low cost of only 0.4 dB in power budget. At a forward error correction (FEC) threshold of 10^-3, we achieve 31.2 dB and 30.8 dB power budgets over 40 km fiber transmission using the traditional FFE-DFE-Volterra and our simplified FFE-DFE-Volterra, respectively.
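    The FFE mentioned above is a linear tapped-delay-line filter whose taps can be trained with LMS. The following is a minimal sketch on synthetic PAM4 symbols through a toy two-tap ISI channel; the channel, tap count, and step size are our illustrative choices, not the paper's simulation setup.

    ```python
    import numpy as np

    # Minimal LMS-trained feed-forward equalizer (FFE) on synthetic PAM4
    # over a toy ISI channel. Illustrative parameters, not the paper's.
    rng = np.random.default_rng(1)
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    tx = rng.choice(levels, size=20000)         # transmitted PAM4 symbols

    h = np.array([1.0, 0.45])                   # toy two-tap ISI channel
    rx = np.convolve(tx, h)[:tx.size]           # received, ISI-distorted
    rx += rng.normal(0, 0.05, rx.size)          # mild additive noise

    ntaps, mu = 7, 0.005                        # FFE length and LMS step size
    w = np.zeros(ntaps)
    w[0] = 1.0                                  # start from a pass-through filter

    for k in range(ntaps, tx.size):
        x = rx[k - ntaps + 1:k + 1][::-1]       # tap vector, newest sample first
        e = tx[k] - w @ x                       # training with known symbols
        w += mu * e * x                         # LMS update

    def ser(est):
        """Symbol error rate after nearest-level decision."""
        dec = levels[np.argmin(np.abs(est[:, None] - levels), axis=1)]
        return np.mean(dec != tx[ntaps:])

    raw = rx[ntaps:]
    eq = np.array([w @ rx[k - ntaps + 1:k + 1][::-1]
                   for k in range(ntaps, tx.size)])
    print(ser(raw), ser(eq))                    # equalization should cut errors
    ```

    A DFE additionally feeds past symbol decisions back through a second filter, and the Volterra stage adds nonlinear (product) terms of the same tap vector, which is where the complexity the paper trims comes from.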

  9. Pellicle transmission uniformity requirements

    NASA Astrophysics Data System (ADS)

    Brown, Thomas L.; Ito, Kunihiro

    1998-12-01

    Controlling critical dimensions of devices is a constant battle for the photolithography engineer. Current DUV lithographic process exposure latitude is typically 12 to 15% of the total dose. A third of this exposure latitude budget may be used up by a mask-related variable that has not previously received much attention. The emphasis on pellicle transmission has been focused on increasing the average transmission; much less attention has been paid to transmission uniformity. This paper explores the total demand on the photospeed latitude budget and the causes of pellicle transmission nonuniformity, and examines reasonable expectations for pellicle performance. Modeling is used to examine how the two primary errors in pellicle manufacturing contribute to nonuniformity in transmission. World-class pellicle transmission uniformity standards are discussed and compared with specifications of other components in the photolithographic process. Specifications for other materials and parameters are used as benchmarks to develop a proposed industry standard for pellicle transmission uniformity.

  10. JASMINE data analysis

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Gouda, N.; Yano, T.; Kobayashi, Y.; Niwa, Y.

    2008-07-01

    Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μas accuracy. We use a z-band CCD or K-band array detector to avoid dust absorption and observe an area of about 10° × 20° around the Galactic bulge region. In this poster, we show the observation strategy, reduction scheme, and error budget. We also show the basic design of the software for the end-to-end simulation of JASMINE, named the JASMINE Simulator.

  11. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisted.

    DTIC Science & Technology

    1985-05-08

    also be induced by equipment not associated with the system. A systematic bias of 68 μgal was observed by the Istituto di Metrologia "G. Colonnetti...Laboratory Astrophysics, Univ. of Colo., Boulder, Colo. IMGC: Istituto di Metrologia "G. Colonnetti", Torino, Italy Table 1. Absolute Gravity Values...measurements were made with three Model D and three Model G La Coste-Romberg gravity meters. These instruments were operated by the following agencies

  12. Geostationary Operational Environmental Satellite (GOES-N report). Volume 2: Technical appendix

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The contents include: operation with inclinations up to 3.5 deg to extend life; earth sensor improvements to reduce noise; sensor configurations studied; momentum management system design; reaction wheel induced dynamic interaction; controller design; spacecraft motion compensation; analog filtering; GFRP servo design - modern control approach; feedforward compensation as applied to GOES-1 sounder; discussion of allocation of navigation, inframe registration and image-to-image error budget overview; and spatial response and cloud smearing study.

  13. Instrumentation and First Results of the Reflected Solar Demonstration System for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission key goals include enabling observation of high-accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar (RS) spectroradiometer, an emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm over a full field of view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements, including use of integrating spheres, transfer radiometers, and spectral standards, combined with field-based solar and lunar acquisitions, are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.

  14. The next generation in optical transport semiconductors: IC solutions at the system level

    NASA Astrophysics Data System (ADS)

    Gomatam, Badri N.

    2005-02-01

    In this tutorial overview, we survey some of the challenging problems facing Optical Transport and their solutions using new semiconductor-based technologies. Advances in 0.13 μm CMOS, SiGe/HBT, and InP/HBT IC process technologies and mixed-signal design strategies are the fundamental breakthroughs that have made these solutions possible. In combination with innovative packaging and transponder/transceiver architectures, IC approaches have clearly demonstrated enhanced optical link budgets with simultaneously lower (perhaps the lowest to date) cost and manufacturability tradeoffs. This paper will describe: *Electronic Dispersion Compensation, broadly viewed as the overcoming of dispersion-based limits to OC-192 links and the extension of link budgets; *Error Control/Coding, also known as Forward Error Correction (FEC); *Adaptive Receivers for signal quality monitoring and real-time estimation of Q/OSNR, eye pattern, signal BER, and related temporal statistics (such as jitter). We will discuss the theoretical underpinnings of these receiver and transmitter architectures, provide examples of system performance, and conclude with general market trends. These physical-layer IC solutions represent a fundamental new toolbox of options for equipment designers in addressing system-level problems. With unmatched cost and yield/performance tradeoffs, IC approaches are expected to provide significant flexibility, in turn, for carriers and service providers who must ultimately manage the network and assure acceptable quality of service under stringent cost constraints.

  15. Optomechanical design of the vacuum compatible EXCEDE's mission testbed

    NASA Astrophysics Data System (ADS)

    Bendek, Eduardo A.; Belikov, Ruslan; Lozi, Julien; Schneider, Glenn; Thomas, Sandrine; Pluzhnik, Eugene; Lynch, Dana

    2014-08-01

    In this paper we describe the opto-mechanical design, tolerance error budget, and alignment strategies used to build the Starlight Suppression System (SSS) for NASA's Exoplanetary Circumstellar Environments and Disk Explorer (EXCEDE) mission. EXCEDE is a highly efficient 0.7 m space telescope concept designed to directly image and spatially resolve circumstellar disks with as little as 10 zodis of circumstellar dust, as well as large planets. The main focus of this work was the design of a vacuum-compatible opto-mechanical system that allows remote alignment and operation of the main components of the EXCEDE SSS, which are: a Phase Induced Amplitude Apodization (PIAA) coronagraph to provide high throughput and high contrast at an inner working angle (IWA) equal to the diffraction limit (IWA = 1.2 λ/D), a wavefront (WF) control system based on a Micro-Electro-Mechanical-Systems deformable mirror (MEMS DM), and a low-order wavefront sensor (LOWFS) for fine pointing and centering. We describe the alignment strategy and tolerance error budget for this system, which is especially relevant to achieving the theoretical performance that a PIAA coronagraph can offer. We also discuss the vacuum cabling design for the actuators, cameras, and the deformable mirror. This design has been implemented at the vacuum chamber facility at Lockheed Martin (LM) and builds on successful technology development at the Ames Coronagraph Experiment (ACE) facility.

  16. Stream Discharge and Evapotranspiration Responses to Climate Change and Their Associated Uncertainties in a Large Semi-Arid Basin

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2017-12-01

    Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation: the former is the most commonly used indicator of the overall water budget, and the latter is its second-largest component (the largest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model prediction, which can arise from errors in model parameters and input meteorological data, or from errors in the model's representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically calibrated, physically based, semi-distributed hydrological model. The results of this study could provide valuable insights for applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.

  17. Modis Collection 6 Shortwave-Derived Cloud Phase Classification Algorithm and Comparisons with CALIOP

    NASA Technical Reports Server (NTRS)

    Marchant, Benjamin; Platnick, Steven; Meyer, Kerry; Arnold, George Thomas; Riedi, Jerome

    2016-01-01

    Cloud thermodynamic phase (e.g., ice, liquid) classification is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate Resolution Imaging Spectroradiometer). Because ice- and liquid-phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial errors in cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, so accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds; over ocean, land, desert, snow, or ice surfaces; etc.). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm and CALIOP. A wholesale improvement is seen for C6 compared to C5.

  18. Evolution of the Southern Oscillation as observed by the Nimbus-7 ERB experiment

    NASA Technical Reports Server (NTRS)

    Ardanuy, Philip E.; Kyle, H. Lee; Chang, Hyo-Duck

    1987-01-01

    The Nimbus-7 satellite has been in a 955-km, sun-synchronous orbit since October 1978. The Earth Radiation Budget (ERB) experiment has taken approximately 8 years of high-quality data during this time, of which seven complete years have been archived at the National Space Science Data Center. A final reprocessing of the wide-field-of-view channel dataset is underway. Error analyses indicate a long-term stability of 1 percent or better over the length of the data record. As part of the validation of the ERB measurements, the archived 7-year Nimbus-7 ERB dataset is examined for the presence and accuracy of interannual variations, including the Southern Oscillation signal. Zonal averages of broadband outgoing longwave radiation indicate a terrestrial response of more than 2 years to the oceanic and atmospheric manifestations of the 1982-83 El Niño/Southern Oscillation (ENSO) event, especially in the tropics. This signal is present in monthly and seasonal averages and is shown here to derive primarily from atmospheric responses to adjustments in the Pacific Ocean. The calibration stability of this dataset thus provides a powerful new tool to examine the physics of the ENSO phenomenon.

  19. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  20. 43 CFR 2740.0-9 - Information collection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... contained in part 2740 of Group 2700 has been approved by the Office of Management and Budget under 44 U.S.C... instructions, searching existing data sources, gathering and maintaining the data needed, and completing and...

  1. 43 CFR 2740.0-9 - Information collection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... contained in part 2740 of Group 2700 has been approved by the Office of Management and Budget under 44 U.S.C... instructions, searching existing data sources, gathering and maintaining the data needed, and completing and...

  2. 43 CFR 2740.0-9 - Information collection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... contained in part 2740 of Group 2700 has been approved by the Office of Management and Budget under 44 U.S.C... instructions, searching existing data sources, gathering and maintaining the data needed, and completing and...

  3. 7 CFR 1219.53 - Budget and expenses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the Order. (d) With the approval of the Secretary, the Board may borrow money for the payment of... contributions shall be free from any encumbrance by the donor, and the Board shall retain complete control of...

  4. 7 CFR 1219.53 - Budget and expenses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the Order. (d) With the approval of the Secretary, the Board may borrow money for the payment of... contributions shall be free from any encumbrance by the donor, and the Board shall retain complete control of...

  5. 7 CFR 1219.53 - Budget and expenses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the Order. (d) With the approval of the Secretary, the Board may borrow money for the payment of... contributions shall be free from any encumbrance by the donor, and the Board shall retain complete control of...

  6. 34 CFR 668.135 - Institutional procedures for completing secondary confirmation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Within 10 business days after an institution receives the documentary evidence of immigration status... Form G-845 and attachments to the INS District Office. (Approved by the Office of Management and Budget...

  7. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. 
A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.

  8. Factors associated with reporting of medication errors by Israeli nurses.

    PubMed

    Kagan, Ilya; Barnoy, Sivia

    2008-01-01

    This study investigated medication error reporting among Israeli nurses, the relationship between nurses' personal views about error reporting, and the impact of the safety culture of the ward and hospital on this reporting. Nurses (n = 201) completed a questionnaire related to different aspects of error reporting (frequency, organizational norms of dealing with errors, and personal views on reporting). The higher the error frequency, the more errors went unreported. If the ward nurse manager corrected errors on the ward, error self-reporting decreased significantly. Ward nurse managers have to provide good role models.

  9. Feed-forward alignment correction for advanced overlay process control using a standalone alignment station "Litho Booster"

    NASA Astrophysics Data System (ADS)

    Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi

    2018-03-01

    One of the main components of an On-Product Overlay (OPO) error budget is process-induced wafer error, which necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution for improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.
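    As a rough illustration of what a feed-forward alignment correction involves (this is not the actual LB correction model), the sketch below fits a simple three-term linear wafer model (translation, magnification, rotation) to simulated alignment measurements; the fitted terms are what would be fed forward to the scanner. All coordinates and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical alignment-mark coordinates on a wafer (mm).
x = rng.uniform(-140.0, 140.0, 60)
y = rng.uniform(-140.0, 140.0, 60)

# Simulated process-induced overlay (nm): translation + magnification +
# rotation terms plus measurement noise. All values are illustrative.
Tx, Ty, M, theta = 2.0, -1.5, 0.02, 0.015
dx = Tx + M * x - theta * y + rng.normal(0.0, 0.3, x.size)
dy = Ty + M * y + theta * x + rng.normal(0.0, 0.3, y.size)

# Least-squares fit of the linear wafer model; the fitted coefficients
# are the per-wafer corrections that would be fed forward.
A = np.column_stack([np.ones_like(x), x, -y])
B = np.column_stack([np.ones_like(y), y, x])
px, *_ = np.linalg.lstsq(A, dx, rcond=None)  # [Tx, M, theta]
py, *_ = np.linalg.lstsq(B, dy, rcond=None)  # [Ty, M, theta]

# Residual overlay after correction shrinks to the measurement noise level.
resid = dx - A @ px
```

    Real scanner correction models include many more terms (higher-order per-field and per-wafer shapes); the point here is only the fit-then-feed-forward structure.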

  10. Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.

    PubMed

    Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert

    2009-05-20

    The manufacturing of toric mirrors for the Very Large Telescope-Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on Active Optics and stress polishing. This figuring technique minimizes mid and high spatial frequency errors on an aspherical surface by using spherical polishing with full-size tools. In order to reach the tight precision required, the manufacturing error budget is described so that each parameter can be optimized. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification at the low, mid, and high spatial frequency ranges.

  11. Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary

    PubMed Central

    Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen

    2017-01-01

    Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibration is necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and the assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are estimated through linear operations on the magnetometer interpolation outputs. Simulations and experiments were performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by the magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers, whose error model is similar to that of tri-axis magnetometers. PMID:28587115
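    The paper's algorithm uses gyro-aided interpolation and a full error model; as a much simpler sketch of the same calibration idea, the following recovers an assumed hard-iron bias and diagonal scale factors from synthetic, well-distributed magnetometer data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" field vectors: random unit directions times a 50 uT field.
n = 2000
v = rng.normal(size=(n, 3))
h_true = 50.0 * v / np.linalg.norm(v, axis=1, keepdims=True)

# Simulated sensor errors: per-axis scale factors (diagonal soft-iron
# approximation) and a constant hard-iron bias. Values are illustrative.
scale = np.array([1.10, 0.95, 1.05])
bias = np.array([12.0, -7.0, 3.0])
m = h_true * scale + bias

# Min/max calibration: with good angular coverage each measured axis spans
# [-50*s + b, +50*s + b], so bias and scale follow from the extremes.
mmin, mmax = m.min(axis=0), m.max(axis=0)
bias_est = (mmax + mmin) / 2.0
scale_est = (mmax - mmin) / (2.0 * 50.0)

# Calibrated samples should lie back on a sphere of radius ~50 uT.
h_cal = (m - bias_est) / scale_est
radii = np.linalg.norm(h_cal, axis=1)
```

    A production calibration would also estimate off-diagonal soft-iron and misalignment terms, typically via ellipsoid fitting rather than per-axis extremes.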

  12. Usability of a CKD educational website targeted to patients and their family members.

    PubMed

    Diamantidis, Clarissa J; Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C

    2012-10-01

    Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10-44 minutes). In total, 10 participants made at least one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet.

  13. Importance of interpolation and coincidence errors in data fusion

    NASA Astrophysics Data System (ADS)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
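    Leaving aside the averaging kernels and the interpolation/coincidence terms that are the subject of the paper, the core of a data-fusion step is an inverse-covariance weighted combination of coincident profiles on a common grid, which can be sketched as (numbers illustrative):

```python
import numpy as np

# Two retrievals of the same 3-level ozone profile on a common grid,
# each with its own error covariance (diagonal here for simplicity).
x1 = np.array([30.0, 45.0, 60.0])   # retrieval 1 (e.g. UV), arbitrary units
x2 = np.array([34.0, 41.0, 58.0])   # retrieval 2 (e.g. TIR)
S1 = np.diag([4.0, 1.0, 9.0])       # error covariance of retrieval 1
S2 = np.diag([1.0, 4.0, 1.0])       # error covariance of retrieval 2

# Inverse-covariance (precision-weighted) fusion.
S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
S_fused = np.linalg.inv(S1i + S2i)          # fused covariance
x_fused = S_fused @ (S1i @ x1 + S2i @ x2)   # fused profile
```

    The fused variance at every level is never larger than that of either input, which is the quantitative sense in which fusion improves the product.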

  14. Individual budgets for people with incontinence: results from a 'shopping' experiment within the British National Health Service.

    PubMed

    Fader, Mandy J; Cottenden, Alan M; Gage, Heather M; Williams, Peter; Getliffe, Katharine; Clarke-O'Neill, Sinead; Jamieson, Katharine M; Green, Nicholas J

    2014-04-01

    Most people with urinary incontinence are given limited choice when provided with absorbent products through the British National Health Service (NHS), even though the available range is large. To investigate users' preferences for four disposable designs (inserts, all-in-ones, belted/T-shaped and pull-ups) and towelling washable/reusable products, day and night. Shopping experiment. Community-dwelling women and men in England with moderate-to-heavy urinary incontinence recruited to a larger trial. Participants tested each design and selected products they would prefer with a range of different budgets. Design preferences (rankings); 'purchasing' decisions from designated budgets. Results: Eighty-five participants (49 men) tested products; 75 completed the shopping experiment. Inserts, most frequently supplied by the NHS, were ranked second to pull-ups by women and lowest by men. When faced with budget constraints, up to 40% of participants opted to 'mix-and-match' designs. Over 15 different combinations of products were selected by participants in the shopping experiment. Most (91%) stated a willingness to 'top-up' assigned budgets from income to secure preferred designs. Participants displayed diverse preferences. Enabling user choice of absorbent product design through individual budgets could improve satisfaction of consumers and efficiency of allocation of limited NHS resources. Recent policy for the NHS seeks to provide consumers with more control in their care. Extension of the concept of individual budgets to continence supplies could be feasible and beneficial for patients and provide better value-for-money within the NHS. Further research is warranted. © 2012 John Wiley & Sons Ltd.

  15. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
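    The Chernoff distance mentioned here can be evaluated numerically. The sketch below does so for two commuting (diagonal) qubit states, where the matrix powers in Q(ρ, σ) = min over 0 ≤ s ≤ 1 of Tr(ρ^s σ^(1-s)) reduce to elementwise powers of the eigenvalues; the two states are illustrative.

```python
import numpy as np

# Eigenvalues of two commuting (diagonal) qubit density matrices.
rho = np.array([0.9, 0.1])
sigma = np.array([0.4, 0.6])

# Q(rho, sigma) = min_{0<=s<=1} Tr(rho^s sigma^(1-s)); for commuting states
# the trace is a sum of elementwise powers of the eigenvalues.
s_grid = np.linspace(0.0, 1.0, 1001)
q = np.array([np.sum(rho**s * sigma**(1.0 - s)) for s in s_grid])
Q = q.min()
chernoff_distance = -np.log(Q)
```

    For n copies, the optimal symmetric discrimination error decays asymptotically as exp(-n · ξ), where ξ is the Chernoff distance computed above; at s = 0 and s = 1 the trace term equals 1, so Q < 1 whenever the states differ.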

  16. Forward to the Future: Estimating River Discharge with McFLI

    NASA Astrophysics Data System (ADS)

    Gleason, C. J.; Durand, M. T.; Garambois, P. A.

    2016-12-01

    The global surface water budget is still poorly understood, and improving our understanding of freshwater budgets requires coordination between in situ observations, models, and remote sensing. The upcoming launch of the NASA/CNES Surface Water and Ocean Topography (SWOT) satellite has generated considerable excitement as a new tool enabling hydrologists to tackle some of the most pressing questions facing their discipline. One question in particular which SWOT seems well suited to answer is river discharge (flow rate) estimation in ungauged basins: SWOT's anticipated measurements of river surface height and area have ushered in a new technique in hydrology- what we are here calling Mass conserved Flow Law Inversions, or McFLI. McFLI algorithms leverage classic hydraulic flow expressions (e.g. Manning's Equation, hydraulic geometry) within mass conserved river reaches to construct a simplified but still underconstrained system of equations to be solved for an unknown discharge. Most existing McFLI techniques have been designed to take advantage of SWOT's measurements and Manning's Equation: SWOT will observe changes in cross sectional area and river surface slope over time, so the McFLI need only solve for baseflow area and Manning's roughness coefficient. Recently published preliminary results have indicated that McFLI can be a viable tool in a global hydrologist's toolbox (discharge errors less than 30% as compared to gauges are possible in most cases). Therefore, we here outline the progress to date for McFLI techniques, and highlight three key areas for future development: 1) Maximize the accuracy and robustness of McFLI by incorporating ancillary data from satellites, models, and in situ observations. 2) Develop new McFLI techniques using novel or underutilized flow laws. 
3) Systematically test McFLI to define different inversion classes of rivers with well-defined error budgets based on geography and available data for use in gauged and ungauged basins alike.
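    The flow law at the heart of most existing McFLI formulations is Manning's equation. The forward calculation is straightforward (the reach geometry and roughness below are illustrative); McFLI works by inverting it for the quantities a satellite cannot observe, such as roughness and baseflow area:

```python
import math

def manning_discharge(A, P, S, n):
    """Discharge from Manning's equation (SI units):
    Q = (1/n) * A * R**(2/3) * S**(1/2), with hydraulic radius R = A / P."""
    R = A / P
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)

# Illustrative reach: rectangular cross-section, 50 m wide, 2 m deep.
A = 50.0 * 2.0       # flow area, m^2
P = 50.0 + 2 * 2.0   # wetted perimeter, m
S = 1e-4             # water-surface slope, m/m
n = 0.03             # Manning roughness coefficient, s/m^(1/3)

Q = manning_discharge(A, P, S, n)  # discharge, m^3/s
```

    SWOT observes changes in width, height, and slope; in the McFLI setting, A is split into an observed change plus an unknown baseflow area, and n and that baseflow area are solved for under mass conservation across the reach.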

  17. Use of a mobile terrestrial laser system to quantify the impact of rigid coastal protective structures on sandy beaches, Quebec, Canada

    NASA Astrophysics Data System (ADS)

    Van-Wierts, S.; Bernatchez, P.

    2012-04-01

    Coastal erosion is an important issue within the St-Lawrence estuary and gulf, especially in zones of unconsolidated material. Wide beaches are important coastal environments; they act as a buffer against breaking waves by absorbing and dissipating their energy, thus reducing the rate of coastal erosion. They also offer protection to humans and nearby ecosystems, providing habitat for plants, animals and lifeforms such as algae and microfauna. Conventional methods, such as aerial photograph analysis, fail to adequately quantify the morphosedimentary behavior of beaches at the scale of a hydrosedimentary cell. The lack of reliable and quantitative data leads to considerable overestimation and underestimation of sediment budgets. To address these gaps and to minimize the acquisition costs posed by airborne LiDAR surveys, a mobile terrestrial LiDAR system has been set up to acquire topographic data of the coastal zone. The acquisition system includes a LiDAR sensor, a high-precision navigation system (GPS-INS) and a video camera. Comparison of LiDAR data with 1050 DGPS control points shows a vertical mean absolute error of 0.1 m in beach areas. The extracted data are used to calculate sediment volumes, widths, slopes, and a sediment budget index. A high-accuracy coastal characterization is achieved through the integration of laser data and video. The main objective of this first project using this system is to quantify the impact of rigid coastal protective structures on sediment budget and beach morphology. Results show that the average sediment volume of beaches fronting a rock armour barrier (12 m3/m) was about one third that of natural beaches (35.5 m3/m). Natural beaches were also found to have twice the width (25.4 m) of the beaches bordering inhabited areas (12.7 m). The sediment budget index for beach areas is an excellent proxy to quickly identify deficit areas and therefore the coastal segments most at risk of erosion. 
The obtained LiDAR coverage also revealed that beach profiles made at an interval of more than 200 m on diversified coasts lead to results significantly different from reality. However, profile intervals have little impact on long uniform beaches.

  18. Disagreement between Hydrological and Land Surface models on the water budgets in the Arctic: why is this and which of them is right?

    NASA Astrophysics Data System (ADS)

    Blyth, E.; Martinez-de la Torre, A.; Ellis, R.; Robinson, E.

    2017-12-01

    The fresh-water budget of the Arctic region has a diverse range of impacts: the ecosystems of the region, the ocean circulation response to Arctic freshwater, methane emissions through changing wetland extent, as well as the available fresh water for human consumption. But many processes control the budget, including seasonal snow packs building and thawing, freezing soils and permafrost, extensive organic soils and large wetland systems. All these processes interact to create a complex hydrological system. In this study we examine a suite of 10 models that bring all those processes together in a 25-year reanalysis of the global water budget, and we assess their performance in the Arctic region. There are two approaches to modelling fresh-water flows at large scales, referred to here as 'Hydrological' and 'Land Surface' models. While both approaches include a physically based model of the water stores and fluxes, the Land Surface models link the water flows to an energy-based model for processes such as snowmelt and soil freezing. This study analyses the impact of that basic difference on the regional patterns of evapotranspiration, runoff generation and terrestrial water storage. For evapotranspiration, the Hydrological models tend to have a larger spatial range in model bias (difference from observations), implying greater errors than the Land Surface models. For instance, some regions such as Eastern Siberia have consistently lower evaporation in the Hydrological models than in the Land Surface models. For runoff, however, the results are the other way round, with a slightly larger spatial range in bias for the Land Surface models, implying greater errors than the Hydrological models. A simple reading would suggest that Hydrological models are designed to get the runoff right, while Land Surface models are designed to get the evapotranspiration right. 
Tracing the source of the difference suggests that it comes from the treatment of snow and evapotranspiration. The study reveals that expertise in the role of snow in runoff generation and evapotranspiration in the Hydrological and Land Surface modelling communities could be combined to improve the representation of fresh-water flows in the Arctic in both approaches. Improved observations are essential to make these modelling advances possible.

  19. 43 CFR 2200.0-9 - Information collection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 2200 of Group 2200 has been approved by the Office of Management and Budget under 44 U.S.C. 3501 et seq..., searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the...

  20. 43 CFR 2200.0-9 - Information collection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 2200 of Group 2200 has been approved by the Office of Management and Budget under 44 U.S.C. 3501 et seq..., searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the...

  1. 43 CFR 2200.0-9 - Information collection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 2200 of Group 2200 has been approved by the Office of Management and Budget under 44 U.S.C. 3501 et seq..., searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the...

  2. 5 CFR 1315.3 - Responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... standardized information and electronic data exchange to the central management agency. Systems shall provide complete, timely, reliable, useful and consistent financial management information. Payment capabilities... Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES PROMPT PAYMENT § 1315.3 Responsibilities...

  3. 42 CFR 1008.37 - Disclosure of ownership and related information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... requesting an advisory opinion must supply full and complete information as to the identity of each entity...)) and part 420 of this chapter. (Approved by the Office of Management and Budget under control number...

  4. Coast Guard : budget challenges for 2001 and beyond

    DOT National Transportation Integrated Search

    2000-03-15

    The testimony today, which is based on recently completed and ongoing work, addresses two topics: (1) the Coast Guard's progress in justifying and managing its Deepwater Project and (2) opportunities for improving the Coast Guard's operating efficien...

  5. Electric Generation Ownership, Market Concentration and Auction Size

    EPA Pesticide Factsheets

    In this Technical Support Document (TSD), EPA explains in more detail the issues and analyses completed for the alternative remedy of State Budgets/Intrastate Trading described in the proposed Transport Rule preamble in section V.D.5.

  6. Completing below-ground carbon budgets for pastures, recovering forests, and mature forests of Amazonia

    NASA Technical Reports Server (NTRS)

    Davidson, Eric A.; Nepstad, Daniel C.; Trumbore, Susan E.

    1994-01-01

    The objective of this grant was to complete below-ground carbon budgets for pastures and forest soils in the Amazon. Profiles of radon and carbon dioxide were used to estimate depth distribution of CO2 production in soil. This information is necessary for determining the importance of deep roots as sources of carbon inputs. Samples were collected for measuring root biomass from new research sites at Santana de Araguaia and Trombetas. Soil gases will be analyzed for CO2 and (14)CO2, and soil organic matter will be analyzed for C-14. Estimates of soil texture from the RADAMBRASIL database were merged with climate data to calculate soil water extraction by forest canopies during the dry season. In addition, a preliminary map of areas where deep roots are needed for deep soil water was produced. A list of manuscripts and papers prepared during the reporting periods is given.

  7. Uncertainty of the 20th century sea-level rise due to vertical land motion errors

    NASA Astrophysics Data System (ADS)

    Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy

    2017-09-01

    Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking, and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found the terrestrial reference frame and the estimated VLM uncertainty may contribute to the global SLC rate error by ±0.2 mm yr-1. In addition, a spurious global SLC acceleration may be introduced of up to ±4.8 × 10-3 mm yr-2. Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global values. The difference of VLM from two independent Glacio-Isostatic Adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm yr-1 and 2.8 × 10-3 mm yr-2, increasing up to 0.5 mm yr-1 and 9 × 10-3 mm yr-2 for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.

  8. Experimental demonstration of laser tomographic adaptive optics on a 30-meter telescope at 800 nm

    NASA Astrophysics Data System (ADS)

    Ammons, S. Mark; Johnson, Luke; Kupke, Renate; Gavel, Donald T.; Max, Claire E.

    2010-07-01

    A critical goal in the next decade is to develop techniques that will extend Adaptive Optics correction to visible wavelengths on Extremely Large Telescopes (ELTs). We demonstrate in the laboratory the highly accurate atmospheric tomography necessary to defeat the cone effect on ELTs, an essential milestone on the path to this capability. We simulate a high-order Laser Tomographic AO system for a 30-meter telescope with the LTAO/MOAO testbed at UCSC. Eight sodium Laser Guide Stars (LGSs) are sensed by 99x99 Shack-Hartmann wavefront sensors over 75". The AO system is diffraction-limited at a science wavelength of 800 nm (S ~ 6-9%) over a field of regard of 20" diameter. Open-loop WFS systematic error is observed to be proportional to the total input atmospheric disturbance and is nearly the dominant error budget term (81 nm RMS), exceeded only by tomographic wavefront estimation error (92 nm RMS). The total residual wavefront error for this experiment is comparable to that expected for wide-field tomographic adaptive optics systems of similar wavefront sensor order and LGS constellation geometry planned for Extremely Large Telescopes.
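    Independent error-budget terms such as those quoted above are conventionally combined in quadrature (root-sum-square). Using just the two terms reported in this abstract (any further budget terms would simply be appended to the dictionary):

```python
import math

# RSS combination of independent wavefront error terms (nm RMS).
terms = {
    "open-loop WFS systematic error": 81.0,
    "tomographic estimation error": 92.0,
}
total_rms = math.sqrt(sum(v ** 2 for v in terms.values()))  # ~122.6 nm RMS
```

    Quadrature addition assumes the error sources are statistically independent; correlated terms must be combined with their covariance instead.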

  9. Mass load estimation errors utilizing grab sampling strategies in a karst watershed

    USGS Publications Warehouse

    Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

    2003-01-01

    Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as those found in karst terrain. Many projects are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling paired with continuous flow records to be a viable data collection method for estimating mass load in the study watershed. Of weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
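    The interaction between grab-sample timing and diurnal variability can be sketched with synthetic data. Everything below (the diurnal amplitudes, the 30-day record, the sampling hours) is assumed for illustration and is not taken from the study:

```python
import math

# Synthetic hourly record for 30 days: discharge Q (L/s) and concentration
# C (mg/L), both with an assumed in-phase diurnal cycle.
hours = range(30 * 24)
Q = [100.0 + 20.0 * math.sin(2 * math.pi * h / 24.0) for h in hours]
C = [5.0 + 1.0 * math.sin(2 * math.pi * (h % 24) / 24.0) for h in hours]
dt = 3600.0  # seconds per hourly step

# "True" load from the continuous record (mg -> kg).
true_load = sum(c * q * dt for c, q in zip(C, Q)) / 1e6

def grab_estimate(sample_hour_of_day, interval_days):
    """Load estimated from grab samples taken every `interval_days` days at a
    fixed hour of day, paired with the continuous flow record."""
    idx = [d * 24 + sample_hour_of_day for d in range(0, 30, interval_days)]
    mean_c = sum(C[i] for i in idx) / len(idx)
    return mean_c * sum(q * dt for q in Q) / 1e6

# Sampling when C crosses its daily mean (hour 0) vs. at the diurnal peak (hour 6).
for hour in (0, 6):
    est = grab_estimate(hour, 7)
    print(hour, round(100.0 * (est - true_load) / true_load, 1), "% error")
```

    In this toy record, sampling at the hour where concentration crosses its daily mean keeps the load estimate within a few per cent, while sampling at the diurnal peak biases it high, mirroring the study's conclusion about sample timing.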

  10. Waffle mode error in the AEOS adaptive optics point-spread function

    NASA Astrophysics Data System (ADS)

    Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.

    2003-02-01

    Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5m class ground-based telescopes. However, provided sufficient detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance are already in existence. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the `waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.

  11. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g., density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
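    The Monte Carlo propagation described can be sketched for a single log. All distributions below (the volume-model CV, density, and carbon fraction) are hypothetical stand-ins for the decay-class-specific distributions the study assembled:

```python
import random

random.seed(1)

# Illustrative (not measured) uncertainty distributions for one decay class.
N = 20000
vol_nominal = 0.25               # m^3, hypothetical log volume from Smalian's formula
vol_cv = 0.10                    # relative SD attributed to volume-model choice
rho_mean, rho_sd = 350.0, 60.0   # kg/m^3, density of a mid-decay class
cc_mean, cc_sd = 0.48, 0.02      # carbon fraction of dry mass

draws = []
for _ in range(N):
    v = random.gauss(vol_nominal, vol_cv * vol_nominal)
    rho = random.gauss(rho_mean, rho_sd)
    cc = random.gauss(cc_mean, cc_sd)
    draws.append(v * rho * cc)   # kg C in the log for this draw

draws.sort()
mean = sum(draws) / N
lo, hi = draws[int(0.025 * N)], draws[int(0.975 * N)]
print(f"{mean:.1f} kg C, 95% interval [{lo:.1f}, {hi:.1f}]")
```

    Summing such draws across all logs in a plot (rather than summing the intervals) preserves the correlation structure and yields the plot-level uncertainty.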

  12. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, as well as uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
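    For the power factor PF = S²/ρ, independent relative uncertainties combine in quadrature, with the Seebeck term doubled because S enters squared. The numbers below are hypothetical values for illustration, not ZEM-3 specifications:

```python
import math

# Hypothetical measured values with combined (systematic + statistical)
# standard uncertainties; chosen for illustration only.
S, u_S = 200e-6, 6e-6        # Seebeck coefficient, V/K (3% uncertainty)
rho, u_rho = 1.0e-5, 4e-7    # electrical resistivity, ohm*m (4% uncertainty)

pf = S**2 / rho                                        # power factor, W/(m*K^2)
rel_u = math.sqrt((2 * u_S / S)**2 + (u_rho / rho)**2)  # relative combined uncertainty
print(f"PF = {pf:.2e} +/- {pf * rel_u:.2e} W m^-1 K^-2 ({100 * rel_u:.1f}%)")
```

    Note how the 3% Seebeck uncertainty contributes 6% to the power factor, illustrating why Seebeck accuracy dominates the power-factor error budget.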

  13. Didn't You Run the Spell Checker? Effects of Type of Spelling Error and Use of a Spell Checker on Perceptions of the Author

    ERIC Educational Resources Information Center

    Figueredo, Lauren; Varnhagen, Connie K.

    2005-01-01

    We investigated expectations regarding a writer's responsibility to proofread text for spelling errors when using a word processor. Undergraduate students read an essay and completed a questionnaire regarding their perceptions of the author and the quality of the essay. They then manipulated type of spelling error (no error, homophone error,…

  14. Terrestrial Planet Finder Interferometer Technology Status and Plans

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Ahmed, A.; Gappinger, R. O.; Ksendzov, A.; Lay, O. P.; Martin, S. R.; Peters, R. D.; Scharf, D. P.; Wallace, J. K.; Ware, B.

    2006-01-01

    A viewgraph presentation on the technology status and plans for Terrestrial Planet Finder Interferometer is shown. The topics include: 1) The Navigator Program; 2) TPF-I Project Overview; 3) Project Organization; 4) Technology Plan for TPF-I; 5) TPF-I Testbeds; 6) Nulling Error Budget; 7) Nulling Testbeds; 8) Nulling Requirements; 9) Achromatic Nulling Testbed; 10) Single Mode Spatial Filter Technology; 11) Adaptive Nuller Testbed; 12) TPF-I: Planet Detection Testbed (PDT); 13) Planet Detection Testbed Phase Modulation Experiment; and 14) Formation Control Testbed.

  15. Budget Studies of a Prefrontal Convective Rainband in Northern Taiwan Determined from TAMEX Data

    DTIC Science & Technology

    1993-06-01

    storm top accumulates less error in w calculation than an upward integration from the surface. Other Doppler studies, e.g., Chong and Testud (1983), Lin...contribute to the uncertainty of w is a result of the advection problem (Gal-Chen, 1982; Chong and Testud, 1983). Parsons et al. (1983) employed an...Boulder, CO., 95-102. Chong, M., and J. Testud, 1983: Three-dimensional wind field analysis from dual-Doppler radar data. Part III: The boundary condition

  16. Development of a sub-miniature rubidium oscillator for SEEKTALK application

    NASA Technical Reports Server (NTRS)

    Fruehauf, H.; Weidemann, W.; Jechart, E.

    1981-01-01

    Warm-up and size challenges to oscillator construction are presented, as well as the problems involved in these tasks. The performance of the M-100 military rubidium oscillator is compared to that of a subminiature rubidium oscillator (M-1000). Methods of achieving a 1.5-minute warm-up are discussed, as well as improvements in performance under adverse environmental conditions, including temperature, vibration, and magnetics. An attempt is made to construct an oscillator error budget under a set of arbitrary mission conditions.

  17. Improving the Cost Estimation of Space Systems. Past Lessons and Future Recommendations

    DTIC Science & Technology

    2008-01-01

    a reasonable gauge for the relative proportions of cost growth attributable to errors, decisions, and other causes in any MDAP. Analysis of the...program. The program offices visited were the Defense Meteorological Satellite Program (DMSP), Evolved Expendable Launch Vehicle (EELV), Advanced...3 years 1.8 0.9 3–8 years 1.8 0.9 8+ years 3.7 1.8 Staffing Requirement 7.4 3.7 areas represent earned value and budget drills; the tan area on top

  18. Implementing DRGs at Silas B. Hays Army Community Hospital: Enhancement of Utilization Review

    DTIC Science & Technology

    1990-12-01

    valuable assistance in creating this WordPerfect document from both ASCII and ENABLE files. I thank them for their patience. Lastly, I wish to thank COL Jack..."error" predicate is called from a trap. A longmenu should eventually be used to assist in locating the RCMAS file. rcmas-file:-not(existfile...B. Hays U.S. Army Community Hospital, Fort Ord, California has the potential to lose over $900 thousand in the supply budget category starting in

  19. Rules for Optical Testing

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2014-01-01

    Based on 30 years of optical testing experience, a lot of mistakes, a lot of learning and a lot of experience, I have defined seven guiding principles for optical testing - regardless of how small or how large the optical testing or metrology task: Fully Understand the Task, Develop an Error Budget, Continuous Metrology Coverage, Know where you are, Test like you fly, Independent Cross-Checks, Understand All Anomalies. These rules have been applied with great success to the in-process optical testing and final specification compliance testing of the JWST mirrors.

  20. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1983-01-01

    The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605
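    The quoted ~1 cm baseline contribution follows from a standard geometric rule of thumb (an approximation, not the paper's full covariance analysis): orbit error maps into a baseline roughly in the ratio of baseline length to satellite range.

```python
# Rule-of-thumb scaling: baseline error ~ (baseline length / satellite range)
# multiplied by the orbit error. The 20,000 km range is an assumed round
# figure for GPS satellites seen from the ground.
baseline_km = 1000.0
range_km = 20000.0
orbit_error_cm = 20.0   # horizontal orbit accuracy from the 20-station solution

baseline_error_cm = (baseline_km / range_km) * orbit_error_cm
print(baseline_error_cm)  # 1.0 cm, consistent with the abstract
```

    The same scaling explains why shorter baselines are far less sensitive to orbit error than continental-scale ones.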

  1. Modern Era Retrospective-analysis for Research and Applications (MERRA) Global Water and Energy Budgets

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Chen, Junye

    2009-01-01

    In the Summer of 2009, NASA's Modern Era Retrospective-analysis for Research and Applications (MERRA) will have completed 28 years of global satellite data analyses. Here, we characterize the global water and energy budgets of MERRA, compared with available observations and the latest reanalyses. In this analysis, the climatology of the global average components are studied as well as the separate land and ocean averages. In addition, the time series of the global averages are evaluated. For example, the global difference of precipitation and evaporation generally shows the influence of water vapor observations on the system. Since the observing systems change in time, especially remotely sensed observations of water, significant temporal variations can occur across the 28 year record. These then are also closely connected to changes in the atmospheric energy and water budgets. The net imbalance of the energy budget at the surface can be large and of different sign in different reanalyses. In MERRA, the imbalance of energy at the surface tends to improve with time, being smallest during the most recent period, when satellite observations are most abundant.

  2. Paving the Way to Successful Implementation: Identifying Key Barriers to Use of Technology-Based Therapeutic Tools for Behavioral Health Care.

    PubMed

    Ramsey, Alex; Lord, Sarah; Torrey, John; Marsch, Lisa; Lardiere, Michael

    2016-01-01

    This study aimed to identify barriers to use of technology for behavioral health care from the perspective of care decision makers at community behavioral health organizations. As part of a larger survey of technology readiness, 260 care decision makers completed an open-ended question about perceived barriers to use of technology. Using the Consolidated Framework for Implementation Research (CFIR), qualitative analyses yielded barrier themes related to characteristics of technology (e.g., cost and privacy), potential end users (e.g., technology literacy and attitudes about technology), organization structure and climate (e.g., budget and infrastructure), and factors external to organizations (e.g., broadband accessibility and reimbursement policies). Number of reported barriers was higher among respondents representing agencies with lower annual budgets and smaller client bases relative to higher budget, larger clientele organizations. Individual barriers were differentially associated with budget, size of client base, and geographic location. Results are discussed in light of implementation science frameworks and proactive strategies to address perceived obstacles to adoption and use of technology-based behavioral health tools.

  3. Paving the Way to Successful Implementation: Identifying Key Barriers to Use of Technology-Based Therapeutic Tools for Behavioral Health Care

    PubMed Central

    Ramsey, Alex; Lord, Sarah; Torrey, John; Marsch, Lisa; Lardiere, Michael

    2014-01-01

    This study aimed to identify barriers to use of technology for behavioral health care from the perspective of care decision-makers at community behavioral health organizations. As part of a larger survey of technology readiness, 260 care decision-makers completed an open-ended question about perceived barriers to use of technology. Using the Consolidated Framework for Implementation Research (CFIR), qualitative analyses yielded barrier themes related to characteristics of technology (e.g., cost, privacy), potential end-users (e.g., technology literacy, attitudes about technology), organization structure and climate (e.g., budget, infrastructure), and factors external to organizations (e.g., broadband accessibility, reimbursement policies). Number of reported barriers was higher among respondents representing agencies with lower annual budgets and smaller client bases relative to higher budget, larger clientele organizations. Individual barriers were differentially associated with budget, size of client base, and geographic location. Results are discussed in light of implementation science frameworks and proactive strategies to address perceived obstacles to adoption and use of technology-based behavioral health tools. PMID:25192755

  4. Predicting and interpreting identification errors in military vehicle training using multidimensional scaling.

    PubMed

    Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R

    2014-01-01

    We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
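    Classical (Torgerson) multidimensional scaling of a dissimilarity matrix, with confusions predicted as nearest neighbours in the recovered space, can be sketched as follows. The 4×4 matrix is a toy example, not the study's vehicle data:

```python
import numpy as np

# Toy symmetric dissimilarity matrix for four hypothetical vehicle types
# (0 = identical); two visually similar pairs, far from each other.
D = np.array([[0.0, 1.0, 4.0, 4.2],
              [1.0, 0.0, 4.1, 4.3],
              [4.0, 4.1, 0.0, 0.9],
              [4.2, 4.3, 0.9, 0.0]])

def classical_mds(D, k=2):
    """Torgerson's classical MDS: double-center the squared dissimilarities,
    then embed with the top-k eigenvectors scaled by sqrt(eigenvalue)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)                 # ascending eigenvalues
    order = np.argsort(w)[::-1][:k]          # take the k largest
    return V[:, order] * np.sqrt(np.clip(w[order], 0, None))

X = classical_mds(D)

# Predict the most likely confusion for each item: its nearest neighbour
# in the recovered perceptual space.
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
print(dist.argmin(axis=1))  # nearest neighbours: [1 0 3 2]
```

    Items whose embedded points cluster together are the predicted confusion pairs; attention-weighted dimensions would be read off the axes of X.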

  5. Expressing the sense of the House of Representatives that the Speaker should immediately request a conference and appoint conferees to complete work on a fiscal year 2014 budget resolution with the Senate.

    THOMAS, 113th Congress

    Rep. Van Hollen, Chris [D-MD-8]

    2013-04-23

    House - 06/20/2013 Motion to Discharge Committee filed by Mr. Van Hollen. Petition No: 113-3. (All Actions) Notes: On 6/20/2013, a motion was filed to discharge the Committee on the Budget from the consideration of H.Res.174. A discharge petition requires 218 signatures for further action. (Discharge Petition No. 113-3: text with signatures.) Tracker: This bill has the status Introduced.

  6. Advertising Guidelines

    ERIC Educational Resources Information Center

    Riso, Ovid

    1977-01-01

    Advertising should be viewed as a sales-building investment and not simply an element of business outlay that actually is a completely controllable expense. Suggestions deal with the sales budget, profiling the store and its customers, advertising media, promotional ideas, and consumer protection. (LBH)

  7. 76 FR 24049 - Information Collection Request Sent to the Office of Management and Budget (OMB) for Approval...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-29

    ... proposed scope of work. Organizational history in completing the kind of work proposed. Curriculum vitae for principal investigator(s) and project director(s). Written consent by State or tribal authorities...

  8. Assessing primary care data quality.

    PubMed

    Lim, Yvonne Mei Fong; Yusof, Maryati; Sivasampu, Sheamini

    2018-04-16

    Purpose: The purpose of this paper is to assess National Medical Care Survey data quality. Design/methodology/approach: Data completeness and representativeness were computed for all observations, while other data quality measures were assessed using a 10 per cent sample from the National Medical Care Survey database; i.e., 12,569 primary care records from 189 public and private practices were included in the analysis. Findings: Data field completion ranged from 69 to 100 per cent. Error rates for data transfer from paper to web-based application varied between 0.5 and 6.1 per cent. Error rates arising from diagnosis and clinical process coding were higher than those for medication coding. Data fields that involved free text entry were more prone to errors than those involving selection from menus. The authors found that completeness, accuracy, coding reliability and representativeness were generally good, while data timeliness needs to be improved. Research limitations/implications: Only data entered into a web-based application were examined. Data omissions and errors in the original questionnaires were not covered. Practical implications: Results from this study provided informative and practicable approaches to improve primary health care data completeness and accuracy, especially in developing nations where resources are limited. Originality/value: Primary care data quality studies in developing nations are limited. Understanding errors and missing data enables researchers and health service administrators to prevent quality-related problems in primary care data.

  9. Usability of a CKD Educational Website Targeted to Patients and Their Family Members

    PubMed Central

    Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C.

    2012-01-01

    Background and objectives: Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Design, setting, participants, & measurements: Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Results: Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10–44 minutes). In total, 10 participants had greater than or equal to one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Conclusions: Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet. PMID:22798537

  10. Scope, completeness, and accuracy of drug information in Wikipedia.

    PubMed

    Clauson, Kevin A; Polen, Hyla H; Boulos, Maged N Kamel; Dzenowagis, Joan H

    2008-12-01

    With the advent of Web 2.0 technologies, user-edited online resources such as Wikipedia are increasingly tapped for information. However, there is little research on the quality of health information found in Wikipedia. Our objective was to compare the scope, completeness, and accuracy of drug information in Wikipedia with that of a free, online, traditionally edited database (Medscape Drug Reference [MDR]). Wikipedia and MDR were assessed on 8 categories of drug information. Questions were constructed and answers were verified with authoritative resources. Wikipedia and MDR were evaluated according to scope (breadth of coverage) and completeness. Accuracy was tracked by factual errors and errors of omission. Descriptive statistics were used to summarize the components. Fisher's exact test was used to compare scope, and a paired Student's t-test was used to compare current Wikipedia entries with those from 90 days before the current access. Wikipedia was able to answer significantly fewer drug information questions (40.0%) compared with MDR (82.5%; p < 0.001). Wikipedia performed poorly regarding information on dosing, with a score of 0% versus the MDR score of 90.0%. Answers found in Wikipedia were 76.0% complete, while MDR provided answers that were 95.5% complete; overall, Wikipedia answers were less complete than those in Medscape (p < 0.001). No factual errors were found in Wikipedia, whereas 4 answers in Medscape conflicted with the answer key; errors of omission were higher in Wikipedia (n = 48) than in MDR (n = 14). There was a marked improvement in Wikipedia over time, as current entries were superior to those 90 days prior (p = 0.024). Wikipedia has a more narrow scope, is less complete, and has more errors of omission than the comparator database. Wikipedia may be a useful point of engagement for consumers, but is not authoritative and should only be a supplemental source of drug information.
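    The scope comparison via Fisher's exact test can be reproduced with a small self-contained implementation. The counts below assume 80 questions per source, which is consistent with the percentages quoted (32/80 = 40.0%, 66/80 = 82.5%) but is an assumption:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed table."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def p(x):  # probability of the table whose top-left cell is x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# Rows = source (Wikipedia, MDR); columns = (answered, not answered).
p_value = fisher_exact_two_sided(32, 48, 66, 14)
print(p_value < 0.001)  # True, consistent with the reported p < 0.001
```

    The small tolerance on the comparison guards against floating-point ties when deciding which tables are "as extreme" as the observed one.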

  11. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    NASA Technical Reports Server (NTRS)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.

    2013-01-01

    Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale observations prescribed. Errors in estimating the observations result in uncertainty in the simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using the more traditional single best estimate simulation only.

  12. Performance and Accuracy of Lightweight and Low-Cost GPS Data Loggers According to Antenna Positions, Fix Intervals, Habitats and Animal Movements

    PubMed Central

    Forin-Wiart, Marie-Amélie; Hubert, Pauline; Sirguey, Pascal; Poulle, Marie-Lazarine

    2015-01-01

    Recently developed low-cost Global Positioning System (GPS) data loggers are promising tools for wildlife research because of their affordability for low-budget projects and ability to simultaneously track a greater number of individuals compared with expensive built-in wildlife GPS. However, the reliability of these devices must be carefully examined because they were not developed to track wildlife. This study aimed to assess the performance and accuracy of commercially available GPS data loggers for the first time using the same methods applied to test built-in wildlife GPS. The effects of antenna position, fix interval and habitat on the fix-success rate (FSR) and location error (LE) of CatLog data loggers were investigated in stationary tests, whereas the effects of animal movements on these errors were investigated in motion tests. The units operated well and presented consistent performance and accuracy over time in stationary tests, and the FSR was good for all antenna positions and fix intervals. However, the LE was affected by the GPS antenna and fix interval. Furthermore, completely or partially obstructed habitats reduced the FSR by up to 80% in households and increased the LE. Movement across habitats had no effect on the FSR, whereas forest habitat influenced the LE. Finally, the mean FSR (0.90 ± 0.26) and LE (15.4 ± 10.1 m) values from low-cost GPS data loggers were comparable to those of built-in wildlife GPS collars (71.6% of fixes with LE < 10 m for motion tests), thus confirming their suitability for use in wildlife studies. PMID:26086958
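    Location error (LE) in such stationary tests is simply the great-circle distance between each fix and the surveyed true position. A haversine sketch (the coordinates below are hypothetical, not from the study):

```python
import math

def location_error_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between a GPS fix and the true
    position, in metres; adequate at the few-metre scales tested here."""
    R = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical stationary-test fix a small offset from a surveyed benchmark.
print(round(location_error_m(49.25000, 4.03000, 49.25010, 4.03010), 1))
```

    For the hypothetical fix above the error comes out around 13 m, the same order as the mean LE of 15.4 m reported for the loggers.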

  13. Vorticity imbalance and stability in relation to convection

    NASA Technical Reports Server (NTRS)

    Read, W. L.; Scoggins, J. R.

    1977-01-01

    A complete synoptic-scale vorticity budget was related to convective storm development in the eastern two-thirds of the United States. The 3-h sounding interval permitted a study of time changes of the vorticity budget in areas of convective storms. Results of analyses revealed significant changes in values of terms in the vorticity equation at different stages of squall line development. Average budgets for all areas of convection indicate systematic imbalance in the terms in the vorticity equation. This imbalance resulted primarily from sub-grid scale processes. Potential instability in the lower troposphere was analyzed in relation to the development of convective activity. Instability was related to areas of convection; however, instability alone was inadequate for forecast purposes. Combinations of stability and terms in the vorticity equation in the form of indices succeeded in depicting areas of convection better than any one item separately.

  14. Conference OKs science budgets

    NASA Astrophysics Data System (ADS)

    With the budget process all but complete for next fiscal year, observers at the National Science Foundation and the National Aeronautics and Space Administration were saying that science had not done that badly in Congress, for an election year. NSF got half the budget increase it requested, NASA two-thirds. The Space Station did well, at the expense of environmental and social programs, which are funded by Congress from the same pot of money as NASA and NSF. A House-Senate conference finished work on a $59 billion appropriations bill for the Department of Housing and Urban Development and independent agencies, including EPA, NASA, and NSF, in early August. The House and Senate then quickly passed the measure before their recess; the President is expected to sign it soon. Included in the Fiscal Year 1989 spending bill are $1.885 billion for NSF, a 9.8% increase over FY 1988, and $10.7 billion for NASA, 18.5% more than the year before.

  15. Financial savvy: the value of business acumen in oncology nursing.

    PubMed

    Rishel, Cindy J

    2014-05-01

    Have you given serious thought to your individual ability to affect the high cost of health care? If so, you may have determined that the opportunity to have any meaningful effect on cost of services for patients with cancer is limited. You may believe that budgets are the responsibility of nursing leadership. Indeed, the development of the unit or department budget is an activity that many of us have no direct (or even indirect) role in completing. Once the budget is finalized, we are frequently given directives to control our costs and improve the financial bottom line for our employers. One could argue that this is a particularly difficult missive for oncology nurses with the soaring costs of chemotherapy and biotherapy drugs, the expenses incurred to provide supportive care needed by patients with cancer, and the need to provide services to the increasing number of cancer survivors.

  16. Differential tracking data types for accurate and efficient Mars planetary navigation

    NASA Technical Reports Server (NTRS)

    Edwards, C. D., Jr.; Kahn, R. D.; Folkner, W. M.; Border, J. S.

    1991-01-01

    Ways in which high-accuracy differential observations of two or more deep space vehicles can dramatically extend the power of earth-based tracking over conventional range and Doppler tracking are discussed. Two techniques are considered: spacecraft-spacecraft differential very long baseline interferometry (S/C-S/C Delta(VLBI)) and same-beam interferometry (SBI). The tracking and navigation capabilities of conventional range, Doppler, and quasar-relative Delta(VLBI) are reviewed, and the S/C-S/C Delta(VLBI) and SBI data types are introduced. For each data type, the formation of the observable is discussed, an error budget describing how physical error sources manifest themselves in the observable is presented, and potential applications of the technique for Space Exploration Initiative scenarios are examined. Requirements for spacecraft and ground systems needed to enable and optimize these types of observations are discussed.

  17. 77 FR 42304 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-18

    ..., Office of Management and Budget (OMB), Attention: Desk Officer for EPA, 725 17th Street NW., Washington..., and submit this information to EPA. EPA uses this information to construct a complete picture of the...

  18. Patterning control strategies for minimum edge placement error in logic devices

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim

    2017-03-01

    In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
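    Simplified treatments of an EPE budget like the one described above often combine independent contributors in quadrature (root-sum-square). The sketch below is an illustrative model only, not the paper's actual budget; the contributor values are hypothetical, and CD uniformity is given half weight on the assumption that a CD change moves each of a feature's two edges by half that amount.

```python
import math

def epe_rss(overlay_3s, cdu_3s, local_cd_3s, local_placement_3s):
    """Combine independent edge-placement contributors in quadrature.

    Each argument is a 3-sigma value in nm. CDU enters at half weight
    because a CD error moves each of the two edges by half the CD
    change. (Illustrative model, not the paper's budget.)
    """
    return math.sqrt(overlay_3s**2 + (cdu_3s / 2.0)**2
                     + local_cd_3s**2 + local_placement_3s**2)

# Hypothetical 3-sigma contributor values in nm:
total = epe_rss(overlay_3s=2.5, cdu_3s=1.5,
                local_cd_3s=1.0, local_placement_3s=1.2)
print(round(total, 2))
```

Because the contributors add in quadrature, the largest term (here overlay) dominates the total, which is why the paper's optimization strategy targets each contributor individually.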

  19. Alignment error of mirror modules of advanced telescope for high-energy astrophysics due to wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Zocchi, Fabio E.

    2017-10-01

    One of the approaches being tested for the integration of the mirror modules of the advanced telescope for high-energy astrophysics x-ray mission of the European Space Agency consists of aligning each module on an optical bench operated at an ultraviolet wavelength. The mirror module is illuminated by a plane wave and, in order to overcome diffraction effects, the centroid of the image produced by the module is used as a reference to assess the accuracy of the optical alignment of the mirror module itself. Among other sources of uncertainty, the wave-front error of the plane wave also introduces an error in the position of the centroid, thus affecting the quality of the mirror module alignment. The power spectral density of the position of the point spread function centroid is here derived from the power spectral density of the wave-front error of the plane wave in the framework of the scalar theory of Fourier diffraction. This allows a specification for the quality of the collimator used to generate the plane wave to be derived from the error-budget contribution allocated to the uncertainty of the centroid position. The theory applies generally whenever Fourier diffraction is a valid approximation, in which case the obtained result is identical to that derived by geometrical optics considerations.
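    In the geometrical-optics limit mentioned at the end of the abstract, the centroid displacement reduces to the focal length times the mean wavefront slope across the pupil. A minimal numeric sketch, using hypothetical collimator parameters (the 10 m focal length, 0.25 m pupil, and 50 nm tilt error are illustrative assumptions, not values from the paper):

```python
def centroid_shift(focal_length_m, mean_slope_rad):
    """Geometrical-optics estimate: a mean wavefront slope of
    mean_slope_rad radians across the pupil shifts the PSF centroid
    by approximately f * slope in the focal plane (small-angle limit)."""
    return focal_length_m * mean_slope_rad

# Hypothetical numbers: 10 m focal length, 50 nm of tilt wavefront
# error across a 0.25 m pupil -> mean slope = 50e-9 / 0.25 rad.
slope = 50e-9 / 0.25
shift = centroid_shift(10.0, slope)
print(shift)  # 2e-06 m, i.e. 2 micrometres of centroid displacement
```

Working backwards through this relation is how an allocated centroid-position uncertainty can be turned into a collimator wavefront-quality specification.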

  20. Associations among selective attention, memory bias, cognitive errors and symptoms of anxiety in youth.

    PubMed

    Watts, Sarah E; Weems, Carl F

    2006-12-01

    The purpose of this study was to examine the linkages among selective attention, memory bias, cognitive errors, and anxiety problems by testing a model of the interrelations among these cognitive variables and childhood anxiety disorder symptoms. A community sample of 81 youth (38 females and 43 males) aged 9-17 years and their parents completed measures of the child's anxiety disorder symptoms. Youth completed assessments measuring selective attention, memory bias, and cognitive errors. Results indicated that selective attention, memory bias, and cognitive errors were each correlated with childhood anxiety problems and provided support for a cognitive model of anxiety, which posits that these three biases are associated with childhood anxiety problems. Only limited support for significant interrelations among selective attention, memory bias, and cognitive errors was found. Finally, the results point towards an effective strategy for moving the assessment of selective attention to younger and community samples of youth.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Amanda S.; Brosha, Eric

    This is a progress report for the demonstration of a prototype hydrogen sensor and electronics package. Five tasks are associated with this effort, and four have been completed as of August 2016: Station Demonstration and Site Recommendation, Order Sensor Equipment, Build Sensors, and Install Sensors. The final task, Sensor Demonstration and Data Analysis, has an expected completion date of January 26, 2017. The report details each task, describes the work currently in progress, and presents the budget and planned work for July 27, 2016, to January 26, 2017.

  2. Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels

    NASA Technical Reports Server (NTRS)

    Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.

    1993-01-01

    The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978, until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have inflight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations, 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/m^2) higher in the excited Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as +/- 0.005 percent. The worst-case error estimate is +/- 0.03 percent.
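    The quoted solar-cycle variation of 0.1 percent (1.4 W/m^2) can be sanity-checked against an assumed mean total solar irradiance; the 1372 W/m^2 figure below is an assumption near the approximate Nimbus-7 ERB scale, not a value taken from the abstract.

```python
# Consistency check of the quoted solar-cycle variation:
# 0.1 percent of a total solar irradiance near 1372 W/m^2
# (assumed, roughly the Nimbus-7 ERB scale) is about 1.4 W/m^2.
tsi = 1372.0                 # W/m^2, assumed mean value
variation = 0.001 * tsi      # 0.1 percent of the mean
print(round(variation, 2))   # ~1.37 W/m^2, consistent with 1.4 W/m^2
```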

  3. Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas.

    PubMed

    Jackson, Tracie R; Halford, Keith J; Gardner, Philip M

    2018-03-06

    Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. However, desert playa ETg rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are within the measurement error of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are within the measurement error of BREB and EC approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the uncertainty thresholds of BREB- and EC-based micrometeorological measurements. Discharge from desert playas likely contributes less than 2% of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas also is negligible in other basins. Simulation results also show that ETg from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  4. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  5. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.

  6. Nurses' behaviors and visual scanning patterns may reduce patient identification errors.

    PubMed

    Marquard, Jenna L; Henneman, Philip L; He, Ze; Jo, Junghee; Fisher, Donald L; Henneman, Elizabeth A

    2011-09-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20) administered medications to 3 patients in a simulated clinical setting, with 1 patient having an embedded ID error. Error-identifying nurses tended to complete more process steps in a similar amount of time compared with non-error-identifying nurses, and tended to scan information across artifacts (e.g., ID band, patient chart, medication label) rather than fixating on several pieces of information on a single artifact before fixating on another artifact. Non-error-identifying nurses tended to increase their durations of off-topic conversations, a type of process interruption, over the course of the trials; the difference between groups was significant in the trial with the embedded ID error. Error-identifying nurses tended to have their most fixations in a row on the patient's chart, whereas non-error-identifying nurses did not tend to have a single artifact on which they consistently fixated. Finally, error-identifying nurses tended to have predictable eye fixation sequences across artifacts, whereas non-error-identifying nurses tended to have seemingly random eye fixation sequences. This finding has implications for nurse training and the design of tools and technologies that support nurses as they complete the medication administration process. (c) 2011 APA, all rights reserved.

  7. Importance of measuring discharge and sediment transport in lesser tributaries when closing sediment budgets

    NASA Astrophysics Data System (ADS)

    Griffiths, Ronald E.; Topping, David J.

    2017-11-01

    Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a river reach is in a state of sediment accumulation, deficit or stasis. Many sediment-budget studies have estimated the sediment loads of ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of loads in regions where rainfall events, contributing geology, and vegetation have large spatial and/or temporal variability. Previous estimates of the combined mean-annual sediment load of all ungaged tributaries to the Colorado River downstream from Glen Canyon Dam vary by over a factor of three; this range in estimated sediment loads has resulted in different researchers reaching opposite conclusions on the sign (accumulation or deficit) of the sediment budget for particular reaches of the Colorado River. To better evaluate the supply of fine sediment (sand, silt, and clay) from these tributaries to the Colorado River, eight gages were established on previously ungaged tributaries in Glen, Marble, and Grand canyons. Results from this sediment-monitoring network show that previous estimates of the annual sediment loads of these tributaries were too high and that the sediment budget for the Colorado River below Glen Canyon Dam is more negative than previously calculated by most researchers. As a result of locally intense rainfall events with footprints smaller than the receiving basin, floods from a single tributary in semi-arid regions can have large (≥ 10 ×) differences in sediment concentrations between equal magnitude flows. 
Because sediment loads do not necessarily correlate with drainage size, and may vary by two orders of magnitude on an annual basis, using techniques such as sediment-yield equations to estimate the sediment loads of ungaged tributaries may lead to large errors in sediment budgets.
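    A sediment budget of the kind discussed above balances inputs against outputs and the measured change in storage; the imbalance term is what absorbs errors in estimated tributary loads. A toy sketch with hypothetical load values (kilotonnes), not data from the Colorado River study:

```python
def budget_imbalance(inputs_kt, outputs_kt, storage_change_kt):
    """Sediment-budget imbalance: inputs minus outputs minus the
    measured change in storage. A nonzero result flags unmeasured
    sources/sinks or measurement error (all values in kilotonnes)."""
    return sum(inputs_kt) - sum(outputs_kt) - storage_change_kt

# Hypothetical reach: mainstem inflow plus one gaged tributary in,
# mainstem outflow out, and a measured 80 kt loss of stored sediment.
imbalance = budget_imbalance(inputs_kt=[850.0, 120.0],
                             outputs_kt=[1020.0],
                             storage_change_kt=-80.0)
print(imbalance)  # 30.0 kt unaccounted for
```

If an ungaged-tributary load in `inputs_kt` is off by an order of magnitude, as the abstract warns, the error lands directly in this imbalance term and can flip the inferred sign of the budget.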

  8. Importance of measuring discharge and sediment transport in lesser tributaries when closing sediment budgets

    USGS Publications Warehouse

    Griffiths, Ronald; Topping, David

    2017-01-01

    Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a river reach is in a state of sediment accumulation, deficit or stasis. Many sediment-budget studies have estimated the sediment loads of ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of loads in regions where rainfall events, contributing geology, and vegetation have large spatial and/or temporal variability. Previous estimates of the combined mean-annual sediment load of all ungaged tributaries to the Colorado River downstream from Glen Canyon Dam vary by over a factor of three; this range in estimated sediment loads has resulted in different researchers reaching opposite conclusions on the sign (accumulation or deficit) of the sediment budget for particular reaches of the Colorado River. To better evaluate the supply of fine sediment (sand, silt, and clay) from these tributaries to the Colorado River, eight gages were established on previously ungaged tributaries in Glen, Marble, and Grand canyons. Results from this sediment-monitoring network show that previous estimates of the annual sediment loads of these tributaries were too high and that the sediment budget for the Colorado River below Glen Canyon Dam is more negative than previously calculated by most researchers. As a result of locally intense rainfall events with footprints smaller than the receiving basin, floods from a single tributary in semi-arid regions can have large (≥ 10 ×) differences in sediment concentrations between equal magnitude flows.
Because sediment loads do not necessarily correlate with drainage size, and may vary by two orders of magnitude on an annual basis, using techniques such as sediment-yield equations to estimate the sediment loads of ungaged tributaries may lead to large errors in sediment budgets.

  9. Cost-effectiveness and budget impact analyses of a colorectal cancer screening programme in a high adenoma prevalence scenario using MISCAN-Colon microsimulation model.

    PubMed

    Arrospide, Arantzazu; Idigoras, Isabel; Mar, Javier; de Koning, Harry; van der Meulen, Miriam; Soto-Gordoa, Myriam; Martinez-Llorente, Jose Miguel; Portillo, Isabel; Arana-Arri, Eunate; Ibarrondo, Oliver; Lansdorp-Vogelaar, Iris

    2018-04-25

    The Basque Colorectal Cancer Screening Programme began in 2009 and has been fully implemented since 2013. Faecal immunological testing was used to screen individuals between 50 and 69 years old. Colorectal cancer in the Basque Country has unusual epidemiological features: colorectal cancer incidence is similar to that of other European countries, while adenoma prevalence is higher. The objective of our study was to evaluate the programme economically via cost-effectiveness and budget impact analyses with microsimulation models. We applied the Microsimulation Screening Analysis (MISCAN)-Colon model to predict trends in colorectal cancer incidence and mortality and to quantify the short- and long-term effects and costs of the Basque Colorectal Cancer Screening Programme. The model was calibrated to the Basque demographics in 2008 and to age-specific colorectal cancer incidence data in the Basque Cancer Registry from 2005 to 2008, before screening began. The model was also calibrated to the high adenoma prevalence observed for the Basque population in a previously published study. The multi-cohort approach used in the model included all the cohorts in the programme during 30 years of implementation, with lifetime follow-up. Unit costs were obtained from the Basque Health Service, and both cost-effectiveness and budget impact analyses were carried out. The goodness-of-fit of the model adaptation to observed programme data provided evidence of its validity. In the cost-effectiveness analysis, the savings from treatment were larger than the added costs due to screening. Thus, the Basque programme was dominant compared to no screening, as life expectancy increased by 29.3 days per person. The savings in the budget analysis appeared 10 years after the complete implementation of the programme. The average annual budget was €73.4 million from year 2023 onwards. This economic evaluation showed a screening intervention with a major health gain that also produced net savings when a long follow-up was used to capture the late economic benefit. The number of colonoscopies required was high but remained within the capacity of the Basque Health Service. So far in Europe, no other population colorectal cancer screening programme has been evaluated by budget impact analysis.

  10. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10^-15 sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
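    The mapping from delay scatter to angular position uncertainty quoted above follows, to first order, from theta ~ c*tau/B for a baseline of length B. A quick check with illustrative numbers (the 8000 km baseline is an assumption representative of intercontinental DSN baselines, not a figure from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def delay_to_angle_rad(delay_s, baseline_m):
    """First-order mapping of a VLBI delay error to an angular
    position error: theta ~ c * tau / B."""
    return C * delay_s / baseline_m

# 75 ps of structure-induced delay scatter (middle of the quoted
# 50-100 psec range) on an ~8000 km intercontinental baseline:
theta = delay_to_angle_rad(75e-12, 8.0e6)
print(theta * 1e9)  # ~2.8 nrad, consistent with the quoted 2 to 5 nrad
```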

  11. Solar adaptive optics with the DKIST: status report

    NASA Astrophysics Data System (ADS)

    Johnson, Luke C.; Cummings, Keith; Drobilek, Mark; Gregory, Scott; Hegwer, Steve; Johansson, Erik; Marino, Jose; Richards, Kit; Rimmele, Thomas; Sekulic, Predrag; Wöger, Friedrich

    2014-08-01

    The DKIST wavefront correction system will be an integral part of the telescope, providing active alignment control, wavefront correction, and jitter compensation to all DKIST instruments. The wavefront correction system will operate in four observing modes: diffraction-limited, seeing-limited on-disk, seeing-limited coronal, and limb occulting with image stabilization. Wavefront correction for DKIST includes two major components: active optics to correct low-order wavefront and alignment errors, and adaptive optics to correct wavefront errors and high-frequency jitter caused by atmospheric turbulence. The adaptive optics system is built around a fast tip-tilt mirror and a 1600-actuator deformable mirror, both of which are controlled by an FPGA-based real-time system running at 2 kHz. It is designed to achieve an on-axis Strehl of 0.3 at 500 nm in median seeing (r0 = 7 cm) and a Strehl of 0.6 at 630 nm in excellent seeing (r0 = 20 cm). We present the current status of the DKIST high-order adaptive optics, focusing on system design, hardware procurements, and error budget management.
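    Strehl targets like those quoted above are typically tracked through the residual wavefront variance. A common back-of-envelope treatment (assumed here; this is not the DKIST error budget itself) combines the Marechal approximation with a deformable-mirror fitting-error term; the 0.1 m actuator pitch and 0.28 coefficient below are illustrative assumptions.

```python
import math

def marechal_strehl(sigma2_rad2):
    """Marechal approximation: Strehl ~ exp(-sigma^2) for residual
    wavefront variance sigma^2 in rad^2."""
    return math.exp(-sigma2_rad2)

def fitting_variance(d_m, r0_m, coeff=0.28):
    """DM fitting-error variance (rad^2) for actuator pitch d and
    Fried parameter r0; coeff depends on the influence functions."""
    return coeff * (d_m / r0_m) ** (5.0 / 3.0)

# Illustrative numbers only: ~0.1 m projected actuator pitch,
# median seeing r0 = 7 cm.
s = marechal_strehl(fitting_variance(0.1, 0.07))
print(round(s, 2))  # fitting error alone; other budget terms lower this
```

Fitting error is only one line of an AO error budget; adding servo lag, measurement noise, and other terms brings such an estimate down toward a system-level Strehl target.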

  12. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version]

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
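    G-SAMPLE itself is a maximum-likelihood estimator built on thruster firings and a full spacecraft dynamics model. As a much simpler stand-in for the idea, a least-squares fit of an unknown mass to paired force and acceleration measurements (F = m*a) can be sketched as follows; the data below are synthetic, not from the paper's simulations.

```python
def estimate_mass(forces_n, accels_ms2):
    """Least-squares estimate of an unknown mass from paired force (N)
    and acceleration (m/s^2) measurements, minimizing the residual of
    F = m*a. A toy stand-in for the maximum-likelihood identification
    described above, not the G-SAMPLE algorithm itself."""
    num = sum(f * a for f, a in zip(forces_n, accels_ms2))
    den = sum(a * a for a in accels_ms2)
    return num / den

# Synthetic noisy data for a true 1.0 kg sample mass:
accels = [0.010, 0.015, 0.020, 0.025]
forces = [0.0102, 0.0149, 0.0203, 0.0248]  # F = m*a plus sensor noise
print(round(estimate_mass(forces, accels), 3))  # close to the true 1.0 kg
```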

  13. Guidelines for conducting vulnerability assessments. [Susceptibility of programs to unauthorized use of resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-06-01

    The US General Accounting Office and executive agency Inspectors General have reported losses of millions of dollars in government funds resulting from fraud, waste and error. The Administration and the Congress have initiated determined efforts to eliminate such losses from government programs and activities. Primary emphasis in this effort is on the strengthening of accounting and administrative controls. Accordingly, the Office of Management and Budget (OMB) issued Circular No. A-123, Internal Control Systems, on October 28, 1981. The campaign to improve internal controls was endorsed by the Secretary of Energy in a memorandum to Heads of Departmental Components, dated March 13, 1981, Subject: Internal Control as a Deterrent to Fraud, Waste and Error. A vulnerability assessment is a review of the susceptibility of a program or function to unauthorized use of resources, errors in reports and information, and illegal or unethical acts. It is based on considerations of the environment in which the program or function is carried out, the inherent riskiness of the program or function, and a preliminary evaluation as to whether adequate safeguards exist and are functioning.

  14. Comparison of TRMM 2A25 Products Version 6 and Version 7 with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Schwaller, M.; Petersen, W.; Zhang, J.

    2012-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground level. The problem was addressed in a previous paper by comparison of the 2A25 version 6 (V6) product with reference values derived from NOAA/NSSL's ground radar-based National Mosaic and QPE system (NMQ/Q2). The primary contribution of this study is to compare the new 2A25 version 7 (V7) products that were recently released as a replacement for V6. This new version is considered superior over land areas. Several aspects of the two versions are compared and quantified, including rainfall rate distributions, systematic biases, and random errors. All analyses indicate V7 is an improvement over V6.
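    The systematic-bias and random-error decomposition used in such satellite-versus-ground comparisons can be sketched as the mean and standard deviation of the paired differences. The rain-rate pairs below are synthetic illustrations, not TRMM/NMQ data.

```python
import statistics

def bias_and_random_error(satellite, reference):
    """Decompose satellite-minus-reference differences into a
    systematic bias (mean difference) and a random component
    (sample standard deviation of the differences)."""
    diffs = [s - r for s, r in zip(satellite, reference)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical matched rain-rate pairs (mm/h): PR estimate vs reference.
sat = [2.1, 5.0, 11.8, 0.9, 7.4]
ref = [2.0, 4.5, 12.5, 1.0, 6.8]
bias, random_err = bias_and_random_error(sat, ref)
print(bias, random_err)
```

Comparing these two statistics between the V6 and V7 products, stratified by rain rate, is one standard way to quantify whether a new version is an improvement.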

  15. An audit of request forms submitted in a multidisciplinary diagnostic center in Lagos.

    PubMed

    Oyedeji, Olufemi Abiola; Ogbenna, Abiola Ann; Iwuala, Sandra Omozehio

    2015-01-01

    Request forms are an important means of communication between physicians and diagnostic service providers. Pre-analytical errors account for over two-thirds of errors encountered in diagnostic service provision. The importance of adequate completion of request forms is usually underestimated by physicians, which may result in medical errors or delay in instituting appropriate treatment. The aim of this study was to audit the level of completion of request forms presented at a multidisciplinary diagnostic center. A review of all request forms for investigations, which included radiologic, laboratory and cardiac investigations, received between July and December 2011 was performed to assess their level of completeness. The data were entered into a spreadsheet and analyzed. Only 1.3% of the 7,841 request forms reviewed were fully completed. The patient's name, the referring physician's name and gender were the most completed information on the forms evaluated, with 99.0%, 99.0% and 90.3% completion, respectively. The patient's age was provided in 68.0%, the request date in 88.2%, and clinical notes/diagnosis in 65.9% of the requests. The patient's full address was provided in only 5.6% of requests evaluated. This study shows that investigation request forms are inadequately filled in by physicians in our environment. Continuous medical education of physicians on the need for adequate completion of request forms is needed.

  16. 76 FR 30094 - Announcement of Grant and Loan Application Deadlines and Funding Levels

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-24

    ... organizational type in order to prepare the budget and complete other parts of the application. You also must... 3. Extent to which the work plan clearly articulates a well thought out... Up to 40 points.

  17. Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.

    PubMed

    Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick

    2016-02-01

    Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was twofold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during the FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1 to June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
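
The reported rates follow directly from counts of errors per opportunity; a minimal sketch of the arithmetic, using the counts quoted above:

```python
# Error rate as errors per 100 opportunities (i.e. a percentage).

def error_rate_percent(errors, opportunities):
    return 100.0 * errors / opportunities

# 21 errors escaping the final check in 133,000 doses:
final_check_rate = error_rate_percent(21, 133000)
```

Tracking the same ratio at each stage (order entry, IV preparation, final check) shows where errors are caught versus where they migrate through.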

  18. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error were identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport reduces the water conservation error by a factor of 50 at 1° horizontal resolution, with consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and the latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error, resulting from correcting the surface moisture flux and from clipping negative water concentrations, can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis shows that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
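
One of the fixes described, the mass-conserving treatment of clipped negative water concentrations, can be illustrated with a simple sketch (a hypothetical column fixer, not EAM's actual implementation):

```python
# Hypothetical mass-conserving fixer: clip negative tracer values to zero,
# then rescale the remaining positive values so the column total is unchanged.

def clip_and_conserve(q):
    """q: list of tracer amounts in a column. Returns a non-negative list
    with the same total, when a sensible rescaling exists."""
    total = sum(q)
    clipped = [max(v, 0.0) for v in q]
    pos_total = sum(clipped)
    if pos_total == 0.0 or total <= 0.0:
        return clipped  # nothing sensible to conserve against
    scale = total / pos_total
    return [v * scale for v in clipped]
```

A plain clip would spuriously create mass; the rescaling step is what keeps the budget closed.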

  19. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.

  20. Retrieval of carbon dioxide vertical profiles from solar occultation observations and associated error budgets for ACE-FTS and CASS-FTS

    NASA Astrophysics Data System (ADS)

    Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.

    2014-02-01

    An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier Transform Spectrometer (FTS). The main challenge is to find an atmospheric phenomenon that can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and from the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs are typically within 60 m of those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (Collision-Induced Absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 5-13 km range in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC, CONTRAIL and HIPPO, yielding typical biases of -1.7 ppm; the standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable in determining the seasonal variation of the latitudinal gradient, which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.5 ± 0.7 ppm yr-1, in agreement with the currently accepted global growth rate based on ground-based measurements.
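
The annual growth rate quoted above is, in essence, the slope of a least-squares fit to the multi-year CO2 record; a minimal sketch with illustrative values (not the ACE-FTS retrievals):

```python
# Ordinary least-squares slope of CO2 vs time, in ppm per year.
# The input values below are made up for illustration.

def growth_rate(years, co2_ppm):
    n = len(years)
    xm = sum(years) / n
    ym = sum(co2_ppm) / n
    num = sum((x - xm) * (y - ym) for x, y in zip(years, co2_ppm))
    den = sum((x - xm) ** 2 for x in years)
    return num / den

rate = growth_rate([2009, 2010, 2011], [386.0, 388.5, 391.0])
```

In practice the seasonal cycle would be removed (e.g. by annual averaging) before fitting, since it dominates the raw Northern Hemisphere signal.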

  1. Mitigating Inadvertent Insider Threats with Incentives

    NASA Astrophysics Data System (ADS)

    Liu, Debin; Wang, Xiaofeng; Camp, L. Jean

    Inadvertent insiders are trusted insiders who, unlike malicious insiders, have no malicious intent but do not manage security responsibly. The result is often that a malicious outsider can use the privileges of the inattentive insider to implement an insider attack. This risk is as old as the conversion of a weak user password into root access, but the term inadvertent insider was coined only recently to identify the link between the behavior and the vulnerability. In this paper, we propose to mitigate this threat using a novel risk budget mechanism that offers incentives to an insider to behave according to the risk posture set by the organization. We propose assigning each insider a risk budget, a specific allocation of risk points that allows the employee to make a finite number of risk-seeking choices. In this way, the employee can complete her tasks without subverting the security system, as happens with absolute prohibitions. In the end, the organization penalizes the insider if she fails to accomplish her task within the budget and rewards her in the presence of a surplus. Most importantly, the risk budget requires that the user make conscious, visible choices to take electronic risks. We describe the theory behind the system, including specific work on insider threats. We evaluated this approach using human-subject experiments, which demonstrate the effectiveness of our risk budget mechanism. We also present a game-theoretic analysis of the mechanism.
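
The penalty/reward rule of the risk budget mechanism can be sketched as follows (the point values and settlement rule here are hypothetical simplifications, not the paper's exact scheme):

```python
# Hypothetical risk-budget settlement: each risky action spends points
# from the employee's allocation; staying within budget earns a reward,
# overspending incurs a penalty.

def settle_budget(budget, spends, reward=10, penalty=-20):
    """budget: allocated risk points; spends: points spent per risky choice.
    Returns the incentive applied at settlement time."""
    remaining = budget - sum(spends)
    return reward if remaining >= 0 else penalty
```

Because every spend is an explicit, logged choice, the mechanism makes risk-taking visible to both the employee and the organization.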

  2. A New Design of the Test Rig to Measure the Transmission Error of Automobile Gearbox

    NASA Astrophysics Data System (ADS)

    Hou, Yixuan; Zhou, Xiaoqin; He, Xiuzhi; Liu, Zufei; Liu, Qiang

    2017-12-01

    Noise and vibration affect the performance of automobile gearboxes, and transmission error has been regarded as an important excitation source in gear systems. Most current research focuses on the measurement and analysis of a single gear drive; few investigations of transmission error measurement in a complete gearbox have been conducted. In order to measure transmission error in a complete automobile gearbox, an electrically closed test rig was developed. Based on the principle of modular design, the test rig can be used to test different types of gearbox by adding the necessary modules. A test rig for a front-engine, rear-wheel-drive gearbox was constructed, and static and modal analysis methods were used to verify the performance of a key component.

  3. Resilience in carbonate production despite three coral bleaching events in 5 years on an inshore patch reef in the Florida Keys.

    PubMed

    Manzello, Derek P; Enochs, Ian C; Kolodziej, Graham; Carlton, Renée; Valentino, Lauren

    2018-01-01

    The persistence of coral reef frameworks requires that calcium carbonate (CaCO3) production by corals and other calcifiers outpaces CaCO3 loss via physical, chemical, and biological erosion. Coral bleaching causes declines in CaCO3 production, but this varies with bleaching severity and the species impacted. We conducted census-based CaCO3 budget surveys using the established ReefBudget approach at Cheeca Rocks, an inshore patch reef in the Florida Keys, annually from 2012 to 2016. This site experienced warm-water bleaching in 2011, 2014, and 2015. In 2017, we obtained cores of the dominant calcifying coral at this site, Orbicella faveolata, to understand how calcification rates were impacted by bleaching and how they affected the reef-wide CaCO3 budget. Bleaching depressed O. faveolata growth, and the decline of this one species led to an overestimation of mean (± std. error) reef-wide CaCO3 production by +0.68 (± 0.167) to +1.11 (± 0.236) kg m⁻² yr⁻¹ when using the static ReefBudget coral growth inputs. During non-bleaching years, the ReefBudget inputs slightly underestimated gross production by -0.10 (± 0.022) to -0.43 (± 0.100) kg m⁻² yr⁻¹. Carbonate production declined after the first year of back-to-back bleaching in 2014, but then increased after 2015 to values greater than in the initial surveys in 2012. Cheeca Rocks is an outlier in the Caribbean and Florida Keys in terms of coral cover, carbonate production, and abundance of O. faveolata, which is threatened under the Endangered Species Act. Given the resilience of this site to repeated bleaching events, it may deserve special management attention.
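
A census-based budget in the spirit of ReefBudget combines taxon-level cover and calcification rates and subtracts erosion; a hedged sketch with illustrative rates (not the published survey inputs):

```python
# Illustrative net carbonate budget: gross production summed over taxa
# (fractional cover x rate at full cover), minus total erosion.
# All rates below are made-up example values.

def net_carbonate_budget(taxa, erosion):
    """taxa: list of (fractional cover, production rate in kg m-2 yr-1
    at 100% cover). erosion: kg m-2 yr-1. Returns net kg m-2 yr-1."""
    gross = sum(cover * rate for cover, rate in taxa)
    return gross - erosion

net = net_carbonate_budget([(0.3, 10.0), (0.1, 4.0)], erosion=1.4)
```

Replacing a static growth rate for one dominant taxon with core-derived, year-specific rates is exactly what shifts the budget during bleaching years.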

  4. Statistical analysis of the surface figure of the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John

    2012-09-01

    The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), tertiary mirror (TM), and fine steering mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The statistics of the resulting surface figure errors of these mirrors have been analyzed. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into system budgeting processes for similar optical systems.

  5. Simulating the water budget of a Prairie Potholes complex from LiDAR and hydrological models in North Dakota, USA

    USGS Publications Warehouse

    Huang, Shengli; Young, Claudia; Abdul-Aziz, Omar I.; Dahal, Devendra; Feng, Min; Liu, Shuguang

    2013-01-01

    Hydrological processes of the wetland complex in the Prairie Pothole Region (PPR) are difficult to model, partly due to a lack of wetland morphology data. We used Light Detection And Ranging (LiDAR) data sets to derive wetland features; we then modelled rainfall, snowfall, snowmelt, runoff, evaporation, the “fill-and-spill” mechanism, shallow groundwater loss, and the effect of wet and dry conditions. For large wetlands with a volume greater than thousands of cubic metres (e.g. about 3000 m³), the modelled water volume agreed fairly well with observations; however, it did not succeed for small wetlands (e.g. volume less than 450 m³). Despite the failure for small wetlands, the modelled water area of the wetland complex coincided well with interpretation of aerial photographs, showing a linear regression with R² of around 0.80 and a mean average error of around 0.55 km². The next step is to improve the water budget modelling for small wetlands.
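
The "fill-and-spill" mechanism can be sketched as a per-step water balance in which a pothole fills to its spill volume and passes any excess downstream (illustrative units and thresholds, not the model's actual formulation):

```python
# Hypothetical fill-and-spill step for one pothole, in m3 per time step:
# inflows fill the wetland; storage above the spill volume overflows
# to the next wetland in the chain.

def step(volume, inflow, et_loss, seepage, spill_volume):
    """Returns (new stored volume, volume spilled downstream)."""
    v = max(volume + inflow - et_loss - seepage, 0.0)
    spill = max(v - spill_volume, 0.0)
    return v - spill, spill
```

Chaining such steps across LiDAR-derived spill volumes is how a pothole complex can be routed from upstream to downstream.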

  6. Simultaneous DPSK demodulation and chirp management using delay interferometer in symmetric 40-Gb/s capability TWDM-PON system.

    PubMed

    Bi, Meihua; Xiao, Shilin; He, Hao; Yi, Lilin; Li, Zhengxuan; Li, Jun; Yang, Xuelin; Hu, Weisheng

    2013-07-15

    We propose a symmetric 40-Gb/s aggregate rate time- and wavelength-division multiplexed passive optical network (TWDM-PON) system with the capability of simultaneous downstream differential phase shift keying (DPSK) signal demodulation and upstream signal chirp management based on a delay interferometer (DI). Using the bi-pass characteristic of the DI, we experimentally demonstrate bidirectional transmission of signals at 10 Gb/s per wavelength and achieve negligible power penalties after 50 km of single-mode fiber (SMF). For the uplink transmission with the DI, an ~11-dB optical power budget improvement at a bit error ratio of 1e-3 is obtained, and the extinction ratio (ER) of the signal is improved from 3.4 dB to 13.75 dB. Owing to this high ER, upstream burst-mode transmission is successfully demonstrated in terms of time-division multiplexing. Moreover, in our experiment, a ~38-dB power budget is obtained, supporting 256 users over 50-km SMF transmission.
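
The ~38-dB budget supporting 256 users over 50 km is consistent with a simple link-loss tally; a back-of-envelope sketch with generic, assumed numbers (0.2 dB/km fiber attenuation, an ideal 1:256 splitter, no excess loss):

```python
import math

# Total link loss = fiber attenuation + splitter loss.
# An ideal 1:N power splitter loses 10*log10(N) dB per branch.

def link_loss_db(fiber_km, att_db_per_km, split_ways, split_excess_db=0.0):
    splitter = 10 * math.log10(split_ways) + split_excess_db
    return fiber_km * att_db_per_km + splitter

# 50 km at 0.2 dB/km plus a 1:256 splitter: 10 dB + ~24.1 dB
loss = link_loss_db(50, 0.2, 256)
```

The tally leaves a few dB of margin inside a 38-dB budget for connector and excess splitter losses, which is why the user count quoted above is plausible.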

  7. Reducing uncertainties in decadal variability of the global carbon budget with multiple datasets

    PubMed Central

    Li, Wei; Ciais, Philippe; Wang, Yilong; Peng, Shushi; Broquet, Grégoire; Ballantyne, Ashley P.; Canadell, Josep G.; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Peters, Glen P.; Piao, Shilong; Pongratz, Julia

    2016-01-01

    Conventional calculations of the global carbon budget infer the land sink as a residual between emissions, atmospheric accumulation, and the ocean sink. Thus, the land sink accumulates the errors from the other flux terms and bears the largest uncertainty. Here, we present a Bayesian fusion approach that combines multiple observations in different carbon reservoirs to optimize the land (B) and ocean (O) carbon sinks, land use change emissions (L), and indirectly fossil fuel emissions (F) from 1980 to 2014. Compared with the conventional approach, Bayesian optimization decreases the uncertainties in B by 41% and in O by 46%. The L uncertainty decreases by 47%, whereas F uncertainty is marginally improved through the knowledge of natural fluxes. Both ocean and net land uptake (B + L) rates have positive trends of 29 ± 8 and 37 ± 17 Tg C yr⁻² since 1980, respectively. Our Bayesian fusion of multiple observations reduces uncertainties, thereby allowing us to isolate important variability in global carbon cycle processes. PMID:27799533
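
The conventional residual calculation, and why its uncertainty accumulates, can be sketched as follows (the sign convention and the placeholder numbers are ours, not the paper's estimates):

```python
# Conventional budget: the land sink B is inferred as the residual
#   B = F + L - A - O
# so its uncertainty is the quadrature sum of the other terms' errors.

def residual_sink(fossil, atmos_growth, ocean_sink, luc):
    return fossil + luc - atmos_growth - ocean_sink

def residual_uncertainty(*sigmas):
    """Quadrature combination of independent term uncertainties."""
    return sum(s * s for s in sigmas) ** 0.5
```

Because the residual inherits every term's error in quadrature, constraining B directly with additional observations (the Bayesian fusion above) is what breaks this accumulation.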

  8. Planetary Spatial Analyst

    NASA Technical Reports Server (NTRS)

    Keely, Leslie

    2008-01-01

    This is a status report for the project entitled Planetary Spatial Analyst (PSA), covering activities from the project's inception on October 1, 2007 to June 1, 2008. Originally a three-year proposal, PSA was awarded funding for one year and required a revised work statement and budget. At the time of this writing, the project is well on track both in work completed and in budget. The revised project focused on two objectives: build a solid connection with the target community, and implement a prototype software application that provides 3D visualization and spatial analysis technologies for that community. Progress has been made on both of these objectives.

  9. Evaluating the Sustainability of School-Based Health Centers.

    PubMed

    Navarro, Stephanie; Zirkle, Dorothy L; Barr, Donald A

    2017-01-01

    The United States is facing a surge in the number of school-based health centers (SBHCs) owing to their success in delivering positive health outcomes and increasing access to care. To preserve this success, experts have developed frameworks for creating sustainable SBHCs; however, little research has affirmed or added to these models. This research seeks to analyze elements of sustainability in a case study of three SBHCs in San Diego, California, with the purpose of creating a research-based framework of SBHC sustainability to supplement expertly derived models. Using a mixed methods study design, data were collected from interviews with SBHC stakeholders, observations in SBHCs, and SBHC budgets. A grounded theory qualitative analysis and a quantitative budget analysis were completed to develop a theoretical framework for the sustainability of SBHCs. Forty-one interviews were conducted, 6 hours of observations were completed, and 3 years of SBHC budgets were analyzed to identify care coordination, community buy-in, community awareness, and SBHC partner cooperation as key themes of sustainability promoting patient retention for sustainable billing and reimbursement levels. These findings highlight the unique ways in which SBHCs gain community buy-in and awareness by becoming trusted sources of comprehensive and coordinated care within communities and among vulnerable populations. Findings also support ideas from expert models of SBHC sustainability calling for well-defined and executed community partnerships and quality coordinated care in the procurement of sustainable SBHC funding.

  10. Evaluating software development characteristics: A comparison of software errors in different environments

    NASA Technical Reports Server (NTRS)

    Weiss, D. M.

    1981-01-01

    Error data obtained from two different software development environments are compared. To obtain data that were complete, accurate, and meaningful, a goal-directed data collection methodology was used. Changes made to the software were monitored concurrently with its development. Similarities common to both environments include: (1) the principal errors were in the design and implementation of single routines; (2) few errors were the result of changes, required more than one attempt to correct, and resulted in other errors; (3) relatively few errors took more than a day to correct.

  11. Complete energetic description of hydrokinetic turbine impact on flow channel dynamics

    NASA Astrophysics Data System (ADS)

    Brasseale, E.; Kawase, M.

    2016-02-01

    Energy budget analysis on tidal channels quantifies and demarcates the impacts of marine renewables on environmental fluid dynamics. Energy budget analysis assumes that the change in total kinetic energy within a volume of fluid can be described by the work done by each force acting on the flow. In a numerically simulated channel, the balance between energy change and work done has been validated to within 5% error. The forces doing work on the flow include pressure, turbulent dissipation, and stress from the estuary floor. If hydrokinetic turbines are installed in an estuarine channel to convert tidal energy into usable power, the dynamics of the channel change. Turbines provide additional pressure work against the flow of the channel, which slows the current and lessens turbulent dissipation and bottom stress. These losses may negatively impact estuarine circulation, seafloor scour, and stratification. The environmental effects of turbine deployment have been quantified using a three-dimensional, Reynolds-averaged Navier-Stokes model of an idealized flow channel situated between the ocean and a large estuarine basin. The channel is five kilometers wide, twenty kilometers long and fifty meters deep, resolved to a grid size of 10 meters by 10 meters by 1 meter. Tidal currents are simulated by an initial difference in sea surface height of 160 centimeters from the channel entrance to the channel exit, creating a pressure gradient that drives flow through the channel. Tidal power turbines are represented as disks that force the channel in proportion to the strength of the current. Three tidal turbines twenty meters in diameter have been included in the model to simulate the impacts of a pilot-scale test deployment. This study is the first to evaluate the energetic impact of marine renewables in a three-dimensional model through the energy equation's constituent terms, and it provides groundwork for understanding and predicting the environmental impacts of marine renewables.
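
The energy-balance check described above (validated to within 5% error) can be sketched as a fractional imbalance between the kinetic energy change and the summed work terms (the magnitudes below are illustrative):

```python
# Schematic closure check of the channel energy budget:
#   dKE/dt ?= pressure work - dissipation - bottom stress - turbine sink
# The fractional imbalance measures how well the budget closes.

def budget_imbalance(d_ke, pressure_work, dissipation, bottom_stress,
                     turbine_extraction=0.0):
    rhs = pressure_work - dissipation - bottom_stress - turbine_extraction
    return abs(d_ke - rhs) / abs(rhs)
```

With turbines added, the extra `turbine_extraction` term is what diverts energy away from dissipation and bottom stress in the closed budget.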

  12. SMOS: a satellite mission to measure ocean surface salinity

    NASA Astrophysics Data System (ADS)

    Font, Jordi; Kerr, Yann H.; Srokosz, Meric A.; Etcheto, Jacqueline; Lagerloef, Gary S.; Camps, Adriano; Waldteufel, Philippe

    2001-01-01

    ESA's SMOS (Soil Moisture and Ocean Salinity) Earth Explorer Opportunity Mission will be launched by 2005. Its baseline payload is a microwave L-band (21 cm, 1.4 GHz) 2D interferometric radiometer, Y-shaped, with three arms 4.5 m long. This frequency allows the measurement of brightness temperature (Tb) under the best conditions to retrieve soil moisture and sea surface salinity (SSS). Unlike other oceanographic variables, salinity has until now not been measurable from space, and large ocean areas lack significant salinity measurements. The 2D interferometer will measure Tb at large and varied incidence angles, for two polarizations. It is possible to obtain SSS from L-band passive microwave measurements if the other factors influencing Tb (SST, surface roughness, foam, sun glint, rain, ionospheric effects and galactic/cosmic background radiation) can be accounted for. Since the radiometric sensitivity is low, SSS cannot be recovered to the required accuracy from a single measurement, as the error is about 1-2 psu. If the errors contributing to the uncertainty in Tb are random, averaging the independent data and views along the track over a 200 km square allows the error to be reduced to 0.1-0.2 psu, assuming all ancillary errors are within budget.
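
The reduction from a 1-2 psu single-measurement error to 0.1-0.2 psu rests on the standard 1/sqrt(N) averaging of independent random errors; a minimal sketch:

```python
# If single-measurement errors are random and independent, averaging
# N samples shrinks the error by the square root of N.

def averaged_error(sigma_single, n_samples):
    return sigma_single / n_samples ** 0.5

# e.g. a 2 psu single-look error averaged over 100 independent looks:
sigma = averaged_error(2.0, 100)
```

Reaching 0.1-0.2 psu from 1-2 psu thus requires on the order of 100 independent looks per 200 km cell, which the multi-angle along-track geometry is meant to supply.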

  13. Traceability of On-Machine Tool Measurement: A Review.

    PubMed

    Mutilba, Unai; Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor; Yagüe-Fabra, Jose A

    2017-07-11

    Nowadays, errors during the manufacturing process of high-value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities and wind power need complex and accurate components that demand close measurement and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility of measuring the workpiece in-machine, during or after its manufacture, maintaining the original setup of the workpiece and avoiding interruption of the manufacturing process to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not yet ensured, and measurement data are still not reliable enough for process control or product validation. The scientific objective is to determine the uncertainty of a machine tool measurement and, thereby, convert it into a machine-integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tool, the component under measurement, and the interactions between the two. This paper reviews all of those uncertainty sources, focusing mainly on those related to the machine tool, whether in the process of geometric error assessment of the machine or in the technology employed to probe the measurand.

  14. 14 CFR 1260.71 - Supplements and renewals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., continued research relevance, and progress made by the recipient. (2) To insure uninterrupted programs, the technical office should forward to the grant office a completed award package, including a funded procurement request, technical evaluation of the proposed budget, and other support documentation, at least 29...

  15. Analysis of change orders in geotechnical engineering work at INDOT : [technical summary].

    DOT National Transportation Integrated Search

    2011-01-01

    There was a perception at INDOT that the number of change orders connected with geotechnical work was excessive, and that, as a consequence, geotechnical projects were not completed on time or within budget. It was reported that INDOT construction pr...

  16. 29 CFR 1952.101 - Developmental schedule.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... effective July 1, 1973. (b) Complete revision of all occupational safety and health codes as proposed within... budget. (e) Establishment of specific occupational safety and health goals by July 1, 1974. These goals...

  17. 29 CFR 1952.101 - Developmental schedule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... effective July 1, 1973. (b) Complete revision of all occupational safety and health codes as proposed within... budget. (e) Establishment of specific occupational safety and health goals by July 1, 1974. These goals...

  18. Quality of death notification forms in North West Bank/Palestine: a descriptive study.

    PubMed

    Qaddumi, Jamal A S; Nazzal, Zaher; Yacoup, Allam R S; Mansour, Mahmoud

    2017-04-11

    Death notification forms (DNFs) are important documents; physicians' failure to complete them properly affects the national mortality report and, consequently, evidence-based decision making. Errors in completing DNFs are common all over the world and differ in type and cause. We aimed to evaluate the quality of DNFs in terms of completeness and the types of errors in the cause-of-death section. A descriptive study was conducted to review 2707 DNFs in the North West Bank/Palestine during the year 2012 using data abstraction sheets. SPSS 17.0 was used to tabulate the frequency of major and minor errors committed in completing the DNFs. Surprisingly, only 1% of the examined DNFs had their cause-of-death section completed entirely correctly. The immediate cause of death was correctly identified in 5.9% of all DNFs, and the underlying cause of death was correctly reported in 55.4% of them. The sequence was incorrect in 41.5% of the DNFs. The most frequently documented minor error was the "not writing time intervals" error (97.0%). Almost all DNFs contained at least one minor or major error. This high percentage of errors may affect mortality and morbidity statistics, public health research, and the process of providing evidence for health policy. Training workshops on DNF completion for newly recruited employees and at the beginning of the residency program are recommended on a regular basis. We also recommend revising the national DNF to simplify it and make it consistent with updated evidence-based guidelines and recommendations.

  19. The Test and Evaluation of Non-Chromate Finishing Agent

    NASA Technical Reports Server (NTRS)

    Okhio, Cyril; Gulley, Harold; Steele, Gynelle (Technical Monitor)

    2002-01-01

    Clark Atlanta University (CAU) has engaged a team of university faculty members, student research assistants, and consultants from industrial and technology firms in order to ensure that NASA receives a quality product on time and within budget. In addition to CAU's current related expertise, we have enlisted materials and aid from several other agencies and machine manufacturers to assist us in completing the project on time and according to plan and budget. The research team is employing sound engineering principles in its approach to this project. Each phase of the project is preceded by a thorough and complete planning session that details the work plan and provides a comprehensive review of the work to be accomplished during that phase and by whom. Maximum use is being made of existing or already planned capabilities to ensure maximum economy, and frequent online reviews are being held to measure progress and to identify potential problems in sufficient time to allow for corrective action.

  20. Addiction treatment centers' progress in preparing for health care reform

    PubMed Central

    Molfenter, Todd D.

    2013-01-01

    The Patient Protection and Affordable Care Act (PPACA) is expected to significantly alter addiction treatment service delivery. Researchers designed the Health Reform Readiness Index (HRRI) for addiction treatment organizations to assess their readiness for the PPACA. Four hundred twenty-seven organizations completed the HRRI over a three-year period, using a four-point scale to rank their readiness on 13 conditions. HRRI results completed during two time periods (10/1/2010–6/30/2011 and 9/1/2011–9/30/2012) were analyzed and compared. Most respondents self-assessed as being in the early stages of preparation for 9 of the 13 conditions. Survey results showed that organizations with annual budgets <$5 million (n = 295) were less likely to be prepared for the PPACA than organizations with annual budgets >$5 million (n = 132). The HRRI results suggest that the addiction field, and in particular its smaller organizations, is not preparing adequately for health care reform, and that organizations that are making preparations are making only modest gains. PMID:24074851

  1. Optical tolerances for the PICTURE-C mission: error budget for electric field conjugation, beam walk, surface scatter, and polarization aberration

    NASA Astrophysics Data System (ADS)

    Mendillo, Christopher B.; Howe, Glenn A.; Hewawasam, Kuravi; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya

    2017-09-01

    The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. Four leakage sources arising from optical fabrication tolerances and optical coatings are considered: electric field conjugation (EFC) residuals, beam walk on the secondary and tertiary mirrors, optical surface scattering, and polarization aberration. Simulations and analysis of these four leakage sources for the PICTURE-C optical design are presented here.

  2. GPS (Global Positioning System) Error Budgets, Accuracy and Applications Considerations for Test and Training Ranges.

    DTIC Science & Technology

    1982-12-01

    Relationship of PDOP and HDOP with a priori altitude uncertainty in 3-dimensional navigation, for the satellite configuration (azimuth, elevation): (0°, 10°), (90°, 10°), (180°, 10°), (270°, 10°). Relationship of HDOP with a priori altitude uncertainty in 2-dimensional navigation, for satellite configurations including (0°, 10°), (90°, 20°), (180°, 30°), (270°, 40°).
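    The DOP quantities discussed in this report can be computed from satellite geometry alone. The following sketch (not from the report; the satellite geometry and all values are illustrative) forms the unit line-of-sight matrix from (azimuth, elevation) pairs and derives HDOP, VDOP, PDOP, and GDOP:

```python
import numpy as np

def dop(az_el_deg):
    """DOP values from satellite (azimuth, elevation) pairs, in degrees."""
    rows = []
    for az, el in az_el_deg:
        az, el = np.radians(az), np.radians(el)
        # Unit line-of-sight vector (east, north, up), plus a clock-bias column.
        rows.append([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el),
                     1.0])
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)  # cofactor matrix of the position/clock solution
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    gdop = np.sqrt(np.trace(Q))
    return hdop, vdop, pdop, gdop

# Four low-elevation satellites (as in the report's configurations) plus one
# high-elevation satellite to keep the geometry matrix invertible.
hdop, vdop, pdop, gdop = dop([(0, 10), (90, 10), (180, 10), (270, 10), (45, 80)])
```

    Note that four satellites on a common elevation circle make the up and clock columns of the geometry matrix proportional, so the example adds a fifth high-elevation satellite.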

  3. A new model for yaw attitude of Global Positioning System satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Y. E.

    1995-01-01

    Proper modeling of the Global Positioning System (GPS) satellite yaw attitude is important in high-precision applications. A new model for the GPS satellite yaw attitude is introduced that constitutes a significant improvement over the previously available model in terms of efficiency, flexibility, and portability. The model is described in detail, and implementation issues, including the proper estimation strategy, are addressed. The performance of the new model is analyzed, and an error budget is presented. This is the first self-contained description of the GPS yaw attitude model.

  4. Development of a validation model for the defense meteorological satellite program's special sensor microwave imager

    NASA Technical Reports Server (NTRS)

    Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.

    1990-01-01

    For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous, co-located measurements made by offshore ocean buoys. Other topics include error budget modeling, alternative wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
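    The D-matrix approach described above amounts to a linear regression of an environmental parameter on channel brightness temperatures. The following sketch fits such a linear wind speed retrieval by least squares; the data are entirely synthetic, and the channel sensitivities are assumptions, not real SSM/I coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training" set: 4 channel brightness temperatures (K) against
# buoy wind speed (m/s). Values are illustrative, not real SSM/I data.
n = 200
wind = rng.uniform(0, 25, n)
sensitivity = np.array([0.8, -0.5, 0.3, 0.1])     # K per (m/s), assumed
tb = 150.0 + np.outer(wind, sensitivity) + rng.normal(0.0, 1.0, (n, 4))

# D-matrix-style retrieval: a linear fit of wind speed to the channel TBs.
A = np.column_stack([tb, np.ones(n)])             # intercept column appended
coef, *_ = np.linalg.lstsq(A, wind, rcond=None)

retrieved = A @ coef
rmse = np.sqrt(np.mean((retrieved - wind) ** 2))  # retrieval error, m/s
```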

  5. System Error Budgets, Target Distributions and Hitting Performance Estimates for General-Purpose Rifles and Sniper Rifles of 7.62 x 51 mm and Larger Calibers

    DTIC Science & Technology

    1990-05-01

    Approved for public release; distribution unlimited. Readers interested in the Red Book should obtain a copy of the Engineering Design Handbook, Army Weapon System Analysis, Part One, DARCOM-P 706-101, November 1977; a companion volume, Army Weapon System Analysis, Part Two, DARCOM-P 706-102, October 1979, also makes worthwhile study.

  6. The Global Energy Balance of Titan

    NASA Technical Reports Server (NTRS)

    Li, Liming; Nixon, Conor A.; Achterberg, Richard K.; Smith, Mark A.; Gorius, Nicolas J. P.; Jiang, Xun; Conrath, Barney J.; Gierasch, Peter J.; Simon-Miller, Amy A.; Flasar, F. Michael

    2011-01-01

    We report the first measurement of the global emitted power of Titan. Long-term (2004-2010) observations conducted by the Composite Infrared Spectrometer (CIRS) onboard Cassini reveal that the total power emitted by Titan is (2.84 ± 0.01) × 10^14 W. Together with previous measurements of the global absorbed solar power of Titan, the CIRS measurements indicate that the global energy budget of Titan is in equilibrium within measurement error. The uncertainty in the absorbed solar energy places an upper limit of 5.3% on the energy imbalance.

  7. Assessment of meteorological uncertainties as they apply to the ASCENDS mission

    NASA Astrophysics Data System (ADS)

    Snell, H. E.; Zaccheo, S.; Chase, A.; Eluszkiewicz, J.; Ott, L. E.; Pawson, S.

    2011-12-01

    Many environment-oriented remote sensing and modeling applications require precise knowledge of the atmospheric state (temperature, pressure, water vapor, surface pressure, etc.) on a fine spatial grid, with a comprehensive understanding of the associated errors. Coincident atmospheric state measurements may be obtained via co-located remote sensing instruments or by extracting these data from ancillary models. The appropriate technique for a given application depends upon the required accuracy. State-of-the-art mesoscale/regional numerical weather prediction (NWP) models operate on spatial scales of a few kilometers resolution, and global-scale NWP models operate on scales of tens of kilometers. Remote sensing measurements may be made on spatial scales comparable to the measurement of interest, but they normally require a separate sensor, which increases the overall size, weight, power, and complexity of the satellite payload. Thus, a comprehensive understanding of the errors associated with each of these approaches is a critical part of the design and characterization of a remote-sensing system whose measurement accuracy depends on knowledge of the atmospheric state. One requirement of the overall ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission development is to develop a consistent set of atmospheric state variables (vertical temperature and water vapor profiles, and surface pressure) for use in constraining the overall retrieval error budget. If the error budget requires tighter uncertainties on ancillary atmospheric parameters than NWP models and analyses can provide, additional sensors may be required to reduce the overall measurement error and meet mission requirements. To this end, we have used NWP models and reanalysis information to generate a set of atmospheric profiles that contain reasonable variability. These data consist of a "truth" set and a companion "measured" set of profiles.
The truth set contains climatologically relevant profiles of pressure, temperature, and humidity with an accompanying surface pressure. The measured set consists of instances of the truth set that have been perturbed, using measurement-error covariance matrices, to represent realistic measurement uncertainty for the truth profile. The primary focus has been to develop matrices derived from the documented on-orbit profile retrieval accuracy of sensor systems including AIRS, AMSU, ATMS, and CrIS. Surface pressure variability and uncertainty were derived from globally compiled station pressure information. We generated an additional measured set of profiles that represents the overall error within NWP models. These profile sets allow comprehensive trade studies for sensor system design, provide a basis for setting measurement requirements for co-located temperature and humidity sounders, help determine the utility of NWP data to replace or supplement co-located measurements, and support assessment of the overall end-to-end performance of the sensor system. In this presentation we discuss the process by which we created these data sets and show their utility in trade studies for sensor system concepts and designs.
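    The perturbation step described above can be sketched with a multivariate normal draw from an assumed measurement-error covariance; the profile values, covariance magnitude, and correlation structure below are illustrative, not the AIRS/AMSU/ATMS/CrIS values used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Truth" temperature profile (K) on five levels; values are illustrative.
truth_T = np.array([288.0, 275.0, 250.0, 225.0, 210.0])

# Assumed measurement-error covariance: 1 K standard deviation per level,
# with correlation decaying by a factor of 2 per level of separation.
n = truth_T.size
sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
cov = 0.5 ** sep  # (1 K)^2 on the diagonal

# Draw a "measured" set: perturbed instances of the truth profile.
measured = rng.multivariate_normal(truth_T, cov, size=100)
spread = measured.std(axis=0)  # should sit near the specified 1 K error
```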

  8. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g., the grid resolution associated with the physical space) is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions, and the physical errors corresponding to numerical grid resolution. We optimally allocate computational resources by developing a model for the overall error based on a joint statistical and numerical analysis, and by optimizing that error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified by applying it to several computationally intensive examples. This framework helps hydrogeologists determine the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and of the geometric properties of the contaminant source zone on the optimum resolutions is investigated.
We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
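    The trade-off described above can be illustrated with a toy error model in which discretization error scales as h^p, Monte Carlo error as N^(-1/2), and per-realization cost as h^-3; all constants are assumptions for illustration, not the paper's calibrated values:

```python
import numpy as np

# Toy overall-error model: discretization error C1*h**p plus Monte Carlo
# error C2/sqrt(N), with per-realization cost ~ h**-3 (3-D grid).
C1, p, C2 = 10.0, 2.0, 5.0
budget = 1e6                                 # total cost units available

hs = np.linspace(0.02, 0.5, 200)             # candidate grid spacings
N = np.floor(budget * hs ** 3).astype(int)   # realizations affordable at each h
valid = N >= 2
total_err = C1 * hs[valid] ** p + C2 / np.sqrt(N[valid])

i = np.argmin(total_err)
h_opt, N_opt, err_opt = hs[valid][i], N[valid][i], total_err[i]
```

    The minimum lies at an interior grid spacing: refining the grid further starves the Monte Carlo sample, while coarsening it lets the discretization error dominate.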

  9. Interactions between moist heating and dynamics in atmospheric predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straus, D.M.; Huntley, M.A.

    1994-02-01

    The predictability properties of a fixed-heating version of a GCM, in which the moist heating is specified beforehand, are studied in a series of identical-twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each comprise six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed-heating ensemble. The errors grow less rapidly in the fixed-heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days, compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed-heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed-heating case to that in the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G[sub APE]) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed-heating G[sub APE] is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.
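    Doubling times like those quoted above can be estimated from an error-energy time series by a log-linear fit during the exponential-growth phase; the following sketch uses a synthetic series (not the experiment's data) built with an assumed 2.4-day doubling time and recovers it:

```python
import numpy as np

# Synthetic error-energy series in the exponential-growth phase,
# E(t) = E0 * exp(lam * t), built with a 2.4-day doubling time.
t = np.arange(0.0, 8.0, 0.5)            # days
lam_true = np.log(2.0) / 2.4
E = 1e-3 * np.exp(lam_true * t)

# Recover the growth rate by a log-linear fit, then the doubling time.
lam_fit = np.polyfit(t, np.log(E), 1)[0]
doubling_time = np.log(2.0) / lam_fit   # recovers the assumed 2.4 days
```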

  10. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small-animal movements on small to intermediate scales, with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small-animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
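    The signal-to-noise ratios quoted above are simply step length divided by the median positional error; a minimal sketch (the error value is taken as the mid-range of the 20-30 cm result, and the step lengths are illustrative):

```python
import numpy as np

# Signal-to-noise ratio of GPS-derived steps: step length over median error.
median_error = 0.25                       # m, mid-range of the 20-30 cm result
steps = np.array([0.7, 1.5, 3.0, 5.0])    # m, example step lengths

snr = steps / median_error                # the 0.7 m step gives roughly 3:1
```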

  11. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small-animal movements on small to intermediate scales, with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small-animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  12. Marine ARM GPCI Investigation of Clouds Infrared Sea Surface Temperature Autonomous Radiometer (ISAR) Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, R. Michael; Long, Charles N.

    Sea surface temperature (SST) is one of the most important climate parameters: a widespread increase is an indicator of global warming, and modifications of the geographical distribution of SST are an extremely sensitive indicator of climate change. There is high demand for accurate, reliable, high-spatial-and-temporal-resolution SST measurements for the parameterization of ocean-atmosphere heat, momentum, and gas fluxes (SST is therefore critical to understanding the processes controlling the global carbon dioxide budget), for detailed diagnostic and process-oriented studies to better understand the behavior of the climate system, as model boundary conditions, for assimilation into climate models, and for the rigorous validation of climate model output. In order to achieve an overall net flux uncertainty < 10 W/m² (Bradley and Fairall, 2006), the sea surface (skin) temperature (SSST) must be measured to an error < 0.1 °C and a precision of 0.05 °C. Anyone experienced in shipboard meteorological measurements will recognize this is a tough specification. These demands require complete confidence in the content, interpretation, accuracy, reliability, and continuity of observational SST data: criteria that can only be fulfilled by the successful implementation of an ongoing data product validation strategy.

  13. Black hole masses in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Denney, Kelly D.

    2010-11-01

    We present the complete results from two high-sampling-rate, multi-month, spectrophotometric reverberation mapping campaigns undertaken to obtain new or improved Hbeta reverberation lag measurements for several relatively low-luminosity active galactic nuclei (AGNs). We have reliably measured the time delay between variations in the continuum and the Hbeta emission line in seven local Seyfert 1 galaxies. These measurements are used to calculate the mass of the supermassive black hole at the center of each of these AGNs. We place our results in the context of the most current calibration of the broad-line region (BLR) R_BLR-L relationship, where our results remove many outliers and significantly reduce the scatter at the low-luminosity end of this relationship. A detailed analysis of the data from our high-sampling-rate, multi-month reverberation mapping campaign in 2007 reveals that the Hbeta emission regions within the BLRs of several nearby AGNs exhibit a variety of kinematic behaviors. Through a velocity-resolved reverberation analysis of the broad Hbeta emission-line flux variations in our sample, we reconstruct velocity-resolved kinematic signals for our entire sample and clearly see evidence for outflowing, infalling, and virialized BLR gas motions in NGC 3227, NGC 3516, and NGC 5548, respectively. Finally, we explore the nature of systematic errors that can arise in measurements of black hole masses from single-epoch spectra of AGNs by utilizing the many epochs available for NGC 5548 and PG 1229+204 from reverberation mapping databases. In particular, we examine systematics due to AGN variability, contamination by constant spectral components (i.e., narrow lines and host galaxy flux), data quality (i.e., signal-to-noise ratio, S/N), and blending of spectral features.
We investigate the effect that each of these systematics has on the precision and accuracy of single-epoch masses calculated from two commonly used line-width measures by comparing these results to recent reverberation mapping studies. We then present an error budget that summarizes the minimum observable uncertainties as well as the amount of additional scatter and/or systematic offset that can be expected from the individual sources of error investigated.
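    Reverberation-based black hole masses such as those described above follow the virial relation M_BH = f c τ ΔV² / G; the following sketch uses illustrative values (the lag, line width, and virial factor are assumptions, not numbers from this work):

```python
# Virial mass estimate M_BH = f * c * tau * dV**2 / G.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m s^-1
M_sun = 1.989e30         # kg

f = 5.5                  # dimensionless virial factor (a common calibration)
tau = 15 * 86400.0       # Hbeta lag of 15 light-days, converted to seconds
dV = 3.0e6               # line-width velocity of 3000 km/s, in m/s

M_bh = f * c * tau * dV ** 2 / G   # kg; c*tau is the BLR radius in meters
M_bh_solar = M_bh / M_sun          # order 1e8, typical of a Seyfert 1
```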

  14. Workshops Increase Students' Proficiency at Identifying General and APA-Style Writing Errors

    ERIC Educational Resources Information Center

    Jorgensen, Terrence D.; Marek, Pam

    2013-01-01

    To determine the effectiveness of 20- to 30-min workshops on recognition of errors in American Psychological Association-style writing, 58 introductory psychology students attended one of the three workshops (on grammar, mechanics, or references) and completed error recognition tests (pretest, initial posttest, and three follow-up tests). As a…

  15. Development of a press and drag method for hyperlink selection on smartphones.

    PubMed

    Chang, Joonho; Jung, Kihyo

    2017-11-01

    The present study developed a novel touch method for hyperlink selection on smartphones, consisting of two sequential finger interactions: press and drag motions. The novel method requires a user to press a target hyperlink; if a touch error occurs, he/she can immediately correct it by dragging the finger without releasing it. The method was compared with two existing methods in terms of completion time, error rate, and subjective rating. Forty college students participated in the experiments with different hyperlink sizes (4-pt, 6-pt, 8-pt, and 10-pt) on a touch-screen device. When hyperlink size was small (4-pt and 6-pt), the novel method (time: 826 msec; error: 0.6%) demonstrated better completion time and error rate than the current method (time: 1194 msec; error: 22%). In addition, the novel method (1.15, slightly satisfied, on a 7-point bipolar scale) had significantly higher satisfaction scores than the two existing methods (0.06, neutral). Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Visual error augmentation enhances learning in three dimensions.

    PubMed

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal: rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmentation-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removal of the flip, all subjects returned to baseline within 6 trials.

  17. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR error are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as the microwave imagers and dual-wavelength radars of the Global Precipitation Measurement (GPM) mission.

  18. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Illinois

    USGS Publications Warehouse

    Mades, D.M.; Oberg, K.A.

    1984-01-01

    Data uses and funding sources were identified for 138 continuous-record discharge-gaging stations currently (1983) operated as part of the stream-gaging program in Illinois. Streamflow data from five of those stations are used only for regional hydrology studies. Most streamflow data are used for defining regional hydrology, defining rainfall-runoff relations, flood forecasting, regulating navigation systems, and water-quality sampling. Based on the evaluations of data use and of alternative methods for determining streamflow in place of stream gaging, no stations in the 1983 stream-gaging program should be deactivated. The current budget (in 1983 dollars) for operating the 138-station program is $768,000 per year. The average standard error of instantaneous discharge for the current practice for visiting the gaging stations is 36.5 percent. Missing stage record accounts for one-third of the 36.5 percent average standard error. (USGS)

  19. Sensitivity analysis for high-contrast missions with segmented telescopes

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn

    2017-09-01

    Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes operate in a friendlier environment than ground-based telescopes, remaining vibrations and resonant modes of the segments can still degrade the contrast. In this communication, we develop and validate the analytical model and compare its outputs to images produced by end-to-end simulations.

  20. Internal robustness: systematic search for systematic bias in SN Ia data

    NASA Astrophysics Data System (ADS)

    Amendola, Luca; Marra, Valerio; Quartin, Miguel

    2013-04-01

    A great deal of effort is currently being devoted to understanding, estimating, and removing systematic errors in cosmological data. In the particular case of Type Ia supernovae, systematics are starting to dominate the error budget. Here we propose a Bayesian tool for carrying out a systematic search for systematic contamination. It serves as an extension of standard goodness-of-fit tests and allows one not only to cross-check raw or processed data for the presence of systematics but also to pinpoint the data that are most likely contaminated. We successfully test our tool with mock catalogues and conclude that the Union2.1 data do not possess a significant amount of systematics. Finally, we show that if one includes in Union2.1 the supernovae that originally failed the quality cuts, our tool signals the presence of systematics at over 3.8σ confidence level.
