Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system
NASA Astrophysics Data System (ADS)
Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong
2010-05-01
We present the wavefront error budget and optical manufacturing tolerance analysis for a 1.8 m telescope. The error budget accounts for aberrations induced by optical design residuals, manufacturing errors, mounting effects, and misalignments. The initial error budget was generated from the top down; an ongoing effort will also track the errors from the bottom up, which will aid in identifying critical areas of concern. Resolving conflicts involves a continual process of reviewing and comparing the top-down and bottom-up budgets, modifying both as needed to meet the top-level requirements. The adaptive optics system will correct for some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two error budgets are presented: a non-AO top-down error budget and a with-AO system error budget. The main advantage of the method is that it simultaneously describes the final performance of the telescope and gives the optical manufacturer maximum freedom to define, and if necessary modify, its own manufacturing error budget.
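The top-down/bottom-up reconciliation described above is, at its core, an allocation of a top-level wavefront-error requirement across contributors in quadrature, followed by a check of the as-built roll-up against that requirement. A minimal Python sketch, with a hypothetical requirement, weights, and estimates (none taken from the paper):

```python
import math

# Hypothetical top-level requirement: 60 nm RMS wavefront error for the telescope.
TOP_LEVEL_NM = 60.0

# Top-down: allocate the requirement across contributors in quadrature,
# using relative weights chosen by the systems engineer (illustrative values).
weights = {"design residual": 1.0, "manufacturing": 2.0, "mounting": 1.5, "misalignment": 1.0}
norm = math.sqrt(sum(w**2 for w in weights.values()))
allocation = {k: TOP_LEVEL_NM * w / norm for k, w in weights.items()}

# Bottom-up: roll up the current best estimates (e.g., from metrology) in
# quadrature and compare against the top-level requirement.
estimates = {"design residual": 18.0, "manufacturing": 45.0, "mounting": 25.0, "misalignment": 20.0}
rollup = math.sqrt(sum(v**2 for v in estimates.values()))

for term, alloc in allocation.items():
    flag = "OVER" if estimates[term] > alloc else "ok"
    print(f"{term:16s} allocated {alloc:5.1f} nm, estimated {estimates[term]:5.1f} nm  [{flag}]")
print(f"bottom-up roll-up: {rollup:.1f} nm RMS vs requirement {TOP_LEVEL_NM:.1f} nm")
```

In this toy case the manufacturing estimate exceeds its allocation while the total roll-up still meets the requirement, which is exactly the kind of conflict the review-and-compare process described above is meant to resolve.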
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer that measures atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of ATMOS system performance includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering, and predictions of line-of-sight and wavefront deformations (computed from operational thermal and vibration environments using computer models) are then compared against those budgets. This paper discusses the thermal/structural error budgets, the modeling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions fall within the error budgets.
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Toward error analysis of large-scale forest carbon budgets
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuing the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream-gaging program in Alabama identified data uses and funding sources for 72 surface-water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream-gaging records is lost or missing data resulting from streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.
1998-01-01
Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 µm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equaled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluation of the residual risk of human error in a measurement and testing laboratory (the risk remaining after error reduction by the laboratory quality system) and quantification of the consequences of this risk for the quality of the measurement/test results are discussed, based on expert judgments and Monte Carlo simulations. A procedure for evaluating the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
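The Monte Carlo element of such an analysis can be sketched simply: each simulated measurement carries a base method uncertainty plus an occasional deviation from a human error that escapes the quality system with small probability, and the human-error share of the uncertainty budget is then read off in quadrature. All probabilities and magnitudes below are hypothetical, not the published expert-judgment values:

```python
import random
import statistics

random.seed(1)

P_RESIDUAL_ERROR = 0.02   # hypothetical residual probability of an undetected human error
U_BASE = 0.02             # hypothetical base standard uncertainty (pH units)
ERROR_SHIFT = 0.15        # hypothetical typical shift caused by a human error (pH units)

def simulated_result(true_value: float) -> float:
    value = random.gauss(true_value, U_BASE)
    if random.random() < P_RESIDUAL_ERROR:    # an error slipped past the quality system
        value += random.choice([-1.0, 1.0]) * ERROR_SHIFT
    return value

results = [simulated_result(7.00) for _ in range(100_000)]
u_combined = statistics.stdev(results)
u_human = (u_combined**2 - U_BASE**2) ** 0.5  # quadrature share attributable to human error

print(f"combined u = {u_combined:.4f}, human-error contribution = {u_human:.4f} pH units")
```

With these made-up numbers the human-error contribution is comparable to, but does not dominate, the base uncertainty, mirroring the qualitative conclusion of the abstract.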
Earth radiation budget measurement from a spinning satellite: Conceptual design of detectors
NASA Technical Reports Server (NTRS)
Sromovsky, L. A.; Revercomb, H. E.; Suomi, V. E.
1975-01-01
The conceptual design, sensor characteristics, sensor performance and accuracy, and spacecraft and orbital requirements for a spinning wide-field-of-view earth energy budget detector were investigated. The scientific requirements for measurement of the earth's radiative energy budget are presented. Other topics discussed include the observing system concept, solar constant radiometer design, plane flux wide-FOV sensor design, fast active cavity theory, fast active cavity design and error analysis, thermopile detectors as an alternative, the pre-flight and in-flight calibration plan, a system error summary, and interface requirements.
Kinetic energy budgets in areas of intense convection
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Berecek, E. M.; Ebel, D. M.; Jedlovec, G. J.
1980-01-01
A kinetic energy budget analysis of the AVE-SESAME 1 period, which coincided with the deadly Red River Valley tornado outbreak, is presented. Horizontal flux convergence was found to be the major kinetic energy source to the region, while cross-contour destruction was the major sink. Kinetic energy transformations were dominated by processes related to strong jet intrusion into the severe storm area. A kinetic energy budget of the AVE 6 period is also presented. The effects of inherent rawinsonde data errors on widely used basic kinematic parameters, including velocity divergence, vorticity advection, and kinematic vertical motion, are described. In addition, an error analysis was performed in terms of the kinetic energy budget equation. Results obtained from downward integration of the continuity equation to obtain kinematic values of vertical motion are described. This alternative procedure shows promising results in severe storm situations.
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted by the original design simulations. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. We therefore analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two reprocessed attitude datasets that differ in pointing performance; range-rate residuals are computed from each dataset and analyzed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors are seen in the range-rate residuals, and correlations between range frequency noise and the range-rate residuals are also observed.
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences, and data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into five categories: reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no showstoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
Cost effectiveness of the US Geological Survey's stream-gaging program in New York
Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.
1986-01-01
The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost-effective means of obtaining streamflow data. This report describes the stream-gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for the 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)
Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana
Stewart, J.A.; Miller, R.L.; Butch, G.K.
1986-01-01
Analysis of the stream-gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data needs and the funding source for each of the 173 surface-water stations in Indiana. The second phase used alternative methods to produce streamflow records at selected sites. Statistical models were used to generate streamflow data for three gaging stations, and flow-routing models were used at two of the sites. Daily discharges produced by the models did not meet the established accuracy criteria; therefore, these methods should not replace stream-gaging procedures at those stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and the frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream-gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous-record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream-gaging stations were visited according to the phase III results. A minimum budget of $790,000 is required to operate the gaging network; at this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis, and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources; applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data; and defines a strategy for operating the program that minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
NASA Astrophysics Data System (ADS)
Wu, Guocan; Zheng, Xiaogu; Dan, Bo
2016-04-01
Shallow soil-moisture observations are assimilated into the Common Land Model (CoLM) to estimate soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water-balance constraint is adopted to reduce the water-budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast-error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2 log-likelihood function of the innovation statistic. The water-balance constraint substantially reduces the water-budget residual at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.
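The inflation step can be illustrated with a scalar moment-matching sketch: choose the inflation factor so that the innovation variance implied by the filter matches the innovation variance actually observed. The paper selects the inflation layer via a -2 log-likelihood criterion; the moment-matching form below is a simplified stand-in with made-up numbers:

```python
import random
import statistics

random.seed(0)

# Hypothetical scalar system: the model is overconfident about its forecast error.
P_FORECAST = 0.5   # forecast error variance as (mis)specified by the model
R_OBS = 1.0        # observation error variance
TRUE_P = 2.0       # actual forecast error variance

innovations = []
for _ in range(10_000):
    truth = random.gauss(0.0, 1.0)
    forecast = truth + random.gauss(0.0, TRUE_P ** 0.5)
    obs = truth + random.gauss(0.0, R_OBS ** 0.5)
    innovations.append(obs - forecast)

# Moment matching: Var(innovation) = lambda * P_forecast + R  =>  solve for lambda.
var_d = statistics.pvariance(innovations)
lam = max((var_d - R_OBS) / P_FORECAST, 1.0)
print(f"innovation variance = {var_d:.2f}, estimated inflation factor = {lam:.2f}")
# Expect lambda near TRUE_P / P_FORECAST = 4: the stated forecast error is inflated fourfold.
```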
Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont
Smath, J.A.; Blackey, F.E.
1986-01-01
Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data, and new gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream-gaging stations currently being operated were found to lack the accuracy required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream-gaging program; below this level, the gages and recorders would not receive proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)
Kinetic energy budget during strong jet stream activity over the eastern United States
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Scoggins, J. R.
1980-01-01
Kinetic energy budgets are computed during a cold air outbreak in association with strong jet stream activity over the eastern United States. The period is characterized by large generation of kinetic energy due to cross-contour flow. Horizontal export and dissipation of energy to subgrid scales of motion constitute the important energy sinks. Rawinsonde data at 3 and 6 h intervals during a 36 h period are used in the analysis and reveal that energy fluctuations on a time scale of less than 12 h are generally small even though the overall energy balance does change considerably during the period in conjunction with an upper level trough which moves through the region. An error analysis of the energy budget terms suggests that this major change in the budget is not due to random errors in the input data but is caused by the changing synoptic situation. The study illustrates the need to consider the time and space scales of associated weather phenomena in interpreting energy budgets obtained through use of higher frequency data.
Cost effectiveness of stream-gaging program in Michigan
Holtschlag, D.J.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the stage-discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. Owing to continual changes in the composition of the network and in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. The cost of these updates needs to be considered in decisions concerning the feasibility of flexible servicing schedules.
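The Kalman-filter cost-effectiveness analyses in these USGS studies ultimately reduce to a constrained trade: the standard error of a station's record falls with more frequent visits, while total visit cost must stay within budget. A toy version of that trade, with entirely invented per-station parameters (the actual analysis derives uncertainty functions from rating stability and missing-record statistics), can be solved greedily:

```python
import math

# Hypothetical stations: (variance floor a, visit-dependent variance b, cost per visit c).
stations = {"A": (4.0, 900.0, 300.0), "B": (9.0, 400.0, 250.0), "C": (1.0, 1600.0, 350.0)}
BUDGET = 12_000.0

def se(a, b, n):
    """Standard error (%) of a station record visited n times per year (toy model)."""
    return math.sqrt(a + b / n)

visits = {k: 4 for k in stations}   # minimum servicing schedule
cost = sum(stations[k][2] * n for k, n in visits.items())

# Greedy: spend the remaining budget on whichever extra visit buys
# the largest reduction in average standard error per dollar.
while True:
    best, best_gain = None, 0.0
    for k, (a, b, c) in stations.items():
        if cost + c > BUDGET:
            continue
        gain = (se(a, b, visits[k]) - se(a, b, visits[k] + 1)) / c
        if gain > best_gain:
            best, best_gain = k, gain
    if best is None:
        break
    visits[best] += 1
    cost += stations[best][2]

avg_se = sum(se(a, b, visits[k]) for k, (a, b, _) in stations.items()) / len(stations)
print(f"visits = {visits}, cost = ${cost:,.0f}, average SE = {avg_se:.1f}%")
```

Sweeping BUDGET in such a model reproduces the budget-versus-average-standard-error curves these reports tabulate.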
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Meteorological Error Budget Using Open Source Data
2016-09-01
ARL-TR-7831, US Army Research Laboratory, September 2016: Meteorological Error Budget Using Open-Source Data, by J Cogan, J Smith, and P Haines.
Zonal average earth radiation budget measurements from satellites for climate studies
NASA Technical Reports Server (NTRS)
Ellis, J. S.; Haar, T. H. V.
1976-01-01
Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean monthly, seasonal, and annual zonally averaged meridional profiles. The individual months comprising the 29-month set were selected as representing the best available total flux data for compositing into large-scale statistics for climate studies. A discussion of the spatial resolution of the measurements is presented, along with an error analysis that includes both the uncertainty and the standard error of the mean.
General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.
2011-01-01
The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.
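The core computation that CPEB automates, mapping optic motions through linear sensitivity matrices into a contrast estimate, can be sketched in a few lines. The matrix and motion values below are random placeholders, not outputs of the actual ray-trace code:

```python
import numpy as np

rng = np.random.default_rng(0)

N_OPTICS = 5   # perturbed degrees of freedom (e.g., tilts/decenters of optics)
N_PIX = 200    # dark-hole pixels where contrast is evaluated

# Placeholder linear sensitivity matrix, playing the role of the
# ray-trace-derived beam-walk/aberration sensitivities.
S = 1e-9 * rng.standard_normal((N_PIX, N_OPTICS))

# RMS motion of each optic over the observation (placeholder values).
sigma_motion = np.array([2e-9, 5e-9, 1e-9, 3e-9, 4e-9])

# For independent zero-mean motions, per-pixel contrast variance adds in quadrature.
contrast_var = (S**2) @ (sigma_motion**2)
print(f"mean dark-hole contrast degradation ~ {contrast_var.mean():.3e}")

# Per-optic breakdown, as an error budget spreadsheet would tabulate it.
for j, sig in enumerate(sigma_motion):
    term = ((S[:, j] * sig) ** 2).mean()
    print(f"optic {j}: contribution {term:.3e}")
```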
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable, high-performance space systems.
Quantifying uncertainty in forest nutrient budgets
Ruth D. Yanai; Carrie R. Levine; Mark B. Green; John L. Campbell
2012-01-01
Nutrient budgets for forested ecosystems have rarely included error analysis, in spite of the importance of uncertainty to interpretation and extrapolation of the results. Uncertainty derives from natural spatial and temporal variation and also from knowledge uncertainty in measurement and models. For example, when estimating forest biomass, researchers commonly report...
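For a budget term built from sums and differences of measured fluxes, Gaussian error propagation and Monte Carlo agree when the errors are independent; Monte Carlo generalizes more easily to the correlated and nonlinear cases forest budgets often involve. A minimal sketch with invented flux values:

```python
import random
import statistics

random.seed(42)

# Hypothetical annual nitrogen fluxes (kg/ha/yr) as (mean, standard error).
deposition = (8.0, 1.0)
streamflow_export = (3.0, 0.5)
biomass_accretion = (4.0, 1.5)

# Net budget term: inputs minus outputs minus storage change.
def net(dep, exp_, acc):
    return dep - exp_ - acc

# Analytic propagation: for a linear combination, variances add in quadrature.
se_analytic = (deposition[1]**2 + streamflow_export[1]**2 + biomass_accretion[1]**2) ** 0.5

# Monte Carlo: sample each flux and examine the spread of the net term.
samples = [
    net(random.gauss(*deposition), random.gauss(*streamflow_export),
        random.gauss(*biomass_accretion))
    for _ in range(100_000)
]
print(f"net = {statistics.mean(samples):.2f} kg/ha/yr")
print(f"SE analytic = {se_analytic:.2f}, SE Monte Carlo = {statistics.stdev(samples):.2f}")
```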
Imaging phased telescope array study
NASA Technical Reports Server (NTRS)
Harvey, James E.
1989-01-01
The problems encountered in obtaining a wide field of view with large, space-based, direct-imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources, and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.
Balancing the books - a statistical theory of prospective budgets in Earth System science
NASA Astrophysics Data System (ADS)
O'Kane, J. Philip
An honest declaration of the error in a mass, momentum or energy balance, ɛ, simply raises the question of its acceptability: "At what value of ɛ is the attempted balance to be rejected?" Answering this question requires a reference quantity against which to compare ɛ. This quantity must be a mathematical function of all the data used in making the balance. To deliver this function, a theory grounded in a workable definition of acceptability is essential. A distinction must be drawn between a retrospective balance and a prospective budget in relation to any natural space-filling body. Balances look to the past; budgets look to the future. The theory is built on the application of classical sampling theory to the measurement and closure of a prospective budget. It satisfies R.A. Fisher's "vital requirement that the actual and physical conduct of experiments should govern the statistical procedure of their interpretation". It provides a test, which rejects, or fails to reject, the hypothesis that the closing error on the budget, when realised, was due to sampling error only. By increasing the number of measurements, the discrimination of the test can be improved, controlling both the precision and accuracy of the budget and its components. The cost-effective design of such measurement campaigns is discussed briefly. This analysis may also show when campaigns to close a budget on a particular space-filling body are not worth the effort for either scientific or economic reasons. Other approaches, such as those based on stochastic processes, lack this finality, because they fail to distinguish between different types of error in the mismatch between a set of realisations of the process and the measured data.
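The closure test implied by this sampling-theory framing can be written compactly. A sketch under the simplifying assumption of independent, approximately normal component estimates (notation mine, not the paper's): each budget component k is estimated with standard error σ_k, and the realised closing error ε is compared with the sampling variability it would have if the true budget closed exactly.

```latex
\[
  \varepsilon \;=\; \sum_{i \in \text{inputs}} \hat{X}_i
               \;-\; \sum_{j \in \text{outputs}} \hat{X}_j
               \;-\; \Delta\hat{S},
  \qquad
  \sigma_\varepsilon^2 \;=\; \sum_k \sigma_k^2,
\]
\[
  \text{reject closure at level } \alpha
  \quad\text{if}\quad
  |\varepsilon| \;>\; z_{1-\alpha/2}\,\sigma_\varepsilon .
\]
```

Increasing the number of measurements shrinks each σ_k and hence σ_ε, which is the sense in which the discrimination of the test can be improved.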
NASA Technical Reports Server (NTRS)
Thome, K.
2016-01-01
Knowledge of uncertainties and errors is essential for comparisons of remote sensing data across time, space, and spectral domains. Vicarious radiometric calibration is used to demonstrate the need for uncertainty knowledge and to provide an example error budget. The sample error budget serves as an example of the questions and issues that need to be addressed by the calibration/validation community, as accuracy requirements for imaging spectroscopy data will continue to become more stringent in the future. Error budgets will also be critical to ensure consistency across the range of imaging spectrometers expected to be launched in the next five years.
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.
2017-01-01
In this paper, we develop an integrated inventory model considering imperfect quality items, inspection errors, controllable lead time, and a budget capacity constraint. The fraction of imperfect items is uniformly distributed, and defective items are detected during the screening process. Two types of inspection error are possible: type I (a non-defective item classified as defective) and type II (a defective item classified as non-defective). The demand during the lead time is unknown and follows the normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing cost. The purposes of this research are to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Kuhn-Tucker conditions, and to apply the models. The application results and sensitivity analysis show that the integrated model achieves a lower total inventory cost than separate optimization.
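A numerical stand-in for the Kuhn-Tucker step can be built from a textbook (Q, k) continuous-review model with a purchasing-budget cap; this is an illustrative model with invented parameters, not a reproduction of the paper's integrated vendor-buyer formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical single-item model: lot size Q, safety factor k, budget cap on c*Q.
D, A, h, c = 1000.0, 200.0, 5.0, 25.0   # demand/yr, order cost, holding cost, unit cost
pi_s, sigma_L = 50.0, 30.0              # shortage cost per unit, lead-time demand std dev
W = 8000.0                              # purchasing budget available for one lot

def expected_shortage(k):
    # Standard normal loss function: E[(Z - k)^+].
    return norm.pdf(k) - k * (1.0 - norm.cdf(k))

def total_cost(x):
    Q, k = x
    return (A * D / Q                           # ordering
            + h * (Q / 2.0 + k * sigma_L)       # cycle + safety-stock holding
            + pi_s * (D / Q) * sigma_L * expected_shortage(k))  # shortages

res = minimize(
    total_cost, x0=[200.0, 1.0], method="SLSQP",
    bounds=[(1.0, None), (0.0, None)],
    constraints=[{"type": "ineq", "fun": lambda x: W - c * x[0]}],  # c*Q <= W
)
Q_opt, k_opt = res.x
print(f"Q* = {Q_opt:.1f} units, k* = {k_opt:.2f}, cost = {total_cost(res.x):.0f}")
print(f"budget constraint binding: {abs(c * Q_opt - W) < 1.0}")
```

With these parameters the budget constraint binds (Q* = W/c), so the associated Kuhn-Tucker multiplier is positive, which is precisely the case the paper's optimality conditions handle.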
Sensitivity of planetary cruise navigation to earth orientation calibration errors
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Folkner, W. M.
1995-01-01
A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.
Benhamou, Dan; Piriou, Vincent; De Vaumas, Cyrille; Albaladejo, Pierre; Malinovsky, Jean-Marc; Doz, Marianne; Lafuma, Antoine; Bouaziz, Hervé
2017-04-01
Patient safety is improved by the use of labelled, ready-to-use, pre-filled syringes (PFS) when compared to conventional methods of syringe preparation (CMP) of the same product from an ampoule. However, the PFS presentation costs more than the CMP presentation. The aim was to estimate the budget impact for French hospitals of switching from atropine in ampoules to atropine PFS for anaesthesia care. A model was constructed to simulate the financial consequences of the use of atropine PFS in operating theatres, taking into account wastage and medication errors. The model tested different scenarios, and a sensitivity analysis was performed. In a reference scenario, the systematic use of atropine PFS rather than atropine CMP yielded a net one-year budget saving of €5,255,304. Medication errors outweighed other cost factors relating to the use of atropine CMP (€9,425,448). Avoidance of wastage in the case of atropine CMP (prepared and unused) was a further major source of savings (€1,167,323). Significant savings were also made in the other scenarios examined. The sensitivity analysis suggests that the results obtained are robust and stable for a range of parameter estimates and assumptions. The financial model was based on data obtained from the literature and expert opinions. The budget impact analysis shows that even though atropine PFS is more expensive than atropine CMP, its use would lead to significant cost savings, mainly due to fewer medication errors and their associated consequences and the absence of wastage when atropine syringes are prepared in advance.
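The structure of such a budget-impact model is straightforward arithmetic over event rates and unit costs: the acquisition premium of PFS is weighed against avoided medication-error costs and avoided wastage. The per-event figures below are placeholders, not the French data used in the study:

```python
# Hypothetical annual volumes and unit costs (EUR) for a single hospital.
syringes_per_year = 10_000
extra_cost_per_pfs = 1.50                        # PFS acquisition premium per syringe
error_rate_cmp, error_rate_pfs = 0.006, 0.001    # medication-error rates per preparation
cost_per_error = 1_500.0                         # mean cost of managing one medication error
wastage_rate_cmp = 0.30                          # CMP syringes prepared in advance, then discarded
cost_per_wasted_cmp = 0.40                       # drug + consumables for one wasted CMP syringe

extra_acquisition = syringes_per_year * extra_cost_per_pfs
avoided_errors = syringes_per_year * (error_rate_cmp - error_rate_pfs) * cost_per_error
avoided_wastage = syringes_per_year * wastage_rate_cmp * cost_per_wasted_cmp

net_impact = avoided_errors + avoided_wastage - extra_acquisition
print(f"extra acquisition: {extra_acquisition:,.0f} EUR")
print(f"avoided error costs: {avoided_errors:,.0f} EUR; avoided wastage: {avoided_wastage:,.0f} EUR")
print(f"net budget impact (positive = saving): {net_impact:,.0f} EUR")
```

With these placeholder values the avoided-error term dominates the saving, mirroring the study's finding.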
Modeling and analysis of pinhole occulter experiment
NASA Technical Reports Server (NTRS)
Ring, J. R.
1986-01-01
The objectives were to improve the pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Terrien, Ryan; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Gudmundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
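A modular budget of this kind is naturally represented as a table of independent terms combined in quadrature, which is what makes it easy to swap terms when adapting it to another spectrometer. The term names and values below are illustrative, not NEID's published numbers:

```python
import math

# Illustrative radial velocity error terms (cm/s), assumed independent.
terms = {
    "photon noise": 25.0,
    "wavelength calibration": 10.0,
    "detector effects": 8.0,
    "fiber illumination / modal noise": 12.0,
    "thermo-mechanical drift": 9.0,
    "telluric contamination": 15.0,
}

total = math.sqrt(sum(v**2 for v in terms.values()))
for name, v in sorted(terms.items(), key=lambda kv: -kv[1]):
    print(f"{name:34s} {v:5.1f} cm/s  ({(v / total)**2:6.1%} of variance)")
print(f"{'total (quadrature)':34s} {total:5.1f} cm/s")
```

Listing each term's share of the total variance makes it immediately visible when no single source sets the measurement floor, which is the situation the abstract describes.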
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the fast-turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run, in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small-body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
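The single-run advantage of covariance analysis comes from propagating the state covariance itself through linearized dynamics rather than sampling trajectories. A minimal discrete-time sketch (a generic two-state propagation, far smaller than G-CAT's 120+ state filter):

```python
import numpy as np

dt = 1.0
# Linearized dynamics for a 1-D position/velocity state: x_{k+1} = F x_k + w_k.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
Q = np.diag([0.0, 1e-4])   # process noise, e.g., unmodeled acceleration (placeholder)

P = np.diag([1.0, 0.01])   # initial knowledge covariance (placeholder)
for _ in range(100):       # propagate the statistics, not individual trajectories
    P = F @ P @ F.T + Q

sigma_pos, sigma_vel = np.sqrt(np.diag(P))
print(f"after 100 steps: sigma_pos = {sigma_pos:.2f}, sigma_vel = {sigma_vel:.3f}")
# Error-ellipse axes follow from the eigendecomposition of the covariance P.
# A Monte Carlo estimate of the same numbers would need thousands of sampled runs.
```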
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included; the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Cost-effectiveness of the Federal stream-gaging program in Virginia
Carpenter, D.H.
1985-01-01
Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation, and their discontinuation should be considered. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data-collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
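Although the TRAILBLAZER algorithms themselves are not reproduced here, the generic fix-estimation problem they address, intersecting noisy bearing lines from several DF sites in a least-squares sense, is easy to sketch; the site positions and the 1-degree bearing error below are invented:

```python
import numpy as np

# Hypothetical DF sites (km) and emitter location to be estimated.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
truth = np.array([7.0, 4.0])

rng = np.random.default_rng(5)
bearings = np.arctan2(truth[1] - sensors[:, 1], truth[0] - sensors[:, 0])
bearings += np.deg2rad(rng.normal(0.0, 1.0, size=len(sensors)))  # 1-deg DF error

# Each bearing defines a line through its site:
#   sin(b) * x - cos(b) * y = sin(b) * sx - cos(b) * sy
A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
b = A[:, 0] * sensors[:, 0] + A[:, 1] * sensors[:, 1]
fix, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated fix: {fix}, miss distance: {np.linalg.norm(fix - truth):.2f} km")
```

An error budget for such a system then apportions the miss distance among DF measurement error, site survey error, and the unmodeled effects examined in the analysis above.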
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of instantaneous discharge of 7.2%.
Performance analysis of next-generation lunar laser retroreflectors
NASA Astrophysics Data System (ADS)
Ciocci, Emanuele; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Mastrofini, Marco; Currie, Douglas; Delle Monache, Giovanni; Dell'Agnello, Simone
2017-09-01
Starting in 1969, Lunar Laser Ranging (LLR) to the Apollo and Lunokhod Cube Corner Retroreflectors (CCRs) provided several tests of General Relativity (GR). When deployed, the Apollo/Lunokhod CCR design contributed only a negligible fraction of the ranging error budget. Today, improvements in the laser ground stations over the years have made the lunar libration contribution significant, and libration now dominates the error budget, limiting the precision of experimental tests of gravitational theories. The MoonLIGHT-2 project (Moon Laser Instrumentation for General relativity High-accuracy Tests - Phase 2) is a next-generation LLR payload developed by the Satellite/lunar/GNSS laser ranging/altimetry and Cube/microsat Characterization Facilities Laboratory (SCF_Lab) at INFN-LNF in collaboration with the University of Maryland. With its unique design consisting of a single large CCR unaffected by librations, MoonLIGHT-2 can significantly reduce the reflectors' contribution to the error in measurements of the lunar geodetic precession and other GR tests compared to the Apollo/Lunokhod CCRs. This paper treats only this specific next-generation lunar laser retroreflector (MoonLIGHT-2) and is by no means intended to address other contributions to the global LLR error budget. MoonLIGHT-2 is approved to be launched with the Moon Express 1 (MEX-1) mission and will be deployed on the Moon's surface in 2018. To validate and optimize MoonLIGHT-2, the SCF_Lab is carrying out a unique experimental test called the SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and a simulated space environment. The focus of this paper is to describe the SCF_Lab's specialized characterization of the performance of our next-generation LLR payload. While this payload will improve the contribution of the space segment (MoonLIGHT-2) to the error budget of GR tests and of constraints on new gravitational theories (such as non-minimally coupled gravity and spacetime torsion), the description of the associated physics analysis and the global LLR error budget is outside the chosen scope of the present paper. We note that, according to Reasenberg et al. (2016), software models used for LLR physics and lunar science cannot process residuals with an accuracy better than a few centimeters, and that, in order to process millimeter-level (or better) ranging data coming from (not only) future reflectors, it is necessary to update and improve the respective models inside the software package. The results of the SCF-Test thermal and optical analysis presented here show that good performance is expected from MoonLIGHT-2 after its deployment on the Moon. This in turn will stimulate improvements in LLR ground-segment hardware and help refine the LLR software code and models. Without a significant improvement of the LLR space segment, the acquisition of improved ground LLR hardware and challenging LLR software refinements may languish for lack of motivation, since the librations of the old-generation LLR payloads largely dominate the global LLR error budget.
Multidisciplinary Analysis of the NEXUS Precursor Space Telescope
NASA Astrophysics Data System (ADS)
de Weck, Olivier L.; Miller, David W.; Mosier, Gary E.
2002-12-01
A multidisciplinary analysis is demonstrated for the NEXUS space telescope precursor mission. This mission was originally designed as an in-space technology testbed for the Next Generation Space Telescope (NGST). One of the main challenges is to achieve a very tight pointing accuracy, with a sub-pixel line-of-sight (LOS) jitter budget and a root-mean-square (RMS) wavefront error smaller than λ/50, despite the presence of electronic and mechanical disturbance sources. The analysis starts with the assessment of the performance of an initial design, which turns out not to meet the requirements. Twenty-five design parameters from structures, optics, dynamics, and controls are then computed in a sensitivity and isoperformance analysis, in search of better designs. Isoperformance allows finding an acceptable design that is well "balanced" and does not place undue burden on a single subsystem. An error budget analysis shows the contributions of individual disturbance sources. This paper might be helpful in analyzing similar, innovative space telescope systems in the future.
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with the warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance, we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping-harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures, were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
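The shape-correction step itself is a linear least-squares problem: find the warping-harness settings whose combined influence functions best cancel the measured segment surface error. A toy version with random placeholder influence functions (the real analysis uses predicted harness influence functions and a Shack-Hartmann sensor model):

```python
import numpy as np

rng = np.random.default_rng(7)

N_PTS = 500   # surface sample points on one segment
N_ACT = 21    # warping-harness degrees of freedom (placeholder count)

A = rng.standard_normal((N_PTS, N_ACT))        # influence functions as columns (placeholder)
low_order = A @ rng.standard_normal(N_ACT)     # correctable part of the surface
high_order = 0.2 * rng.standard_normal(N_PTS)  # part the harness cannot reach
surface = low_order + high_order               # unwarped segment surface error

# Least-squares harness command that best cancels the surface.
cmd, *_ = np.linalg.lstsq(A, -surface, rcond=None)
residual = surface + A @ cmd

rms = lambda v: float(np.sqrt(np.mean(v**2)))
print(f"surface RMS before: {rms(surface):.3f}, after correction: {rms(residual):.3f}")
```

The residual that survives this projection is the 'aberration after shape correction' that the TMT budget then propagates to wavefront error and PSSN.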
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert
2012-01-01
The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogue and main parachutes. Therefore, it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction, so conventional error budget methods could not be applied. Instead, a high-fidelity Monte Carlo simulation was performed, and error bounds were determined from the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. These large errors drove a change to the altitude trigger setpoint for forward bay cover (FBC) jettison.
Ultraspectral sounding retrieval error budget and estimation
NASA Astrophysics Data System (ADS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping
2011-11-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended for the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of the absolute difference and standard deviation of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage
Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José
2016-01-01
Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning is a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype was defined, an error analysis was performed to propose design improvements. The metrology loop of the system was then mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the strong influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively. PMID:26761014
First-order error budgeting for LUVOIR mission
NASA Astrophysics Data System (ADS)
Lightsey, Paul A.; Knight, J. Scott; Feinberg, Lee D.; Bolcar, Matthew R.; Shaklan, Stuart B.
2017-09-01
Future large astronomical telescopes in space will have architectures with complex and demanding requirements driven by their science goals. The Large UV/Optical/IR Surveyor (LUVOIR) mission concept being assessed by the NASA/Goddard Space Flight Center is expected to be 9 to 15 meters in diameter, have a segmented primary mirror, and be diffraction limited at a wavelength of 500 nanometers. The optical stability is expected to be in the picometer range over minutes to hours. Architecture studies to support the NASA Science and Technology Definition Teams (STDTs) are underway to evaluate systems performance improvements to meet the science goals. To help define the technology needs and assess performance, a first-order error budget has been developed. Like the JWST error budget, it includes the active, adaptive, and passive elements in the spatial and temporal domains. JWST performance is scaled using first-order approximations where appropriate, and the budget includes technical advances in telescope control.
Geometric error characterization and error budgets. [thematic mapper
NASA Technical Reports Server (NTRS)
Beyer, E.
1982-01-01
Procedures used in characterizing geometric error sources for a spaceborne imaging system are described, using the LANDSAT-D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90-percent level.
Onorbit IMU alignment error budget
NASA Technical Reports Server (NTRS)
Corson, R. W.
1980-01-01
The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error, to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost-effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network, at an average standard error of 32.7%; a smaller budget does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)
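As an aside on the budget-accuracy tradeoff quoted above (a toy fit for illustration, not part of the report), the three Louisville Subdistrict budget/standard-error pairs are roughly consistent with a power law, which a few lines of Python make explicit:

    import numpy as np

    # Budget / average standard error pairs quoted for the Louisville Subdistrict.
    budgets = np.array([214200.0, 223500.0, 268200.0])  # dollars
    se_pct = np.array([32.7, 28.5, 16.9])               # average standard error, %

    # Fit SE = a * budget^b on log-log axes; the slope b quantifies the tradeoff.
    b, log_a = np.polyfit(np.log(budgets), np.log(se_pct), 1)
    print(f"SE ~ budget^{b:.1f}")  # roughly budget^-3 for these three points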
Towards the 1 mm/y stability of the radial orbit error at regional scales
NASA Astrophysics Data System (ADS)
Couhert, Alexandre; Cerri, Luca; Legeais, Jean-François; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel
2015-01-01
An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular, this study reviews orbit errors dependent on the tracking technique, with the aim of monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on the Jason orbits are assessed. We also examine the impact of the analysis method on the inference of Geographically Correlated Errors, as well as the significance of estimated radial orbit error trends versus the time span of the analysis. A long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is thus provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West “order-1” pattern at the 2 mm/y level (secular) and 4 mm level (seasonal) over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales, and the challenge of evaluating such an improvement using independent in situ data, is discussed.
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs that present unique systems-level challenges. The goal is to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project, in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirement bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which link directly to the science requirements.
The Terrestrial Planet Finder coronagraph dynamics error budget
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Green, Joseph J.; Lay, Oliver P.
2005-01-01
The Terrestrial Planet Finder Coronagraph (TPF-C) demands extreme wave front control and stability to achieve its goal of detecting earth-like planets around nearby stars. We describe the performance models and error budget used to evaluate image plane contrast and derive engineering requirements for this challenging optical system.
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for problems of spacecraft trajectory estimation in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the assumed model, the filters sometimes perform very poorly. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. The worst errors in the target plane also provide a measure for assigning the propellant budget for trajectory corrections. The worst-error study thus provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.; Shaklan, Stuart B.
2009-01-01
This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
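As a sketch of the final spreadsheet rollup (step 3), the per-perturbation contrast contributions computed from the sensitivity matrices add in intensity to give the total scattered-light contrast; the term names and values below are hypothetical, not taken from the CPEB itself:

    # Hypothetical per-term contrast contributions, already propagated through
    # the beam-walk and aberration sensitivity matrices for allocated motions.
    contrast_terms = {
        "beam walk, optic 1": 3.0e-12,
        "aberration, optic 2": 1.5e-12,
        "line-of-sight jitter": 2.0e-12,
    }

    # Contrast contributions from independent perturbations add in intensity.
    total = sum(contrast_terms.values())
    print(f"total dark-hole contrast: {total:.1e}")  # 6.5e-12 for these values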
Pattern uniformity control in integrated structures
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Biesemans, Serge; Enomoto, Masashi
2017-03-01
In our previous paper dealing with multi-patterning, we proposed a new indicator to quantify the quality of final wafer pattern transfer, called the interactive pattern fidelity error (IPFE). It detects patterning failures resulting from any source of variation in creating integrated patterns. IPFE is a function of the overlay and edge placement error (EPE) of all layers comprising the final pattern (i.e., lower and upper layers). In this paper, we extend the use cases to Via in addition to the bridge case (Block on Spacer). We propose an IPFE budget and CD budget using simple geometric and statistical models with analysis of variance (ANOVA). In addition, we validate the model with experimental data. The experimental results show that improvements in overlay, local CDU (LCDU) of contact hole (CH) or pillar patterns (especially stochastic pattern noise (SPN)), and pitch walking are all critical to meeting the budget requirements. We also provide a special note about the importance of the line length used in analyzing LWR. We find that the IPFE and CD budget requirements are consistent with the ITRS technical requirements. The IPFE concept can therefore be adopted for a variety of integrated structures comprising digital logic circuits. Finally, we suggest how to use IPFE for yield management and for optimizing requirements for each process.
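For orientation, a budget of the kind described here is often rolled up in quadrature from its contributors; the sketch below uses hypothetical 3-sigma values and a plain independent-term model, much simpler than the paper's ANOVA treatment:

    import numpy as np

    # Hypothetical 3-sigma contributors for a Block-on-Spacer case, in nm.
    overlay = 2.5      # upper-to-lower layer overlay
    lcdu_half = 1.5    # half of the local CDU of the contact hole / pillar
    spn = 1.0          # stochastic pattern noise
    pitch_walk = 0.8   # pitch walking of the spacer-defined grating

    # Treating the contributors as independent, the budget adds in quadrature.
    epe = np.sqrt(overlay**2 + lcdu_half**2 + spn**2 + pitch_walk**2)
    print(f"EPE budget: {epe:.2f} nm (3-sigma)")  # 3.18 nm for these values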
NASA Astrophysics Data System (ADS)
Ma, H.-Y.; Klein, S. A.; Xie, S.; Zhang, C.; Tang, S.; Tang, Q.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.; Ahlgrimm, M.; Berg, L. K.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Liu, Y.; Merryfield, W.; Qian, Y.; Roehrig, R.; Wang, Y.-C.
2018-03-01
Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave flux at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes, as well as the surface evaporative fraction (EF), are contributors to the T2m bias. Radiation biases are largely affected by cloud simulations, while the EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of the radiation and EF contributions to the T2m bias. Our analysis indicates that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.
Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.
2004-01-01
Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
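For context, the Bowen-ratio energy-budget method referenced here computes latent heat flux from net radiation and the Bowen ratio; this is the standard formulation, not an equation reproduced from the paper:

    \lambda E = \frac{R_n - G}{1 + \beta}, \qquad \beta = \frac{H}{\lambda E} = \gamma \, \frac{\Delta T}{\Delta e}

where R_n is net radiation, G the subsurface heat flux, H the sensible heat flux, γ the psychrometric constant, and ΔT and Δe the temperature and vapor-pressure differences between the two sensor heights. Because the drier upland has a larger β, limited fetch inflates the measured Bowen ratio and biases wetland ET low, consistent with the negative errors reported above.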
Observing the earth radiation budget from satellites - Past, present, and a look to the future
NASA Technical Reports Server (NTRS)
House, F. B.
1985-01-01
Satellite measurements of the radiative exchange between the planet Earth and space have been the objective of many experiments since the beginning of the space age in the late 1950s. The ongoing mission of the Earth Radiation Budget (ERB) experiments has been, and will be, to consider flight hardware, data handling, and scientific analysis methods in a single design strategy. Research and development on observational data has produced an analysis model of the errors associated with ERB measurement systems on polar satellites. Results show that the variability of reflected solar radiation from changing meteorology dominates the measurement uncertainties. As an application, model calculations demonstrate that measurement requirements for the verification of climate models may be satisfied with observations from one polar satellite, provided there is information on diurnal variations of the radiation budget from the ERBE mission.
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility laser guide star (LGS) dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm), and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low-read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion of its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
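The quoted contingency follows from simple root-sum-square arithmetic over the four top-level terms, assuming they are independent; a quick check in Python:

    import numpy as np

    # Top-level NFIRAOS wavefront error terms from the abstract, in nm RMS.
    terms = [120.0,  # first-order turbulence compensation (on-axis)
             84.0,   # opto-mechanical implementation errors
             74.0,   # AO component errors and higher-order effects
             61.0]   # tip/tilt at 50% sky coverage at the galactic pole

    rss = np.sqrt(sum(t**2 for t in terms))         # about 175 nm
    requirement = 187.0                             # ORD total on-axis WFE, nm
    contingency = np.sqrt(requirement**2 - rss**2)  # about 66 nm, as quoted
    print(f"RSS = {rss:.0f} nm, contingency = {contingency:.0f} nm")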
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert
2011-01-01
The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogue and main parachutes. Therefore, it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, analog-to-digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the U.S. Standard Atmosphere 1976 model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high-fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the difference between the midbay compartment pressure and the free-stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
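As a sketch of the pressure-to-altitude conversion step, using the published U.S. Standard Atmosphere 1976 constants for the 0-11 km layer only (the flight software presumably handles more than this):

    import math

    P0 = 101325.0   # sea-level standard pressure, Pa
    T0 = 288.15     # sea-level standard temperature, K
    L = 0.0065      # tropospheric lapse rate, K/m
    G0 = 9.80665    # standard gravity, m/s^2
    R = 8.31446     # universal gas constant, J/(mol K)
    M = 0.0289644   # mean molar mass of air, kg/mol

    def pressure_to_altitude(p_pa):
        """Geopotential altitude (m) from static pressure in the 0-11 km layer."""
        return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / (G0 * M)))

    print(pressure_to_altitude(26436.0))  # roughly 10,000 m

A bias in sensed pressure (for example, the midbay-to-free-stream aerodynamic offset) propagates through this nonlinear function, which is one reason the altitude error bounds had to come from Monte Carlo simulation rather than independent-term budgeting.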
Prediction errors in wildland fire situation analyses.
Geoffrey H. Donovan; Peter Noordijk
2005-01-01
Wildfires consume budgets and put the heat on fire managers to justify and control suppression costs. To determine the appropriate suppression strategy, land managers must conduct a wildland fire situation analysis (WFSA) when: a wildland fire is expected to or does escape initial attack; a wildland fire managed for resource benefits...
Weighing Rocky Exoplanets with Improved Radial Velocimetry
NASA Astrophysics Data System (ADS)
Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium
2016-01-01
The synergy between Kepler and the ground-based radial velocity (RV) surveys has produced numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets that are smaller than 3 Earth radii do not have their masses constrained to better than 20%, limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS, combined in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or super-Earth planets in the habitable zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we will need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude of 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.
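Combining the two quoted systematic terms in quadrature, as the abstract indicates, gives the floor they jointly impose on Keck's RV precision:

    \sigma_{\rm sys} = \sqrt{(1.0\ \mathrm{m\,s^{-1}})^2 + (0.6\ \mathrm{m\,s^{-1}})^2} \approx 1.2\ \mathrm{m\,s^{-1}}

which is comparable to the 1-2 m/s overall precision quoted above, underscoring why these two terms dominate the budget.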
Cost effectiveness of the stream-gaging program in Ohio
Shindel, H.L.; Bartlett, W.P.
1986-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Ohio. Data uses and funding sources were identified for 107 continuous stream gages currently being operated by the U.S. Geological Survey in Ohio with a budget of $682,000; this budget includes field work for other projects and excludes stations jointly operated with the Miami Conservancy District. No stream gages were identified as having insufficient reason to continue their operation, nor were any stations identified as having uses specific only to short-term studies. All 107 stations should be maintained in the program for the foreseeable future. The average standard error of estimation of streamflow records is 29.2 percent at the present level of funding. A minimum budget of $679,000 is required to operate the 107-gage program; a smaller budget does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 31.1 percent. The maximum budget analyzed was $1,282,000, which resulted in an average standard error of 11.1 percent. A need for additional gages has been identified by the other agencies that cooperate in the program. It is suggested that these gages be installed as funds become available.
Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.
1985-01-01
The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for the 91 gaging stations being operated in the two states. Some stations in Massachusetts are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of the studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operating policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two states are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)
NASA Technical Reports Server (NTRS)
Stoll, John C.
1995-01-01
The performance of an unaided attitude determination system based on GPS interferometry is examined using linear covariance analysis. The modelled system includes four GPS antennae onboard a gravity-gradient-stabilized spacecraft, specifically the Air Force's RADCAL satellite. The principal error sources are identified and modelled. The optimal system's sensitivities to these error sources are examined through an error budget and by varying system parameters. The effects of two satellite selection algorithms, Geometric and Attitude Dilution of Precision (GDOP and ADOP, respectively), are examined. The attitude performance of two optimal-suboptimal filters is also presented. Based on this analysis, the limiting factors in attitude accuracy are knowledge of the relative antenna locations, the electrical path lengths from the antennae to the receiver, and the multipath environment. The performance of the system is found to be fairly insensitive to torque errors, orbital inclination, and the two satellite-geometry figures of merit tested.
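For readers unfamiliar with the satellite-selection figures of merit mentioned above, GDOP follows from the geometry matrix of unit line-of-sight vectors; the sketch below uses a hypothetical five-satellite geometry (ADOP is an analogous attitude-domain variant, not reproduced here):

    import numpy as np

    # Unit line-of-sight vectors to five hypothetical GPS satellites.
    los = np.array([[0.000, 0.866, 0.500],
                    [0.866, 0.000, 0.500],
                    [-0.866, 0.000, 0.500],
                    [0.000, -0.866, 0.500],
                    [0.000, 0.000, 1.000]])

    H = np.hstack([los, np.ones((5, 1))])  # position columns plus clock column
    Q = np.linalg.inv(H.T @ H)             # geometry factor of the covariance
    gdop = float(np.sqrt(np.trace(Q)))
    print(f"GDOP = {gdop:.2f}")            # about 2.9 for this geometry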
Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite
NASA Technical Reports Server (NTRS)
Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.
1990-01-01
It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
New Methods for Assessing and Reducing Uncertainty in Microgravity Studies
NASA Astrophysics Data System (ADS)
Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.
2017-12-01
Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and the free-air correction depending on the survey setup. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from the laboratory to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and Earth views, knowledge of attenuator behavior when viewing the Sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
JASMINE: Data analysis and simulation
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Sako, Nobutada; Jasmine Working Group
JASMINE will study the structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions, and proper motions of about 10 million stars with a precision of 10 μas at z = 14 mag. In this paper, methods for data analysis and error budgets, on-board data handling such as sampling strategy and data compression, and software for end-to-end simulation are presented.
1985-12-20
[OCR-garbled report documentation page; recoverable keywords: Fix Estimation, Statistical Assumptions, Error Budget, Unmodeled Errors, Coding. The report appears to examine the underlying statistical assumptions and error budgets of fix-estimation algorithms used in current systems.]
Performance of the Keck Observatory adaptive-optics system.
van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A
2004-10-10
The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 μm when a bright guide star is used and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wavefront error based on our error budget estimates.
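The quoted Strehl ratios map to residual wavefront error through the Maréchal approximation, a standard estimate rather than a computation from the paper:

    import math

    LAM = 1580.0  # imaging wavelength, nm

    def wfe_from_strehl(strehl):
        """Residual wavefront error (nm RMS) via S = exp(-(2*pi*sigma/lambda)^2)."""
        return LAM / (2.0 * math.pi) * math.sqrt(-math.log(strehl))

    print(wfe_from_strehl(0.37))  # ~251 nm RMS, bright guide star
    print(wfe_from_strehl(0.19))  # ~324 nm RMS, magnitude 12 star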
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program; a smaller budget does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, H. -Y.; Klein, S. A.; Xie, S.
Many weather forecasting and climate models simulate a warm surface air temperature (T2m) bias over mid-latitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multi-model intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to T2m bias using a short-term hindcast approach with observations mainly from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site during the period of April to August 2011. The present study examines the contribution of surface energy budget errors to the bias. All participating models simulate higher net shortwave and longwave radiative fluxes at the surface, but there is no consistency in the signs of the biases in latent and sensible heat fluxes over the Central U.S. and ARM SGP. Nevertheless, biases in net shortwave and downward longwave fluxes, as well as surface evaporative fraction (EF), are the main contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF is affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis suggests that radiation errors are always an important source of T2m error for long-term climate runs, with EF errors of equal or lesser importance. However, for the short-term hindcasts, EF errors are more important provided a model has a substantial EF bias.
Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation
Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.
1984-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation; operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data; it is recommended that this sparsity be remedied as funds become available.
Comparison of direct and heterodyne detection optical intersatellite communication links
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1987-01-01
The performance of direct and heterodyne detection optical intersatellite communication links is evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For heterodyne systems, carrier phase noise is another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by launch cost, and suboptimal antenna gains are often used in practice; in that case, the heterodyne system has a slightly higher power margin because of its higher receiver sensitivity.
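As a toy illustration of the dB bookkeeping such a comparison rests on (every entry below is a hypothetical placeholder, not a value from the paper), a direct-detection budget might be tallied as:

    # Illustrative direct-detection optical link budget, all values hypothetical.
    link = {
        "tx power (dBm)": 13.0,          # 20 mW laser
        "tx antenna gain (dB)": 110.0,
        "free-space loss (dB)": -290.0,  # depends on wavelength and range
        "rx antenna gain (dB)": 112.0,
        "optics losses (dB)": -6.0,
        "pointing penalty (dB)": -3.0,   # 1.6-3 dB for QPPM per the text
    }

    received_dbm = sum(link.values())
    print(f"received power: {received_dbm:.1f} dBm")  # -64.0 dBm here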
A Starshade Petal Error Budget for Exo-Earth Detection and Characterization
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy
2011-01-01
We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.
Oriented Scintillation Spectrometer Experiment (OSSE). Revision A. Volume 1
1988-05-19
[Scanned table-of-contents fragment: system-level environmental tests; proof model structure tests and modal survey; alignment error budgets for the field of view and rotation axis (A4); OSSE proof model static load tests.]
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.
2016-12-01
This study develops a novel methodology for the spatiotemporal calibration of large numbers of groundwater recharge terms and parameters by coupling a specialized numerical model with analytical empirical orthogonal functions (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix combined with electrical consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e., horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF using the simulated error hydrograph of groundwater storage, in order to qualify the multiple error sources and quantify the revised volume. The objective function of the optimization model minimizes the root mean square error (RMSE) of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the transient-state governing equation. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance, and surface water recharge values among the four aquifers are 126, 96, and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, owing to efficient filtering of the errors transmitted by the estimated error and recharge across the boundary. Moreover, the average simulated error percentage of groundwater level corresponding to the calibrated budget variables and parameters of aquifer 1 is as small as 0.11%. This demonstrates that the developed methodology not only can effectively detect the flow tendency and error sources in all aquifers to achieve accurate spatiotemporal calibration, but also can capture the peaks and fluctuations of groundwater level in the shallow aquifer.
Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium Class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter, which will provide the remaining energy to send MAP out to a Lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper discusses the results of the analyses on three separate phases of the MAP mission: recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
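As a rough illustration of the Monte-Carlo approach to sizing a propellant budget against maneuver errors, here is a minimal sketch. The burn magnitudes and error levels are hypothetical, not MAP values, and the real analyses propagated full trajectories rather than simply summing delta-v.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Hypothetical maneuver plan: nominal delta-v (m/s) for three phasing-loop burns.
nominal_dv = np.array([20.0, 12.0, 8.0])
exec_error_frac = 0.02   # 2% (1-sigma) proportional execution error (assumed)
nav_error_mps = 0.05     # extra delta-v per burn from navigation error (assumed)

# Each trial perturbs every burn and sums the total delta-v demand.
dv_samples = (nominal_dv * (1 + exec_error_frac * rng.standard_normal((n_trials, 3)))
              + nav_error_mps * np.abs(rng.standard_normal((n_trials, 3))))
total_dv = dv_samples.sum(axis=1)

# A high percentile of the sampled demand is one way to size the budget.
print(f"mean total dv = {total_dv.mean():.2f} m/s, "
      f"99th percentile = {np.percentile(total_dv, 99):.2f} m/s")
```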
Cost effectiveness of the stream-gaging program in Louisiana
Herbert, R.A.; Carlson, D.D.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Louisiana. Data uses and funding sources were identified for the 68 continuous-record stream gages currently (1984) in operation with a budget of $408,700. Three stream gages have uses specific to a short-term study with no need for continued data collection beyond the study. The remaining 65 stations should be maintained in the program for the foreseeable future. In addition to the current operation of continuous-record stations, a number of wells, flood-profile gages, crest-stage gages, and stage stations are serviced on the continuous-record station routes, increasing the current budget to $423,000. The average standard error of estimate for data collected at the stations is 34.6%. Standard errors computed in this study are one measure of streamflow errors and can be used as guidelines in comparing the effectiveness of alternative networks. By using the routes and number of measurements prescribed by the 'Traveling Hydrographer Program,' the standard error could be reduced to 31.5% with the current budget of $423,000. If the gaging resources are redistributed, the 34.6% overall level of accuracy at the 68 continuous-record sites and the servicing of the additional wells or gages could be maintained with a budget of approximately $410,000. (USGS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Dam, M A; Mignant, D L; Macintosh, B A
In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 μm using a bright guide star and 0.19 for a magnitude 12 star.
A manual to identify sources of fluvial sediment
Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph
2016-01-01
Sediment is an important pollutant of concern that can degrade and alter aquatic habitat. A sediment budget is an accounting of the sources, storage, and export of sediment over a defined spatial and temporal scale. This manual focuses on field approaches to estimate a sediment budget. We also highlight the sediment fingerprinting approach to attribute sediment to different watershed sources. Determining the sources and sinks of sediment is important in developing strategies to reduce sediment loads to water bodies impaired by sediment. Therefore, this manual can be used when developing a sediment TMDL requiring identification of sediment sources. The manual takes the user through the steps necessary to construct a sediment budget:
- deciding on the watershed scale and time period of interest;
- becoming familiar with the watershed by conducting a literature review, compiling background information and maps relevant to the study questions, and conducting a reconnaissance of the watershed;
- developing partnerships with landowners and jurisdictions;
- characterizing the watershed geomorphic setting;
- developing a sediment budget design;
- collecting data;
- interpreting and constructing the sediment budget;
- generating products (maps, reports, and presentations) to communicate findings.
Sediment budget construction begins with examining the question(s) being asked and whether a sediment budget is necessary to answer them. If undertaking a sediment budget analysis is a viable option, the next step is to define the spatial scale of the watershed and the time scale needed to answer the question(s). Of course, we understand that monetary constraints play a big role in any decision. Early in the sediment budget development process, we suggest getting to know your watershed by conducting a reconnaissance and meeting with local stakeholders. The reconnaissance aids in understanding the geomorphic setting of the watershed and the potential sources of sediment. Identifying the potential sediment sources early in the design of the sediment budget will help later in deciding which tools are necessary to monitor erosion and/or deposition at these sources. Tools can range from rapid inventories that estimate the sediment budget to more rigorous field monitoring that quantifies sediment erosion, deposition, and export. In either approach, data are gathered, and erosion and deposition calculations are determined and compared to the sediment export, with a description of the error uncertainty. Findings are presented to local stakeholders and management officials. Sediment fingerprinting is a technique that apportions the sources of fine-grained sediment in a watershed using tracers or fingerprints. Due to different geologic and anthropogenic histories, the chemical and physical properties of sediment in a watershed may vary and often represent a unique signature (or fingerprint) for each source within the watershed. Fluvial sediment samples (the target sediment) are also collected and exhibit a composite of the source properties that can be apportioned through various statistical techniques. Using an unmixing model and error analysis, the final apportioned sediment is determined.
NASA Astrophysics Data System (ADS)
Khan, Yousaf; Afridi, Muhammad Idrees; Khan, Ahmed Mudassir; Rehman, Waheed Ur; Khan, Jahanzeb
2014-09-01
Hybrid wavelength-division multiplexed/time-division multiplexed passive optical access networks (WDM/TDM-PONs) combine the advanced features of both WDM and TDM PONs to provide a cost-effective access network solution. We demonstrate and analyze the transmission performance and power budget issues of a colorless hybrid WDM/TDM-PON scheme. A 10-Gb/s downstream differential phase shift keying (DPSK) signal and a remodulated upstream on/off keying (OOK) data signal are transmitted over 25 km of standard single-mode fiber. Simulation results show error-free transmission with adequate power margins in both the downstream and upstream directions, which proves the applicability of the proposed scheme to future passive optical access networks. The power budget confines both the PON splitting ratio and the distance between the Optical Line Terminal (OLT) and the Optical Network Unit (ONU).
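A power budget of this kind can be sketched in a few lines. In the fragment below only the 25 km reach comes from the abstract; the launch power, receiver sensitivity, splitter loss, and margin are assumed values.

```python
import math

# Hypothetical PON power-budget terms; only the 25 km reach is from the abstract.
tx_power_dbm = 3.0          # OLT launch power (assumed)
rx_sensitivity_dbm = -24.0  # ONU sensitivity at 10 Gb/s (assumed)
fiber_att_db_per_km = 0.2   # typical SSMF attenuation near 1550 nm
distance_km = 25.0
splitter_loss_db = 3.5      # per 1:2 splitting stage, incl. excess loss (assumed)
margin_db = 3.0

budget_db = tx_power_dbm - rx_sensitivity_dbm - margin_db
remaining_db = budget_db - fiber_att_db_per_km * distance_km
stages = math.floor(remaining_db / splitter_loss_db)
print(f"power budget {budget_db:.1f} dB -> up to 1:{2 ** stages} split over {distance_km:.0f} km")
```

The same arithmetic run in reverse shows how a larger splitting ratio shortens the feasible OLT-ONU distance, which is the confinement the abstract refers to.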
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions that are precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ^disk_*), and disk maximality (F^disk_{*,max} ≡ V^disk_{*,max}/V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
76 FR 55139 - Order Making Fiscal Year 2012 Annual Adjustments to Registration Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... Congressional Budget Office (``CBO'') and Office of Management and Budget (``OMB'') to project the aggregate... given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n-step...
Wright, S.A.; Schoellhamer, D.H.
2005-01-01
Where rivers encounter estuaries, a transition zone develops where riverine and tidal processes both affect sediment transport. One such transition zone is the Sacramento-San Joaquin River Delta, a large, complex system where several rivers meet to form an estuary (San Francisco Bay). Herein we present the results of a detailed sediment budget for this river/estuary transitional system. The primary regional goal of the study was to measure sediment transport rates and pathways in the delta in support of ecosystem restoration efforts. In addition to achieving this regional goal, the study has produced general methods to collect, edit, and analyze (including error analysis) sediment transport data at the interface of rivers and estuaries. Estimating sediment budgets for these systems is difficult because of the mixed nature of riverine versus tidal transport processes, the different timescales of transport in fluvial and tidal environments, and the sheer complexity and size of systems such as the Sacramento-San Joaquin River Delta. Sediment budgets also require error estimates in order to assess whether differences in inflows and outflows, which could be small compared to overall fluxes, are indeed distinguishable from zero. Over the 4-year period of this study, water years 1999-2002, 6.6 ± 0.9 Mt of sediment entered the delta and 2.2 ± 0.7 Mt exited, resulting in 4.4 ± 1.1 Mt (67 ± 17%) of deposition. The estimated deposition rate corresponding to this mass of sediment compares favorably with measured inorganic sediment accumulation on vegetated wetlands in the delta.
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty arises from two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, and mass and momentum constraints. The algorithm is evaluated using simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. In this experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water-surface, slope, and width observations influence the accuracy of the discharge estimates. Indeed, there is significant sensitivity to water-surface, slope, and width errors because the estimated bathymetry and roughness are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in the bathymetry and roughness errors. Increasing the slope error above 1.5 cm/km leads to significant degradation through direct errors in the discharge estimates. As the width error increases past 20%, the discharge error budget becomes dominated by the width error. The above two experiments are based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
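To see why unknown bathymetry and roughness dominate such a budget, consider a toy version of the inference: a random-walk Metropolis sampler over Manning-equation parameters in which the observations constrain depth but not roughness. This is not the authors' algorithm, and every number below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth" for one reach, using Manning's equation for a wide
# rectangular channel: Q = (1/n) * W * d^(5/3) * sqrt(S). All values invented.
n_true, d_true, W, S = 0.03, 4.0, 200.0, 1e-4
h_obs = d_true + 0.10 * rng.standard_normal(30)   # depth observations, 10 cm noise
Q_true = (1.0 / n_true) * W * d_true ** (5.0 / 3.0) * np.sqrt(S)

def log_post(theta):
    """Log-posterior: flat priors on (n, d); the likelihood constrains only d,
    so roughness n stays unidentified -- mimicking the SWOT situation."""
    n, d = theta
    if not (0.01 < n < 0.10 and 0.5 < d < 20.0):
        return -np.inf
    return -0.5 * np.sum((h_obs - d) ** 2) / 0.10 ** 2

theta = np.array([0.05, 3.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):                           # random-walk Metropolis
    prop = theta + rng.standard_normal(2) * np.array([0.002, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5_000:])                   # drop burn-in

Q_post = (1.0 / chain[:, 0]) * W * chain[:, 1] ** (5.0 / 3.0) * np.sqrt(S)
print(f"true Q = {Q_true:.0f} m^3/s; posterior Q 1-sigma = {Q_post.std():.0f} m^3/s")
```

Because the posterior on roughness is essentially the prior, the discharge spread stays large no matter how precise the height data are, which is the behavior the abstract reports.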
Space shuttle post-entry and landing analysis. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Crawford, B. S.; Duiven, E. M.
1973-01-01
Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
Demonstrating Starshade Performance as Part of NASA's Technology Development for Exoplanet Missions
NASA Astrophysics Data System (ADS)
Kasdin, N. Jeremy; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M. W.; Walkemeyer, P. E.; Bach, V. M.; Oakes, E.; Cady, E. J.; Martin, S. R.; Marchen, L. F.; Macintosh, B.; Rudd, R.; Mikula, J. A.; Lynch, D. H.
2012-01-01
In this poster we describe the results of our project to design, manufacture, and measure a prototype starshade petal as part of the Technology Development for Exoplanet Missions program. An external occulter is a satellite employing a large screen, or starshade, that flies in formation with a spaceborne telescope to provide the starlight suppression needed for detecting and characterizing exoplanets. Among the advantages of using an occulter are the broad bandwidth available for characterization and the removal of starlight before it reaches the observatory, greatly relaxing the requirements on the telescope and instrument. In this first two-year phase we focused on the key requirement of manufacturing a precision petal to the precise tolerances needed to meet the overall error budget. These tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We show the results of this analysis and a representative error budget. We also present the final manufactured occulter petal and the metrology on its shape demonstrating that it meets requirements. We show that a space occulter built of petals with the same measured shape would achieve better than 1e-9 contrast. We also show our progress in building and testing sample edges with the sharp radius of curvature needed for limiting solar glint. Finally, we describe our plans for the second TDEM phase.
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, which is also adversely affected by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses that are appropriate for a climate assessment at regional scales, a regional spectral model has used a combination of precipitation assimilation with scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used NCEP/DOE R2 global reanalysis as the initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution, every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of a precipitation assimilation scheme on cumulus-convection parameterization, and improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
Assessing and measuring wetland hydrology
Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.
2013-01-01
Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.
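For the cumulative-error discussion, a minimal sketch of the usual practice may be useful: if the component errors are independent, the error of the budget residual is their root-sum-of-squares. All fluxes and error values below are invented.

```python
import math

# Hypothetical monthly water-budget components for a wetland (mm), each with
# a 1-sigma measurement error; values are illustrative only.
components = {                 # name: (flux, error)
    "precipitation":      (+120.0, 6.0),
    "surface_inflow":     (+45.0, 9.0),
    "groundwater_inflow": (+15.0, 7.5),
    "evapotranspiration": (-90.0, 13.5),
    "surface_outflow":    (-60.0, 12.0),
}

residual = sum(flux for flux, _ in components.values())
# For independent component errors, the residual's error is their
# root-sum-of-squares -- the cumulative error of the water budget.
cumulative_error = math.sqrt(sum(err ** 2 for _, err in components.values()))
print(f"storage change = {residual:+.1f} +/- {cumulative_error:.1f} mm")
```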
Error Budgets for the Exoplanet Starshade (exo-s) Probe-Class Mission Study
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin
2015-01-01
Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 meter starshade co-launched with a 1.1 meter commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 meter starshade intended to work with a 2.4 meter telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 meter starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.
Error budgets for the Exoplanet Starshade (Exo-S) probe-class mission study
NASA Astrophysics Data System (ADS)
Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin
2015-09-01
Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 m starshade co-launched with a 1.1 m commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 m starshade intended to work with a 2.4 m telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 m starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES instruments are operating aboard the Terra, Aqua, and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so that the size of the smallest features that can be resolved from the data increases, and spatial sampling errors increase, with nadir angle. This paper presents an analysis of the effect of nadir angle on the spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from the smoothing of features at or below the footprint size (blurring) and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
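The blurring/aliasing decomposition can be illustrated with a toy transfer function. The sketch below assumes a Gaussian point response function and invented footprint and spacing values; the actual CERES point response function is more complex.

```python
import numpy as np

# Sketch of footprint blurring vs. aliasing in the Fourier domain, assuming
# a Gaussian point response function (PRF); the real CERES PRF differs.
fov_km = 20.0        # footprint FWHM at nadir (illustrative)
spacing_km = 10.0    # along-scan sample spacing (illustrative)

sigma = fov_km / 2.355                  # FWHM -> Gaussian sigma
k_nyq = 1.0 / (2.0 * spacing_km)        # Nyquist frequency, cycles/km

def transfer(k):
    """System transfer function = Fourier transform of the Gaussian PRF."""
    return np.exp(-2.0 * (np.pi * k * sigma) ** 2)

# Features near the Nyquist frequency are partly smoothed away (blurring);
# whatever the transfer function passes above k_nyq folds back into lower
# frequencies as aliasing error. Both grow as the footprint widens off-nadir.
print(f"H(k_nyq) = {transfer(k_nyq):.3f} -> blurred fraction {1 - transfer(k_nyq):.3f}")
```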
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making such...
Improved Calibration through SMAP RFI Change Detection
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng
2017-01-01
Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.
Budgets of divergent and rotational kinetic energy during two periods of intense convection
NASA Technical Reports Server (NTRS)
Buechler, D. E.; Fuelberg, H. E.
1986-01-01
The derivations of the energy budget equations for the divergent and rotational components of kinetic energy are provided. Two periods of intense convection are studied: (1) one using synoptic-scale data at 3- or 6-hour intervals and (2) one using meso-alpha-scale data every 3 hours. Composite energies and averaged budgets for the periods are presented, and the effect of random data errors on derived energy parameters is investigated. The divergent and rotational kinetic energy budgets are compared, and good correlation of the data is observed. The kinetic energies and budget terms increase with convective development; however, the conversions between divergent and rotational energy are of opposite sign.
22 CFR 96.33 - Budget, audit, insurance, and risk assessment requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... its governing body, if applicable, for management of its funds. The budget discloses all remuneration (including perquisites) paid to the agency's or person's board of directors, managers, employees, and... determining the type and amount of professional, general, directors' and officers', errors and omissions, and...
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
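A root-sum-of-squares budget with sensitivity weights, as described here, reduces to a one-liner. The error sources, tolerances, and weights below are invented for illustration; the k=1/k=2 scaling mirrors the confidence levels tabulated in the paper.

```python
import math

# Minimal weighted root-sum-of-squares error budget, in the spirit of the
# paper; source names, tolerances, and weights are invented for illustration.
error_sources = [                # (name, tolerance arcsec at k=1, sensitivity weight)
    ("stage orthogonality", 10.0, 0.7),
    ("axis wobble",          5.0, 1.0),
    ("encoder error",        8.0, 1.0),
    ("mounting tilt",       12.0, 0.5),
]

rss_k1 = math.sqrt(sum((tol * w) ** 2 for _, tol, w in error_sources))
print(f"pointing error budget: {rss_k1:.1f} arcsec (67%, k=1), "
      f"{2 * rss_k1:.1f} arcsec (95%, k=2)")
```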
Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.
ERIC Educational Resources Information Center
Marcoulides, George A.
1995-01-01
A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-28
... Identifier: CMS-10003] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects a technical error in the notice [Document Identifier: CMS...
1990-05-01
CLASSIFICATION AUTPOVITY 3. DISTRIBUTION IAVAILABILITY OF REPORT 2b. P OCLASSIFICATION/OOWNGRADING SC14DULE Approved for public release; distribution 4...in the Red Book should obtain a copy of the Engineering Design Handbook, Army Weapon System Analysis, Part One, DARCOM- P 706-101, November 1977; a...companion volume: Army Weapon System Analysis, Part Two, DARCOM- P 706-102, October 1979, also makes worthwhile study. Both of these documents, written by
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high, and thus their contribution to the uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
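The effect of temporally correlated random error on a decadal mean can be sketched with an AR(1) process. The error magnitude and lag-1 correlation below are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Temporally correlated (AR(1)) random error in annual emission estimates:
# correlation inflates the uncertainty of a decadal mean relative to the
# independent-error case. Sigma and phi are illustrative values only.
sigma, phi, n_years, n_sims = 0.5, 0.95, 10, 20_000   # Pg C/yr, lag-1 correlation

e = np.zeros((n_sims, n_years))
e[:, 0] = sigma * rng.standard_normal(n_sims)
for t in range(1, n_years):
    # Innovation scaled so the marginal standard deviation stays at sigma.
    e[:, t] = phi * e[:, t - 1] + np.sqrt(1 - phi**2) * sigma * rng.standard_normal(n_sims)

decadal_mean_sd = e.mean(axis=1).std()
independent_sd = sigma / np.sqrt(n_years)
print(f"decadal-mean error: correlated = {decadal_mean_sd:.2f}, "
      f"independent = {independent_sd:.2f} Pg C/yr")
```

With strong correlation the decadal-mean error barely shrinks below the annual error, illustrating why the correlated-error treatment matters for trend detection.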
A comparison of advanced overlay technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Smith, Nigel; Goelzer, Gary; Liu, Zhuan; Li, Jie; Tan, Asher; Koh, Chin Hwee
2010-03-01
The extension of optical lithography to 22nm and beyond by Double Patterning Technology is often challenged by CDU and overlay control. With reduced overlay measurement error budgets in the sub-nm range, relying on traditional Total Measurement Uncertainty (TMU) estimates alone is no longer sufficient. In this paper we report scatterometry overlay measurement data from a set of twelve test wafers, using four different target designs. The TMU of these measurements is under 0.4nm, within the process control requirements for the 22nm node. Comparing the measurement differences between DBO targets (using empirical and model-based analysis) and with image-based overlay data indicates the presence of systematic and random measurement errors that exceed the TMU estimate.
Atmospheric energetics as related to cyclogenesis over the eastern United States. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
West, P. W.
1973-01-01
A method is presented to investigate the atmospheric energy budget as related to cyclogenesis. Energy budget equations are developed that are shown to be advantageous because the individual terms represent basic physical processes which produce changes in atmospheric energy, and the equations provide a means to study the interaction of the cyclone with the larger scales of motion. The work presented represents an extension of previous studies because all of the terms of the energy budget equations were evaluated throughout the development period of the cyclone. Computations are carried out over a limited atmospheric volume which encompasses the cyclone, and boundary fluxes of energy that were ignored in most previous studies are evaluated. Two examples of cyclogenesis over the eastern United States were chosen for study. One of the cases (1-4 November 1966) represented an example of vigorous development, while the development in the other case (5-8 December 1969) was more modest. Objectively analyzed data were used in the evaluation of the energy budget terms in order to minimize computational errors, and an objective analysis scheme is described that ensures that all of the resolution contained in the rawinsonde observations is incorporated in the analyses.
Developing an Earth system Inverse model for the Earth's energy and water budgets.
NASA Astrophysics Data System (ADS)
Haines, K.; Thomas, C.; Liu, C.; Allan, R. P.; Carneiro, D. M.
2017-12-01
The CONCEPT-Heat project aims at developing a consistent energy budget for the Earth system in order to better understand and quantify global change. We advocate a variational "Earth system inverse" solution as the best methodology to bring the necessary expertise from different disciplines together. L'Ecuyer et al (2015) and Rodell et al (2015) first used a variational approach to adjust multiple satellite data products for air-sea-land vertical fluxes of heat and freshwater, achieving closed budgets on a regional and global scale. However their treatment of horizontal energy and water redistribution and its uncertainties was limited. Following the recent work of Liu et al (2015, 2017) which used atmospheric reanalysis convergences to derive a new total surface heat flux product from top of atmosphere fluxes, we have revisited the variational budget approach introducing a more extensive analysis of the role of horizontal transports of heat and freshwater, using multiple atmospheric and ocean reanalysis products. We find considerable improvements in fluxes in regions such as the North Atlantic and Arctic, for example requiring higher atmospheric heat and water convergences over the Arctic than given by ERA-Interim, thereby allowing lower and more realistic oceanic transports. We explore using the variational uncertainty analysis to produce lower resolution corrections to higher resolution flux products and test these against in situ flux data. We also explore the covariance errors implied between component fluxes that are imposed by the regional budget constraints. Finally we propose this as a valuable methodology for developing consistent observational constraints on the energy and water budgets in climate models. We take a first look at the same regional budget quantities in CMIP5 models and consider the implications of the differences for the processes and biases active in the models. Many further avenues of investigation are possible focused on better valuing the uncertainties in observational flux products and setting requirement targets for future observation programs.
Logging-related increases in stream density in a northern California watershed
Matthew S. Buffleben
2012-01-01
Although many sediment budgets estimate the effects of logging, few have considered the potential impact of timber harvesting on stream density. Failure to consider changes in stream density could lead to large errors in the sediment budget, particularly between the allocation of natural and anthropogenic sources of sediment.This study...
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low-cost consumer-grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors arising from instrumentation and environmental issues, rendering the unaided navigation solution of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research is conducted on tackling and reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even when perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Gouda, N.; Yano, T.; Kobayashi, Y.; Niwa, Y.; Niwa
2008-07-01
The Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μas accuracy. We use a z-band CCD or a K-band array detector to avoid dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. In this poster, we show the observation strategy, reduction scheme, and error budget. We also show the basic design of the software for the end-to-end simulation of JASMINE, named the JASMINE Simulator.
Overview of the TOPEX/Poseidon Platform Harvest Verification Experiment
NASA Technical Reports Server (NTRS)
Morris, Charles S.; DiNardo, Steven J.; Christensen, Edward J.
1995-01-01
An overview is given of the in situ measurement system installed on Texaco's Platform Harvest for verification of the sea level measurement from the TOPEX/Poseidon satellite. The prelaunch error budget suggested that the total root mean square (RMS) error due to measurements made at this verification site would be less than 4 cm. The actual error budget for the verification site is within these original specifications. However, evaluation of the sea level data from three measurement systems at the platform has resulted in unexpectedly large differences between the systems. Comparison of the sea level measurements from the different tide gauge systems has led to a better understanding of the problems of measuring sea level in relatively deep ocean. As of May 1994, the Platform Harvest verification site has successfully supported 60 TOPEX/Poseidon overflights.
Extended Kalman filter for attitude estimation of the earth radiation budget satellite
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.
1989-01-01
The design and testing of an Extended Kalman Filter (EKF) for ground attitude determination, misalignment estimation, and sensor calibration of the Earth Radiation Budget Satellite (ERBS) are described. Attitude is represented by the quaternion of rotation, and the attitude estimation error is defined as an additive error. Quaternion normalization is used to increase the convergence rate and to minimize the need for filter tuning. The development of the filter dynamic model, the gyro error model, and the measurement models of the Sun sensors, the IR horizon scanner, and the magnetometers that are used to generate vector measurements are also presented. The filter is applied to real data transmitted by the ERBS sensors. Results are presented and analyzed, and the EKF's advantages as well as its sensitivities are discussed. On the whole, the filter meets expectations for synergism, accuracy, and robustness.
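The normalization step used with an additive attitude-error formulation can be sketched in a few lines; the quaternion convention and numbers below are assumptions for illustration, not the ERBS filter code.

```python
import numpy as np

def normalize_quaternion(q):
    """Rescale the quaternion estimate to unit length. With an additive
    attitude-error formulation, the EKF update pushes the quaternion off
    the unit sphere, so it must be renormalized after each update."""
    return q / np.linalg.norm(q)

# Toy update: an additive correction knocks q off unit length.
q = np.array([0.0, 0.0, 0.0, 1.0])        # scalar-last convention (assumed)
dq = np.array([0.01, -0.02, 0.005, 0.0])  # additive attitude-error correction
q = normalize_quaternion(q + dq)
print(q, np.linalg.norm(q))               # unit norm restored
```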
NASA Astrophysics Data System (ADS)
Kleinherenbrink, Marcel; Riva, Riccardo; Sun, Yu
2016-11-01
In this study, for the first time, an attempt is made to close the sea level budget on a sub-basin scale in terms of trend and amplitude of the annual cycle. We also compare the residual time series after removing the trend, the semiannual and the annual signals. To obtain errors for altimetry and Argo, full variance-covariance matrices are computed using correlation functions, and their errors are fully propagated. For altimetry, we apply a geographically dependent intermission bias [Ablain et al. (2015)], which leads to differences in trends of up to 0.8 mm yr-1. Since Argo float measurements are non-homogeneously spaced, steric sea levels are first objectively interpolated onto a grid before averaging. For the Gravity Recovery and Climate Experiment (GRACE) gravity fields, full variance-covariance matrices are used to propagate errors and to statistically filter the gravity fields. We use four different filtered gravity field solutions and determine which post-processing strategy is best for budget closure. As a reference, the standard 96 degree Dense Decorrelation Kernel-5 (DDK5)-filtered Center for Space Research (CSR) solution is used to compute the mass component (MC). A comparison is made with two anisotropic Wiener-filtered CSR solutions up to degree and order 60 and 96 and a Wiener-filtered 90 degree ITSG solution. Budgets are computed for 10 polygons in the North Atlantic Ocean, defined in a way that the error on the trend of the MC plus steric sea level remains within 1 mm yr-1. Using the anisotropic Wiener filter on CSR gravity fields expanded up to spherical harmonic degree 96, it is possible to close the sea level budget in 9 of 10 sub-basins in terms of trend. Wiener-filtered Institute of Theoretical Geodesy and Satellite Geodesy (ITSG) and the standard DDK5-filtered CSR solutions also close the trend budget if a glacial isostatic adjustment (GIA) correction error of 10-20% is applied; however, the performance of the DDK5-filtered solution strongly depends on the orientation of the polygon due to residual striping. In 7 of 10 sub-basins, the budget of the annual cycle is closed using the DDK5-filtered CSR or the Wiener-filtered ITSG solutions. The Wiener-filtered 60 and 96 degree CSR solutions, in combination with Argo, lack amplitude and suffer from what appears to be hydrological leakage in the Amazon and Sahel regions. After reducing the trend, the semiannual and the annual signals, 24-53% of the residual variance in altimetry-derived sea level time series is explained by the combination of Argo steric sea levels and the Wiener-filtered ITSG MC. Based on this, we believe that the best overall solution for the MC of the sub-basin-scale budgets is the Wiener-filtered ITSG gravity fields. The interannual variability is primarily a steric signal in the North Atlantic Ocean, so for this the choice of filter and gravity field solution is not really significant.
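Propagating a full variance-covariance matrix to a basin mean, as done here for altimetry and Argo, follows var(wᵀx) = wᵀCw. The grid, weights, and covariance model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Error of an area-weighted basin mean with a full variance-covariance
# matrix: var(w^T x) = w^T C w. The grid and covariance below are invented.
n = 50                                    # grid cells in one sub-basin
area = rng.uniform(0.5, 1.5, n)
w = area / area.sum()                     # area weights

# Build a valid covariance: 2 mm/yr per-cell errors with an e-folding
# spatial correlation over 10 cell indices (exponential model is positive
# definite, so C is a legitimate covariance matrix).
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = (2.0 ** 2) * np.exp(-dist / 10.0)     # (mm/yr)^2

sd_full = np.sqrt(w @ C @ w)
sd_diag = np.sqrt(w @ np.diag(np.diag(C)) @ w)
print(f"basin-mean trend error: {sd_full:.2f} mm/yr with covariances, "
      f"{sd_diag:.2f} mm/yr if errors were treated as independent")
```

Ignoring the off-diagonal terms badly understates the basin-mean error, which is why the full matrices matter when budgeting trends to within 1 mm/yr.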
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
Advancing Technology for Starlight Suppression via an External Occulter
NASA Technical Reports Server (NTRS)
Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.;
2011-01-01
External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for an equivalently performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, a representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date in building the prototype petal.
Space shuttle entry and landing navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Crawford, B. S.
1974-01-01
A navigation system for the entry phase of a Space Shuttle mission is evaluated; it is an aided-inertial system that uses a Kalman filter to mix IMU data with data derived from external navigation aids. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end of blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visits. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a smaller budget would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author's abstract)
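The reported budget/standard-error pairs define a tradeoff curve; one simple, assumption-laden way to explore intermediate budgets is log-linear interpolation between the two published endpoints, as sketched below. Only the endpoint values come from the abstract; the interpolation form is an assumption.

```python
import numpy as np

# (budget, average standard error) endpoints for the Rolla field headquarters,
# taken from the abstract; the log-linear form between them is assumed.
budgets = np.array([203_870.0, 418_870.0])
std_err = np.array([17.0, 14.0])          # percent

query = 300_000.0
est = np.interp(np.log(query), np.log(budgets), std_err)
print(f"~{est:.1f}% average standard error at a ${query:,.0f} budget")
```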
Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data
NASA Technical Reports Server (NTRS)
Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen
2006-01-01
This study presents results of comparisons between instrumental radiation data in the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly-mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the treatment of elevation effects, while the errors in FD LW were mainly due to significant errors in input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.
Michael Köhl; Charles Scott; Daniel Plugge
2013-01-01
Uncertainties are a composite of errors arising from observations and the appropriateness of models. An error budget approach can be used to identify and accumulate the sources of errors to estimate change in emissions between two points in time. Various forest monitoring approaches can be used to estimate the changes in emissions due to deforestation and forest...
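The accumulation step that this error-budget approach refers to can be sketched as a root-sum-square of independent error sources at each point in time, propagated to the estimated change. A minimal illustration with invented numbers:

```python
# Hedged sketch of the error-budget idea: accumulate independent error
# sources for the emissions estimate at two points in time, then
# propagate to the change. All values are illustrative assumptions.
import math

def rss(*components):
    return math.sqrt(sum(c * c for c in components))

# 1-sigma errors (tC/ha) at time 1 and time 2: sampling error,
# measurement error, allometric-model error (hypothetical values).
sigma_t1 = rss(4.0, 1.5, 3.0)
sigma_t2 = rss(3.5, 1.5, 3.0)

delta = 85.0 - 70.0                    # estimated emissions change
sigma_delta = rss(sigma_t1, sigma_t2)  # assumes independence in time
print(f"change = {delta:.1f} +/- {sigma_delta:.1f} tC/ha (1 sigma)")
```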
Statistical analysis of the surface figure of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John
2012-09-01
The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in the metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), the tertiary mirror (TM), and the Fine Steering Mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The resulting surface figure errors of these mirrors have been analyzed statistically. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into system budgeting processes for similar optical systems.
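The "rms WFE within spatial frequency domains" metric can be approximated by binning the power spectrum of a surface map into radial frequency bands. A minimal sketch on a synthetic map, with assumed sampling and illustrative band edges:

```python
# Hedged sketch: split a surface-figure map into rms within spatial-
# frequency bands using a 2-D FFT. The map, pitch, and band edges are
# assumptions for illustration, not JWST values.
import numpy as np

rng = np.random.default_rng(0)
n, pitch = 256, 5e-3                 # 256x256 map, 5 mm sample pitch (assumed)
surf = rng.normal(0, 20e-9, (n, n))  # synthetic 20 nm rms surface

F = np.fft.fftshift(np.fft.fft2(surf)) / surf.size
fx = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))
fr = np.hypot(*np.meshgrid(fx, fx))  # radial spatial frequency (cycles/m)

bands = [(0.0, 1.0), (1.0, 10.0), (10.0, np.inf)]  # cycles/m, illustrative
for lo, hi in bands:
    mask = (fr >= lo) & (fr < hi)
    # Parseval: with this normalization, sum of |F|^2 over a band gives
    # that band's mean-square contribution to the surface.
    rms = np.sqrt((np.abs(F[mask]) ** 2).sum())
    print(f"{lo:>5.1f}-{hi:<7.1f} cyc/m : rms = {rms * 1e9:.2f} nm")
```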
NASA Astrophysics Data System (ADS)
Xia, Youlong; Cosgrove, Brian A.; Mitchell, Kenneth E.; Peters-Lidard, Christa D.; Ek, Michael B.; Kumar, Sujay; Mocko, David; Wei, Helin
2016-01-01
This paper compares the annual and monthly components of the simulated energy budget from the North American Land Data Assimilation System phase 2 (NLDAS-2) with reference products over the domains of the 12 River Forecast Centers (RFCs) of the continental United States (CONUS). The simulations are calculated from both operational and research versions of NLDAS-2. The reference radiation components are obtained from the National Aeronautics and Space Administration Surface Radiation Budget product. The reference sensible and latent heat fluxes are obtained from a multitree ensemble method applied to gridded FLUXNET data from the Max Planck Institute, Germany. As these references are obtained from different data sources, they cannot fully close the energy budget, although the range of closure error is less than 15% for mean annual results. The analysis here demonstrates the usefulness of basin-scale surface energy budget analysis for evaluating model skill and deficiencies. The operational (i.e., Noah, Mosaic, and VIC) and research (i.e., Noah-I and VIC4.0.5) NLDAS-2 land surface models exhibit similarities and differences in depicting basin-averaged energy components. For example, the energy components of the five models have similar seasonal cycles, but with different magnitudes. Generally, Noah and VIC overestimate (underestimate) sensible (latent) heat flux over several RFCs of the eastern CONUS. In contrast, Mosaic underestimates (overestimates) sensible (latent) heat flux over almost all 12 RFCs. The research Noah-I and VIC4.0.5 versions show moderate-to-large improvements (basin and model dependent) relative to their operational versions, which indicates likely pathways for future improvements in the operational NLDAS-2 system.
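The closure check mentioned above reduces to the residual of the surface energy budget. A minimal sketch with illustrative fluxes:

```python
# Hedged sketch of the closure check described above: because radiation
# and turbulent-flux references come from different sources, the budget
# Rnet - (SH + LE + G) need not close. Values are illustrative W/m2.
def closure_error(rnet, sh, le, g=0.0):
    """Residual of the surface energy budget, W/m2."""
    return rnet - (sh + le + g)

rnet, sh, le = 95.0, 30.0, 55.0        # assumed basin-mean annual fluxes
res = closure_error(rnet, sh, le)
print(f"residual = {res:.1f} W/m2 ({100 * res / rnet:.0f}% of Rnet)")
```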
Systems engineering analysis of five 'as-manufactured' SXI telescopes
NASA Astrophysics Data System (ADS)
Harvey, James E.; Atanassova, Martina; Krywonos, Andrey
2005-09-01
Four flight models and a spare of the Solar X-ray Imager (SXI) telescope mirrors have been fabricated. The first of these is scheduled to be launched on the NOAA GOES-N satellite on July 29, 2005. A complete systems engineering analysis of the "as-manufactured" telescope mirrors has been performed that includes diffraction effects, residual design errors (aberrations), surface scatter effects, and all of the miscellaneous errors in the mirror manufacturer's error budget tree. Finally, a rigorous analysis of mosaic detector effects has been included. SXI is a staring telescope providing full solar disc images at X-ray wavelengths. For wide-field applications such as this, a field-weighted-average measure of resolution has been modeled. Our performance predictions have allowed us to use metrology data to model the "as-manufactured" performance of the X-ray telescopes and to adjust the final focal plane location to optimize the number of spatial resolution elements in a given operational field-of-view (OFOV) for either the aerial image or the detected image. The resulting performance predictions from five separate mirrors allow us to evaluate and quantify the optical fabrication process for producing these very challenging grazing incidence X-ray optics.
Focus control enhancement and on-product focus response analysis methodology
NASA Astrophysics Data System (ADS)
Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye
2016-03-01
With decreasing CDOF (critical depth of focus) for 20/14 nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high-volume manufacturing are limited; it is difficult to define measurable focus error and optimize the focus response on product with existing methods due to the lack of credible focus measurement methodologies. Next to developments in the imaging and focus control capability of scanners and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction, and on-product focus budget analysis through a diffraction-based focus (DBF) measurement methodology. Several examples will be presented showing better focus response and control on product wafers. Also, a method will be discussed for a focus interlock automation system on product for a high-volume manufacturing (HVM) environment.
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Climate Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars Pathfinder to the Mars Exploration Rovers) and on the International Space Station.
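The class of defect CGS targets can be illustrated with a toy interval analysis. This is a sketch of the general abstract-interpretation idea behind such tools, not CGS's actual algorithm or output:

```python
# Hedged sketch (not CGS itself): track an interval for each variable
# and flag an array access whose index interval is not contained in the
# valid range [0, length).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, k):                 # shift interval by a constant
        return Interval(self.lo + k, self.hi + k)

def check_access(idx: Interval, length: int) -> str:
    if idx.lo >= 0 and idx.hi < length:
        return "safe"
    if idx.hi < 0 or idx.lo >= length:
        return "definite out-of-bounds error"
    return "possible out-of-bounds (warn)"

i = Interval(0, 9)              # e.g., a loop index known to be in [0, 9]
print(check_access(i, 10))      # safe
print(check_access(i + 1, 10))  # possible error: index may reach 10
```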
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; ...
2015-04-30
Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their contribution to global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
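The effect of temporally correlated random error on a decadal-mean uncertainty, central to the approach described, can be sketched with an AR(1) Monte Carlo. The correlation and error magnitude below are illustrative, not the paper's values:

```python
# Hedged sketch: draw AR(1) error realizations for an emissions series
# and look at the uncertainty of the decadal mean. With lag-1
# correlation near 1, the mean barely averages down, unlike white noise.
import numpy as np

rng = np.random.default_rng(1)
years, sigma, rho, ndraw = 10, 0.3, 0.95, 5000   # Pg C/yr, assumed values

def ar1_draws(n, sigma, rho, ndraw):
    e = np.zeros((ndraw, n))
    e[:, 0] = rng.normal(0, sigma, ndraw)
    innov_sd = sigma * np.sqrt(1 - rho ** 2)     # keeps marginal sd = sigma
    for t in range(1, n):
        e[:, t] = rho * e[:, t - 1] + rng.normal(0, innov_sd, ndraw)
    return e

err = ar1_draws(years, sigma, rho, ndraw)
print("sd of decadal-mean error: %.2f Pg C/yr" % err.mean(axis=1).std())
```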
Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models
NASA Astrophysics Data System (ADS)
Southworth, John
2010-11-01
I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
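The perturbation analysis described, in which each input is offset by its 1-sigma error and the outputs recomputed, can be sketched as follows. The output quantity and input values are illustrative stand-ins for the full light-curve solution:

```python
# Hedged sketch of a perturbation-style error budget: perturb each input
# by its 1-sigma error, recompute the output, and record the shifts.
# Here the "output" is a planet surface gravity g = G*M/R^2 with
# invented inputs; the real analysis perturbs light-curve and
# spectroscopic parameters through the full solution.
import math

G = 6.674e-11
inputs = {"M_kg": (1.2e27, 0.05e27), "R_m": (8.0e7, 0.16e7)}  # value, sigma

def gravity(p):
    return G * p["M_kg"] / p["R_m"] ** 2

nominal = gravity({k: v for k, (v, _) in inputs.items()})
budget = {}
for name, (v, s) in inputs.items():
    p = {k: val for k, (val, _) in inputs.items()}
    p[name] = v + s                              # one-sided perturbation
    budget[name] = gravity(p) - nominal

total = math.sqrt(sum(d * d for d in budget.values()))
print({k: f"{d:+.3f}" for k, d in budget.items()}, f"RSS = {total:.3f} m/s2")
```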
CBO’s Revenue Forecasting Record
2015-11-01
[Cover figure: forecast errors for CBO's and the Administration's two-year revenue projections, 1983-2013; CBO's mean forecast error was 1.1%.] Congress of the United States, Congressional Budget Office, November 2015.
Budget Studies of a Prefrontal Convective Rainband in Northern Taiwan Determined from TAMEX Data
1993-06-01
[Fragmentary abstract and reference residue: a downward integration from storm top accumulates less error in the w calculation than an upward integration from the surface; uncertainty in w results partly from the advection problem (Gal-Chen, 1982; Chong and Testud, 1983). Cited works include Chong, M., and J. Testud, 1983: Three-dimensional wind field analysis from dual-Doppler radar data, Part III: The boundary condition.]
Improving the Cost Estimation of Space Systems. Past Lessons and Future Recommendations
2008-01-01
[Fragmentary abstract and table residue: the report offers a reasonable gauge for the relative proportions of cost growth attributable to errors, decisions, and other causes in any MDAP. Program offices visited included the Defense Meteorological Satellite Program (DMSP), the Evolved Expendable Launch Vehicle (EELV), and Advanced... (truncated); the accompanying cost-growth and staffing table is not recoverable.]
NASA Technical Reports Server (NTRS)
De Boer, G.; Shupe, M.D.; Caldwell, P.M.; Bauer, Susanne E.; Persson, O.; Boyle, J.S.; Kelley, M.; Klein, S.A.; Tjernstrom, M.
2014-01-01
Atmospheric measurements from the Arctic Summer Cloud Ocean Study (ASCOS) are used to evaluate the performance of three atmospheric reanalyses (the European Centre for Medium-Range Weather Forecasting (ECMWF) Interim reanalysis, the National Center for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) reanalysis, and the NCEP-DOE (Department of Energy) reanalysis) and two global climate models (CAM5 (Community Atmosphere Model 5) and NASA GISS (Goddard Institute for Space Studies) ModelE2) in simulation of the high Arctic environment. Quantities analyzed include near-surface meteorological variables such as temperature, pressure, humidity and winds, surface-based estimates of cloud and precipitation properties, the surface energy budget, and lower-atmospheric temperature structure. In general, the models perform well in simulating large-scale dynamical quantities such as pressure and winds. Near-surface temperature and lower-atmospheric stability, along with surface energy budget terms, are not as well represented, due largely to errors in the simulation of cloud occurrence, phase and altitude. Additionally, a development version of CAM5, which features improved handling of cloud macrophysics, has been demonstrated to improve the simulation of cloud properties and liquid water amount. The ASCOS period additionally provides an excellent example of the benefits gained by evaluating individual budget terms, rather than simply evaluating the net end product: large compensating errors between individual surface energy budget terms can still result in a seemingly accurate net energy budget.
NASA Astrophysics Data System (ADS)
Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.
2016-11-01
Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.
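The column-averaged dry-air mole fractions being compared are, in essence, pressure-weighted means of the CH4 profile. A simplified sketch, ignoring water-vapour corrections and averaging kernels and using an assumed tropopause pressure:

```python
# Hedged sketch: a column-averaged dry-air mole fraction computed as a
# pressure-weighted average of a model CH4 profile, split at an assumed
# tropopause to separate tropospheric and stratospheric contributions.
# Profile values and levels are invented for illustration.
import numpy as np

p = np.array([1000, 800, 600, 400, 250, 150, 70, 30, 10.0])             # hPa
ch4 = np.array([1.90, 1.88, 1.87, 1.85, 1.82, 1.70, 1.45, 1.10, 0.70])  # ppm
p_trop = 250.0                                           # assumed tropopause

dp = -np.diff(np.concatenate([p, [0.0]]))   # layer pressure thickness
w = dp / dp.sum()                           # dry-air weights (simplified)
xch4_total = (w * ch4).sum()
trop = p >= p_trop
xch4_trop = (w[trop] * ch4[trop]).sum() / w[trop].sum()
print(f"XCH4 total = {xch4_total:.3f} ppm, tropospheric = {xch4_trop:.3f} ppm")
```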
NASA Technical Reports Server (NTRS)
Fisher, Brad; Wolff, David B.
2010-01-01
Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008), which together covered a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (a €194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the intensive process of manual coding by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation.
Advanced CD-SEM solution for edge placement error characterization of BEOL pitch 32nm metal layers
NASA Astrophysics Data System (ADS)
Charley, A.; Leray, P.; Lorusso, G.; Sutani, T.; Takemasa, Y.
2018-03-01
Metrology plays an important role in edge placement error (EPE) budgeting and control for multi-patterning applications, as new critical distances (edge to edge) need to be measured and requirements become ever tighter in terms of accuracy and precision. In this paper we focus on the imec iN7 BEOL platform and particularly on the M2 patterning scheme using SAQP + block EUV for a 7.5-track logic design. Being able to characterize block-to-SAQP edge misplacement is important in a budgeting exercise (1) but is also extremely difficult due to challenging edge detection with CD-SEM (similar materials, thin layers, short distances, 3D features). In this study we develop an advanced solution to measure block-to-SAQP placement and characterize it in terms of sensitivity, precision and accuracy through comparison to reference metrology. In a second phase, the methodology is applied to budget local effects and the results are compared to the characterization of the SAQP and block independently.
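An EPE budgeting exercise of the kind described typically rolls contributors up in quadrature. A minimal sketch with illustrative term values (not imec's iN7 numbers):

```python
# Hedged sketch of an edge-placement-error budget roll-up: combine
# overlay, CD-uniformity and line-edge-roughness terms in quadrature.
# Term names and values are illustrative assumptions.
import math

terms_nm = {
    "overlay_3sigma": 3.5,
    "block_cd_uniformity": 2.0,
    "saqp_pitch_walk": 1.5,
    "line_edge_roughness": 2.5,
}

epe = math.sqrt(sum(v * v for v in terms_nm.values()))
for k, v in sorted(terms_nm.items(), key=lambda kv: -kv[1]):
    print(f"{k:22s} {v:4.1f} nm ({100 * (v / epe) ** 2:.0f}% of variance)")
print(f"{'EPE (RSS)':22s} {epe:4.1f} nm")
```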
Performance of the Gemini Planet Imager’s adaptive optics system
Poyneer, Lisa A.; Palmer, David W.; Macintosh, Bruce; ...
2016-01-07
The Gemini Planet Imager’s adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. We give a definitive description of the system’s algorithms and technologies as built. Ultimately, the error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term.
Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.
1993-01-01
The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978 until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have in-flight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations - 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/sq m) higher in the active Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as +/- 0.005 percent. The worst-case error estimate is +/- 0.03 percent.
Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.
Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert
2009-05-20
The manufacturing of toric mirrors for the Very Large Telescope Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on active optics and stress polishing. This figuring technique minimizes mid- and high-spatial-frequency errors on an aspherical surface by using spherical polishing with full-size tools. In order to reach the tight precision required, the manufacturing error budget is described so as to optimize each parameter. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification in the low, middle, and high spatial frequency ranges.
NASA Astrophysics Data System (ADS)
Carton, James; Chepurin, Gennady
2017-04-01
While atmospheric reanalyses do not ingest data from the subsurface ocean, they must produce fluxes consistent with, for example, ocean storage and the divergence of heat transport. Here we present a test of the consistency of two different atmospheric reanalyses with 2.5 million global ocean temperature observations during the data-rich eight-year period 2007-2014. The examination is carried out by using atmospheric reanalysis variables to drive the SODA3 ocean reanalysis system, and then collecting and analyzing the temperature analysis increments (observation misfits). For the widely used MERRA2 and ERA-Int atmospheric reanalyses, the temperature analysis increments reveal inconsistencies between those atmospheric fluxes and the ocean observations in the range of 10-30 W/m2. In the interior basins, excess heat during a single assimilation cycle is stored primarily locally within the mixed layer, a simplification of the heat budget that allows us to identify the source of the error as the specified net surface heat flux. Along the equator the increments are primarily confined to thermocline depths, indicating that the error is dominated by heat transport divergence. The error in equatorial heat transport divergence, in turn, can be traced to errors in the strength of the equatorial trade winds. We test our conclusions by introducing modifications of the atmospheric reanalyses based on analysis of the ocean temperature analysis increments and repeating the ocean reanalysis experiments using the modified surface fluxes. Comparison of the experiments reveals that the modified fluxes reduce the misfit to ocean observations as well as the differences between the different atmospheric reanalyses.
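The conversion from a mixed-layer temperature increment to an implied surface heat-flux error follows from the mixed-layer heat budget, Q_err = rho*cp*h*dT/dt. A sketch with assumed values, which happens to land in the quoted 10-30 W/m2 range:

```python
# Hedged sketch: if an assimilation cycle's temperature increment is
# stored in the mixed layer, the implied surface-flux error is
# Q_err = rho * cp * h * dT / dt. All numbers are illustrative.
rho, cp = 1025.0, 3990.0        # seawater density (kg/m3), heat capacity
h = 50.0                        # assumed mixed-layer depth, m
dT = 0.05                       # mixed-layer increment per cycle, K
dt = 5 * 86400.0                # assumed assimilation cycle length, s

q_err = rho * cp * h * dT / dt
print(f"implied net surface heat-flux error ~ {q_err:.1f} W/m2")
```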
Thiboonboon, Kittiphong; Leelahavarong, Pattara; Wattanasirichaigoon, Duangrurdee; Vatanavicharn, Nithiwat; Wasant, Pornswan; Shotelersuk, Vorasuk; Pangkanon, Suthipong; Kuptanon, Chulaluck; Chaisomchit, Sumonta; Teerawattananon, Yot
2015-01-01
Inborn errors of metabolism (IEM) are a rare group of genetic diseases which can lead to several serious long-term complications in newborns. In order to address these issues as early as possible, a process called tandem mass spectrometry (MS/MS) can be used, as it allows for rapid and simultaneous detection of the diseases. This analysis was performed to determine whether newborn screening by MS/MS is cost-effective in Thailand. A cost-utility analysis comprising a decision-tree and Markov model was used to estimate the cost in Thai baht (THB) and health outcomes in life-years (LYs) and quality-adjusted life years (QALYs), presented as an incremental cost-effectiveness ratio (ICER). The results were also adjusted to international dollars (I$) using purchasing power parities (PPP) (1 I$ = 17.79 THB for the year 2013). The comparisons were between 1) an expanded neonatal screening programme using MS/MS screening for six prioritised diseases: phenylketonuria (PKU); isovaleric acidemia (IVA); methylmalonic acidemia (MMA); propionic acidemia (PA); maple syrup urine disease (MSUD); and multiple carboxylase deficiency (MCD); and 2) the current practice, that is, existing PKU screening. A comparison of the outcome and cost of treatment before and after clinical presentation was also analysed to illustrate the potential benefit of early treatment for affected children. A budget impact analysis was conducted to illustrate the cost of implementing the programme for 10 years. The ICER of neonatal screening using MS/MS amounted to 1,043,331 THB per QALY gained (58,647 I$ per QALY gained). The potential benefits of early detection compared with late detection yielded significant results for PKU, IVA, MSUD, and MCD patients. The budget impact analysis indicated that the implementation cost of the programme was expected at approximately 2,700 million THB (152 million I$) over 10 years. At the current ceiling threshold, neonatal screening using MS/MS in the Thai context is not cost-effective. However, the treatments of patients who were detected early for PKU, IVA, MSUD, and MCD are considered favourable. The budget impact analysis suggests that the implementation of the programme will incur considerable expenses under limited resources. A long-term epidemiological study on the incidence of IEM in Thailand is strongly recommended to ascertain the magnitude of the problem.
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low-cost, high-accuracy optical telescopes designed to support space surveillance and the development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub-arcsecond astrometric accuracy. The design and analysis team is in the process of characterizing the system through the development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arcsecond biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in the development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, the Earth-fixed topocentric frame, the topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for a more focused investigation into specific components embedded in the system, and system processes, that might contain the source of the observed biases.
Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma
NASA Technical Reports Server (NTRS)
Fisher, Brad L.
2004-01-01
The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM-equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
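The sub-sampling technique can be sketched in a few lines: sampling error is the difference between the full gauge mean and the overpass-sub-sampled gauge mean, and the retrieval error is what remains. Synthetic data and an assumed overpass schedule:

```python
# Hedged sketch of the decomposition idea: sub-sample a gauge time series
# at satellite overpass times so the gauge acquires a satellite-like
# sampling error; comparing the three monthly means separates the terms.
import numpy as np

rng = np.random.default_rng(2)
rain = rng.gamma(0.08, 12.0, 720)        # synthetic hourly rain, 30 days
overpass = np.arange(0, 720, 16)         # assumed ~16-hourly overpasses

gauge_month = rain.mean()                # "truth" monthly mean rate
gauge_sub = rain[overpass].mean()        # gauge with sampling error only
sat = rain[overpass] * 1.10              # assumed +10% retrieval bias
sat_month = sat.mean()

sampling_err = gauge_sub - gauge_month
retrieval_err = sat_month - gauge_sub    # isolated from sampling
print(f"sampling {sampling_err:+.3f}, retrieval {retrieval_err:+.3f} mm/h")
```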
van Walbeek, Corné
2014-01-01
Background The tobacco industry claims that illicit trade in cigarettes has increased sharply since the 1990s and that government has lost substantial tax revenue. Objectives (1) To determine whether cigarette excise tax revenue has been below budget in recent years, compared with previous decades. (2) To determine trends in the size of the illicit market since 1995. Methods For (1), mean percentage errors and root mean square percentage errors were calculated for budget revenue deviation for three products (cigarettes, beer and spirits), for various subperiods. For (2), predicted changes in total consumption, using actual cigarette price and GDP changes and previously published price and income elasticity estimates, were calculated and compared with changes in tax-paid consumption. Results Cigarette excise revenues were 0.7% below budget for 2000–2012 on average, compared with 3.0% below budget for beer and 4.7% below budget for spirits. There is no evidence that illicit trade in cigarettes in South Africa increased between 2002 and 2009. There is a substantial increase in illicit trade in 2010, probably peaking in 2011. In 2012 tax-paid consumption of cigarettes increased 2.6%, implying that the illicit market share decreased an estimated 0.6 percentage points. Conclusions Other than in 2010, there is no evidence that illicit trade is significantly undermining government revenue. Claims that illicit trade has consistently increased over the past 15 years, and has continued its sharp increase since 2010, are not supported. PMID:24431121
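The two statistics used for objective (1) are straightforward. A minimal sketch on invented revenue figures:

```python
# Hedged sketch of the two statistics named above, on made-up data:
# percentage deviation of actual excise revenue from budgeted revenue.
import numpy as np

budgeted = np.array([10.0, 10.5, 11.2, 11.8, 12.5])   # illustrative, R bn
actual = np.array([9.9, 10.6, 11.1, 11.7, 12.3])

pct_err = 100 * (actual - budgeted) / budgeted
mpe = pct_err.mean()                        # sign shows under/over budget
rmspe = np.sqrt((pct_err ** 2).mean())      # magnitude regardless of sign
print(f"MPE = {mpe:+.2f}%, RMSPE = {rmspe:.2f}%")
```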
NASA Astrophysics Data System (ADS)
Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.
2017-07-01
Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height, derived from a texture analysis algorithm applied to the high-resolution Pleiades imager, in the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of AGB derived from census data on average across sites, canopy height derived from the Pleiades product was spatially too smooth, and thus unable to accurately resolve large height (and biomass) variations within the sites considered. The error budget was evaluated in detail; systematic errors related to the ORCHIDEE-CAN structure contribute as a secondary source of error and could be overcome by using improved allometric equations.
NASA Technical Reports Server (NTRS)
Stowe, Larry; Ardanuy, Philip; Hucek, Richard; Abel, Peter; Jacobowitz, Herbert
1991-01-01
A set of system simulations was performed to evaluate candidate scanner configurations to fly as a part of the Earth Radiation Budget Instrument (ERBI) on the polar platforms during the 1990s. The simulation considered instantaneous sampling (without diurnal averaging) of the longwave and shortwave fluxes at the top of the atmosphere (TOA). After measurement and subsequent inversion to the TOA, the measured fluxes were compared to the reference fluxes for 2.5 deg lat/long resolution targets. The reference fluxes at this resolution are obtained by integrating over the 25 x 25 = 625 grid elements in each target. The differences (errors) between these two resultant spatially averaged sets of target measurements are taken and then statistically summarized. Five instruments are considered: (1) the Conically Scanning Radiometer (CSR); (2) the ERBE Cross Track Scanner; (3) the Nimbus-7 Biaxial Scanner; (4) the Clouds and Earth's Radiant Energy System Instrument (CERES-1); and (5) the Active Cavity Array (ACA). Identical studies of instantaneous error were completed for many days, two seasons, and several satellite equator crossing longitudes. The longwave flux errors were found to have the same space and time characteristics as the shortwave flux errors, but their magnitude is only about 25 percent of the shortwave errors.
NASA Astrophysics Data System (ADS)
Mendillo, Christopher B.; Howe, Glenn A.; Hewawasam, Kuravi; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya
2017-09-01
The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. Four leakage sources arising from the optical fabrication tolerances and optical coatings are: electric field conjugation (EFC) residuals, beam walk on the secondary and tertiary mirrors, optical surface scattering, and polarization aberration. Simulations and analysis of these four leakage sources for the PICTURE-C optical design are presented here.
Determination of the carbon budget of a pasture: effect of system boundaries and flux uncertainties
NASA Astrophysics Data System (ADS)
Felber, R.; Bretscher, D.; Münger, A.; Neftel, A.; Ammann, C.
2015-12-01
Carbon (C) sequestration in the soil is considered a potentially important mechanism to mitigate greenhouse gas (GHG) emissions of the agricultural sector. It can be quantified by the net ecosystem carbon budget (NECB), describing the change of soil C as the sum of all relevant import and export fluxes. NECB was investigated here in detail for an intensively grazed dairy pasture in Switzerland. Two budget approaches with different system boundaries were applied: NECBtot for system boundaries including the grazing cows and NECBpast for system boundaries excluding the cows. CO2 and CH4 exchange induced by soil/vegetation processes, as well as direct emissions by the animals, were derived from eddy covariance measurements. Other C fluxes were either measured (milk yield, concentrate feeding) or derived based on animal performance data (intake, excreta). For the investigated year, both approaches resulted in a small non-significant C loss: NECBtot -13 ± 61 g C m-2 yr-1 and NECBpast -17 ± 81 g C m-2 yr-1. The considerable uncertainties, depending on the approach, were mainly due to errors in the CO2 exchange or in the animal-related fluxes. The associated GHG budget revealed CH4 emissions from the cows to be the major contributor, but with much lower uncertainty compared to NECB. Although only one year of data limits the representativeness of the carbon budget results, it demonstrates the important contribution of the non-CO2 fluxes depending on the chosen system boundaries and the effect of their propagated uncertainty in an exemplary way. The simultaneous application and comparison of both NECB approaches provides a useful consistency check for the carbon budget determination and can help to identify and eliminate systematic errors.
Determination of the carbon budget of a pasture: effect of system boundaries and flux uncertainties
NASA Astrophysics Data System (ADS)
Felber, Raphael; Bretscher, Daniel; Münger, Andreas; Neftel, Albrecht; Ammann, Christof
2016-05-01
Carbon (C) sequestration in the soil is considered a potentially important mechanism to mitigate greenhouse gas (GHG) emissions of the agricultural sector. It can be quantified by the net ecosystem carbon budget (NECB), describing the change of soil C as the sum of all relevant import and export fluxes. NECB was investigated here in detail for an intensively grazed dairy pasture in Switzerland. Two budget approaches with different system boundaries were applied: NECBtot for system boundaries including the grazing cows and NECBpast for system boundaries excluding the cows. CO2 and CH4 exchange induced by soil/vegetation processes, as well as direct emissions by the animals, were derived from eddy covariance measurements. Other C fluxes were either measured (milk yield, concentrate feeding) or derived based on animal performance data (intake, excreta). For the investigated year, both approaches resulted in a small near-neutral C budget: NECBtot -27 ± 62 and NECBpast 23 ± 76 g C m-2 yr-1. The considerable uncertainties, depending on the approach, were mainly due to errors in the CO2 exchange or in the animal-related fluxes. The comparison of the NECB results with the annual exchange of other GHGs revealed CH4 emissions from the cows to be the major contributor in terms of CO2 equivalents, but with much lower uncertainty compared to NECB. Although only one year of data limits the representativeness of the carbon budget results, it demonstrates the important contribution of the non-CO2 fluxes depending on the chosen system boundaries and the effect of their propagated uncertainty in an exemplary way. The simultaneous application and comparison of both NECB approaches provides a useful consistency check for the carbon budget determination and can help to identify and eliminate systematic errors.
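The NECB bookkeeping in the two studies above is a signed sum of C fluxes, with independent uncertainties combined in quadrature. A minimal sketch with illustrative flux values (not the measured ones):

```python
# Hedged sketch of NECB bookkeeping with propagated uncertainty: the
# budget is the sum of signed C fluxes (imports positive), and
# independent 1-sigma errors add in quadrature. Values are invented,
# in g C m-2 yr-1.
import math

fluxes = {                               # (value, 1-sigma)
    "net CO2 exchange": (60.0, 60.0),
    "concentrate feed import": (35.0, 5.0),
    "milk export": (-45.0, 5.0),
    "CH4-C emission": (-25.0, 4.0),
    "excreta C exported": (-50.0, 30.0),
}

necb = sum(v for v, _ in fluxes.values())
sigma = math.sqrt(sum(s * s for _, s in fluxes.values()))
print(f"NECB = {necb:+.0f} +/- {sigma:.0f} g C m-2 yr-1")
```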
NASA Astrophysics Data System (ADS)
Doytchinov, I.; Tonnellier, X.; Shore, P.; Nicquevert, B.; Modena, M.; Mainaud Durand, H.
2018-05-01
Micrometric assembly and alignment requirements for future particle accelerators, and especially large assemblies, create the need for accurate uncertainty budgeting of alignment measurements. Measurements and uncertainties have to be accurately stated and traceable, to international standards, for metre-long sized assemblies, in the range of tens of µm. Indeed, these hundreds of assemblies will be produced and measured by several suppliers around the world, and will have to be integrated into a single machine. As part of the PACMAN project at CERN, we proposed and studied a practical application of probabilistic modelling of task-specific alignment uncertainty by applying a simulation by constraints calibration method. Using this method, we calibrated our measurement model using available data from ISO standardised tests (10360 series) for the metrology equipment. We combined this model with reference measurements and analysis of the measured data to quantify the actual specific uncertainty of each alignment measurement procedure. Our methodology was successfully validated against a calibrated and traceable 3D artefact as part of an international inter-laboratory study. The validated models were used to study the expected alignment uncertainty and important sensitivity factors in measuring the shortest and longest of the compact linear collider study assemblies, 0.54 m and 2.1 m respectively. In both cases, the laboratory alignment uncertainty was within the targeted uncertainty budget of 12 µm (68% confidence level). It was found that the remaining uncertainty budget for any additional alignment error compensations, such as the thermal drift error due to variation in machine operation heat load conditions, must be within 8.9 µm and 9.8 µm (68% confidence level) respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Notz, Dirk; Jahn, Alexandra; Holland, Marika
A better understanding of the role of sea ice for the changing climate of our planet is the central aim of the diagnostic Coupled Model Intercomparison Project 6 (CMIP6)-endorsed Sea-Ice Model Intercomparison Project (SIMIP). To reach this aim, SIMIP requests sea-ice-related variables from climate-model simulations that allow for a better understanding and, ultimately, improvement of biases and errors in sea-ice simulations with large-scale climate models. This then allows us to better understand to what degree CMIP6 model simulations relate to reality, thus improving our confidence in answering sea-ice-related questions based on these simulations. Furthermore, the SIMIP protocol provides a standard for sea-ice model output that will streamline and hence simplify the analysis of the simulated sea-ice evolution in research projects independent of CMIP. To reach its aims, SIMIP provides a structured list of model output that allows for an examination of the three main budgets that govern the evolution of sea ice, namely the heat budget, the momentum budget, and the mass budget. Furthermore, we explain the aims of SIMIP in more detail and outline how its design allows us to answer some of the most pressing questions that sea ice still poses to the international climate-research community.
Intuitive Tools for the Design and Analysis of Communication Payloads for Satellites
NASA Technical Reports Server (NTRS)
Culver, Michael R.; Soong, Christine; Warner, Joseph D.
2014-01-01
In an effort to make future communications satellite payload design more efficient and accessible, two tools were created with intuitive graphical user interfaces (GUIs). The first tool allows payload designers to graphically design their payload by simple drag and drop of payload components onto a design area within the program. Information about each picked component is pulled from a database of common space-qualified communication components sold by commercial companies. Once a design is completed, various reports can be generated, such as the Master Equipment List. The second tool is a link budget calculator designed specifically for ease of use. Other features of this tool include access to a database of NASA ground-based apertures for near-Earth and deep-space communication, the Tracking and Data Relay Satellite System (TDRSS) base apertures, and information about the solar system relevant to link budget calculations. The link budget tool allows for over 50 different combinations of user inputs, eliminating the need for multiple spreadsheets and the user errors associated with using them. Both of the aforementioned tools increase the productivity of space communication systems designers, and are accessible enough to allow non-communication experts to design preliminary communication payloads.
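The core of any such link budget calculator is a decibel-domain sum. A minimal sketch with illustrative parameters (not the tool's component database):

```python
# Hedged sketch of a basic link budget: received C/N0 from EIRP,
# free-space path loss and receive G/T, all in dB units. Parameter
# values below are illustrative assumptions.
import math

def free_space_loss_db(freq_hz, range_m):
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * range_m * freq_hz / c)

eirp_dbw = 50.0                  # transmit power + antenna gain - losses
gt_dbk = 30.0                    # ground-station figure of merit, dB/K
misc_losses_db = 2.0             # pointing, polarization, atmosphere
k_db = -228.6                    # Boltzmann's constant, dBW/K/Hz

fsl = free_space_loss_db(8.4e9, 2.0e9)      # X-band, assumed 2e9 m range
cn0 = eirp_dbw - fsl - misc_losses_db + gt_dbk - k_db
print(f"free-space loss = {fsl:.1f} dB, C/N0 = {cn0:.1f} dB-Hz")
```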
NASA Astrophysics Data System (ADS)
Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward
2016-01-01
We present an improvement to a previously presented algorithm that used a Bayesian Markov chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics, and is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of the bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics; for example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
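The forward model underlying such discharge algorithms is a Manning-type flow law. A sketch with illustrative values, including a first-order uncertainty from the a priori roughness and bathymetry errors identified above as dominant:

```python
# Hedged sketch: Manning's equation for a wide rectangular channel, with
# a simple first-order uncertainty from roughness and unknown bed
# elevation. All parameter values are illustrative assumptions.
import math

def manning_q(n, w, h, h0, s):
    """Discharge (m3/s); depth = observed surface height h minus bed h0."""
    d = h - h0                                  # flow depth, m
    return (1.0 / n) * w * d ** (5.0 / 3.0) * math.sqrt(s)

n, w, h, h0, s = 0.035, 250.0, 8.0, 3.0, 1e-4   # invented values
q = manning_q(n, w, h, h0, s)

# First-order relative uncertainty from n (prior) and bed elevation h0:
sig_n, sig_h0 = 0.30 * n, 0.5                   # assumed 1-sigma errors
dq_n = q * sig_n / n                            # |dQ/dn| = Q/n
dq_h0 = q * (5.0 / 3.0) * sig_h0 / (h - h0)     # |dQ/dh0| = (5/3) Q / d
rel = math.hypot(dq_n, dq_h0) / q
print(f"Q = {q:.0f} m3/s, relative uncertainty ~ {100 * rel:.0f}%")
```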
Science support for the Earth radiation budget experiment
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.
1994-01-01
The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: the development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions, and the comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADMs) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types, although for regions containing mixtures of scene types the errors were typically less than 5 percent; and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view-zenith-angle-dependent bias in the radiative fluxes.
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at a few 0.1"). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes, such as the one studied for the LUVOIR mission, will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. This analytical model can be applied both to static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte Carlo simulations that are otherwise required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to directly invert the analytical model provides direct constraints and tolerances on all segment-level phasing errors and aberrations.
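As a rough numerical illustration of how segment-level phasing errors raise the speckle floor, a short sketch with a toy square-segment aperture (geometry and all numbers hypothetical; this brute-force propagation is exactly what the paper's pair-based analytical model is designed to avoid):

    import numpy as np

    # Toy segmented pupil: a 6x6 grid of square "segments", each with a
    # random piston error (30 nm rms assumed for illustration).
    n, seg = 240, 40                        # pupil grid and segment size, px
    rng = np.random.default_rng(1)
    piston = rng.normal(0.0, 30e-9, (6, 6)) # piston per segment, metres
    wavelength = 600e-9

    phase = np.kron(piston, np.ones((seg, seg))) * 2 * np.pi / wavelength
    pupil = np.ones((n, n))
    field = pupil * np.exp(1j * phase)

    def psf(e_field, pad=4):
        # Zero-padded FFT of the pupil field gives the focal-plane image.
        padded = np.zeros((pad * n, pad * n), dtype=complex)
        padded[:n, :n] = e_field
        return np.abs(np.fft.fftshift(np.fft.fft2(padded))) ** 2

    img_ab, img_ref = psf(field), psf(pupil.astype(complex))
    speckle = (img_ab - img_ref) / img_ref.max()   # added speckle intensity
    print(f"mean speckle intensity: {speckle.mean():.2e}")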
NASA Astrophysics Data System (ADS)
Schlegel, N.; Larour, E. Y.; Gardner, A. S.; Lang, C.; Miller, C. E.; van den Broeke, M. R.
2016-12-01
How Greenland ice flow may respond to future increases in surface runoff and to increases in the frequency of extreme melt events is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification tools run within the framework of the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force distinct components of the surface radiation budget, and subsequently surface mass balance (SMB), in various regions of the ice sheet and 2) determine the dynamic response of Greenland ice flow to variations in components of the net radiation budget. The Glacier Energy and Mass Balance (GEMB) software is a one-dimensional column surface model that has recently been embedded as a module within ISSM. Using the ISSM-GEMB framework, we perform sensitivity analyses to determine how perturbations in various components of the surface radiation budget affect model output; these model experiments allow us to predict where and on what spatial scale the ice sheet is likely to respond dynamically to changes in these parameters. Preliminary results suggest that SMB should be forced at a resolution of at least 23 km to properly capture the dynamic ice response. In addition, Monte Carlo-style sampling analyses reveal that the areas with the largest uncertainty in mass flux are located near the equilibrium line altitude (ELA), upstream of major outlet glaciers in the north and west of the ice sheet. Sensitivity analysis indicates that these areas are also the most vulnerable on the ice sheet to persistent, far-field shifts in SMB, suggesting that continued warming, and an upstream shift in the ELA, are likely to result in increased velocities and, consequently, SMB-induced thinning upstream of major outlet glaciers. Here, we extend our investigation to consider various components of the surface radiation budget separately, in order to determine how and where errors in these fields may independently impact ice flow. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere and Interdisciplinary Research in Earth Science Programs.
Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro
2008-01-01
The European In Vitro Diagnostics (IVD) directive requires traceability of analytes to reference methods and materials. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested for compliance with a set of three criteria: trueness (i.e., no significant bias relative to the target value), and between-laboratory and within-laboratory variation relative to the maximum allowable error. For the lower and intermediate levels, values differed significantly from the target value for the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. The total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. The best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing need for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. The calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.
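A minimal sketch of how the three acceptance criteria above might be checked for one method (all numbers invented for illustration; this is not the EC4 data): the bias and coefficient of variation are computed against the reference target and compared with a total allowable error.

    import statistics

    results_umol_l = [72.1, 75.4, 69.8, 77.2, 74.0]   # one lab per entry
    target = 74.0      # hypothetical ID-GC/MS target value, umol/L
    tea_pct = 8.0      # hypothetical total allowable error, percent

    mean = statistics.mean(results_umol_l)
    bias_pct = 100.0 * (mean - target) / target
    cv_pct = 100.0 * statistics.stdev(results_umol_l) / mean

    print(f"bias {bias_pct:+.1f}%, between-lab CV {cv_pct:.1f}%")
    # Share of the allowable-error budget consumed by the bias alone:
    print(f"bias consumes {abs(bias_pct) / tea_pct:.0%} of the error budget")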
NASA Technical Reports Server (NTRS)
Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert
1994-01-01
Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number lying on grid area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are considered. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation approach: to estimate N, instantaneous ADMs are derived from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. However, provided sufficient detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance already exist. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the 'waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially illuminated WFS subapertures.
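A minimal numerical sketch of why waffle mode is troublesome (a hypothetical 8x8 actuator grid, and assuming a Fried-geometry sensor, which the abstract does not specify): a checkerboard phase pattern produces exactly zero average slope in every subaperture, so the sensor cannot see it and the reconstructor cannot correct it.

    import numpy as np

    n = 8                                      # actuators per side (assumed)
    i, j = np.indices((n, n))
    waffle = (-1.0) ** (i + j)                 # checkerboard "waffle" phase

    # Fried geometry: each subaperture senses the average phase difference
    # across its four corner actuators, in x and in y.
    sx = 0.5 * ((waffle[1:, :-1] - waffle[:-1, :-1]) +
                (waffle[1:, 1:]  - waffle[:-1, 1:]))
    sy = 0.5 * ((waffle[:-1, 1:] - waffle[:-1, :-1]) +
                (waffle[1:, 1:]  - waffle[1:, :-1]))

    # Both maxima are exactly 0: the WFS is blind to the waffle pattern.
    print(np.abs(sx).max(), np.abs(sy).max())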
Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Ely, Jeffry W.
2012-01-01
A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
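As an illustration of how such a budget typically combines into a single number in the GUM fashion (the component names and magnitudes below are invented, not Langley's actual budget): each component's standard uncertainty is weighted by its sensitivity coefficient and the results are root-sum-squared.

    import math

    # Each entry: (standard uncertainty in dB, sensitivity coefficient).
    budget = {
        "microphone calibration": (0.20, 1.0),
        "loudspeaker repeatability": (0.15, 1.0),
        "door-opening pressure fluctuations": (0.10, 1.0),
        "position in simulator": (0.25, 1.0),
    }

    combined = math.sqrt(sum((u * c) ** 2 for u, c in budget.values()))
    print(f"combined standard uncertainty: {combined:.2f} dB")
    print(f"expanded uncertainty (k=2):    {2 * combined:.2f} dB")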
Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun
2016-07-01
The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment error leads to low-order errors, such as aspherical aberration and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results of axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve SAP testing of low-order errors, except power, with an accuracy of 0.1 μm root mean square.
Sloto, Ronald A.; Buxton, Debra E.
2005-01-01
This pilot study, done by the U.S. Geological Survey in cooperation with the Delaware River Basin Commission, developed annual water budgets using available data for five watersheds in the Delaware River Basin with different degrees of urbanization and different geological settings. A basin water budget and a water-use budget were developed for each watershed. The basin water budget describes inputs to the watershed (precipitation and imported water), outputs of water from the watershed (streamflow, exported water, leakage, consumed water, and evapotranspiration), and changes in ground-water and surface-water storage. The water-use budget describes water withdrawals in the watershed (ground-water and surface-water withdrawals), discharges of water in the watershed (discharge to surface water and ground water), and movement of water into and out of the watershed (imports, exports, and consumed water). The water-budget equations developed for this study can be applied to any watershed in the Delaware River Basin. Data used to develop the water budgets were obtained from available long-term meteorological and hydrological data-collection stations and from water-use data collected by regulatory agencies. In the Coastal Plain watersheds, net ground-water loss from unconfined to confined aquifers was determined by using ground-water-flow-model simulations. Error in the water-budget terms is caused by missing data, poor or incomplete measurements, overestimated or underestimated quantities, measurement or reporting errors, and the use of point measurements, such as precipitation and water levels, to estimate an areal quantity, particularly if the watershed is hydrologically or geologically complex or the data-collection station is outside the watershed. The complexity of the water budgets increases with increasing watershed urbanization and interbasin transfer of water. In the Wissahickon Creek watershed, for example, some ground water is discharged to streams in the watershed, some is exported as wastewater, and some is exported for public supply. In addition, ground water withdrawn outside the watershed is imported for public supply or imported as wastewater for treatment and discharge in the watershed. A GIS analysis was necessary to quantify many of the water-budget components. The 89.9-square mile East Branch Brandywine Creek watershed in Pennsylvania is a rural watershed with reservoir storage that is underlain by fractured rock. Water budgets were developed for 1977-2001. Average annual precipitation, streamflow, and evapotranspiration were 46.89, 21.58, and 25.88 inches, respectively. Some water was imported (average of 0.68 inches) into the watershed for public-water supply and as wastewater for treatment and discharge; these imports resulted in a net gain of water to the watershed. More water was discharged to East Branch Brandywine Creek than was withdrawn from it; the net discharge resulted in an increase in streamflow. Most ground water was withdrawn (average of 0.25 inches) for public-water supply. Surface water was withdrawn (average of 0.58 inches) for public-water and industrial supply. Discharge of water by sewage-treatment plants and industries (average of 1.22 inches) and regulation by Marsh Creek Reservoir caused base flow to appear an average of 7.2 percent higher than it would have been without these additional sources.
On average, 67 percent of the difference was caused by sewage-treatment-plant and industrial discharges, and 33 percent was caused by regulation of the Marsh Creek Reservoir. Water imports, withdrawals, and discharges have been increasing as the watershed becomes increasingly urbanized. The 64-square mile Wissahickon Creek watershed in Pennsylvania is an urban watershed underlain by fractured rock. Water budgets were developed for 1987-98. Average annual precipitation, streamflow, and evapotranspiration were 47.23, 22.24, and 23.12 inches, respectively. The watershed is highly urbanized.
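The basin-budget arithmetic above can be made concrete with a short sketch using the East Branch Brandywine Creek averages quoted in the abstract (inches per year); the residual here lumps storage change together with any budget error, a term the abstract does not give explicitly.

    # East Branch Brandywine Creek annual averages, inches per year.
    precipitation = 46.89
    imports = 0.68          # public supply plus imported wastewater
    streamflow = 21.58
    evapotranspiration = 25.88

    # Inputs minus outputs; the remainder is storage change plus error.
    residual = precipitation + imports - streamflow - evapotranspiration
    print(f"residual (storage change + error): {residual:+.2f} in/yr")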
Analysis of Lidar Remote Sensing Concepts
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1999-01-01
Line-of-sight velocity and measurement position sensitivity analyses for an orbiting coherent Doppler lidar are developed and applied to two lidars, one with a nadir angle of 30 deg in a 300 km altitude, 58 deg inclination orbit and the second a 45 deg nadir angle instrument in an 833 km altitude, 89 deg inclination orbit. Orbit-related effects on the backscatter sensitivity of a coherent Doppler lidar are also discussed. Draft performance estimates, error budgets, and payload accommodation requirements for the SPARCLE (Space Readiness Coherent Lidar) instrument were also developed and documented.
An Analysis of C4I Effectiveness Using the RESA Wargame
1994-06-01
Analysis of variance for hits on the target from both communities based on warfare specialty: War Spec (DF 1, SS 1.0, MS 1.0, F 0.02, p 0.898); Error (DF 22, SS 1372.6, MS 62.4); Total (DF 23, SS 1373.6). During the post-Cold War era, a declining defense budget has forced complicated decisions concerning which systems the military will be...
Error analysis of motion correction method for laser scanning of moving objects
NASA Astrophysics Data System (ADS)
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Only limited literature is available, describing the few methods capable of addressing object motion during scanning, and all of the existing methods rely on their own models or sensors. Studies on error modelling or analysis of these motion-correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. It is to be noted that the other 'motion correction' methods described in the literature cannot be applied to the objects mentioned here, which makes the chosen method quite unique. This paper presents some interesting insights into the functioning of the 'motion correction' method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insights into the optimal utilization of available components for achieving the best results.
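A minimal sketch of the core correction step (the pose values and point below are invented; the abstract does not give the method's exact formulation): each laser return is transformed with the POS-reported rotation and translation of the object at the shot time, undoing the object's motion.

    import numpy as np

    def correct_scan(points_scanner, rotation, translation):
        # Map laser points from the scanner frame into a fixed frame using
        # the POS-reported pose at the shot time.
        # points_scanner: (N, 3); rotation: (3, 3); translation: (3,)
        return points_scanner @ rotation.T + translation

    # One shot of a hypothetical moving ship: yaw and drift during the scan.
    theta = np.radians(5.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    T = np.array([1.2, 0.4, 0.0])                 # drift, metres

    hull_point = np.array([[10.0, 2.0, 1.0]])
    print(correct_scan(hull_point, R, T))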
Sensitivity analysis for high-contrast missions with segmented telescopes
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn
2017-09-01
Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to the central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will make it possible to establish a comprehensive error budget for evaluating the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes operate in a more benign environment than ground-based telescopes, residual vibrations and resonant modes of the segments can still deteriorate the contrast. In this communication, we develop and validate the analytical model, and compare its outputs to images produced by end-to-end simulations.
Global land cover mapping: a review and uncertainty analysis
Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu
2014-01-01
Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times, including IGBP DISCover, UMD Land Cover, Global Land Cover 2000, and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered by considerable amounts of uncertainty and inconsistency. A thorough review of these global land cover projects, including an evaluation of the sources of error and uncertainty, is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized, and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed-type classes, for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for future global mapping projects, including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.
Impact of shorter wavelengths on optical quality for laws
NASA Technical Reports Server (NTRS)
Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.
1993-01-01
This study explores parametrically, as a function of wavelength, the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of the merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing the wavelength and changing the optical design resulted in optical quality tolerances within the SOA at both 2 and 1 micrometers. However, advanced sensing and control devices would be necessary to maintain on-orbit alignment. Optical tolerances for maintaining boresight stability would have to be tightened by a factor of 1.8 for a 2 micrometer system and by 3.6 for a 1 micrometer system relative to the baseline CO2 LAWS. Available SOA components could be used for operation at 2 micrometers, but operation at 1 micrometer does not appear feasible.
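The boresight tightening factors quoted above are consistent with simple diffraction beamwidth scaling (lambda/D); a short sketch, assuming a 9.11 micrometer wavelength for the baseline CO2 system, a value the abstract does not state:

    # Boresight tolerance scales with the diffraction beamwidth ~ lambda/D,
    # so the tightening factor is the ratio of baseline to new beamwidth.
    def beamwidth(wavelength_um, aperture_m):
        return wavelength_um * 1e-6 / aperture_m     # radians

    baseline = beamwidth(9.11, 1.5)                  # 1.5 m CO2 LAWS (assumed)
    for lam, d in [(2.0, 0.6), (1.0, 0.6)]:
        factor = baseline / beamwidth(lam, d)
        print(f"{lam} um / {d} m aperture: tighten by {factor:.1f}x")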
Evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas
Medina, K.D.; Geiger, C.O.
1984-01-01
The results of an evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas are documented. Data uses and funding sources were identified for the 140 complete-record streamflow-gaging stations operated in Kansas during 1983 with a budget of $793,780. As a result of the evaluation of the needs and uses of data from the stream-gaging program, it was found that the 140 gaging stations were needed to meet these data requirements. The average standard error of estimation of streamflow records was 20.8 percent, assuming the 1983 budget and operating schedule of 6-week interval visitations and based on 85 of the 140 stations. It was shown that this overall level of accuracy could be improved to 18.9 percent by altering the 1983 schedule of station visitations. A minimum budget of $760,000, with a corresponding average error of estimation of 24.9 percent, is required to operate the 1983 program. None of the stations investigated were suitable for the application of alternative methods for simulating discharge records. Improved instrumentation can have a very positive impact on streamflow uncertainties by decreasing lost record. (USGS)
Dual view Geostationary Earth Radiation Budget from the Meteosat Second Generation satellites.
NASA Astrophysics Data System (ADS)
Dewitte, Steven; Clerbaux, Nicolas; Ipe, Alessandro; Baudrez, Edward; Moreels, Johan
2017-04-01
The diurnal cycle of the radiation budget is a key component of the tropical climate. The geostationary Meteosat Second Generation (MSG) satellites carrying both the broadband Geostationary Earth Radiation Budget (GERB) instrument with nadir resolution of 50 km and the multispectral Spinning Enhanced VIsible and InfraRed Imager (SEVIRI) with nadir resolution of 3 km offer a unique opportunity to observe this diurnal cycle. The geostationary orbit has the advantage of good temporal sampling but the disadvantage of fixed viewing angles, which makes the measurements of the broadband Top Of Atmosphere (TOA) radiative fluxes more sensitive to angular dependent errors. The Meteosat-10 (MSG-3) satellite observes the earth from the standard position at 0° longitude. From October 2016 onwards the Meteosat-8 (MSG-1) satellite makes observations from a new position at 41.5° East over the Indian Ocean. The dual view from Meteosat-8 and Meteosat-10 allows the assessment and correction of angular dependent systematic errors of the flux estimates. We demonstrate this capability with the validation of a new method for the estimation of the clear-sky TOA albedo from the SEVIRI instruments.
NASA Astrophysics Data System (ADS)
Vanhaelewyn, Gauthier; Duchatelet, Pierre; Vigouroux, Corinne; Dils, Bart; Kumps, Nicolas; Hermans, Christian; Demoulin, Philippe; Mahieu, Emmanuel; Sussmann, Ralf; de Mazière, Martine
2010-05-01
The Fourier transform infrared (FTIR) remote measurements of atmospheric constituents at the observatories at Saint-Denis (20.90°S, 55.48°E, 50 m a.s.l., Île de la Réunion) and Jungfraujoch (46.55°N, 7.98°E, 3580 m a.s.l., Switzerland) are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC). The European NDACC FTIR data for CH4 were improved and homogenized among the stations in the EU project HYMN. One important application of these data is their use for the validation of satellite products, such as the validation of SCIAMACHY or IASI CH4 columns. Therefore, it is very important that the errors and uncertainties associated with the ground-based FTIR CH4 data are well characterized. In this poster we present a comparison of errors on retrieved vertical concentration profiles of CH4 between Saint-Denis and Jungfraujoch. At both stations, we have used the same retrieval algorithm, namely SFIT2 v3.92, developed jointly at the NASA Langley Research Center, the National Center for Atmospheric Research (NCAR) and the National Institute of Water and Atmosphere Research (NIWA) at Lauder, New Zealand, together with error evaluation tools developed at the Belgian Institute for Space Aeronomy (BIRA-IASB). The error components investigated in this study are: smoothing, noise, temperature, instrumental line shape (ILS) (in particular the modulation amplitude and phase), spectroscopy (in particular the pressure broadening and intensity), interfering species, and solar zenith angle (SZA) errors. We will determine whether the characteristics of the sites in terms of altitude, geographic location, and atmospheric conditions produce significant differences in the error budgets for the retrieved CH4 vertical profiles.
7 CFR 2.30 - Director, Office of Budget and Program Analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 7, Agriculture; Officers and Agency Heads; § 2.30 Director, Office of Budget and Program Analysis. (a) The following ... Program Analysis: (1) Serve as the Department's Budget Officer and exercise general responsibility and ...
The Role of Margin in Link Design and Optimization
NASA Technical Reports Server (NTRS)
Cheung, K.
2015-01-01
Link analysis is a system engineering process in the design, development, and operation of communication systems and networks. Link models are mathematical abstractions representing the useful signal power and the undesirable noise and attenuation effects (including weather effects if the signal path traverses the atmosphere); these are integrated into the link budget calculation, which provides estimates of the signal power and noise power at the receiver. The link margin is then applied in an attempt to counteract fluctuations of the signal and noise power and so ensure reliable data delivery from transmitter to receiver. (Link margin is dictated by the link margin policy or requirements.) A simple link budgeting approach that assumes the link parameters to be deterministic values typically adopts a rule-of-thumb policy of 3 dB link margin. This policy works for most S- and X-band links due to their insensitivity to weather effects, but for higher frequency links like Ka-band, Ku-band, and optical communication links, it is unclear whether a 3 dB link margin would guarantee link closure. Statistical link analysis, which adopts a 2-sigma or 3-sigma link margin, incorporates link uncertainties in the sigma calculation. (The Deep Space Network (DSN) link margin policies are 2-sigma for downlink and 3-sigma for uplink.) The link reliability can therefore be quantified statistically even for higher frequency links. However, in the current statistical link analysis approach, link reliability is only expressed as the likelihood of exceeding the signal-to-noise ratio (SNR) threshold that corresponds to a given bit-error-rate (BER) or frame-error-rate (FER) requirement. The method does not provide the true BER or FER estimate of the link with margin, or the required SNR that would meet the BER or FER requirement in the statistical sense. In this paper, we perform an in-depth analysis of the relationship between the BER/FER requirement, the operating SNR, and the coding performance curve, in the case when the channel coherence time of the link fluctuation is comparable to or larger than the time duration of a codeword. We compute the "true" SNR design point that would meet the BER/FER requirement by taking into account the fluctuation of signal power and noise power at the receiver, and the shape of the coding performance curve. This analysis yields a number of valuable insights into the design choices of coding scheme and link margin for the reliable data delivery of a communication system, space and ground. We illustrate the aforementioned analysis using a number of standard NASA error-correcting codes.
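A minimal sketch of the statistical link budget idea (all values invented; these are not DSN numbers): each term carries a mean and a standard deviation in dB, the sigmas are root-sum-squared, and a k-sigma margin policy sets the design point tested against the required SNR.

    import math

    # Each entry: (mean in dB, standard deviation in dB), illustrative only.
    terms = {
        "EIRP": (60.0, 0.5),
        "free-space path loss": (-250.0, 0.0),
        "atmospheric loss (Ka-band)": (-1.5, 0.8),
        "receive G/T + constants": (220.0, 0.4),
    }

    mean_snr = sum(m for m, _ in terms.values())
    sigma = math.sqrt(sum(s ** 2 for _, s in terms.values()))
    required_snr = 1.0    # dB, hypothetical threshold from the coding curve

    for k in (2, 3):
        design = mean_snr - k * sigma
        print(f"{k}-sigma design SNR: {design:.2f} dB "
              f"(closes: {design >= required_snr})")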
Estimating diffusivity from the mixed layer heat and salt balances in the North Pacific
NASA Astrophysics Data System (ADS)
Cronin, Meghan F.; Pelland, Noel A.; Emerson, Steven R.; Crawford, William R.
2015-11-01
Data from two National Oceanographic and Atmospheric Administration (NOAA) surface moorings in the North Pacific, in combination with data from satellite, Argo floats and glider (when available), are used to evaluate the residual diffusive flux of heat across the base of the mixed layer from the surface mixed layer heat budget. The diffusion coefficient (i.e., diffusivity) is then computed by dividing the diffusive flux by the temperature gradient in the 20 m transition layer just below the base of the mixed layer. At Station Papa in the NE Pacific subpolar gyre, this diffusivity is 1 × 10^-4 m2/s during summer, increasing to ~3 × 10^-4 m2/s during fall. During late winter and early spring, diffusivity has large errors. At other times, diffusivities computed from the mixed layer salt budget at Papa correlate with those from the heat budget, giving confidence that the results are robust for all seasons except late winter-early spring and can be used for other tracers. In comparison, at the Kuroshio Extension Observatory (KEO) in the NW Pacific subtropical recirculation gyre, somewhat larger diffusivities are found based upon the mixed layer heat budget: ~3 × 10^-4 m2/s during the warm season and more than an order of magnitude larger during the winter, although again, wintertime errors are large. These larger values at KEO appear to be due to the increased turbulence associated with the summertime typhoons, and weaker wintertime stratification.
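A minimal sketch of the diffusivity computation described above, using magnitudes of the same order as the abstract's (the specific numbers are illustrative, not the mooring data): the residual diffusive heat flux from the budget is divided by the temperature gradient across the transition layer.

    # Seawater properties and budget terms (magnitudes, illustrative).
    rho, cp = 1025.0, 3985.0     # density (kg/m3) and heat capacity (J/kg/K)
    flux_residual = 25.0         # W/m2, residual flux at the mixed-layer base
    dT = 0.8                     # K change across the transition layer
    dz = 20.0                    # m, transition-layer thickness

    gradient = dT / dz                              # K/m
    kappa = flux_residual / (rho * cp * gradient)   # m2/s
    print(f"diffusivity: {kappa:.1e} m2/s")         # ~1.5e-04, summer-like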
Overlay improvement by exposure map based mask registration optimization
NASA Astrophysics Data System (ADS)
Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric
2015-03-01
Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, as DPT (double patterning technology) has been adopted for advanced technology nodes like 28 nm and 14 nm, the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source for wafer layer-to-layer overlay at the 28 nm process. [4][5] Mask registration optimization would therefore substantially improve wafer overlay performance. It was reported that a laser-based registration control (RegC) process could be applied after pattern generation or after pellicle mounting, allowing fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction, which can be applied before mask writing based on the mask exposure map, considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept at a low level, the in-die mask registration residual error (3 sigma) stays under 5 nm regardless of the blank type and the writer POSCOR (position correction) file applied; this suggests that random error induced by material or equipment occupies a relatively fixed share of the mask registration error budget. In real production, comparing the mask registration difference across critical production layers reveals that the registration residual error of line-space layers with higher pattern density is always much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density can also achieve sub-5 nm performance. We assume that mask registration error, excluding random error, is mostly induced by charge accumulation during mask writing, which may be calculated from the surrounding exposed pattern density. A multi-loading test of mask registration shows that, with an x-direction writing sequence, mask registration behavior in the x direction is mainly related to the sequence direction, while mask registration in the y direction is highly impacted by the pattern density distribution map. This proves that part of the mask registration error is due to charging from the nearby environment. If the exposure sequence is chip by chip, as in the normal multi-chip layout case, mask registration in both the x and y directions is impacted analogously, which has also been confirmed by real data. Therefore, we attempt to set up a simple model to predict the mask registration error based on the mask exposure map, and to correct it with the given POSCOR file for advanced mask writing if needed.
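A minimal sketch of the kind of exposure-map model proposed above (the influence kernel and gain are hypothetical; the abstract does not specify the authors' actual model): accumulated charge at each mask location is approximated as a local blur of the exposed pattern density, and the registration shift is taken proportional to its variation.

    import numpy as np

    density = np.random.default_rng(3).random((50, 50))  # pattern density map

    k = 7
    kernel = np.ones((k, k)) / k ** 2       # uniform charge-influence kernel
    pad = k // 2
    padded = np.pad(density, pad, mode="edge")

    # Accumulated "charge" at each site: local average of exposed density.
    charge = np.zeros_like(density)
    for i in range(density.shape[0]):
        for j in range(density.shape[1]):
            charge[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)

    # Hypothetical gain mapping charge variation to registration shift (nm).
    predicted_shift_nm = 5.0 * (charge - charge.mean())
    print(f"predicted 3-sigma registration: {3 * predicted_shift_nm.std():.2f} nm")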
Investigation of scene identification algorithms for radiation budget measurements
NASA Technical Reports Server (NTRS)
Diekmann, F. J.
1986-01-01
The computation of the Earth radiation budget from satellite measurements requires identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there is generally a tendency for the ERBE results to underestimate cloudiness over ocean at high cloud amounts relative to the AVHRR results, e.g., mostly cloudy instead of overcast, or partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate the strong dependence on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations reached maximum values of more than 12% of the respective exitances.
Prototype Development of a Geostationary Synthetic Thinned Aperture Radiometer, GeoSTAR
NASA Technical Reports Server (NTRS)
Tanner, Alan B.; Wilson, William J.; Kangaslahti, Pekka P.; Lambrigsten, Bjorn H.; Dinardo, Steven J.; Piepmeier, Jeffrey R.; Ruf, Christopher S.; Rogacki, Steven; Gross, S. M.; Musko, Steve
2004-01-01
Preliminary details of a 2-D synthetic aperture radiometer prototype operating from 50 to 58 GHz will be presented. The instrument is being developed as a laboratory testbed, and the goal of this work is to demonstrate the technologies needed to do atmospheric soundings with high spatial resolution from Geostationary orbit. The concept is to deploy a large sparse aperture Y-array from a geostationary satellite, and to use aperture synthesis to obtain images of the earth without the need for a large mechanically scanned antenna. The laboratory prototype consists of a Y-array of 24 horn antennas, MMIC receivers, and a digital cross-correlation sub-system. System studies are discussed, including an error budget which has been derived from numerical simulations. The error budget defines key requirements, such as null offsets, phase calibration, and antenna pattern knowledge. Details of the instrument design are discussed in the context of these requirements.
WFIRST: Managing Telescope Wavefront Stability to Meet Coronagraph Performance
NASA Astrophysics Data System (ADS)
Noecker, Martin; Poberezhskiy, Ilya; Kern, Brian; Krist, John; WFIRST System Engineering Team
2018-01-01
The WFIRST coronagraph instrument (CGI) needs a stable telescope and active wavefront control to perform coronagraph science with an expected sensitivity of 8×10^-9 in the exoplanet-star flux ratio (SNR=10) at 200 milliarcseconds angular separation. With its subnanometer requirements on the stability of its input wavefront error (WFE), the CGI employs a combination of pointing and wavefront control loops and thermo-mechanical stability to meet budget allocations for beam-walk and low-order WFE, which enable stable starlight speckles on the science detector that can be removed by image subtraction. We describe the control strategy and the budget framework for estimating and budgeting the elements of wavefront stability, and the modeling strategy to evaluate it.
Groundwater discharge to lakes (GDL) - the disregarded component of lake nutrient budgets
NASA Astrophysics Data System (ADS)
Lewandowski, J.; Meinikmann, K.; Pöschke, F.; Nützmann, G.
2012-04-01
Eutrophication is a major threat to lakes in temperate climatic zones. It is necessary to determine the relevance of different nutrient sources in order to conduct effective management measures, to understand in-lake processes, and to model future scenarios. A prerequisite for such nutrient budgets is a water budget. While most components of the water budget can be determined quite accurately, the quantification of groundwater discharge to lakes (GDL) and of surface water infiltration into the aquifer is much more difficult. For example, it is quite common to determine the groundwater component as the residual of the water and nutrient budget, which is extremely problematic since in that case all errors of the budget terms are summed up in the groundwater term. In total, we identified 10 different reasons for disregarding the groundwater path in nutrient budgets. We investigated the fate of the nutrients nitrogen and phosphorus on their pathway from the catchment through the reactive aquifer-lake interface into the lake. We reviewed the international literature and summarized the numbers reported for GDL of nutrients. Since the literature is quite sparse, we also considered the numbers reported for submarine groundwater discharge (SGD) of nutrients, for which much more literature exists and which, despite some fundamental differences, is in principle comparable to GDL.
NASA Astrophysics Data System (ADS)
Nikolsky, Peter; Strolenberg, Chris; Nielsen, Rasmus; Nooitgedacht, Tjitte; Davydova, Natalia; Yang, Greg; Lee, Shawn; Park, Chang-Min; Kim, Insung; Yeo, Jeong-Ho
2013-04-01
As the International Technology Roadmap for Semiconductors critical dimension uniformity (CDU) specification shrinks, semiconductor companies need to maintain a high yield of good wafers per day and high performance (and hence market value) of finished products. This cannot be achieved without continuous analysis and improvement of on-product CDU, as one of the main drivers for process control and optimization, and a better understanding of the main contributors from the litho cluster: mask, process, metrology, and scanner. We demonstrate a study of mask CDU characterization and its impact on the CDU Budget Breakdown (CDU BB) performed for advanced extreme ultraviolet (EUV) lithography with 1D (dense lines) and 2D (dense contacts) feature cases. We show that this CDU contributor is one of the main differentiators between the well-known ArFi and new EUV CDU budgeting principles. We found that the reticle contribution to intrafield CDU should be characterized in a specific way: mask absorber thickness fingerprints play a role comparable with reticle CDU in the total reticle part of the CDU budget. Wafer CD fingerprints introduced by this contributor may or may not compensate variations of mask CDs and hence influence the total mask impact on intrafield CDU at the wafer level. This is shown with 1D and 2D feature examples. Mask stack reflectivity variations should also be taken into account: these fingerprints have a visible impact on intrafield CDs at the wafer level and should be considered as another contributor to the reticle part of the EUV CDU budget. We also observed mask error enhancement factor (MEEF) through-field fingerprints in the studied EUV cases. Variations of MEEF may play a role in the total intrafield CDU and may need to be taken into account for EUV lithography. We characterized MEEF-through-field for the reviewed features, with results herein, but further analysis of this phenomenon is required. This comprehensive approach to quantifying the mask part of the overall EUV CDU contribution helps deliver an accurate and integral CDU BB per product/process and litho tool. The better understanding of the entire CDU budget for advanced EUVL nodes achieved by Samsung and ASML helps extend the limits of Moore's Law and deliver successful implementation of smaller, faster and smarter chips in the semiconductor industry.
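A minimal sketch of the RSS-style budget-breakdown arithmetic (all numbers hypothetical), with the reticle term demagnified by the scanner's 4x reduction and amplified by MEEF before combination with the other contributors:

    import math

    mask_cdu_nm, demag, meef = 2.0, 4.0, 2.5     # illustrative values
    contributors_nm = {
        "reticle (at wafer)": (mask_cdu_nm / demag) * meef,
        "scanner": 0.6,
        "process": 0.8,
        "metrology": 0.3,
    }

    total = math.sqrt(sum(v ** 2 for v in contributors_nm.values()))
    for name, v in contributors_nm.items():
        print(f"{name:20s} {v:.2f} nm  ({(v / total) ** 2:.0%} of variance)")
    print(f"{'total CDU':20s} {total:.2f} nm")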
NASA Technical Reports Server (NTRS)
Barkstrom, B. R.
1983-01-01
The measurement of the earth's radiation budget has been chosen to illustrate the technique of objective system design. The measurement process is an approximately linear transformation of the original field of radiant exitances, so that linear statistical techniques may be employed. The combination of variability, measurement strategy, and error propagation is made here with the help of information theory, as suggested by Kondratyev et al. (1975) and Peckham (1974). Covariance matrices furnish the quantitative statement of field variability.
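A minimal sketch of the linear-statistics machinery this alludes to (a toy three-zone field with a hypothetical sampling operator): if the measurement is y = A x and the field x has covariance C, the covariance of the measured quantities is A C A^T.

    import numpy as np

    A = np.array([[0.6, 0.3, 0.1],    # hypothetical sampling/weighting operator
                  [0.2, 0.5, 0.3]])
    C = np.diag([4.0, 9.0, 1.0])      # field variances, (W/m2)^2

    C_meas = A @ C @ A.T              # propagated measurement covariance
    print(np.sqrt(np.diag(C_meas)))   # standard errors of the two estimates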
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
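A minimal sketch of this kind of Monte Carlo propagation for a single log (all distributions invented for illustration; the study's fitted distributions are not reproduced here), including a technician decay-class error that switches the density distribution:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    length = rng.normal(5.0, 0.05, n)             # m, measurement error
    diameter = rng.normal(0.30, 0.01, n)          # m, measurement error
    volume = np.pi * (diameter / 2) ** 2 * length # simple cylinder model

    # 15% chance the technician misclassifies decay class 2 as class 3,
    # which changes the density distribution used.
    decay_class = np.where(rng.random(n) < 0.15, 3, 2)
    density = np.where(decay_class == 2,
                       rng.normal(380.0, 40.0, n),   # kg/m3, class 2
                       rng.normal(300.0, 50.0, n))   # kg/m3, class 3
    carbon_frac = rng.normal(0.48, 0.02, n)

    carbon_kg = volume * density * carbon_frac
    lo, hi = np.percentile(carbon_kg, [2.5, 97.5])
    print(f"carbon pool: {carbon_kg.mean():.1f} kg (95% CI {lo:.1f}-{hi:.1f})")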
Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas
Lindgren, R.J.
2006-01-01
A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
Stability Error Budget for an Aggressive Coronagraph on a 3.8 m Telescope
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Krist, John; Rud, Mayer
2011-01-01
We evaluate in detail the stability requirements for a band-limited coronagraph with an inner working angle as small as 2 lambda/D coupled to an off-axis, 3.8-m diameter telescope. We have updated our methodologies since presenting a stability error budget for the Terrestrial Planet Finder Coronagraph mission, which worked at 4 lambda/D and employed an 8th-order mask to reduce aberration sensitivities. In the previous work, we determined the tolerances relative to the total light leaking through the coronagraph. Now, we separate the light into a radial component, which is readily separable from a planet signal, and an azimuthal component, which is easily confused with a planet signal. In the current study, throughput considerations require a 4th-order coronagraph. This, combined with the more aggressive working angle, places extraordinarily tight requirements on wavefront stability and opto-mechanical stability. We find that the requirements are driven mainly by coma, which leaks around the coronagraph mask and mimics the localized signal of a planet, and by pointing errors that scatter light into the background, decreasing SNR. We also show how the requirements would be relaxed if a low-order aberration detection system could be employed.
The Alignment of the Mean Wind and Stress Vectors in the Unstable Surface Layer
NASA Astrophysics Data System (ADS)
Bernardes, M.; Dias, N. L.
2010-01-01
A significant non-alignment between the mean horizontal wind vector and the stress vector was observed for turbulence measurements both above the water surface of a large lake and over a land surface (a soybean crop). Possible causes for this discrepancy, such as flow distortion, averaging times, and the procedure used for extracting the turbulent fluctuations (low-pass filtering, filter widths, etc.), were dismissed after a detailed analysis. Minimum averaging times, always less than 30 min, were established by calculating ogives, and error bounds for the turbulent stresses were derived with three different approaches, based on integral time scales (first-crossing and lag-window estimates) and on a bootstrap technique. It was found that the mean absolute value of the angle between the mean wind and stress vectors is strongly related to atmospheric stability, with the non-alignment increasing distinctly with increasing instability. Given a coordinate rotation that aligns the mean wind with the x direction, this behaviour can be explained by the growth of the relative error of the u-w component with instability. As a result, under more unstable conditions the u-w and the v-w components become of the same order of magnitude, and the local stress vector gives the impression of being non-aligned with the mean wind vector. The relative error of the v-w component is large enough to make it indistinguishable from zero throughout the range of stabilities. Therefore, the standard assumptions of Monin-Obukhov similarity theory hold: it is fair to assume that the v-w stress component is actually zero and that the non-alignment is a purely statistical effect. An analysis of the dimensionless budgets of the u-w and the v-w components confirms this interpretation, with both shear and buoyant production of u-w decreasing with increasing instability. In the v-w budget, shear production is zero by definition, while buoyancy displays very low-intensity fluctuations around zero. As local free convection is approached, the turbulence becomes effectively axisymmetric, and a practical limit seems to exist beyond which it is not possible to measure the u-w component accurately.
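A hedged sketch of the key computation behind this result: the angle between the mean wind and the local stress vector, estimated from eddy covariances after rotating coordinates so the mean wind defines the x direction. Function and variable names are illustrative, not the authors' processing code.

```python
import numpy as np

def stress_alignment_angle(u, v, w):
    """Angle (degrees) between the mean wind and the stress vector, for
    velocity time series already rotated so the mean wind is along x."""
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
    uw = np.mean(up * wp)                 # u-w stress component
    vw = np.mean(vp * wp)                 # v-w stress component
    return np.degrees(np.arctan2(vw, uw))

# With synthetic, uncorrelated fluctuations, both covariances are near zero
# and the angle is dominated by sampling noise -- the purely statistical
# effect the abstract describes.
rng = np.random.default_rng(1)
u = 5.0 + 0.8 * rng.standard_normal(36000)
v = 0.8 * rng.standard_normal(36000)
w = 0.3 * rng.standard_normal(36000)
print(round(stress_alignment_angle(u, v, w), 1), "degrees")
```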
NASA Technical Reports Server (NTRS)
Li, Zhanqing; Whitlock, Charles H.; Charlock, Thomas P.
1995-01-01
Global sets of surface radiation budget (SRB) data have been obtained from satellite programs. These satellite-based estimates need validation against ground-truth observations. This study validates the estimates of monthly mean surface insolation contained in two satellite-based SRB datasets against surface measurements made at worldwide radiation stations from the Global Energy Balance Archive (GEBA). One dataset was developed from the Earth Radiation Budget Experiment (ERBE) using the algorithm of Li et al. (ERBE/SRB), and the other from the International Satellite Cloud Climatology Project (ISCCP) using the algorithms of Pinker and Laszlo and of Staylor (GEWEX/SRB). Since the ERBE/SRB data contain the surface net solar radiation only, the values of surface insolation were derived by making use of the surface albedo data contained in the GEWEX/SRB product. The resulting surface insolation has a bias error near zero and a root-mean-square error (RMSE) between 8 and 28 W/sq m. The RMSE is mainly associated with poor representation of surface observations within a grid cell. When the number of surface observations is sufficient, the random error is estimated to be about 5 W/sq m with present satellite-based estimates. In addition to demonstrating the strength of the retrieval method, the small random error demonstrates how well ERBE derives the monthly mean fluxes at the top of the atmosphere (TOA). A larger scatter is found for the comparison of transmissivity than for that of insolation. Month-to-month comparison of insolation reveals a weak seasonal trend in bias error with an amplitude of about 3 W/sq m. As for the insolation data from the GEWEX/SRB, larger bias errors of 5-10 W/sq m are evident, with stronger seasonal trends and almost identical RMSEs.
NASA Astrophysics Data System (ADS)
Nesladek, Pavel; Wiswesser, Andreas; Sass, Björn; Mauermann, Sebastian
2008-04-01
The critical dimension off-target (CDO) is a key parameter for mask house customers, directly affecting the performance of the mask. The CDO is the difference between the feature size target and the measured feature size. The change of CD during the process is compensated either within the process or by data correction; these compensation methods are commonly called process bias and data bias, respectively. A difference between data bias and process bias in manufacturing results in a systematic CDO error; however, this systematic error does not take into account the instability of the process bias. This instability is a result of minor variations - instabilities of manufacturing processes and changes in materials and/or logistics. Using several masks, the CDO of the manufacturing line can be estimated. For systematic investigation of the unit-process contributions to CDO and analysis of the factors influencing the CDO contributors, a solid understanding of each unit process and a huge number of masks are necessary. Rough identification of contributing processes and splitting of the final CDO variation between processes can be done with approximately 50 masks with identical design, material, and process. Such an amount of data allows us to identify the main contributors and estimate their effects by means of analysis of variance (ANOVA) combined with multivariate analysis. The analysis does not provide information about the root cause of the variation within a particular unit process; however, it provides a good estimate of the impact of the process on the stability of the manufacturing line. Additionally, this analysis can be used to identify possible interactions between processes, which cannot be investigated if only single processes are considered. The goal of this work is to evaluate limits for CDO budgeting models given by the precision and the number of measurements, as well as to partition the variation within the manufacturing process. The CDO variation splits, according to the suggested model, into contributions from particular processes or process groups. Last but not least, the power of this method to determine the absolute strength of each parameter will be demonstrated. Identification of the root cause of variation within the unit process itself is not in the scope of this work.
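As a rough illustration of the ANOVA-based partitioning described above, the sketch below splits a set of CDO measurements into between-process and within-process variance components using a one-way random-effects layout. It is a textbook decomposition, not the authors' model; group sizes and values are invented.

```python
import numpy as np

def variance_components(groups):
    """One-way random-effects ANOVA: split total CDO variance into a
    between-process and a within-process (residual) component.
    `groups` is a list of 1-D arrays, one per process condition."""
    k = len(groups)
    n_bar = np.mean([len(g) for g in groups])     # assumes near-balanced groups
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (sum(len(g) for g in groups) - k)
    var_between = max((ms_between - ms_within) / n_bar, 0.0)
    return var_between, ms_within                  # sigma^2_process, sigma^2_residual

rng = np.random.default_rng(0)
masks = [rng.normal(loc, 0.4, size=10) for loc in (0.0, 0.5, -0.3, 0.2, 0.1)]
print(variance_components(masks))
```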
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal, and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between true and sampled time series also decrease more rapidly for sampling frequencies coarser than every 4 h.
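The uncertainty metric described above, the spread of means across all possible starting points for a given sampling stride, is easy to emulate. The sketch below applies it to a toy hourly series; the data and the 1/4/8/24-h strides are illustrative only.

```python
import numpy as np

def sampling_spread(series, stride_hours):
    """Spread of the monthly mean across all possible starting points
    when an hourly series is sampled every `stride_hours` hours."""
    means = [series[offset::stride_hours].mean() for offset in range(stride_hours)]
    return max(means) - min(means)

hourly = np.random.default_rng(0).standard_normal(24 * 30) + 240.0  # toy flux, W/m^2
for stride in (1, 4, 8, 24):
    print(stride, "h sampling -> spread", round(sampling_spread(hourly, stride), 3), "W/m^2")
```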
Error budgeting single and two qubit gates in a superconducting qubit
NASA Astrophysics Data System (ADS)
Chen, Z.; Chiaro, B.; Dunsworth, A.; Foxen, B.; Neill, C.; Quintana, C.; Wenner, J.; Martinis, John M.; Google Quantum Hardware Team
Superconducting qubits have shown promise as a platform for both error-corrected quantum information processing and demonstrations of quantum supremacy. High-fidelity quantum gates are crucial to achieving both of these goals, and superconducting qubits have demonstrated two-qubit gates exceeding 99% fidelity. In order to improve gate fidelity further, we must understand the remaining sources of error. In this talk, I will demonstrate techniques for quantifying the contributions of control, decoherence, and leakage to gate error, for both single- and two-qubit gates. I will also discuss the near-term outlook for achieving quantum supremacy using a gate-based approach in superconducting qubits. This work is supported by Google Inc. and by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1605114.
Characterizing biospheric carbon balance using CO2 observations from the OCO-2 satellite
NASA Astrophysics Data System (ADS)
Miller, Scot M.; Michalak, Anna M.; Yadav, Vineet; Tadić, Jovan M.
2018-05-01
NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite launched in summer of 2014. Its observations could allow scientists to constrain CO2 fluxes across regions or continents that were previously difficult to monitor. This study explores an initial step toward that goal; we evaluate the extent to which current OCO-2 observations can detect patterns in biospheric CO2 fluxes and constrain monthly CO2 budgets. Our goal is to guide top-down, inverse modeling studies and identify areas for future improvement. We find that uncertainties and biases in the individual OCO-2 observations are comparable to the atmospheric signal from biospheric fluxes, particularly during Northern Hemisphere winter when biospheric fluxes are small. A series of top-down experiments indicate how these errors affect our ability to constrain monthly biospheric CO2 budgets. We are able to constrain budgets for between two and four global regions using OCO-2 observations, depending on the month, and we can constrain CO2 budgets at the regional level (i.e., smaller than seven global biomes) in only a handful of cases (16 % of all regions and months). The potential of the OCO-2 observations, however, is greater than these results might imply. A set of synthetic data experiments suggests that retrieval errors have a salient effect. Advances in retrieval algorithms and to a lesser extent atmospheric transport modeling will improve the results. In the interim, top-down studies that use current satellite observations are best-equipped to constrain the biospheric carbon balance across only continental or hemispheric regions.
NASA Astrophysics Data System (ADS)
Phung, D.-H.; Samain, E.; Maurice, N.; Albanesse, D.; Mariey, H.; Aimar, M.; M. Lagarde, G.; Artaud, G.; Issler, J.-L.; Vedrenne, N.; Velluet, M.-T.; Toyoshima, M.; Akioka, M.; Kolev, D.; Munemasa, Y.; Takenaka, H.; Iwakiri, N.
2016-03-01
In a collaboration among CNES, NICT, and Geoazur, the first successful lasercom link between the micro-satellite SOCRATES and an OGS in Europe has been established. This paper presents results of a first analysis of the telecom and scintillation data for 4 successful links in June and July 2015 between the SOTA terminal and the MEO optical ground station (OGS) at Caussols, France. The telecom and scintillation data were recorded continuously during the passes using a detector developed at the laboratory. Irradiances of 190 nW/m2 and 430 nW/m2 were detected for the 1549-nm and 976-nm downlinks at 35° elevation. Spectra of the power fluctuations measured at the OGS are analyzed at different elevation angles and for different telescope aperture diameters to separate fluctuations caused by pointing error (due to satellite and OGS telescope vibrations) from those caused by atmospheric turbulence. Downlink and uplink budgets are analyzed; the theoretical estimates match the measured power levels well. Telecom signal forms and bit error rates (BER) of the 1549-nm and 976-nm downlinks are also shown for different telescope aperture diameters. The BER is 'error free' with the full 1.5-m telescope aperture and almost in the 'good channel' regime with a 0.4-m sub-aperture. We also show the comparison between the expected and measured BER distributions.
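For readers wanting to reproduce the budget-closure exercise in outline, the sketch below sums a generic free-space optical link budget in decibels. All values are placeholders, not the SOTA/OGS parameters reported in the paper.

```python
import math

def fspl_db(distance_m, wavelength_m):
    """Free-space path loss in dB: 20*log10(4*pi*d/lambda)."""
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

# Received power = transmit power + gains - losses (all in dB/dBm).
p_tx_dbm = 35.0                       # transmit power (assumed)
g_tx_db, g_rx_db = 100.0, 120.0       # transmit/receive aperture gains (assumed)
l_atm_db, l_point_db = 3.0, 2.0       # atmospheric and pointing losses (assumed)
slant_range_m = 1.2e6                 # slant range at low elevation (assumed)

p_rx_dbm = (p_tx_dbm + g_tx_db + g_rx_db
            - fspl_db(slant_range_m, 1549e-9) - l_atm_db - l_point_db)
print(f"received power ~ {p_rx_dbm:.1f} dBm")
```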
Opto-mechanical design of ShaneAO: the adaptive optics system for the 3-meter Shane Telescope
NASA Astrophysics Data System (ADS)
Ratliff, C.; Cabak, J.; Gavel, D.; Kupke, R.; Dillon, D.; Gates, E.; Deich, W.; Ward, J.; Cowley, D.; Pfister, T.; Saylor, M.
2014-07-01
A Cassegrain-mounted adaptive optics instrument presents unique challenges for opto-mechanical design. The flexure and temperature tolerances for stability are tighter than those of seeing-limited instruments. These criteria require particular attention to material properties and mounting techniques. This paper addresses the mechanical designs developed to meet the optical functional requirements. One of the key considerations was to have gravitational deformations, which vary with telescope orientation, stay within the optical error budget, or to ensure that we can compensate with a steering mirror by maintaining predictable elastic behavior. Here we look at several cases where deformation is predicted with finite element analysis and Hertzian deformation analysis and also tested. Techniques used to address thermal deformation compensation without the use of low-CTE materials are also discussed.
Simulation and analysis of atmospheric transmission performance in airborne Terahertz communication
NASA Astrophysics Data System (ADS)
Pan, Chengsheng; Shi, Xin; Liu, Chengyang; Wang, Xue; Ding, Yuanming
2018-02-01
For the special meteorological conditions of high-altitude transmission, the influence of atmospheric turbulence on Terahertz wireless communication is first analyzed, and a model of the atmospheric constants as they vary with height is given. On this basis, the relationship between the scintillation (flicker) index and the high-altitude horizontal transmission distance of the Terahertz wave is analyzed by simulation. Then, through an analysis of high-altitude path loss and noise, the high-altitude wireless link model is built. Finally, the link loss budget is given according to current Terahertz device parameters, and the bit error rate (BER) performance of on-off keying (OOK) and pulse position modulation (PPM) in four Terahertz frequency bands is compared and analyzed. Together these results provide a theoretical reference for high-altitude Terahertz wireless communication transmission.
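The BER comparison rests on standard detection theory; as a reference point, the sketch below uses the common Gaussian-noise relation between the decision Q-factor and BER. This is the generic textbook relation often used to close a link-loss budget, not the paper's turbulence-channel model.

```python
import math

def ber_from_q(q_factor):
    """BER = 0.5 * erfc(Q / sqrt(2)) under the Gaussian-noise approximation."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2))

for q in (3, 6, 7):
    print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
# Q = 6 gives ~1e-9, a common "error free" threshold in budget tables.
```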
Preventing Marketing Efforts That Bomb.
ERIC Educational Resources Information Center
Sevier, Robert A.
2000-01-01
In a marketplace overwhelmed with messages, too many institutions waste money on ineffective marketing. Highlights five common marketing errors: limited definition of marketing; unwillingness to address strategic issues; no supporting data; fuzzy goals and directions; and unrealistic expectations, time lines, and budgets. Though trustees are not…
NASA Astrophysics Data System (ADS)
Van-Wierts, S.; Bernatchez, P.
2012-04-01
Coastal erosion is an important issue within the St. Lawrence estuary and gulf, especially in zones of unconsolidated material. Wide beaches are important coastal environments; they act as a buffer against breaking waves by absorbing and dissipating their energy, thus reducing the rate of coastal erosion. They also offer protection to humans and nearby ecosystems, providing habitat for plants, animals, and lifeforms such as algae and microfauna. Conventional methods, such as aerial photograph analysis, fail to adequately quantify the morphosedimentary behavior of beaches at the scale of a hydrosedimentary cell. The lack of reliable and quantitative data leads to considerable overestimation and underestimation of sediment budgets. To address these gaps and to minimize the acquisition costs posed by airborne LiDAR surveys, a mobile terrestrial LiDAR system has been set up to acquire topographic data of the coastal zone. The acquisition system includes a LiDAR sensor, a high-precision navigation system (GPS-INS), and a video camera. Comparison of the LiDAR data with 1050 DGPS control points shows a vertical mean absolute error of 0.1 m in beach areas. The extracted data are used to calculate sediment volumes, widths, slopes, and a sediment budget index. A high-accuracy coastal characterization is achieved through the integration of laser data and video. The main objective of this first project using the system is to quantify the impact of rigid coastal protective structures on sediment budget and beach morphology. Results show that the average sediment volume of beaches fronting rock-armour barriers (12 m3/m) was about one-third that of natural beaches (35.5 m3/m). Natural beaches were also found to have twice the width (25.4 m) of beaches bordering inhabited areas (12.7 m). The sediment budget index developed for beach areas is an excellent proxy to quickly identify deficit areas and therefore the coastal segments most at risk of erosion. The LiDAR coverage also revealed that beach profiles spaced at intervals of more than 200 m on diversified coasts lead to results significantly different from reality. However, profile intervals have little impact on long uniform beaches.
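The sediment volumes quoted above (in cubic metres per metre of shoreline) amount to integrating beach elevation above a datum along a cross-shore profile. A minimal sketch, with an invented profile:

```python
import numpy as np

def beach_volume_per_m(distance_m, elevation_m, datum_m=0.0):
    """Sediment volume per metre of shoreline (m^3/m) from a cross-shore
    profile, integrating elevation above a reference datum."""
    thickness = np.clip(np.asarray(elevation_m) - datum_m, 0.0, None)
    return np.trapz(thickness, distance_m)

x = np.array([0, 5, 10, 15, 20, 25])          # cross-shore distance, m
z = np.array([2.0, 1.6, 1.1, 0.7, 0.3, 0.0])  # elevation above datum, m
print(beach_volume_per_m(x, z), "m^3/m")
```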
NASA Astrophysics Data System (ADS)
Blyth, E.; Martinez-de la Torre, A.; Ellis, R.; Robinson, E.
2017-12-01
The fresh-water budget of the Arctic region has a diverse range of impacts: the ecosystems of the region, the ocean circulation response to Arctic freshwater, methane emissions through changing wetland extent, as well as the available fresh water for human consumption. But many processes control the budget, including seasonal snow packs building and thawing, freezing soils and permafrost, extensive organic soils, and large wetland systems. All these processes interact to create a complex hydrological system. In this study we examine a suite of 10 models that bring all those processes together in a 25-year reanalysis of the global water budget, and we assess their performance in the Arctic region. There are two approaches to modelling fresh-water flows at large scales, referred to here as 'Hydrological' and 'Land Surface' models. While both approaches include a physically based model of the water stores and fluxes, the Land Surface models link the water flows to an energy-based model for processes such as snow melt and soil freezing. This study analyses the impact of that basic difference on the regional patterns of evapotranspiration, runoff generation, and terrestrial water storage. For evapotranspiration, the Hydrological models tend to have a bigger spatial range in model bias (difference from observations), implying greater errors than the Land Surface models. For instance, some regions such as Eastern Siberia have consistently lower evaporation in the Hydrological models than in the Land Surface models. For runoff, however, the results are the other way round, with a slightly higher spatial range in bias for the Land Surface models, implying greater errors than the Hydrological models. A simple reading would suggest that Hydrological models are designed to get the runoff right, while Land Surface models are designed to get the evapotranspiration right. Tracing the source of the difference suggests that it comes from the treatment of snow and evapotranspiration. The study reveals that expertise in the role of snow in runoff generation and evapotranspiration in Hydrological and Land Surface models could be combined to improve the representation of fresh-water flows in the Arctic in both approaches. Improved observations are essential to make these modelling advances possible.
Kjelstrom, L.C.
1995-01-01
Many individual springs and groups of springs discharge water from volcanic rocks that form the north canyon wall of the Snake River between Milner Dam and King Hill. Previous estimates of annual mean discharge from these springs have been used to understand the hydrology of the eastern part of the Snake River Plain. Four methods that were used in previous studies or developed to estimate annual mean discharge since 1902 were (1) water-budget analysis of the Snake River; (2) correlation of water-budget estimates with discharge from 10 index springs; (3) determination of the combined discharge from individual springs or groups of springs by using annual discharge measurements of 8 springs, gaging-station records of 4 springs and 3 sites on the Malad River, and regression equations developed from 5 of the measured springs; and (4) a single regression equation that correlates gaging-station records of 2 springs with historical water-budget estimates. Comparisons made among the four methods of estimating annual mean spring discharges from 1951 to 1959 and 1963 to 1980 indicated that differences were about equivalent to a measurement error of 2 to 3 percent. The method that best demonstrates the response of annual mean spring discharge to changes in ground-water recharge and discharge is method 3, which combines the measurements and regression estimates of discharge from individual springs.
Anderson, Mark T.
1995-01-01
The study of ground-water and surface-water interactions often employs streamflow-gaging records and hydrologic budgets to determine ground-water seepage. Because ground-water seepage usually is computed as a residual in the hydrologic budget approach, all uncertainty in the measurement and estimation of budget components is assigned to the ground-water seepage. This uncertainty can exceed the estimate itself, especially when streamflow, and its associated measurement error, is large relative to the other budget components. In a study of Rapid Creek in western South Dakota, the hydrologic budget approach was combined with hydrochemistry to determine ground-water seepage. The City of Rapid City obtains most of its municipal water from three infiltration galleries (Jackson Springs, Meadowbrook, and Girl Scout) constructed in the near-stream alluvium along Rapid Creek. The reach of Rapid Creek between Pactola Reservoir and Rapid City, and in particular the two subreaches containing the galleries, was studied intensively to identify the sources of water to each gallery. Jackson Springs Gallery was found to pump predominantly ground water with a minor component of surface water. Meadowbrook and Girl Scout Galleries induce infiltration of surface water from Rapid Creek but also have a significant component of ground water.
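The point about residual seepage inheriting all of the budget uncertainty can be made concrete with root-sum-square error propagation. The sketch below uses invented discharge values, not the Rapid Creek data.

```python
import math

# Ground-water seepage as the residual of a reach water budget, with its
# uncertainty taken as the root-sum-square of the component uncertainties.
q_in, u_in = 50.0, 2.5        # upstream streamflow and its error, cfs
q_out, u_out = 46.0, 2.3      # downstream streamflow and its error, cfs
q_trib, u_trib = 1.0, 0.2     # tributary inflow and its error, cfs

seepage = q_in + q_trib - q_out                        # residual loss to ground water
u_seepage = math.sqrt(u_in**2 + u_out**2 + u_trib**2)  # propagated uncertainty
print(f"seepage = {seepage:.1f} +/- {u_seepage:.1f} cfs")
# 5.0 +/- 3.4 cfs: the uncertainty rivals the estimate, exactly the
# situation described above when streamflow dominates the budget.
```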
The Budget and Economic Outlook: Fiscal Years 2001-2010
2000-01-01
Congress of the United States, Congressional Budget Office. The Budget and Economic Outlook: Fiscal Years 2001-2010. The analysis of the economic outlook presented in Chapter 2 was prepared by the Macroeconomic Analysis Division under the direction...
Vanos, J K; Warland, J S; Gillespie, T J; Kenny, N A
2012-11-01
The purpose of this paper is to implement current and novel research techniques in human energy budget estimation to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimates of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly, by -173 and -123 W m-2, respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement in the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were obtained with a refined equation accounting for the angle of the wind to the body's movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind-to-body angle moved away from 90°. Under moderate microclimate conditions, with wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m-2 and 1.7°C higher when using the improved v_r equation. These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. Application is possible for other similar energy budget models and within various urban and rural environments.
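The relative-velocity refinement can be illustrated with a simple vector form: the air speed felt by a moving person is the magnitude of the wind vector minus the body-motion vector. This is one common formulation, assumed here for illustration; the paper's refined equation may differ in detail.

```python
import math

def relative_velocity(v_wind, v_body, angle_deg):
    """Magnitude of air velocity relative to a moving person, treating
    wind and body motion as vectors separated by angle_deg degrees.
    A generic vector-difference form, not necessarily the COMFA revision."""
    th = math.radians(angle_deg)
    return math.sqrt(v_wind**2 + v_body**2 - 2 * v_wind * v_body * math.cos(th))

print(relative_velocity(3.0, 2.0, 0.0))    # wind from behind: 1.0 m/s
print(relative_velocity(3.0, 2.0, 90.0))   # crosswind: ~3.6 m/s
```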
NASA Technical Reports Server (NTRS)
Petersen, Jeremy; Tichy, Jason; Wawrzyniak, Geoffrey; Richon, Karen
2014-01-01
The James Webb Space Telescope will be launched into a highly elliptical orbit that does not possess sufficient energy to achieve a proper Sun-Earth L2 libration point orbit. Three mid-course correction (MCC) maneuvers are planned to rectify the energy deficit: MCC-1a, MCC-1b, and MCC-2. To validate the propellant budget and trajectory design methods, a set of Monte Carlo analyses that incorporate MCC maneuver modeling and execution are employed. The first analysis focuses on the effects of launch vehicle injection errors on the magnitude of MCC-1a. The second focuses on the spread of potential delta-V based on the performance of the propulsion system as applied to all three MCC maneuvers. The final analysis highlights the slight, but notable, contribution of the attitude thrusters during each MCC maneuver. Given the possible variations in these three scenarios, the trajectory design methods are determined to be robust to errors in the modeling of the flight system.
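A toy version of the first Monte Carlo analysis, sizing a correction burn against sampled injection errors, is sketched below. The dispersion value and the assumption that the burn simply removes the sampled velocity error are deliberate simplifications, not the mission's flight dynamics model.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_inj = 2.0                        # 1-sigma injection velocity error, m/s (assumed)
errors = rng.normal(0.0, sigma_inj, size=(100_000, 3))   # 3-axis velocity errors
dv_mcc1a = np.linalg.norm(errors, axis=1)                # burn magnitude, m/s

print("mean MCC-1a :", round(float(dv_mcc1a.mean()), 2), "m/s")
print("99th pct    :", round(float(np.percentile(dv_mcc1a, 99)), 2), "m/s")
# A propellant budget would be sized against a high percentile, not the mean.
```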
Spacecraft-spacecraft very long baseline interferometry for planetary approach navigation
NASA Technical Reports Server (NTRS)
Edwards, Charles D., Jr.; Folkner, William M.; Border, James S.; Wood, Lincoln J.
1991-01-01
The study presents an error budget for Delta differential one-way range (Delta-DOR) measurements between two spacecraft. Such observations, made between a planetary orbiter (or lander) and another spacecraft approaching that planet, would provide a powerful target-relative angular tracking data type for approach navigation. Accuracies of about 5 nrad should be possible for a pair of X-band spacecraft incorporating 40-MHz DOR tone spacings, while accuracies approaching 1 nrad will be possible if the spacecraft incorporate Ka-band downlinks with DOR tone spacings of order 250 MHz. Operational advantages of this data type are discussed, and ground system requirements needed to enable S/C-S/C Delta-DOR observations are outlined. A covariance analysis is presented to examine the potential navigation improvement for this scenario. The results show factors of 2-3 improvement in spacecraft targeting over conventional Doppler, range, and quasar-relative VLBI, along with reduced sensitivity to ephemeris uncertainty and other systematic errors.
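A back-of-envelope check of the quoted accuracies: the angular error is roughly the differential range error divided by the baseline length. Both numbers below are assumed, chosen only to land near the 5-nrad figure.

```python
baseline_m = 8.0e6        # intercontinental DSN baseline, m (assumed)
delta_rho_m = 0.04        # differential one-way range error, m (assumed)
print(f"angular error ~ {delta_rho_m / baseline_m * 1e9:.1f} nrad")  # ~5 nrad
```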
Federal Budget Analysis on Children's Social Services Programs. FY 1985.
ERIC Educational Resources Information Center
Capitol Publications, Inc., Arlington, VA.
To help subscribers better understand the federal budget, the editorial staff of "Report on Preschool Programs" has prepared this special analysis of the fiscal 1985 budget. The first section presents an overview of President Reagan's fiscal 1985 budget request and reports congressional reactions. Information focuses on the Social…
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
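As a flavor of the error-model primitives such a density-matrix simulation is built from, the sketch below applies textbook amplitude- and phase-damping Kraus operators to a single qubit. This is generic NumPy, not the quantumsim API, and the damping parameters are arbitrary.

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Apply the amplitude-damping channel (e.g., gamma = 1 - exp(-t/T1))."""
    k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return k0 @ rho @ k0.conj().T + k1 @ rho @ k1.conj().T

def phase_damping(rho, lam):
    """Apply the phase-damping (pure dephasing) channel."""
    k0 = np.array([[1, 0], [0, np.sqrt(1 - lam)]])
    k1 = np.array([[0, 0], [0, np.sqrt(lam)]])
    return k0 @ rho @ k0.conj().T + k1 @ rho @ k1.conj().T

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
rho = phase_damping(amplitude_damping(rho, 0.01), 0.02)   # one noisy time step
print(np.round(rho, 4))                                   # off-diagonals shrink
```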
77 FR 55240 - Order Making Fiscal Year 2013 Annual Adjustments to Registration Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... Management and Budget ("OMB") to project the aggregate offering price for purposes of the fiscal year 2012... AAMOP is given by exp(FLAAMOP_t + sigma_n^2/2), where sigma_n denotes the standard error of the n...
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g., the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions, and the physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a model for the overall error based on a joint statistical and numerical analysis and optimizing that error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. This framework helps hydrogeologists choose the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
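A minimal sketch of the trade-off being optimized: a total error combining a discretization term that shrinks with grid spacing h and a Monte Carlo term that shrinks with the number of realizations N, searched under a fixed compute budget. The error coefficients, cost model, and budget are all invented.

```python
import numpy as np

def total_error(h, n, a=1.0, b=1.0):
    """Overall error: discretization term ~ a*h plus MC term ~ b/sqrt(n)."""
    return np.sqrt((a * h) ** 2 + (b / np.sqrt(n)) ** 2)

def cost_seconds(h, n, tau=1e-6, dim=3):
    """Runtime: n realizations, each ~ tau * (cells per dimension)^dim."""
    return n * tau * (1.0 / h) ** dim

budget = 10.0                                      # compute budget, s (assumed)
candidates = []
for h in np.logspace(-2, -0.5, 50):
    n = max(int(budget / cost_seconds(h, 1)), 1)   # realizations affordable at this h
    candidates.append((total_error(h, n), h, n))
err, h_opt, n_opt = min(candidates)
print(f"optimal h ~ {h_opt:.3f} with N ~ {n_opt} realizations (error ~ {err:.3f})")
```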
Cost effectiveness of the stream-gaging program in Pennsylvania
Flippo, H.N.; Behrendt, T.E.
1985-01-01
This report documents a cost-effectiveness study of the stream-gaging program in Pennsylvania. Data uses and funding were identified for 223 continuous-record stream gages operated in 1983; four are planned for discontinuance at the close of water-year 1985, and two are suggested for conversion, at the beginning of the 1985 water year, to the collection of continuous stage records only. Two of 11 special-purpose short-term gages are recommended for continuation when the supporting project ends; eight of these gages are to be discontinued, and the other will be converted to a partial-record type. The cost of operating the 212 stations recommended for continued operation was $1,199,000 per year in 1983. The average standard error of estimation for instantaneous streamflow is 15.2%. An overall average standard error of 9.8% could be attained on a budget of $1,271,000, which is 6% greater than the 1983 budget, by adopting cost-effective stream-gaging operations. (USGS)
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chow, Chi-Wai; Chiang, Ming-Feng; Shih, Fu-Yuan; Pan, Ci-Ling
2011-09-01
In a wavelength division multiplexed passive optical network (WDM-PON), different fiber lengths and optical components introduce different power budgets for different optical networking units (ONUs). In addition, decay of the distributed optical carrier power from the optical line terminal, owing to aging of the optical transmitter, can reduce the power injected into the ONU. In this work, we propose and demonstrate a carrier-distributed WDM-PON using a reflective semiconductor optical amplifier-based ONU that can adjust its upstream data rate to accommodate different injected optical powers. The WDM-PON is evaluated at standard reach (25 km) and long reach (100 km). Bit-error rate measurements at different injected optical powers and transmission lengths show that by adjusting the upstream data rate of the system (622 Mb/s, 1.25 Gb/s, and 2.5 Gb/s), error-free (<10^-9) operation can still be achieved when the power budget drops.
Error Budgeting and Tolerancing of Starshades for Exoplanet Detection
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Noecker, M. Charley; Glassman, Tiffany; Lo, Amy S.; Dumont, Philip J.; Kasdin, N. Jeremy; Cady, Eric J.; Vanderbei, Robert; Lawson, Peter R.
2010-01-01
A flower-like starshade positioned between a star and a space telescope is an attractive option for blocking the starlight to reveal the faint reflected light of an orbiting Earth-like planet. Planet light passes around the petals and directly enters the telescope where it is seen along with a background of scattered light due to starshade imperfections. We list the major perturbations that are expected to impact the performance of a starshade system and show that independent models at NGAS and JPL yield nearly identical optical sensitivities. We give the major sensitivities in the image plane for a design consisting of a 34-m diameter starshade, and a 2-m diameter telescope separated by 39,000 km, operating between 0.25 and 0.55 um. These sensitivities include individual petal and global shape terms evaluated at the inner working angle. Following a discussion of the combination of individual perturbation terms, we then present an error budget that is consistent with detection of an Earth-like planet 26 magnitudes fainter than its host star.
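For reference, the 26-magnitude brightness ratio quoted above converts to a flux contrast via C = 10^(-0.4 * delta_m):

```python
delta_m = 26.0
print(f"required contrast ~ {10 ** (-0.4 * delta_m):.1e}")  # ~4e-11
```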
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Fitzjarrald, Dan; Marshall, Susan; Oglesby, Robert; Roads, John; Arnold, James E. (Technical Monitor)
2001-01-01
This paper focuses on how fresh water and radiative fluxes over the tropical oceans change during ENSO warm and cold events and how these changes affect the tropical energy balance. At present, ENSO remains the most prominent known mode of natural variability at interannual time scales. While this natural perturbation to climate is quite distinct from possible anthropogenic changes in climate, adjustments in the tropical water and energy budgets during ENSO may give insight into feedback processes involving water vapor and cloud feedbacks. Although great advances have been made in understanding this phenomenon and realizing prediction skill over the past decade, our ability to document the coupled water and energy changes observationally and to represent them in climate models seems far from settled (Soden, 2000, J. Climate). In a companion paper we have presented observational analyses, based principally on space-based measurements, which document systematic changes in rainfall, evaporation, and surface and top-of-atmosphere (TOA) radiative fluxes. Here we analyze several contemporary climate models run with observed SSTs over recent decades and compare SST-induced changes in radiation, precipitation, evaporation, and energy transport to observational results. Among these are the NASA/NCAR Finite Volume Model, the NCAR Community Climate Model, the NCEP Global Spectral Model, and the NASA NSIPP Model. Key disagreements between model and observational results noted in the recent literature are shown to be due predominantly to observational shortcomings. A reexamination of the Langley 8-Year Surface Radiation Budget data reveals errors in the SST surface longwave emission due to biased SSTs. Subsequent correction allows use of this data set along with ERBE TOA fluxes to infer net atmospheric radiative heating. Further analysis of recent rainfall algorithms provides new estimates for precipitation variability in line with interannual evaporation changes inferred from the da Silva, Young, and Levitus COADS analysis. The overall results from our analysis suggest an increase (decrease) of the hydrologic cycle during ENSO warm (cold) events at the rate of about 5 W/sq m per K of SST change. Model results agree reasonably well with this estimate of sensitivity. This rate is slightly less than that which would be expected for constant relative humidity over the tropical oceans. There remain, however, significant quantitative uncertainties in cloud forcing changes in the models as compared to observations. These differences are examined in relation to model convection and cloud parameterizations. An analysis of possible sampling and measurement errors compared to systematic model errors is also presented.
1986-06-01
Table-of-contents excerpt listing Navy budget organizations: Estado Mayor General de la Armada (EMGAR), Staff of the Navy; Direccion de Presupuesto y Programacion Economica (DIPPE), Directorate of Budget and Economic Programming; Comite de Programacion y Presupuesto (CPP), Programming and Budget Committee. DIPPE's analysis of program development is included in the annual budget.
Size, Stability and Incremental Budgeting Outcomes in Public Universities.
ERIC Educational Resources Information Center
Schick, Allen G.; Hills, Frederick S.
1982-01-01
Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)
NASA Astrophysics Data System (ADS)
Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.
2017-10-01
We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10-m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few lambda/D separations (~1-5 lambda/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.
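In miniature, spatial-frequency error budgeting turns a residual phase power spectrum into a variance; the classical fitting-error scaling below is the simplest such term. The constants and the sharp deformable-mirror cutoff are textbook simplifications, not the paper's filtered-PSD treatment.

```python
import numpy as np

d = 0.2        # deformable-mirror actuator pitch, m (assumed)
r0 = 0.15      # Fried parameter at the science wavelength, m (assumed)

# Classical fitting-error term: residual variance left below the DM cutoff,
# sigma^2 ~ 0.3 * (d/r0)^(5/3) rad^2 of phase.
sigma2 = 0.3 * (d / r0) ** (5.0 / 3.0)
wavelength = 1.65e-6                               # m (H band, assumed)
rms_nm = np.sqrt(sigma2) * wavelength / (2 * np.pi) * 1e9
print(f"fitting error ~ {rms_nm:.0f} nm rms")
```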
Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
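The adaptive loading idea can be sketched as a greedy bit-filling loop: repeatedly grant one more bit to the subcarrier that costs the least extra power at a fixed target error rate. The power table below is schematic and stands in for the paper's SP-QAM SER analysis.

```python
def required_power(bits, gain):
    """Schematic power needed to carry 2**bits-point QAM at a fixed error
    rate on a subcarrier with channel gain `gain` (textbook ~ (2^b - 1)/g)."""
    return (2 ** bits - 1) / gain

def greedy_loading(gains, total_power, max_bits=6):
    """Levin-Campello-style greedy bit loading under a power budget."""
    bits = [0] * len(gains)
    used = 0.0
    while True:
        # Incremental power cost of one more bit on each eligible subcarrier.
        costs = [(required_power(b + 1, g) - required_power(b, g), i)
                 for i, (b, g) in enumerate(zip(bits, gains)) if b < max_bits]
        if not costs:
            break
        dp, i = min(costs)
        if used + dp > total_power:
            break
        bits[i] += 1
        used += dp
    return bits

# Deep-faded subcarriers (low gain) naturally receive fewer bits.
print(greedy_loading([1.0, 0.5, 0.1, 0.02], total_power=50.0))
```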
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Suganuma, Masahiro; Niwa, Yoshito; Sako, Nobutada; Hatsutori, Yoichi; Tanaka, Takashi
2006-06-01
We explain the simulation tools in the JASMINE project (the JASMINE simulator). The JASMINE project stands at the stage where its basic design will be determined in a few years. It is therefore very important to simulate the data stream generated by the astrometric fields of JASMINE in order to support investigations into error budgets, sampling strategy, data compression, data analysis, scientific performance, etc. Of course, component simulations are needed, but total simulations that include all components from the observation target to the satellite system are also very important. We find that new software technologies, such as Object-Oriented (OO) methodologies, are ideal tools for the simulation system of JASMINE (the JASMINE simulator). In this article, we explain the framework of the JASMINE simulator.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
Cumulative rain-fade statistics are used by space communications engineers to establish transmitter power and receiver sensitivities for systems operating under various geometries, climates, and radio frequencies. Space-diversity performance criteria are also of interest. This work is a review examining the many elements involved in employing single nonattenuating-frequency radars to arrive at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain-attenuation modeling. Suggestions are made for improving present methods.
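One standard ingredient of such radar-based attenuation estimates is a power-law relation between specific attenuation and reflectivity, integrated along the path. The coefficients below are placeholders; real values depend on the frequency band and drop-size assumptions.

```python
import numpy as np

def path_attenuation_db(z_linear, dr_km, a=3.4e-4, b=0.72):
    """Two-way path-integrated attenuation (dB) from per-gate linear
    reflectivity, using a power-law k = a * Z^b in dB/km."""
    k = a * z_linear ** b                 # specific attenuation per gate, dB/km
    return 2.0 * np.sum(k * dr_km)

z = np.array([300.0, 1200.0, 5000.0])     # linear reflectivity per gate, mm^6/m^3
print(round(path_attenuation_db(z, dr_km=0.25), 3), "dB")
```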
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
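The mass-conserving fixers mentioned above can be as simple as a clip-and-rescale of the tracer field so its global integral matches the budget-implied target. A generic sketch, not the EAMv1 implementation:

```python
import numpy as np

def fix_global_mass(q, air_mass_weights, target_mass):
    """Clip negative concentrations, then rescale the field multiplicatively
    so the weighted global integral matches the budget-implied target."""
    q = np.clip(q, 0.0, None)                    # remove negative water
    current = np.sum(q * air_mass_weights)
    return q * (target_mass / current)

q = np.array([1.0e-3, -2.0e-5, 8.0e-4])          # specific humidity, kg/kg (toy)
w = np.array([1.0, 1.0, 2.0])                    # air-mass weights (toy)
q_fixed = fix_global_mass(q, w, target_mass=3.4e-3)
print(q_fixed, np.sum(q_fixed * w))              # integral now equals the target
```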
Dalton, Melinda S.; Aulenbach, Brent T.; Torak, Lynn J.
2004-01-01
Lake Seminole is a 37,600-acre impoundment formed at the confluence of the Flint and Chattahoochee Rivers along the Georgia-Florida State line. Outflow from Lake Seminole through Jim Woodruff Lock and Dam provides headwater to the Apalachicola River, which is a major supply of freshwater, nutrients, and detritus to ecosystems downstream. These rivers, together with their tributaries, are hydraulically connected to karst limestone units that constitute most of the Upper Floridan aquifer and to a chemically weathered residuum of undifferentiated overburden. The ground-water flow system near Lake Seminole consists of the Upper Floridan aquifer and undifferentiated overburden. The aquifer is confined below by low-permeability sediments of the Lisbon Formation and, generally, is semiconfined above by undifferentiated overburden. Ground-water flow within the Upper Floridan aquifer is unconfined or semiconfined and discharges at discrete points by springflow or diffuse leakage into streams and other surface-water bodies. The high degree of connectivity between the Upper Floridan aquifer and surface-water bodies is limited to the upper Eocene Ocala Limestone and younger units that are in contact with streams in the Lake Seminole area. The impoundment of Lake Seminole inundated natural stream channels and other low-lying areas near streams and raised the water-level altitude of the Upper Floridan aquifer near the lake to nearly that of the lake, about 77 feet. Surface-water inflow from the Chattahoochee and Flint Rivers and Spring Creek and outflow to the Apalachicola River through Jim Woodruff Lock and Dam dominate the water budget for Lake Seminole. About 81 percent of the total water-budget inflow consists of surface water; about 18 percent is ground water, and the remaining 1 percent is lake precipitation. Similarly, lake outflow consists of about 89 percent surface water, as flow to the Apalachicola River through Jim Woodruff Lock and Dam, about 4 percent ground water, and about 2 percent lake evaporation. Measurement error and uncertainty in flux calculations cause a flow imbalance of about 4 percent between inflow and outflow water-budget components. Most of this error can be attributed to errors in estimating ground-water discharge from the lake, which was calculated using a ground-water model calibrated to October 1986 conditions for the entire Apalachicola-Chattahoochee-Flint River Basin and not just the area around Lake Seminole. Evaporation rates were determined using the preferred, but mathematically complex, energy budget and five empirical equations: Priestley-Taylor, Penman, DeBruin-Keijman, Papadakis, and the Priestley-Taylor used by the Georgia Automated Environmental Monitoring Network. Empirical equations require a significant amount of data but are relatively easy to calculate and compare well to long-term average annual (April 2000-March 2001) pan evaporation, which is 65 inches. Calculated annual lake evaporation, for the study period, using the energy-budget method was 67.2 inches, which overestimated long-term average annual pan evaporation by 2.2 inches. The empirical equations did not compare well with the energy-budget method during the 18-month study period, with average differences in computed evaporation using each equation ranging from 8 to 26 percent. The empirical equations also compared poorly with long-term average annual pan evaporation, with average differences in evaporation ranging from 3 to 23 percent.
Energy budget and long-term average annual pan evaporation estimates did compare well, with only a 3-percent difference between estimates. Monthly evaporation estimates using all methods ranged from 0.7 to 9.5 inches and were lowest during December 2000 and highest during May 2000. Although the energy budget is generally the preferred method, the dominance of surface water in the Lake Seminole water budget makes the method inaccurate and difficult to use, because surface water makes up most of the inflow to and outflow from the lake.
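For readers who want to reproduce the empirical comparisons above, the Priestley-Taylor method has a standard closed form. The following is a minimal sketch, assuming the textbook FAO-56 constants (alpha = 1.26, gamma ~ 0.066 kPa/degC, lambda ~ 2.45 MJ/kg); none of the parameter values are taken from the report itself.

```python
import math

def priestley_taylor_evap(t_air_c, net_rad_mj, ground_flux_mj=0.0, alpha=1.26):
    """Daily open-water evaporation (mm/day) via the Priestley-Taylor equation.

    t_air_c        -- mean daily air temperature (deg C)
    net_rad_mj     -- net radiation (MJ m-2 day-1)
    ground_flux_mj -- heat flux into the water body (MJ m-2 day-1)
    alpha          -- Priestley-Taylor coefficient (1.26 is the standard value)
    """
    gamma = 0.066  # psychrometric constant (kPa / deg C), near sea level
    lam = 2.45     # latent heat of vaporization (MJ / kg)
    # Slope of the saturation vapor pressure curve (kPa / deg C), FAO-56 form.
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    # 1 MJ m-2 day-1 of latent heat evaporates (1 / lam) mm of water per day.
    return alpha * (delta / (delta + gamma)) * (net_rad_mj - ground_flux_mj) / lam

# Example: a warm month with ~15 MJ m-2 day-1 of net radiation.
print(f"{priestley_taylor_evap(25.0, 15.0):.1f} mm/day")
```

The other empirical equations named above differ mainly in which meteorological inputs they require and how they weight the radiation term.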
NASA Astrophysics Data System (ADS)
Evrard, Rebecca L.; Ding, Yifeng
2018-01-01
Clouds play a large role in the Earth's global energy budget, but the impact of cirrus clouds is still widely questioned and researched. Cirrus clouds reside high in the atmosphere and, owing to cold temperatures, are composed of ice crystals. Gaining a better understanding of ice-cloud optical properties and the distribution of cirrus clouds helps explain the contribution of cirrus clouds to the global energy budget. Using radiative transfer models (RTMs), accurate simulations of cirrus clouds can enhance understanding of the global energy budget as well as improve the use of global climate models. A newer, faster RTM, the visible infrared imaging radiometer suite (VIIRS) fast radiative transfer model (VFRTM), is compared to a rigorous RTM, the line-by-line radiative transfer model coupled with the discrete ordinates radiative transfer program. By comparing brightness temperature (BT) simulations from both models, the accuracy of the VFRTM can be assessed. This study shows a root-mean-square error of <0.2 K in the BT difference using reanalysis data for atmospheric profiles and updated ice-particle habit information from the moderate-resolution imaging spectroradiometer collection 6. At a higher resolution, the simulated results of the VFRTM are compared to VIIRS observations, resulting in a <1.5% error from the VFRTM for all cases. The VFRTM is thus validated and is an appropriate RTM for global cloud retrievals.
Addressing and Presenting Quality of Satellite Data via Web-Based Services
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory; Lynnes, C.; Ahmad, S.; Fox, P.; Zednik, S.; West, P.
2011-01-01
With the recent attention to climate change and proliferation of remote-sensing data utilization, climate model and various environmental monitoring and protection applications have begun to increasingly rely on satellite measurements. Research application users seek good quality satellite data, with uncertainties and biases provided for each data point. However, different communities address remote sensing quality issues rather inconsistently and differently. We describe our attempt to systematically characterize, capture, and provision quality and uncertainty information as it applies to the NASA MODIS Aerosol Optical Depth data product. In particular, we note the semantic differences in quality/bias/uncertainty at the pixel, granule, product, and record levels. We outline the various factors contributing to the uncertainty or error budget. Web-based science analysis and processing tools allow users to access, analyze, and visualize data while relieving them of directly managing complex data-processing operations. These tools provide value by streamlining the data analysis process, but usually shield users from details of the data processing steps, algorithm assumptions, caveats, etc. Correct interpretation of the final analysis requires user understanding of how data has been generated and processed and what potential biases, anomalies, or errors may have been introduced. By providing services that leverage data lineage provenance and domain-expertise, expert systems can be built to aid the user in understanding data sources, processing, and the suitability for use of products generated by the tools. We describe our experiences developing a semantic, provenance-aware, expert-knowledge advisory system applied to the NASA Giovanni web-based Earth science data analysis tool as part of the ESTO AIST-funded Multi-sensor Data Synergy Advisor project.
NASA Astrophysics Data System (ADS)
Aziz, Wan Noor Hayatie Wan Abdul; Aziz, Rossidah Wan Abdul; Shuib, Adibah; Razi, Nor Faezah Mohamad
2014-06-01
Budget planning enables an organization to set priorities towards achieving certain goals and to identify the highest priorities to be accomplished with the available funds, thus allowing allocation of resources according to the set priorities and constraints. On the other hand, budget execution and monitoring enables allocated funds or resources to be utilized as planned. Our study investigates the relationship between budget allocation and budget utilization of faculties in a public university in Malaysia. The focus is on the university's operations management financial allocation and utilization based on five categories: emolument expenditure, academic or services and supplies expenditure, maintenance expenditure, student expenditure, and other expenditure. The analysis of financial allocation and utilization is performed based on yearly quarters. Data collected include three years of budget allocation and budget utilization performance for a sample of ten selected faculties of a public university in Malaysia. Results show a positive and significant correlation between quarterly budget allocation and quarterly budget utilization. This study found that emoluments make the largest contribution to the total allocation and total utilization for all quarters. This paper presents findings based on the statistical analyses conducted, including descriptive statistics and correlation analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
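The seed dependence described above is the usual signature of a Monte Carlo tolerance study. Below is a minimal sketch of that style of analysis; the quadratic performance model and the error levels are invented stand-ins for FELEX, not values from the paper.

```python
import numpy as np

def relative_gain(field_errors, sensitivity=40.0):
    """Toy stand-in for a full FEL simulation: performance degrades
    quadratically with the RMS of the imposed field errors."""
    rms = float(np.sqrt(np.mean(field_errors**2)))
    return max(0.0, 1.0 - sensitivity * rms**2)

error_levels = [0.01, 0.02, 0.05]  # fractional field-error levels (assumed)
n_seeds = 8                        # repeat each level with several seeds

for level in error_levels:
    gains = []
    for seed in range(n_seeds):
        rng = np.random.default_rng(seed)
        draw = rng.normal(0.0, level, size=100)  # one random field-error draw
        gains.append(relative_gain(draw))
    # The spread across seeds is the stochastic scatter the paper displays.
    print(f"level {level:.2f}: gain {np.mean(gains):.3f} +/- {np.std(gains):.3f}")
```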
Input-output analysis and the hospital budgeting process.
Cleverly, W O
1975-01-01
Two hospital budget systems, a conventional budget and an input-output budget, are compared to determine how they affect management decisions in pricing, output, planning, and cost control. Analysis of data from a 210-bed not-for-profit hospital indicates that adoption of the input-output budget could cause substantial changes in posted hospital rates in individual departments but probably would have no impact on hospital output determination. The input-output approach promises to be a more accurate system for cost control and planning because, unlike the conventional approach, it generates objective signals for investigating variances of expenses from budgeted levels. PMID:1205865
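For context, input-output budgeting in the Leontief sense solves x = (I - A)^(-1) d, where A holds interdepartmental service coefficients and d is the final (patient) demand. A minimal sketch with an invented two-department hospital follows; the coefficients are illustrative only, not from the study.

```python
import numpy as np

# A[i, j]: units of department i's output consumed per unit of department j's
# output (e.g., housekeeping and lab services feeding patient-care units).
A = np.array([[0.00, 0.15],
              [0.10, 0.05]])
d = np.array([1000.0, 400.0])  # final demand attributed directly to patients

# Total required output, including indirect interdepartmental demand.
x = np.linalg.solve(np.eye(2) - A, d)
print(x)  # gross output each department must budget for (~[1080.2, 534.8])
```

Comparing budgeted gross output x against actual expense drivers is what yields the objective variance signals the abstract mentions.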
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... be submitted to the Office of Management and Budget (OMB) for review and approval. Proposed... size to evaluate the measurement error structure of the diet and physical activity assessment... on cancer research, diagnosis, prevention and treatment. Dietary and physical activity data will be...
Semiannual Report to Congress, No. 49. April 1, 2004-September 30, 2004
ERIC Educational Resources Information Center
US Department of Education, 2004
2004-01-01
This report highlights significant work of the U.S. Department of Education's Office of Inspector General for the 6-month period ending September 30, 2004. Sections include: Activities and Accomplishments; Elimination of Fraud and Error in Student Aid Programs; Budget and Performance Integration; Financial Management; Expanded Electronic…
Resource-Bounded Information Gathering for Correlation Clustering
2007-01-01
5], budgeted learning, [4], and active learning, for example, [3]. 3 Acknowledgments We thank Avrim Blum, Katrina Ligett, Chris Pal, Sridhar...2007 3. N. Roy, A. McCallum, Toward Optimal Active Learning through Sampling Estimation of Error Reduction, Proc. of 18th ICML, 2001 4. A. Kapoor, R
7 CFR 2.501 - Director, Office of Budget and Program Analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... authority are made by the Chief Financial Officer to the Director, Office of Budget and Program Analysis: (1... 7 Agriculture 1 2011-01-01 2011-01-01 false Director, Office of Budget and Program Analysis. 2.501... OF AGRICULTURE AND GENERAL OFFICERS OF THE DEPARTMENT Delegations of Authority by the Chief Financial...
Constraining the mass–richness relationship of redMaPPer clusters with angular clustering
Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...
2016-08-04
The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
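The quoted 7 per cent statistical and 19 per cent total errors are consistent with a dominant systematic term added in quadrature. The arithmetic, made explicit (the independence assumption is ours):

```python
import math

stat = 0.07                          # statistical error on the amplitude
total = 0.19                         # quoted total error
sys = math.sqrt(total**2 - stat**2)  # implied systematic term, if independent
print(f"implied systematic error: {sys:.3f}")  # ~0.177: systematics dominate
```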
High-quality two-nucleon potentials up to fifth order of the chiral expansion
NASA Astrophysics Data System (ADS)
Entem, D. R.; Machleidt, R.; Nosyk, Y.
2017-08-01
We present NN potentials through five orders of chiral effective field theory ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied in all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate πN low-energy constants (LECs) as determined in the Roy-Steiner equations analysis by Hoferichter, Ruiz de Elvira, and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The NN potentials are fit to the world NN data below the pion-production threshold, as available in 2016. The potential of the highest order (N4LO) reproduces the world NN data with the outstanding χ2/datum of 1.15, which is the highest precision ever accomplished for any chiral NN potential to date. The NN potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order by order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is nonlocal and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converge to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.
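The χ2/datum figure of merit used above is standard; a minimal sketch of its construction, with placeholder arrays standing in for tabulated NN scattering observables:

```python
import numpy as np

def chi2_per_datum(observed, predicted, sigma):
    """chi^2 / datum: mean squared, error-weighted residual of a fit."""
    residuals = (observed - predicted) / sigma
    return np.sum(residuals**2) / observed.size

# Placeholder arrays standing in for a table of scattering observables.
obs = np.array([10.2, 8.7, 7.9, 6.5])
pred = np.array([10.0, 8.9, 7.7, 6.6])
err = np.array([0.2, 0.2, 0.2, 0.2])
print(f"chi2/datum = {chi2_per_datum(obs, pred, err):.2f}")  # ~1 means a good fit
```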
Consistent, high-quality two-nucleon potentials up to fifth order of the chiral expansion
NASA Astrophysics Data System (ADS)
Machleidt, R.
2018-02-01
We present N N potentials through five orders of chiral effective field theory ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied in all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate πN low-energy constants (LECs) as determined in the Roy-Steiner equations analysis by Hoferichter, Ruiz de Elvira and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The N N potentials are fit to the world N N data below the pion-production threshold, as available in 2016. The potential of the highest order (N4LO) reproduces the world N N data with the outstanding χ2/datum of 1.15, which is the highest precision ever accomplished for any chiral N N potential to date. The N N potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order by order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is non-local and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converge to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.
Soil Carbon Budget During Establishment of Short Rotation Woody Crops
NASA Astrophysics Data System (ADS)
Coleman, M. D.
2003-12-01
Carbon budgets were monitored following forest harvest and during re-establishment of short rotation woody crops. Soil CO2 efflux was monitored using infrared gas analyzer methods, fine root production was estimated with minirhizotrons, above ground litter inputs were trapped, coarse root inputs were estimated with developed allometric relationships, and soil carbon pools were measured in loblolly pine and cottonwood plantations. Our carbon budget allows evaluation of errors, as well as quantifying pools and fluxes in developing stands during non-steady-state conditions. Soil CO2 efflux was larger than the combined inputs from aboveground litter fall and root production. Fine-root production increased during stand development; however, mortality was not yet equivalent to production, showing the belowground carbon budget was not yet in equilibrium and root carbon standing crop was accruing. Belowground production was greater in cottonwood than pine, but the level of pine soil CO2 efflux was equal to or greater than that of cottonwood, indicating heterotrophic respiration was higher for pine. Comparison of unaccounted efflux with soil organic carbon changes provides verification of loss or accrual.
Cost-effectiveness of the stream-gaging program in Maryland, Delaware, and the District of Columbia
Carpenter, David H.; James, R.W.; Gillen, D.F.
1987-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maryland, Delaware, and the District of Columbia. Data uses and funding sources were identified for 99 continuously operated stream gages in Maryland, Delaware, and the District of Columbia. The current operation of the program requires a budget of $465,260/year. The average standard error of estimation of streamflow records is 11.8%. It is shown that this overall level of accuracy at the 99 sites could be maintained with a budget of $461,000, if resources were redistributed among the gages. (USGS)
Saha, Amartya K.; Moses, Christopher S.; Price, Rene M.; Engel, Victor; Smith, Thomas J.; Anderson, Gordon
2012-01-01
Water budget parameters are estimated for Shark River Slough (SRS), the main drainage within Everglades National Park (ENP), from 2002 to 2008. Inputs to the water budget include surface water inflows and precipitation, while outputs consist of evapotranspiration, discharge to the Gulf of Mexico, and seepage losses due to municipal wellfield extraction. The daily change in volume of SRS is equated to the difference between inputs and outputs, yielding a residual term consisting of component errors and net groundwater exchange. Results indicate significant net groundwater discharge to the SRS, peaking in June and positively correlated with surface-water salinity at the mangrove ecotone, with a 1-month lag. Precipitation, the largest input to the SRS, is offset by ET (the largest output), thereby highlighting the importance of increasing fresh water inflows into ENP for maintaining conditions in terrestrial, estuarine, and marine ecosystems of South Florida.
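The residual term described above is daily budget closure; a minimal sketch with illustrative numbers (variable names, units, and values are ours, not from the study):

```python
def daily_residual(d_volume, inflow, precip, et, discharge, seepage):
    """Residual = net groundwater exchange + component errors (all terms in
    consistent units, e.g. m^3/day). A positive residual suggests net
    groundwater discharge into the slough."""
    return d_volume - (inflow + precip - et - discharge - seepage)

# Example day (illustrative numbers only):
print(daily_residual(d_volume=2.0e5, inflow=1.5e6, precip=3.0e5,
                     et=4.0e5, discharge=1.1e6, seepage=5.0e4))
```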
NASA Astrophysics Data System (ADS)
Sturtevant, John L.; Liubich, Vlad; Gupta, Rachit
2016-04-01
Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target for a single design layer. Strictly speaking, this quantity is not directly measurable in the fab. What is of vital importance are the relative edge placement errors between different design layers, and in the era of multipatterning, between the constituent mask sublayers for a single design layer. The critical dimensions (CD) and overlay between two layers can be measured in the fab, and there has always been a strong emphasis on control of overlay between design layers. The progress in this realm has been remarkable, accelerated in part at least by the proliferation of multipatterning, which reduces the available overlay budget by introducing a coupling of overlay and CD errors for the target layer. Computational lithography makes possible the full-chip assessment of two-layer edge to edge distances and two-layer contact overlap area. We will investigate examples of via-metal model-based analysis of CD and overlay errors. We will investigate both single patterning and double patterning. For single patterning, we show the advantage of contour-to-contour simulation over contour-to-target simulation, and how the addition of aberrations in the optical models can provide a more realistic CD-overlay process window (PW) for edge placement errors. For double patterning, the interaction of 4-layer CD and overlay errors is very complex, but we illustrate that not only can full-chip verification identify potential two-layer hotspots, the optical proximity correction engine can also act to mitigate such hotspots and enlarge the joint CD-overlay PW.
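A common first-order bookkeeping for two-layer edge placement combines overlay with half of each layer's CD variation in quadrature. A minimal sketch of that rule follows; it is a standard approximation, not the paper's full model-based analysis, and the numbers are invented:

```python
import math

def epe_rss(overlay_3s, cd_3s_layer_a, cd_3s_layer_b):
    """Two-layer edge placement error, combining overlay with half of each
    layer's CD variation in quadrature (all inputs as 3-sigma values, nm)."""
    return math.sqrt(overlay_3s**2 + (cd_3s_layer_a / 2)**2 + (cd_3s_layer_b / 2)**2)

# Example: 3 nm overlay, 2 nm via CD control, 2.5 nm metal CD control.
print(f"EPE (3-sigma) = {epe_rss(3.0, 2.0, 2.5):.2f} nm")  # ~3.40 nm
```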
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.
2015-12-02
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P
2015-12-14
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
Evaluation of Uncertainty in Precipitation Datasets for New Mexico, USA
NASA Astrophysics Data System (ADS)
Besha, A. A.; Steele, C. M.; Fernald, A.
2014-12-01
Climate change, population growth and other factors are endangering water availability and sustainability in semiarid/arid areas, particularly in the southwestern United States. Wide spatial and temporal coverage of precipitation measurements is key for regional water budget analysis and hydrological operations, which are themselves valuable tools for water resource planning and management. Rain gauge measurements are usually reliable and accurate at a point. They measure rainfall continuously, but spatial sampling is limited. Ground based radar and satellite remotely sensed precipitation have wide spatial and temporal coverage. However, these measurements are indirect and subject to errors because of equipment, meteorological variability, the heterogeneity of the land surface itself, and lack of regular recording. This study seeks to understand precipitation uncertainty and, in doing so, to lessen uncertainty propagation into hydrological applications and operations. We reviewed, compared and evaluated the TRMM (Tropical Rainfall Measuring Mission) precipitation products, NOAA's (National Oceanic and Atmospheric Administration) Global Precipitation Climatology Centre (GPCC) monthly precipitation dataset, PRISM (Parameter elevation Regression on Independent Slopes Model) data and data from individual climate stations including Cooperative Observer Program (COOP), Remote Automated Weather Stations (RAWS), Soil Climate Analysis Network (SCAN) and Snowpack Telemetry (SNOTEL) stations. Though not yet finalized, this study finds that the uncertainty within precipitation datasets is influenced by regional topography, season, climate and precipitation rate. Ongoing work aims to further evaluate precipitation datasets based on the relative influence of these phenomena so that we can identify the optimum datasets for input to statewide water budget analysis.
Quantifying Diurnal Cloud Radiative Effects by Cloud Type in the Tropical Western Pacific
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burleyson, Casey D.; Long, Charles N.; Comstock, Jennifer M.
2015-06-01
Cloud radiative effects are examined using long-term datasets collected at the three Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facilities in the tropical western Pacific. We quantify the surface radiation budget, cloud populations, and cloud radiative effects by partitioning the data by cloud type, time of day, and as a function of large scale modes of variability such as El Niño Southern Oscillation (ENSO) phase and wet/dry seasons at Darwin. The novel facet of our analysis is that we break aggregate cloud radiative effects down by cloud type across the diurnal cycle. The Nauru cloud populations and subsequently the surface radiation budget are strongly impacted by ENSO variability whereas the cloud populations over Manus only shift slightly in response to changes in ENSO phase. The Darwin site exhibits large seasonal monsoon related variations. We show that while deeper convective clouds have a strong conditional influence on the radiation reaching the surface, their limited frequency reduces their aggregate radiative impact. The largest source of shortwave cloud radiative effects at all three sites comes from low clouds. We use the observations to demonstrate that potential model biases in the amplitude of the diurnal cycle and mean cloud frequency would lead to larger errors in the surface energy budget compared to biases in the timing of the diurnal cycle of cloud frequency. Our results provide solid benchmarks to evaluate model simulations of cloud radiative effects in the tropics.
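The cloud radiative effect (CRE) quantities discussed above are conventionally differences between all-sky and clear-sky fluxes, and the aggregate effect weights each cloud type's conditional effect by its occurrence frequency. A minimal sketch; the frequencies and flux values are invented purely to illustrate why frequent low clouds dominate:

```python
import numpy as np

def surface_cre(all_sky_net, clear_sky_net):
    """Surface cloud radiative effect (W m-2): all-sky minus clear-sky net flux."""
    return all_sky_net - clear_sky_net

# Conditional CRE while a given cloud type is overhead (illustrative fluxes):
low_cre = surface_cre(all_sky_net=160.0, clear_sky_net=220.0)   # -60 W m-2
deep_cre = surface_cre(all_sky_net=20.0, clear_sky_net=220.0)   # -200 W m-2

# Aggregate CRE weights the conditional effect by occurrence frequency, which
# is why frequent low clouds can outweigh rarer deep convection.
freq = np.array([0.30, 0.05])                 # low cloud vs deep convection
contributions = freq * np.array([low_cre, deep_cre])
print(contributions)                          # [-18., -10.]: low clouds dominate
```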
Thompson, Ryan F.
2002-01-01
A wetland was constructed in the Skunk Creek flood plain near Lyons in southeast South Dakota to mitigate for wetland areas that were filled during construction of a municipal golf course for the city of Sioux Falls. A water-rights permit was obtained to allow the city to pump water from Skunk Creek into the wetland during times when the wetland would be dry. The amount of water seeping through the wetland and recharging the underlying Skunk Creek aquifer was not known. The U.S. Geological Survey, in cooperation with the city of Sioux Falls, conducted a study during 1997-2000 to evaluate recharge to the Skunk Creek aquifer from the constructed wetland. Three methods were used to estimate recharge from the wetland to the aquifer: (1) analysis of the rate of water-level decline during periods of no inflow; (2) flow-net analysis; and (3) analysis of the hydrologic budget. The hydrologic budget also was used to evaluate the efficiency of recharge from the wetland to the aquifer. Recharge rates estimated by analysis of shut-off events ranged from 0.21 to 0.82 foot per day, but these estimates may be influenced by possible errors in volume calculations. Recharge rates determined by flow-net analysis were calculated using selected values of hydraulic conductivity and ranged from 566,000 gallons per day using a hydraulic conductivity of 0.5 foot per day to 1,684,000 gallons per day using a hydraulic conductivity of 1.0 foot per day. Recharge rates from the hydrologic budget varied from 0.74 to 0.85 foot per day, and averaged 0.79 foot per day. The amount of water lost to evapotranspiration at the study wetland is very small compared to the amount of water seeping from the wetland into the aquifer. Based on the hydrologic budget, the average recharge efficiency was estimated as 97.9 percent, which indicates that recharging the Skunk Creek aquifer by pumping water into the study wetland is highly efficient. Because the Skunk Creek aquifer is composed of sand and gravel, the 'recharge mound' is less distinct than might be found in an aquifer composed of finer materials. However, water levels recorded by piezometers in and around the wetland show a higher water table than during periods when the wetland was dry. The largest increases in water level occur between the wetland channel and Skunk Creek. The results of this study demonstrate that artificially recharged wetlands can be useful in recharging underlying aquifers and increasing water levels in these aquifers.
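The flow-net estimates above scale with hydraulic conductivity through Darcy's law, Q = KiA. A minimal sketch of that computation, including the cubic-feet-to-gallons conversion; the gradient and cross-sectional area are invented for illustration and are not taken from the report:

```python
GAL_PER_CUBIC_FOOT = 7.48052

def darcy_discharge_gpd(k_ft_per_day, gradient, area_sqft):
    """Seepage through a flow-net section: Q = K * i * A (gallons per day)."""
    q_cfd = k_ft_per_day * gradient * area_sqft  # cubic feet per day
    return q_cfd * GAL_PER_CUBIC_FOOT

# Illustrative section: K = 0.5 ft/day, gradient = 0.005, area = 3.0e7 ft^2.
print(f"{darcy_discharge_gpd(0.5, 0.005, 3.0e7):,.0f} gal/day")
```

Because Q is linear in K, doubling the assumed hydraulic conductivity doubles the estimated recharge, which is why the report brackets the estimate with two K values.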
NASA Technical Reports Server (NTRS)
Stretchberry, D. M.; Hein, G. F.
1972-01-01
The general concepts of costing, budgeting, and benefit-cost ratio and cost-effectiveness analysis are discussed. The three common methods of costing are presented. Budgeting distributions are discussed. The use of discounting procedures is outlined. Benefit-cost ratio and cost-effectiveness analysis are defined, and their current application to NASA planning is pointed out. Specific practices and techniques are discussed, and actual costing and budgeting procedures are outlined. The recommended method of calculating benefit-cost ratios is described. A standardized method of cost-effectiveness analysis and long-range planning is also discussed.
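A worked example of the discounting and benefit-cost ratio machinery described above; the discount rate and cash flows are invented for illustration, not the report's recommended values:

```python
def discounted(stream, rate):
    """Present value of a yearly cash-flow stream: year t is divided by (1+r)^t."""
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(stream))

def benefit_cost_ratio(benefits, costs, rate=0.07):
    return discounted(benefits, rate) / discounted(costs, rate)

# Five-year program: heavy early costs, benefits arriving later ($M per year).
benefits = [0.0, 10.0, 40.0, 60.0, 60.0]
costs = [50.0, 30.0, 10.0, 5.0, 5.0]
print(f"BCR = {benefit_cost_ratio(benefits, costs):.2f}")  # > 1 favors the program
```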
The Fiscal Year 1993 Bush Budget: Still Not Tackling the Nation's Problems.
ERIC Educational Resources Information Center
Greenstein, Robert; Leonard, Paul A.
On January 29, 1992, the Bush Administration unveiled its fiscal year 1993 budget. An examination of the budget reveals a substantial gap between the administration's rhetoric concerning the budget and what the budget actually contains. An analysis reveals a budget that continues to give priority to defense over domestic spending, one that favors…
Cluster mislocation in kinematic Sunyaev-Zel'dovich (kSZ) effect extraction
NASA Astrophysics Data System (ADS)
Calafut, Victoria Rose; Bean, Rachel; Yu, Byeonghee
2018-01-01
We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kSZ pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05
GPS-Based Precision Orbit Determination for a New Era of Altimeter Satellites: Jason-1 and ICESat
NASA Technical Reports Server (NTRS)
Luthcke, Scott B.; Rowlands, David D.; Lemoine, Frank G.; Zelensky, Nikita P.; Williams, Teresa A.
2003-01-01
Accurate positioning of the satellite center of mass is necessary in meeting an altimeter mission's science goals. The fundamental science observation is an altimeter-derived topographic height. Errors in positioning the satellite's center of mass directly impact this fundamental observation. Therefore, orbit error is a critical component in the error budget of altimeter satellites. With the launch of the Jason-1 radar altimeter (Dec. 2001) and the ICESat laser altimeter (Jan. 2003) a new era of satellite altimetry has begun. Both missions pose several challenges for precision orbit determination (POD). The Jason-1 radial orbit accuracy goal is 1 cm, while ICESat (600 km), at a much lower altitude than Jason-1 (1300 km), has a radial orbit accuracy requirement of less than 5 cm. Fortunately, Jason-1 and ICESat POD can rely on near-continuous tracking data from the dual frequency codeless BlackJack GPS receiver and Satellite Laser Ranging. Analysis of current GPS-based solution performance indicates the 1-cm radial orbit accuracy goal is being met for Jason-1, while radial orbit accuracy for ICESat is well below the 5-cm mission requirement. A brief overview of the GPS precision orbit determination methodology and results for both Jason-1 and ICESat are presented.
Jason-2 systematic error analysis in the GPS derived orbits
NASA Astrophysics Data System (ADS)
Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.
2011-12-01
Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors arising from the adoption of station coordinates can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS vs our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only JASON-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced dynamic orbits. Reduced dynamic versus dynamic orbit differences are used to characterize the remaining force model error and TRF instability. At first, we quantify the effect of a North/South displacement of the tracking reference points for each of the three techniques. We then compare these results to the study of Morel and Willis (2005) and Cerri et al. (2010). We extend the analysis to the most recent Jason-2 cycles. We evaluate the GPS vs SLR & DORIS orbits produced using GEODYN.
Impact of Orbit Position Errors on Future Satellite Gravity Models
NASA Astrophysics Data System (ADS)
Encarnacao, J.; Ditmar, P.; Klees, R.
2015-12-01
We present the results of a study of the impact of orbit positioning noise (OPN) caused by incomplete knowledge of the Earth's gravity field on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into ll-SST data, here computed as averaged inter-satellite accelerations projected onto the Line of Sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF) as they produce a different dominant orientation of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e. no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those in case of the PF. This means that any implementation of the CF will most likely produce data with relatively low quality since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.
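The ll-SST observable used in this study is the inter-satellite acceleration difference projected onto the line-of-sight unit vector. A minimal sketch with placeholder vectors (illustrative values only):

```python
import numpy as np

def los_acceleration(r1, r2, a1, a2):
    """Differential acceleration of a satellite pair projected onto their
    line-of-sight (LoS) unit vector: the basic ll-SST observable."""
    e_los = (r2 - r1) / np.linalg.norm(r2 - r1)
    return np.dot(a2 - a1, e_los)

# Placeholder positions (m) and accelerations (m/s^2) for a trailing pair,
# where the LoS is mainly along-track, as the abstract describes.
r1 = np.array([7.0e6, 0.0, 0.0])
r2 = np.array([7.0e6, 2.0e5, 0.0])
a1 = np.array([-8.1000, 0.0000, 0.0])
a2 = np.array([-8.0998, -0.0002, 0.0])
print(los_acceleration(r1, r2, a1, a2))  # only the along-LoS component survives
```

The formation dependence discussed above enters through this projection: orbit position noise is amplified by whichever gravity-gradient components couple into the LoS direction.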
Camera system considerations for geomorphic applications of SfM photogrammetry
Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John
2017-01-01
The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating accuracies of four SfM datasets conducted over multiple years of a gravel bed river floodplain using independent ground check points with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
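Since nominal ground sample distance is identified above as the best predictor of SfM accuracy, it helps to make that quantity concrete. A minimal sketch from similar triangles; the camera parameters are invented for illustration:

```python
def ground_sample_distance_cm(pixel_pitch_um, focal_length_mm, distance_m):
    """Nominal GSD: the ground footprint of one pixel, by similar triangles."""
    return pixel_pitch_um * 1e-6 * distance_m / (focal_length_mm * 1e-3) * 100.0

# Illustrative survey: 4.9-micron pixels, 24 mm lens, imaging from 100 m.
print(f"GSD = {ground_sample_distance_cm(4.9, 24.0, 100.0):.1f} cm")  # ~2.0 cm
```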
Analysis of the U.S. geological survey streamgaging network
Scott, A.G.
1987-01-01
This paper summarizes the results from the first 3 years of a 5-year cost-effectiveness study of the U.S. Geological Survey streamgaging network. The objective of the study is to define and document the most cost-effective means of furnishing streamflow information. In the first step of this study, data uses were identified for 3,493 continuous-record stations currently being operated in 32 States. In the second step, evaluation of alternative methods of providing streamflow information, flow-routing models, and regression models were developed for estimating daily flows at 251 stations of the 3,493 stations analyzed. In the third step of the analysis, relationships were developed between the accuracy of the streamflow records and the operating budget. The weighted standard error for all stations, with current operating procedures, was 19.9 percent. By altering field activities, as determined by the analyses, this could be reduced to 17.8 percent. The existing streamgaging networks in four Districts were further analyzed to determine the impacts that satellite telemetry would have on the cost effectiveness. Satellite telemetry was not found to be cost effective on the basis of hydrologic data collection alone, given present cost of equipment and operation.
A note on the depreciation of the societal perspective in economic evaluation of health care.
Johannesson, M
1995-07-01
It is common in cost-effectiveness analyses of health care to only include health care costs, with the argument that some fictive 'health care budget' should be used to maximize the health effects. This paper provides a criticism of the 'health care budget' approach to cost-effectiveness analysis of health care. It is argued that the approach is ad hoc and lacks theoretical foundation. The approach is also inconsistent with using a fixed budget as the decision rule for cost-effectiveness analysis. That is the case unless only costs that fall into a single annual actual budget are included in the analysis, which would mean that any cost paid by the patients should be excluded as well as any future cost changes and all costs that fall on other budgets. Furthermore the prices facing the budget holder should be used, rather than opportunity costs. It is concluded that the 'health care budget' perspective should be abandoned and the societal perspective reinstated in economic evaluation of health care.
A Bayesian approach to multisource forest area estimation
Andrew O. Finley
2007-01-01
In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...
Cost-efficient selection of a marker panel in genetic studies
Jamie S. Sanderlin; Nicole Lazar; Michael J. Conroy; Jaxk Reeves
2012-01-01
Genetic techniques are frequently used to sample and monitor wildlife populations. The goal of these studies is to maximize the ability to distinguish individuals for various genetic inference applications, a process which is often complicated by genotyping error. However, wildlife studies usually have fixed budgets, which limit the number of genetic markers available...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Submission for OMB Review... Office of Management and Budget (OMB) a request to review and approve the information collection listed... measurement error structure of the diet and physical activity assessment instruments and the heterogeneity of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
... and by educating the public, especially young people, about tobacco products and the dangers their use... identified. When FDA receives tobacco-specific adverse event and product problem information, it will use the... quality problem, or product use error occurs. This risk identification process is the first necessary step...
ERIC Educational Resources Information Center
Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.
2014-01-01
Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…
Matthews, Grant
2004-12-01
The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. Using an ensemble of randomly generated white-noise test scenes together with the measured telescope transfer function significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
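The deconvolution step described above is commonly stabilized in the Fourier domain. The following is a generic Wiener-style sketch, not the authors' custom diffraction-aberration telescope model; the Gaussian PSF, scene, and SNR are invented for illustration:

```python
import numpy as np

def wiener_deconvolve(measured, psf, snr=100.0):
    """Recover an underlying response from a measurement blurred by `psf`
    (origin-centered), regularizing with an assumed signal-to-noise ratio."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(measured)
    # Wiener filter: H* / (|H|^2 + 1/SNR) damps frequencies where H is weak.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * W))

# Toy usage: blur a synthetic detector response with a Gaussian optical PSF.
y, x = np.mgrid[-16:16, -16:16]
psf = np.exp(-(x**2 + y**2) / 8.0)
psf = np.fft.ifftshift(psf / psf.sum())       # center the kernel at the origin
truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(blurred, psf)
print(float(np.abs(recovered - truth).max()))  # residual reflects 1/SNR damping
```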
Precision VUV Spectro-Polarimetry for Solar Chromospheric Magnetic Field Measurements
NASA Astrophysics Data System (ADS)
Ishikawa, R.; Bando, T.; Hara, H.; Ishikawa, S.; Kano, R.; Kubo, M.; Katsukawa, Y.; Kobiki, T.; Narukage, N.; Suematsu, Y.; Tsuneta, S.; Aoki, K.; Miyagawa, K.; Ichimoto, K.; Kobayashi, K.; Auchère, F.; Clasp Team
2014-10-01
The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a VUV spectro-polarimeter optimized for measuring the linear polarization of the Lyman-α line (121.6 nm), to be launched in 2015 on a NASA sounding rocket (Ishikawa et al. 2011; Narukage et al. 2011; Kano et al. 2012; Kobayashi et al. 2012). With this experiment, we aim to (1) observe the scattering polarization in the Lyman-α line, (2) detect the Hanle effect, and (3) assess the magnetic fields in the upper chromosphere and transition region for the first time. The polarization measurement error consists of scale error δa (error in the amplitude of linear polarization), azimuth error Δφ (error in the direction of linear polarization), and spurious polarization ɛ (false linear polarization signals). The error ɛ should be suppressed below 0.1% in the Lyman-α core (121.567 nm ±0.02 nm), and 0.5% in the Lyman-α wing (121.567 nm ±0.05 nm), based on our scientific requirements shown in Table 2 of Kubo et al. (2014). From scientific justification, we adopt Δφ<2° and δa<10% as the instrument requirements. The spectro-polarimeter features a continuously rotating MgF2 waveplate (Ishikawa et al. 2013), a dual-beam spectrograph with a spherical grating working also as a beam splitter, and two polarization analyzers (Bridou et al. 2011), which are mounted at 90 degrees from each other to measure two orthogonal polarizations simultaneously. For the optical layout of the CLASP instrument, see Figure 3 in Kubo et al. (2014). Considering the continuous rotation of the half-waveplate, the modulation efficiency is 0.64 both for Stokes Q and U. All the raw data are returned and demodulation (successive addition or subtraction of images) is done on the ground. We control the CLASP polarization performance in the following three steps. First, we evaluate the throughput and polarization properties of each optical component in the Lyman-α line, using the Ultraviolet Synchrotron ORbital Radiation Facility (UVSOR) at the Institute for Molecular Science. The second step is polarization calibration of the spectro-polarimeter after alignment. Since the spurious polarization caused by the axisymmetric telescope is estimated to be negligibly small because of the symmetry (Ishikawa et al. 2014), we do not perform end-to-end polarization calibration. As the final step, before the scientific observation near the limb, we make a short observation at the Sun center and verify the polarization sensitivity, because the scattering polarization is expected to be close to zero at the Sun center due to symmetric geometry. In order to clarify whether we will be able to achieve the required polarization sensitivity and accuracy via these steps, we exercise a polarization error budget, investigating all possible causes of polarization error and their magnitudes, not all of which are necessarily verified by the polarization calibration. Based on these error budgets, we conclude that a polarization sensitivity of 0.1% in the line core, δa<10%, and Δφ<2° can be achieved when combined with the polarization calibration of the spectro-polarimeter and the onboard calibration at the Sun center (refer to Ishikawa et al. 2014 for details). We are currently conducting verification tests of the flight components and development of the UV light source for the polarization calibration. From spring 2014, we will begin the integration, alignment, and calibration. We will update the error budgets throughout the course of these tests.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
In order to establish transmitter power and receiver sensitivity levels at frequencies above 10 GHz, the designers of earth-satellite telecommunication systems are interested in cumulative rain fade statistics at variable path orientations, elevation angles, climatological regions, and frequencies. They are also interested in establishing optimum space diversity performance parameters. This work examines the many elements involved in the employment of single non-attenuating frequency radars for arriving at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain attenuation modeling. Suggestions are made for improving present methods.
Exploring cosmic origins with CORE: Mitigation of systematic effects
NASA Astrophysics Data System (ADS)
Natoli, P.; Ashdown, M.; Banerji, R.; Borrill, J.; Buzzelli, A.; de Gasperis, G.; Delabrouille, J.; Hivon, E.; Molinari, D.; Patanchon, G.; Polastri, L.; Tomasi, M.; Bouchet, F. R.; Henrot-Versillé, S.; Hoang, D. T.; Keskitalo, R.; Kiiveri, K.; Kisner, T.; Lindholm, V.; McCarthy, D.; Piacentini, F.; Perdereau, O.; Polenta, G.; Tristram, M.; Achucarro, A.; Ade, P.; Allison, R.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, M. G.; Challinor, A.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; de Bernardis, P.; De Zotti, G.; Di Valentino, E.; Diego, J.-M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Gruppuso, A.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Keihänen, E.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Liguori, M.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Mandolesi, N.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Monfardini, A.; Negrello, M.; Notari, A.; Pagano, L.; Paiella, A.; Paoletti, D.; Piat, M.; Pisano, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rossi, G.; Rubino-Martin, J.-A.; Salvati, L.; Signorelli, G.; Tartari, A.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Wallis, C.; Young, K.; Zannoni, M.
2018-04-01
We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate to the mission science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough not to hamper the scientific capability of the mission, nor significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms.
1998-06-01
Public Policy Analysis and Management Vol. 5 (Connecticut: JAI Press Inc., 1992) 20. 38 Thomas A Simcik, Reengineering the Navy Program Objective...Winston Inc., 1969. Olvey, Lee D. The Economics of National Security, Avery Publishing Group: 1984. Premchand, A., Government Budgeting And Expenditure... the current process is presented and analyzed against relevant theory on policy analysis, reengineering, and contemporary budgeting systems, in
Cluster mislocation in kinematic Sunyaev-Zel'dovich effect extraction
NASA Astrophysics Data System (ADS)
Calafut, Victoria; Bean, Rachel; Yu, Byeonghee
2017-12-01
We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kinematic Sunyaev-Zel'dovich (kSZ) pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05
Carvalho, Natalie; Jit, Mark; Cox, Sarah; Yoong, Joanne; Hutubessy, Raymond C W
2018-01-01
In low- and middle-income countries, budget impact is an important criterion for funding new interventions, particularly for large public health investments such as new vaccines. However, budget impact analyses remain less frequently conducted and less well researched than cost-effectiveness analyses. The objective of this study was to fill the gap in research on budget impact analyses by assessing (1) the quality of stand-alone budget impact analyses, and (2) the feasibility of extending cost-effectiveness analyses to capture budget impact. We developed a budget impact analysis checklist and scoring system for budget impact analyses, which we then adapted for cost-effectiveness analyses, based on current International Society for Pharmacoeconomics and Outcomes Research Task Force recommendations. We applied both budget impact analysis and cost-effectiveness analysis checklists and scoring systems to examine the extent to which existing economic evaluations provide sufficient evidence about budget impact to enable decision making. We used rotavirus vaccination as an illustrative case, one in which uptake in low- and middle-income countries has been limited despite demonstrated cost-effectiveness. A systematic literature review was conducted to identify economic evaluations of rotavirus vaccine in low- and middle-income countries published between January 2000 and February 2017. We critically appraised the quality of budget impact analyses, and assessed the extension of cost-effectiveness analyses to provide useful budget impact information. Six budget impact analyses and 60 cost-effectiveness analyses were identified. Budget impact analyses adhered to most International Society for Pharmacoeconomics and Outcomes Research recommendations, with key exceptions being provision of undiscounted financial streams for each budget period and model validation. Most cost-effectiveness analyses could not be extended to provide useful budget impact information; cost-effectiveness analyses also rarely presented undiscounted annual costs, or estimated financial streams during the first years of programme scale-up. Cost-effectiveness analyses vastly outnumber budget impact analyses of rotavirus vaccination, despite both being critical for policy decision making. Straightforward changes to the presentation of cost-effectiveness analysis results could facilitate their adaptation into budget impact analyses.
Wilkinson, S N; Dougall, C; Kinsey-Henderson, A E; Searle, R D; Ellis, R J; Bartley, R
2014-01-15
The use of river basin modelling to guide mitigation of non-point source pollution of wetlands, estuaries and coastal waters has become widespread. Assessing and simulating the impacts of alternate land use or climate scenarios on river washload requires modelling techniques that represent sediment sources and transport at the time scales of system response. Building on the mean-annual SedNet model, we propose a new D-SedNet model which constructs daily budgets of fine sediment sources, transport and deposition for each link in a river network. Erosion rates (hillslope, gully and streambank erosion) and fine sediment sinks (floodplains and reservoirs) are disaggregated from mean annual rates based on daily rainfall and runoff. The model is evaluated in the Burdekin basin in tropical Australia, where policy targets have been set for reducing sediment and nutrient loads to the Great Barrier Reef (GBR) lagoon from grazing and cropping land. D-SedNet predicted annual loads with similar performance to that of a sediment rating curve calibrated to monitored suspended sediment concentrations. Relative to a 22-year reference load time series at the basin outlet derived from a dynamic generalized additive model based on monitoring data, D-SedNet had a median absolute error of 68% compared with 112% for the rating curve. RMS error was slightly higher for D-SedNet than for the rating curve due to large relative errors on small loads in several drought years. This accuracy is similar to that of existing agricultural system models used in arable or humid environments. Predicted river loads were sensitive to ground vegetation cover. We conclude that the river network sediment budget model provides some capacity for predicting load time series independent of monitoring data in ungauged basins, and for evaluating the impact of land management on river sediment load time series, which is challenging across large regions in data-poor environments.
Budget impact analysis of trastuzumab in early breast cancer: a hospital district perspective.
Purmonen, Timo T; Auvinen, Päivi K; Martikainen, Janne A
2010-04-01
Adjuvant trastuzumab is widely used in HER2-positive (HER2+) early breast cancer, and despite its cost-effectiveness, it causes substantial costs for health care. The purpose of the study was to develop a tool for estimating the budget impact of new cancer treatments. With this tool, we were able to estimate the budget impact of adjuvant trastuzumab, as well as the probability of staying within a given budget constraint. The created model-based evaluation tool was used to explore the budget impact of trastuzumab in early breast cancer in a single Finnish hospital district with 250,000 inhabitants. The used model took into account the number of patients, HER2+ prevalence, length and cost of treatment, and the effectiveness of the therapy. Probabilistic sensitivity analysis and alternative case scenarios were performed to ensure the robustness of the results. Introduction of adjuvant trastuzumab caused substantial costs for a relatively small hospital district. In base-case analysis the 4-year net budget impact was 1.3 million euro. The trastuzumab acquisition costs were partially offset by the reduction in costs associated with the treatment of cancer recurrence and metastatic disease. Budget impact analyses provide important information about the overall economic impact of new treatments, and thus offer complementary information to cost-effectiveness analyses. Inclusion of treatment outcomes and probabilistic sensitivity analysis provides more realistic estimates of the net budget impact. The length of trastuzumab treatment has a strong effect on the budget impact.
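To make the arithmetic behind such a tool concrete, the following is a minimal deterministic sketch of a net budget impact calculation of the kind described above. The function and every input (incidence, HER2+ prevalence, course cost, offset assumptions) are illustrative placeholders, not the study's values.

```python
# Hypothetical sketch of a deterministic budget impact calculation; all
# numbers are illustrative placeholders, not the study's inputs.

def net_budget_impact(new_cases_per_year, her2_prevalence, cost_per_course,
                      recurrence_cost, recurrences_avoided_per_patient, years):
    """Net budget impact over a horizon: acquisition costs minus cost offsets."""
    treated = new_cases_per_year * her2_prevalence
    acquisition = treated * cost_per_course * years
    offsets = treated * recurrences_avoided_per_patient * recurrence_cost * years
    return acquisition - offsets

impact = net_budget_impact(
    new_cases_per_year=160,                # annual breast cancer cases in the district
    her2_prevalence=0.15,                  # share of HER2+ tumours
    cost_per_course=30_000.0,              # euro per adjuvant trastuzumab course
    recurrence_cost=45_000.0,              # euro per recurrence episode avoided
    recurrences_avoided_per_patient=0.05,  # treatment effectiveness assumption
    years=4,
)
print(f"4-year net budget impact: {impact / 1e6:.2f} million euro")
```

A probabilistic sensitivity analysis of the kind the study performs would replace these fixed inputs with draws from distributions and report the probability of staying within a given budget constraint.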
Deuterium target data for precision neutrino-nucleus cross sections
Meyer, Aaron S.; Betancourt, Minerba; Gran, Richard; ...
2016-06-23
Amplitudes derived from scattering data on elementary targets are basic inputs to neutrino-nucleus cross section predictions. A prominent example is the isovector axial nucleon form factor, F_A(q²), which controls charged current signal processes at accelerator-based neutrino oscillation experiments. Previous extractions of F_A from neutrino-deuteron scattering data rely on a dipole shape assumption that introduces an unquantified error. A new analysis of world data for neutrino-deuteron scattering is performed using a model-independent, and systematically improvable, representation of F_A. A complete error budget for the nucleon isovector axial radius leads to r_A² = 0.46(22) fm², with a much larger uncertainty than determined in the original analyses. The quasielastic neutrino-neutron cross section is determined as σ(ν_μ n → μ⁻p)|_{E_ν = 1 GeV} = 10.1(0.9) × 10⁻³⁹ cm². The propagation of nucleon-level constraints and uncertainties to nuclear cross sections is illustrated using MINERvA data and the GENIE event generator. Furthermore, these techniques can be readily extended to other amplitudes and processes.
77 FR 1743 - Discount Rates for Cost-Effectiveness Analysis of Federal Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-11
... OFFICE OF MANAGEMENT AND BUDGET Discount Rates for Cost-Effectiveness Analysis of Federal Programs AGENCY: Office of Management and Budget. ACTION: Revisions to Appendix C of OMB Circular A-94. SUMMARY: The Office of Management and Budget revised Circular A-94 in 1992. The revised Circular specified...
Achievable flatness in a large microwave power transmitting antenna
NASA Technical Reports Server (NTRS)
Ried, R. C.
1980-01-01
A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line of sight (LOS) and local slope distributions. An LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variation in material properties from part to part, particularly in the coefficient of thermal expansion, is more significant than the actual value.
Understanding error generation in fused deposition modeling
NASA Astrophysics Data System (ADS)
Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David
2015-03-01
Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied from a specified tolerance or bias error and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.
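A minimal sketch of the statistical step this abstract describes, under the assumption that outputs from a handful of input-perturbed CFD runs are treated as a small sample and bounded with the Student-t distribution; the run values are hypothetical stand-ins for CFD output.

```python
# Treat heat transfer coefficients from N input-perturbed CFD runs as a small
# sample and bound the uncertainty with the Student-t distribution.
# The run values below are hypothetical, not results from the paper.
import numpy as np
from scipy.stats import t

h_runs = np.array([102.1, 98.7, 101.4, 97.9, 100.3, 99.2])  # W/(m^2 K)

n = len(h_runs)
mean = h_runs.mean()
s = h_runs.std(ddof=1)              # sample standard deviation
t_crit = t.ppf(0.975, df=n - 1)     # 95% two-sided critical value
half_width = t_crit * s / np.sqrt(n)

print(f"h = {mean:.1f} +/- {half_width:.1f} W/(m^2 K) at 95% confidence")

# Ranking input importance, as the paper describes: sort the per-input
# differences |h_perturbed - h_baseline| from least to greatest.
```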
Economics of human performance and systems total ownership cost.
Onkham, Wilawan; Karwowski, Waldemar; Ahram, Tareq Z
2012-01-01
Financial costs of investing in people, which are associated with training, acquisition, recruiting, and resolving human errors, have a significant impact on total ownership costs. These costs can also contribute to inflated budgets and delayed schedules. The study of human performance economic assessment in the system acquisition process enhances the visibility of hidden cost drivers, which supports informed program management decisions. This paper presents a literature review of human total ownership cost (HTOC) and cost impacts on overall system performance. Economic value assessment models such as cost-benefit analysis, risk-cost tradeoff analysis, expected value of utility function analysis (EV), the growth readiness matrix, the multi-attribute utility technique, and multi-regression models were introduced to reflect the HTOC and human performance-technology tradeoffs in terms of dollar value. A human total ownership regression model is introduced to address the measurement of the contributing human performance cost components. Results from this study will increase understanding of the relevant cost drivers in the system acquisition process over the long term.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... FEDERAL TRADE COMMISSION [File No. 102 3094] Franklin Budget Car Sales, Inc.; Analysis of Proposed Consent Order To Aid Public Comment AGENCY: Federal Trade Commission. ACTION: Proposed Consent Agreement... from Franklin's Budget Car Sales, Inc., also doing business as Franklin Toyota/Scion (``Franklin Toyota...
Second-moment budgets in cloud topped boundary layers: A large-eddy simulation study
NASA Astrophysics Data System (ADS)
Heinze, Rieke; Mironov, Dmitrii; Raasch, Siegfried
2015-06-01
A detailed analysis of second-order moment budgets for cloud topped boundary layers (CTBLs) is performed using high-resolution large-eddy simulation (LES). Two CTBLs are simulated—one with trade wind shallow cumuli, and the other with nocturnal marine stratocumuli. Approximations to the ensemble-mean budgets of the Reynolds-stress components, of the fluxes of two quasi-conservative scalars, and of the scalar variances and covariance are computed by averaging the LES data over horizontal planes and over several hundred time steps. Importantly, the subgrid scale contributions to the budget terms are accounted for. Analysis of the LES-based second-moment budgets reveals, among other things, a paramount importance of the pressure scrambling terms in the Reynolds-stress and scalar-flux budgets. The pressure-strain correlation tends to evenly redistribute kinetic energy between the components, leading to the growth of horizontal-velocity variances at the expense of the vertical-velocity variance which is produced by buoyancy over most of both CTBLs. The pressure gradient-scalar covariances are the major sink terms in the budgets of scalar fluxes. The third-order transport proves to be of secondary importance in the scalar-flux budgets. However, it plays a key role in maintaining budgets of TKE and of the scalar variances and covariance. Results from the second-moment budget analysis suggest that the accuracy of description of the CTBL structure within the second-order closure framework strongly depends on the fidelity of parameterizations of the pressure scrambling terms in the flux budgets and of the third-order transport terms in the variance budgets.
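As a structural sketch of the horizontal-plane averaging used to form resolved second moments, the snippet below computes fluctuation products and a buoyancy production profile from stand-in 3-D fields; real LES output, averaging over several hundred time steps, and the subgrid-scale contributions the study emphasizes are all omitted.

```python
# Sketch of horizontal-plane averaging for resolved second moments; the
# random arrays are stand-ins for LES model output.
import numpy as np

nx, ny, nz = 64, 64, 32
rng = np.random.default_rng(0)
w = rng.standard_normal((nz, ny, nx))              # vertical velocity
theta = 300.0 + rng.standard_normal((nz, ny, nx))  # potential temperature

def plane_mean(f):
    """Average over the two horizontal axes, keeping dims for broadcasting."""
    return f.mean(axis=(1, 2), keepdims=True)

w_p = w - plane_mean(w)          # fluctuation about the plane mean
th_p = theta - plane_mean(theta)

flux_w_theta = plane_mean(w_p * th_p)[:, 0, 0]  # resolved <w'theta'> profile
var_w = plane_mean(w_p * w_p)[:, 0, 0]          # resolved <w'w'> profile

g, theta0 = 9.81, 300.0
buoyancy_production = (g / theta0) * flux_w_theta  # production term in the TKE budget
```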
van Asselt, Thea; Ramaekers, Bram; Corro Ramos, Isaac; Joore, Manuela; Al, Maiwenn; Lesman-Leegte, Ivonne; Postma, Maarten; Vemer, Pepijn; Feenstra, Talitha
2018-01-01
The costs of performing research are an important input in value of information (VOI) analyses but are difficult to assess. The aim of this study was to investigate the costs of research, serving two purposes: (1) estimating research costs for use in VOI analyses; and (2) developing a costing tool to support reviewers of grant proposals in assessing whether the proposed budget is realistic. For granted study proposals from the Netherlands Organization for Health Research and Development (ZonMw), type of study, potential cost drivers, proposed budget, and general characteristics were extracted. Regression analysis was conducted in an attempt to generate a 'predicted budget' for certain combinations of cost drivers, for implementation in the costing tool. Of 133 drug-related research grant proposals, 74 were included for complete data extraction. Because an association between cost drivers and budgets was not confirmed, we could not generate a predicted budget based on regression analysis, but only historic reference budgets given certain study characteristics. The costing tool was designed accordingly: given a set of selection criteria, it returns the range of budgets of comparable studies. This range can be used in VOI analysis to estimate whether the expected net benefit of sampling will be positive, and thus to decide on the net value of future research. The absence of an association between study characteristics and budgets may indicate inconsistencies in the budgeting or granting process. Nonetheless, the tool generates useful information on historical budgets, and the option to formally relate VOI to budgets. To our knowledge, this is the first attempt at creating such a tool, which can be complemented with new studies being granted, enlarging the underlying database and keeping estimates up to date.
Budgeting in Tertiary Educational Institutions: An Analysis.
ERIC Educational Resources Information Center
Shand, D. A.
1982-01-01
Administrators should examine the role and limitations of their budgeting systems and consider overemphasis on control of spending rather than efficiency or effectiveness, limitations on cash-based budgeting, need for information on operating costs, and the short range and fragmented nature of most budgets. (MSE)
NASA Astrophysics Data System (ADS)
Da Deppo, Vania; Poletto, Luca; Crescenzio, Giuseppe; Fineschi, Silvano; Antonucci, Ester; Naletto, Giampiero
2017-11-01
METIS, the Multi Element Telescope for Imaging and Spectroscopy, is the solar coronagraph foreseen for the ESA Solar Orbiter mission. METIS is conceived to image the solar corona from a near-Sun orbit in three different spectral bands: in the HeII EUV narrow band at 30.4 nm, in the HI UV narrow band at 121.6 nm, and in the polarized visible light band (590-650 nm). It also incorporates the capability of multi-slit spectroscopy of the corona in the UV/EUV range at different heliocentric heights. METIS is an externally occulted coronagraph which adopts an "inverted occulted" configuration. The inverted external occulter (IEO) is a small circular aperture at the METIS entrance; the Sun-disk light is rejected by a spherical mirror M0 back through the same aperture, while the coronal light is collected by two annular mirrors M1-M2 realizing a Gregorian telescope. To accommodate the spectroscopic capability, a portion of M2 is covered by a grating (i.e., approximately 1/8 of the solar corona will not be imaged). This paper presents the error budget analysis for this new concept coronagraph configuration, which incorporates three different sub-channels: the UV and EUV imaging sub-channel, in which the UV and EUV light paths share the detector and all of the optical elements except a filter; the polarimetric visible light sub-channel, which, after the telescope optics, has dedicated relay optics and a polarizing unit; and the spectroscopic sub-channel, which shares the filters and the detector with the UV-EUV imaging sub-channel but includes a grating instead of the secondary mirror. The tolerance analysis of such an instrument is quite complex: not only must the optical performance of the three sub-channels be maintained simultaneously, but the positions of M0 and of the occulters (IEO, internal occulter and Lyot stop), which guarantee the optimal disk light suppression, also have to be taken into account as tolerancing parameters. With the aim of ensuring that the scientific requirements are optimally fulfilled for all the sub-channels, the preliminary results of the manufacturing, alignment and stability tolerance analysis for the whole instrument will be described and discussed.
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of measurement data available to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux errors when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers behind it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux sets obtained for 2005 gives a good estimate of the magnitude of the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At continental scale, transport and modelling errors have larger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are underestimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.
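The core of such a variational inversion can be illustrated with a toy cost-function minimization. The sketch below assumes the standard Bayesian form J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx); the transport operator H is a random stand-in for a CTM, and all covariances are illustrative. Inflating R is one simple way to represent transport and modelling errors more properly.

```python
# Toy variational flux inversion: minimize the standard Bayesian cost
# J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx).
# H, xb, B, R, and the observations are tiny illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_flux, n_obs = 4, 20
H = rng.uniform(0, 1, (n_obs, n_flux))    # stand-in transport sensitivities
x_true = np.array([10.0, 5.0, 8.0, 3.0])  # "true" regional fluxes
xb = x_true * 1.2                         # biased prior
B = np.diag((0.3 * xb) ** 2)              # prior error covariance
R = np.diag(np.full(n_obs, 0.5 ** 2))     # obs + transport-model error covariance
y = H @ x_true + rng.normal(0, 0.5, n_obs)

Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)

def J(x):
    db, do = x - xb, y - H @ x
    return db @ Bi @ db + do @ Ri @ do

x_hat = minimize(J, xb).x
print("posterior fluxes:", np.round(x_hat, 2))
```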
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation above a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass over the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
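A sketch of the root-sum-square bookkeeping behind such an error budget, assuming independent error sources; the source names and centimetre values are placeholders, not the paper's numbers.

```python
# Root-sum-square combination of independent error sources; entries and
# magnitudes are illustrative placeholders.
import math

error_sources_cm = {
    "bias calibration": 5.0,
    "ionospheric correction": 4.0,
    "measurement noise (1 s smoothing)": 3.0,
    "station location vs mean sea level": 6.0,
}

total = math.sqrt(sum(v ** 2 for v in error_sources_cm.values()))
for name, v in sorted(error_sources_cm.items(), key=lambda kv: -kv[1]):
    print(f"{name:40s} {v:4.1f} cm")
print(f"{'RSS total':40s} {total:4.1f} cm")
```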
NASA Astrophysics Data System (ADS)
Ganesan, A.; Lunt, M. F.; Rigby, M. L.; Chatterjee, A.; Boesch, H.; Parker, R.; Prinn, R. G.; van der Schoot, M. V.; Krummel, P. B.; Tiwari, Y. K.; Mukai, H.; Machida, T.; Terao, Y.; Nomura, S.; Patra, P. K.
2015-12-01
We present an analysis of the regional methane (CH4) budget from South Asia, using new measurements and new modelling techniques. South Asia contains some of the largest anthropogenic CH4 sources in the world, mainly from rice agriculture and ruminants. However, emissions from this region have been highly uncertain largely due to insufficient constraints from atmospheric measurements. Compared to parts of the developed world, which have well-developed monitoring networks, South Asia is very under-sampled, particularly given its importance to the global CH4 budget. Over the past few years, data have been collected from a variety of surface sites around the region, ranging from in situ to flask-based sampling. We have used these data, in conjunction with column methane data from the GOSAT satellite, to quantify emissions at a regional scale. Using the Met Office's Lagrangian NAME model, we calculated sensitivities to surface fluxes at 12 km resolution, allowing us to simulate the high-resolution impacts of emissions on concentrations. In addition, we used a newly developed hierarchical Bayesian inverse estimation scheme to estimate regional fluxes over the period of 2012-2014 in addition to ancillary "hyper-parameters" that characterize uncertainties in the system. Through this novel approach, we have characterized the effect of "aggregation" errors, model uncertainties as well as the effects of correlated errors when using regional measurement networks. We have also assessed the effects of biases on the GOSAT CH4 retrievals, which has been made possible for the first time for this region through the expanded surface measurements. In this talk, we will discuss a) regional CH4 fluxes from South Asia, with a particular focus on the densely populated Indo-Gangetic Plains b) derived model uncertainties, including the effects of correlated errors c) the impacts of combining surface and satellite data for emissions estimation in regions where poor satellite validation exists and d) the challenges in estimating emissions for regions of the world with a sparse measurement network.
NASA Astrophysics Data System (ADS)
Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon
2018-03-01
All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5, with long-term observations mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budget, and large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm season bias is characterized by an overestimation of T2m and underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection and clear-sky shortwave absorption by water vapor and an underestimation in surface albedo. The bias in cloud is shown to contribute most to the radiation bias. The surface layer soil moisture impacts T2m through its control on evaporative fraction. The error in evaporative fraction is another important contributor to T2m. Similar sources of error are found in hindcasts from other Clouds Above the United States and Errors at the Surface studies. In Atmospheric Model Intercomparison Project simulations, biases in meridional wind velocity associated with the low-level jet and the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budget.
An audit of the global carbon budget: identifying and reducing sources of uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Tans, P. P.; Marland, G.; Stocker, B. D.
2012-12-01
Uncertainties in our carbon accounting practices may limit our ability to objectively verify emission reductions on regional scales. Furthermore, uncertainties in the global C budget must be reduced to benchmark Earth System Models that incorporate carbon-climate interactions. Here we present an audit of the global C budget in which we try to identify sources of uncertainty for the major terms in the global C budget. The atmospheric growth rate of CO2 has increased significantly over the last 50 years, while the uncertainty in calculating the global atmospheric growth rate has been reduced from 0.4 ppm/yr to 0.2 ppm/yr (95% confidence). Although we have greatly reduced global CO2 growth rate uncertainties, there remain regions, such as the Southern Hemisphere, Tropics and Arctic, where changes in regional sources/sinks will remain difficult to detect without additional observations. Increases in fossil fuel (FF) emissions are the primary factor driving the increase in global CO2 growth rate; however, our confidence in FF emission estimates has actually gone down. Based on a comparison of multiple estimates, FF emissions have increased from 2.45 ± 0.12 PgC/yr in 1959 to 9.40 ± 0.66 PgC/yr in 2010. Major sources of increasing FF emission uncertainty are increased emissions from emerging economies, such as China and India, as well as subtle differences in accounting practices. Lastly, we evaluate emission estimates from Land Use Change (LUC). Although relative errors in emission estimates from LUC are quite high (2 sigma ~ 50%), LUC emissions have remained fairly constant in recent decades. We evaluate the three commonly used approaches to estimating LUC emissions (bookkeeping, satellite imagery, and model simulations) to identify their main sources of error and their ability to detect net emissions from LUC.
NASA Astrophysics Data System (ADS)
Plazas, A. A.; Shapiro, C.; Kannawadi, A.; Mandelbaum, R.; Rhodes, J.; Smith, R.
2016-10-01
Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point-spread function (PSF) of the imaging system with extreme accuracy. Future WL missions—such as NASA's Wide-Field Infrared Survey Telescope (WFIRST)—will use a family of hybrid near-infrared complementary metal-oxide-semiconductor detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as reference stars that are used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size (ΔR/R) and the absolute error in the PSF ellipticity (Δe) as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce an error of ΔR/R = 1 × 10⁻² and Δe₂ = 1.75 × 10⁻³ in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias of ΔR/R and Δe in the H158 band to ~10% of the estimated WFIRST error budget, the quadratic NL model parameter β must be calibrated to ~1% and ~2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
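The nominal quadratic NL model and its effect on measured PSF size can be sketched as follows, using a Gaussian stand-in for the WFIRST PSF and a simple second-moment size estimator; the β value, flux level, and size metric are illustrative assumptions, not the paper's simulation setup.

```python
# Apply a quadratic nonlinearity Q_meas = Q - beta * Q^2 to a Gaussian
# stand-in PSF and measure the fractional size bias via second moments.
import numpy as np

n = 64
y, x = np.mgrid[:n, :n] - n / 2
sigma = 3.0
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf *= 5e4 / psf.max()        # bright-star signal in electrons (hypothetical)

beta = 1e-6                   # quadratic NL coefficient per electron (hypothetical)
psf_nl = psf - beta * psf**2  # measured charge under the NL model

def moment_size(img):
    """RMS radius per axis from centroid-corrected second moments."""
    tot = img.sum()
    mx, my = (img * x).sum() / tot, (img * y).sum() / tot
    return np.sqrt(((img * ((x - mx)**2 + (y - my)**2)).sum() / tot) / 2)

R0, R1 = moment_size(psf), moment_size(psf_nl)
print(f"fractional PSF size bias dR/R = {(R1 - R0) / R0:.2e}")
```

Because the NL suppresses the bright core more than the wings, the measured profile is flatter and the inferred size is biased high, which is the sense of the effect described above.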
A Children's Defense Budget: An Analysis of the FY 1987 Federal Budget and Children.
ERIC Educational Resources Information Center
Children's Defense Fund, Washington, DC.
This analysis of the implications for children of the FY 1987 Federal budget begins by criticizing the Reagan administration's policy on poor children and families and recommending needed action. Chapter 1 provides a rationale for investing in children and families. Specific attention is given to costs of child poverty, declining Federal help for…
NASA Technical Reports Server (NTRS)
Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh;
2016-01-01
Urban emissions of greenhouse gases (GHG) represent more than 70% of the global fossil fuel GHG emissions. Unless mitigation strategies are successfully implemented, the increase in urban GHG emissions is almost inevitable, as large metropolitan areas are projected to grow twice as fast as the world population in the coming 15 years. Monitoring these emissions becomes a critical need as their contribution to the global carbon budget increases rapidly. In this study, we developed the first comprehensive monitoring system of CO2 emissions at high resolution using a dense network of CO2 atmospheric measurements over the city of Indianapolis. The inversion system was evaluated over an 8-month period and showed an increase compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product, with a 20% increase in the total emissions over the area (from 4.5 to 5.7 Metric Megatons of Carbon +/- 0.23 Metric Megatons of Carbon). However, several key parameters of the inverse system need to be addressed to carefully characterize the spatial distribution of the emissions and the aggregated total emissions. We found that spatial structures in prior emission errors, mostly undetermined, significantly affect the spatial pattern in the inverse solution, as well as the carbon budget over the urban area. Several other parameters of the inversion were sufficiently constrained by additional observations, such as the characterization of the GHG boundary inflow and the introduction of hourly transport model errors estimated from the meteorological assimilation system. Finally, we estimated the uncertainties associated with remaining systematic errors and undetermined parameters using an ensemble of inversions. The total CO2 emissions for the Indianapolis urban area based on the ensemble mean and quartiles are 5.26-5.91 Metric Megatons of Carbon, i.e. a statistically significant difference compared to the prior total emissions of 4.1 to 4.5 Metric Megatons of Carbon. We therefore conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHG over the city, but additional information on prior emissions and their associated error structures is required if we are to determine the spatial structures of urban emissions at high resolution.
Budget Impact Analysis of Veterans Affairs Medical Foster Homes versus Community Living Centers.
Sutton, Bryce S; Pracht, Étienne; Williams, Arthur R; Alemi, Farrokh; Williams, Allison E; Levy, Cari
2017-02-01
The objectives were to determine whether and by what amounts the US Department of Veterans Affairs (VA) use of Medical Foster Homes (MFH) rather than Community Living Centers (CLC) reduced budget impacts to the VA. This was a retrospective, matched, case-control study of veterans residing in MFH or CLC in the VA health care system from 2008 to 2012. Administrative data sets, nearest neighbor matching, generalized linear models, and a secondary analysis were used to capture and analyze budget impacts by veterans who used MFH or CLC exclusively in 2008-2012. Controls of 1483 veterans in CLC were matched to 203 cases of veterans in MFH. Use of MFH instead of CLC reduced budget impacts to the VA by at least $2645 per veteran per month. A secondary analysis of the data using different matching criteria and statistical methods produced similar results, demonstrating the robustness of the estimates of budget impact. When the average out-of-pocket payments made by MFH residents, not made by CLC residents, were included in the analysis, the net reduction of budget impact ranged from $145 to $2814 per veteran per month or a savings of $1740 to $33,768 per veteran per year. Even though outpatient costs of MFH are higher, much of the reduced budget impact of MFH use arises from lower inpatient or hospital costs. Reduced budget impacts on the VA system indicate that expansion of the MFH program may be cost-effective. Implications for further research are suggested.
Budget Analysis: Review of the Governor's Proposed Budget, 1999-00.
ERIC Educational Resources Information Center
New York State Office of the Comptroller, Albany.
This report provides an overview of the 1999-2000 executive budget for New York State. The budget calls for $72.7 billion in all funds spending and proposes that a $1.8 billion surplus from the 1998-99 fiscal year be used to fill budget gaps in fiscal years 2000-01 and 2001-02. The report focuses on spending for education, health and social…
NASA Astrophysics Data System (ADS)
Pan, Ming; Troy, Tara; Sahoo, Alok; Sheffield, Justin; Wood, Eric
2010-05-01
Documentation of the water cycle and its evolution over time is a primary scientific goal of the Global Energy and Water Cycle Experiment (GEWEX) and fundamental to assessing global change impacts. In developed countries, observation systems that include in-situ, remote sensing and modeled data can provide long-term, consistent and generally high quality datasets of water cycle variables. The export of these technologies to less developed regions has been rare, but it is these regions where information on water availability and change is probably most needed in the face of regional environmental change due to climate, land use and water management. In these data sparse regions, in situ data alone are insufficient to develop a comprehensive picture of how the water cycle is changing, and strategies that merge in-situ, model and satellite observations within a framework that results in consistent water cycle records are essential. Such an approach is envisaged by the Global Earth Observation System of Systems (GEOSS), but has yet to be applied. The goal of this study is to quantify the variation and changes in the global water cycle over the past 50 years. We evaluate the global water cycle using a variety of independent large-scale datasets of hydrologic variables that are used to bridge the gap between sparse in-situ observations, including remote-sensing based retrievals, observation-forced hydrologic modeling, and weather model reanalyses. A data assimilation framework that blends these disparate sources of information together in a consistent fashion with attention to budget closure is applied to make best estimates of the global water cycle and its variation. The framework consists of a constrained Kalman filter applied to the water budget equation. With imperfect estimates of the water budget components, the equation additionally has an error residual term that is redistributed across the budget components using error statistics, which are estimated from the uncertainties among data products. The constrained Kalman filter treats the budget closure constraint as a perfect observation within the assimilation framework. Precipitation is estimated using gauge observations, reanalysis products, and remote sensing products below 50°N. Evapotranspiration is estimated in a number of ways: from the VIC land surface hydrologic model forced with a hybrid reanalysis-observation global forcing dataset, from remote sensing retrievals based on a suite of energy balance and process based models, and from an atmospheric water budget approach using reanalysis products for the atmospheric convergence and storage terms and our best estimate for precipitation. Terrestrial water storage changes, including surface and subsurface changes, are estimated using estimates from both VIC and the GRACE remote sensing retrievals. From these components, discharge can then be calculated as a residual of the water budget and compared with gauge observations to evaluate the closure of the water budget. Through the use of these largely independent data products, we estimate both the mean seasonal cycle of the water budget components and their uncertainties for a set of 20 large river basins across the globe.
We particularly focus on three regions of interest in global changes studies: the Northern Eurasian region which is experiencing rapid change in terrestrial processes; the Amazon which is a central part of the global water, energy and carbon budgets; and Africa, which is predicted to face some of the most critical challenges for water and food security in the coming decades.
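The closure step described above can be illustrated as a Kalman update in which the budget equation is a perfect observation: the non-closure residual of P - ET - Q - dS/dt is redistributed across the components in proportion to their error variances. All values below are illustrative.

```python
# Budget closure as a perfect observation in a Kalman-filter update: the
# residual of the water budget equation is redistributed across components
# in proportion to their error variances. Values are illustrative.
import numpy as np

x = np.array([5.0, 3.2, 1.4, 0.7])           # [P, ET, runoff, dS/dt] in mm/day
P_cov = np.diag([0.4, 0.6, 0.2, 0.3]) ** 2   # component error variances
h = np.array([1.0, -1.0, -1.0, -1.0])        # closure constraint: h @ x should be 0

residual = h @ x                             # budget non-closure
K = P_cov @ h / (h @ P_cov @ h)              # Kalman gain with zero observation noise
x_closed = x - K * residual                  # exact closure after the update

print("residual before:", residual)
print("closed budget  :", np.round(x_closed, 3),
      "-> residual:", round(h @ x_closed, 10))
```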
EXPRES: a next generation RV spectrograph in the search for earth-like worlds
NASA Astrophysics Data System (ADS)
Jurgenson, C.; Fischer, D.; McCracken, T.; Sawyer, D.; Szymkowiak, A.; Davis, A.; Muller, G.; Santoro, F.
2016-08-01
The EXtreme PREcision Spectrograph (EXPRES) is an optical fiber fed echelle instrument being designed and built at the Yale Exoplanet Laboratory to be installed on the 4.3-meter Discovery Channel Telescope operated by Lowell Observatory. The primary science driver for EXPRES is to detect Earth-like worlds around Sun-like stars. With this in mind, we are designing the spectrograph to have an instrumental precision of 15 cm/s so that the on-sky measurement precision (that includes modeling for RV noise from the star) can reach to better than 30 cm/s. This goal places challenging requirements on every aspect of the instrument development, including optomechanical design, environmental control, image stabilization, wavelength calibration, and data analysis. In this paper we describe our error budget, and instrument optomechanical design.
The Soil Sink for Nitrous Oxide: Trivial Amount but Challenging Question
NASA Astrophysics Data System (ADS)
Davidson, E. A.; Savage, K. E.; Sihi, D.
2015-12-01
Net uptake of atmospheric nitrous oxide (N2O) has been observed sporadically for many years. Such observations have often been discounted as measurement error or noise, but they were reported frequently enough to gain some acceptance as valid. The advent of fast response field instruments with good sensitivity and precision has permitted confirmation that some soils can be small sinks of N2O. With regards to "closing the global N2O budget" the soil sink is trivial, because it is smaller than the error terms of most other budget components. Although not important from a global budget perspective, the existence of a soil sink for atmospheric N2O presents a fascinating challenge for understanding the physical, chemical, and biological processes that explain the sink. Reduction of N2O by classical biological denitrification requires reducing conditions generally found in wet soil, and yet we have measured the N2O sink in well drained soils, where we also simultaneously measure a sink for atmospheric methane (CH4). Co-occurrence of N2O reduction and CH4 oxidation would require a broad range of microsite conditions within the soil, spanning high and low oxygen concentrations. Abiotic sinks for N2O or other biological processes that consume N2O could exist, but have not yet been identified. We are attempting to simulate processes of diffusion of N2O, CH4, and O2 from the atmosphere and within a soil profile to determine if classical biological N2O reduction and CH4 oxidation at rates consistent with measured fluxes are plausible.
The East Asian Atmospheric Water Cycle and Monsoon Circulation in the Met Office Unified Model
NASA Astrophysics Data System (ADS)
Rodríguez, José M.; Milton, Sean F.; Marzin, Charline
2017-10-01
In this study the low-level monsoon circulation and observed sources of moisture responsible for the maintenance and seasonal evolution of the East Asian monsoon are examined, studying the detailed water budget components. These observational estimates are contrasted with the Met Office Unified Model (MetUM) climate simulation performance in capturing the circulation and water cycle at a variety of model horizontal resolutions and in fully coupled ocean-atmosphere simulations. We study the role of large-scale circulation in determining the hydrological cycle by analyzing key systematic errors in the model simulations. MetUM climate simulations exhibit robust circulation errors, including a weakening of the summer west Pacific Subtropical High, which leads to an underestimation of the southwesterly monsoon flow over the region. Precipitation and implied diabatic heating biases in the South Asian monsoon and Maritime Continent region are shown, via nudging sensitivity experiments, to have an impact on the East Asian monsoon circulation. By inference, the improvement of these tropical biases with increased model horizontal resolution is hypothesized to be a factor in improvements seen over East Asia with increased resolution. Results from the annual cycle of the hydrological budget components in five domains show a good agreement between MetUM simulations and ERA-Interim reanalysis in northern and Tibetan domains. In simulations, the contribution from moisture convergence is larger than in reanalysis, and they display less precipitation recycling over land. The errors are closely linked to monsoon circulation biases.
[HAS budget impact analysis guidelines: A new decision-making tool].
Ghabri, Salah; Poullié, Anne-Isabelle; Autin, Erwan; Josselin, Jean-Michel
2017-10-02
Budget impact analysis (BIA) provides short- and medium-term estimates of changes in budgets and resources resulting from the adoption of new health interventions. The objective of this article is to present the main messages of the newly developed French National Authority for Health (HAS) guidelines on budget impact analysis: issues, recommendations and perspectives. The HAS guidelines development process was based on data derived from a literature review on BIA (search dates: January 2000 to June 2016), an HAS retrospective investigation, a public consultation, international expert advice, and approval from the HAS Board and the Economic and Public Health Evaluation Committee. Based on its research findings, HAS developed its first BIA guidelines, which include recommendations on the following topics: BIA definition, perspective, populations, time horizon, compared scenarios, budget impact models, costing, discounting, choice of clinical data, reporting of results and uncertainty analysis. The HAS BIA guidelines are expected to enhance the usefulness of BIA as an essential part of a comprehensive economic assessment of healthcare interventions, which itself includes cost-effectiveness analysis and equity of access to healthcare.
Radiometric Spacecraft Tracking for Deep Space Navigation
NASA Technical Reports Server (NTRS)
Lanyi, Gabor E.; Border, James S.; Shin, Dong K.
2008-01-01
Interplanetary spacecraft navigation relies on three types of terrestrial tracking observables. 1) Ranging measures the distance between the observing site and the probe. 2) The line-of-sight velocity of the probe is inferred from the Doppler shift, by measuring the frequency shift of the received signal with respect to the unshifted frequency. 3) Differential angular coordinates of the probe with respect to natural radio sources are nominally obtained via the differential delay technique of ΔDOR (Delta Differential One-way Ranging). The accuracy of spacecraft coordinate determination depends on the measurement uncertainties associated with each of these three techniques. We evaluate the corresponding sources of error and present a detailed error budget.
Decoding small surface codes with feedforward neural networks
NASA Astrophysics Data System (ADS)
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
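A toy version of the decoding-as-classification idea, using a distance-3 bit-flip repetition code as a self-contained stand-in for the surface code and scikit-learn's MLPClassifier as the feedforward network; sampling errors and learning the most likely correction per syndrome loosely mirrors how such decoders are trained.

```python
# Decoding as classification: a feedforward network maps syndromes to
# correction classes for a distance-3 bit-flip repetition code (a simple
# stand-in for the surface code, keeping the sketch self-contained).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Parity checks Z1Z2 and Z2Z3 of the three-qubit repetition code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

rng = np.random.default_rng(0)
p = 0.1                                    # physical bit-flip probability
errors = (rng.random((5000, 3)) < p).astype(int)
syndromes = errors @ H.T % 2               # measured stabilizer outcomes
# Correction class 0 = identity, 1..3 = flip that qubit; multi-qubit errors
# get noisy labels, as happens when training on sampled error data.
labels = np.array([0 if e.sum() == 0 else int(np.argmax(e)) + 1 for e in errors])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(syndromes, labels)
# Accuracy is capped below 1 because degenerate errors share a syndrome;
# the network learns the most likely correction per syndrome.
print("training accuracy:", clf.score(syndromes, labels))
```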
ERIC Educational Resources Information Center
Ogden, Daniel M., Jr.
1978-01-01
Suggests that the most practical budgeting system for most managers is a formalized combination of incremental and zero-based analysis because little can be learned about most programs from an annual zero-based budget. (Author/IRT)
X-band uplink ground systems development: Part 2
NASA Technical Reports Server (NTRS)
Johns, C. E.
1987-01-01
The prototype X-band exciter testing has been completed. Stability and single-sideband phase noise measurements have been made on the X-band exciter signal (7.145-7.235 GHz) and on the coherent X- and S-band receiver test signals (8.4-8.5 GHz and 2.29-2.3 GHz) generated within the exciter equipment. Outputs are well within error budgets.
NASA Astrophysics Data System (ADS)
Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.
2014-06-01
This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.; Rokni, Mohammad
1990-01-01
The testing and comparison of two Extended Kalman Filters (EKFs) developed for the Earth Radiation Budget Satellite (ERBS) is described. One EKF updates the attitude quaternion using a four component additive error quaternion. This technique is compared to that of a second EKF, which uses a multiplicative error quaternion. A brief development of the multiplicative algorithm is included. The mathematical development of the additive EKF was presented in the 1989 Flight Mechanics/Estimation Theory Symposium along with some preliminary testing results using real spacecraft data. A summary of the additive EKF algorithm is included. The convergence properties, singularity problems, and normalization techniques of the two filters are addressed. Both filters are also compared to those from the ERBS operational ground support software, which uses a batch differential correction algorithm to estimate attitude and gyro biases. Sensitivity studies are performed on the estimation of sensor calibration states. The potential application of the EKF for real time and non-real time ground attitude determination and sensor calibration for future missions such as the Gamma Ray Observatory (GRO) and the Small Explorer Mission (SMEX) is also presented.
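The two update conventions being compared can be sketched side by side: an additive four-component error quaternion followed by re-normalization versus a multiplicative small-rotation correction. The scalar-last convention and the small-angle values are assumptions made for illustration.

```python
# Additive vs multiplicative quaternion error updates, scalar-last
# convention [x, y, z, w]; dtheta is a small-angle correction from an EKF.
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions in scalar-last convention."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
        aw*bw - ax*bx - ay*by - az*bz,
    ])

q = np.array([0.0, 0.0, 0.0, 1.0])        # current attitude estimate (identity)
dtheta = np.array([1e-3, -2e-3, 0.5e-3])  # small-angle correction (illustrative)

# Additive: treat the 4-component error quaternion as a direct increment,
# then re-normalize to restore unit length.
q_add = q + np.array([*(dtheta / 2), 0.0])
q_add /= np.linalg.norm(q_add)

# Multiplicative: compose a small rotation with the current estimate.
dq = np.array([*(dtheta / 2), 1.0])
dq /= np.linalg.norm(dq)
q_mul = quat_mult(dq, q)

print("additive      :", q_add)
print("multiplicative:", q_mul)           # agree to first order in dtheta
```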
NFIRAOS in 2015: engineering for future integration of complex subsystems
NASA Astrophysics Data System (ADS)
Atwood, Jenny; Andersen, David; Byrnes, Peter; Densmore, Adam; Fitzsimmons, Joeleff; Herriot, Glen; Hill, Alexis
2016-07-01
The Narrow Field InfraRed Adaptive Optics System (NFIRAOS) will be the first-light facility Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). NFIRAOS will be able to host three science instruments that can take advantage of this high performance system. NRC Herzberg is leading the design effort for this critical TMT subsystem. As part of the final design phase of NFIRAOS, we have identified multiple subsystems to be sub-contracted to Canadian industry. The scope of work for each subcontract is guided by the NFIRAOS Work Breakdown Structure (WBS) and is divided into two phases: the completion of the final design and the fabrication, assembly and delivery of the final product. Integration of the subsystems at NRC will require a detailed understanding of the interfaces between the subsystems, and this work has begun by defining the interface physical characteristics, stability, local coordinate systems, and alignment features. In order to maintain our stringent performance requirements, the interface parameters for each subsystem are captured in multiple performance budgets, which allow a bottom-up error estimate. In this paper we discuss our approach for defining the interfaces in a consistent manner and present an example error budget that is influenced by multiple subsystems.
1999-03-01
Budget (Oficina Central de Presupuesto [OCEPRE]), which is the presidential agency with overall responsibility to formulate the national budget...Budget (Oficina Central de Presupuesto, OCEPRE), and they receive special treatment in the Venezuelan budgetary process. The OCEPRE is the...the Central Office of Budget (Oficina Central de Presupuesto, OCEPRE). This occurs when funds for weapons acquisitions come from the ordinary budget
NASA Astrophysics Data System (ADS)
Taylor, Thomas E.; L'Ecuyer, Tristan; Slusser, James; Stephens, Graeme; Krotkov, Nick; Davis, John; Goering, Christian
2005-08-01
Extensive sensitivity and error characteristics of a recently developed optimal estimation retrieval algorithm, which simultaneously determines aerosol optical depth (AOD), aerosol single scatter albedo (SSA) and total ozone column (TOC) from ultraviolet irradiances, are described. The algorithm inverts measured diffuse and direct irradiances at 7 channels in the UV spectral range obtained from the United States Department of Agriculture's (USDA) UV-B Monitoring and Research Program's (UVMRP) network of 33 ground-based UV-MFRSR instruments to produce aerosol optical properties and TOC at all seven wavelengths. Sensitivity studies of the Tropospheric Ultraviolet/Visible (TUV) radiative transfer model performed for various operating modes (Delta-Eddington versus n-stream Discrete Ordinate) over domains of AOD, SSA, TOC, asymmetry parameter and surface albedo show that the solutions are well constrained. Realistic input error budgets and the diagnostic and error outputs from the retrieval are analyzed to demonstrate the atmospheric conditions under which the retrieval provides useful and significant results. After optimizing the algorithm for the USDA site in Panther Junction, Texas, the retrieval algorithm was run on a cloud-screened set of irradiance measurements for the month of May 2003. Comparisons to independently derived AODs are favorable, with root mean square (RMS) differences of about 3% to 7% at 300 nm and less than 1% at 368 nm on May 12 and 22, 2003. This retrieval method will be used to build an aerosol climatology and provide ground-truthing of satellite measurements by running it operationally on the USDA UV network database.
Traveltime budgets and mobility in urban areas
DOT National Transportation Integrated Search
1974-05-01
The study tests by empirical comparative analysis the concept that tripmakers have a stable daily traveltime budget and discusses the implication of such a budget to transportation modeling techniques and the evaluation of alternative transportation ...
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
NASA Astrophysics Data System (ADS)
Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng
2006-12-01
This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, cancels completely the dependency among bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
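A toy version of the rate-budget allocation: brute-force search for the data/parity split that maximizes expected decodable data under i.i.d. packet loss, with a simplified all-or-nothing recovery rule standing in for the scalable packets and optimal shaping of the paper.

```python
# Brute-force split of a fixed packet budget between data and parity packets,
# maximizing expected decodable data under i.i.d. loss. The all-or-nothing
# recovery rule is a simplification of the scalable scheme described above.
from math import comb

def recover_prob(n, k, p):
    """P(at least k of n packets arrive) with i.i.d. loss probability p."""
    return sum(comb(n, r) * (1 - p)**r * p**(n - r) for r in range(k, n + 1))

N, p = 20, 0.1  # total packet budget and packet loss rate (illustrative)
best = max(range(1, N + 1), key=lambda k: k * recover_prob(N, k, p))
for k in (best - 1, best, best + 1):
    exp_useful = k * recover_prob(N, k, p)
    print(f"data={k:2d} parity={N - k:2d} expected useful packets ~ {exp_useful:.2f}")
```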
Author Correction: Emission budgets and pathways consistent with limiting warming to 1.5 °C
NASA Astrophysics Data System (ADS)
Millar, Richard J.; Fuglestvedt, Jan S.; Friedlingstein, Pierre; Rogelj, Joeri; Grubb, Michael J.; Matthews, H. Damon; Skeie, Ragnhild B.; Forster, Piers M.; Frame, David J.; Allen, Myles R.
2018-06-01
In the version of this Article originally published, a coding error resulted in the erroneous inclusion of a subset of RCP4.5 and RCP8.5 simulations in the sets used for RCP2.6 and RCP6, respectively, leading to an incorrect depiction of the data of the latter two sets in Fig. 1b and RCP2.6 in Table 2. This coding error has now been corrected. The graphic and quantitative changes in the corrected Fig. 1b and Table 2 are contrasted with the originally published display items below. The core conclusions of the paper are not affected, but some numerical values and statements have also been updated as a result; these are listed below. All these errors have now been corrected in the online versions of this Article.
The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration
NASA Technical Reports Server (NTRS)
Marathay, Arvind S.; Shiefman, Joe
1996-01-01
This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase are different from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms each consisting of an outer mirror, an inner mirror, a delay line (made up of two movable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).
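A minimal Monte Carlo sketch of this kind of tolerance analysis, propagating assumed arm-intensity imbalance and optical-path errors into visibility and fringe-phase errors and comparing them to the 1% and 0.063 rad limits (the perturbation levels are illustrative, not the report's):

```python
import numpy as np

rng = np.random.default_rng(42)
V_ideal, n = 0.5, 100_000

# Assumed (illustrative) instrument perturbations, not the report's values
dphi  = rng.normal(0.0, 0.02, n)    # rad, static differential OPD error between arms
ratio = rng.normal(1.0, 0.02, n)    # arm intensity ratio I2/I1 (ideal: 1)
smear = rng.normal(0.0, 0.05, n)    # rad, per-exposure OPD smear amplitude

# Two-beam fringe: intensity imbalance scales visibility, OPD smear washes the
# fringes out, and the static differential phase shifts the measured fringe phase
V_meas   = (2 * np.sqrt(ratio) / (1 + ratio)) * np.exp(-smear**2 / 2) * V_ideal
phi_meas = dphi

print(f"visibility error (rms, fractional): {np.std((V_meas - V_ideal) / V_ideal):.3%}")
print(f"visibility bias (mean, fractional): {np.mean((V_meas - V_ideal) / V_ideal):.3%}")
print(f"phase error (rms): {np.std(phi_meas):.4f} rad  (tolerance: 0.063 rad)")
```

Inverting this kind of forward propagation, finding the largest perturbations that keep both outputs inside their limits, is what produces an instrument tolerance budget.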
Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin
2018-03-01
In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images of a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.
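As a sketch of how an EPE budget of this kind is commonly assembled, independent contributors can be combined in quadrature: overlay, half of the global and local CD errors (a CD error splits across two edges), and the OPC residual. The numbers below are assumptions for illustration, not the paper's budget:

```python
import math

# Illustrative 3-sigma contributors in nm (assumed values, not the paper's data)
overlay_3s    = 2.5   # scatterometer-measured in-device overlay
global_cdu_3s = 1.0   # global CD uniformity
local_cdu_3s  = 2.0   # local CDU / stochastics from massive e-beam CD metrology
opc_err       = 1.5   # OPC model residual, treated here as one more RSS term

epe_3s = math.sqrt(overlay_3s**2
                   + (global_cdu_3s / 2)**2
                   + (local_cdu_3s / 2)**2
                   + opc_err**2)
print(f"combined edge placement error (3-sigma): {epe_3s:.2f} nm")
```

The budget makes the trade-off explicit: once local CDU dominates, tightening overlay alone buys little, which motivates the holistic co-optimization the paper describes.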
An Analysis of the President’s 2015 Budget
2014-04-01
President’s tax proposals that were prepared by the staff of the Joint Committee on Taxation (JCT). In conjunction with analyzing the President’s budget...more details about the President’s tax proposals, see Joint Committee on Taxation, Estimated Budget Effects of the Revenue Provisions Contained in...in CBO’s Estimate of the President’s Budget (Billions of dollars) Sources: Congressional Budget Office; staff of the Joint Committee on Taxation
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
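The scaling argument here can be made concrete: statistical (matched-filter) errors shrink roughly as 1/SNR while waveform-model bias stays fixed, so a crossover SNR separates the two regimes. A toy sketch with assumed values:

```python
import numpy as np

# Statistical errors scale like 1/SNR; systematic waveform bias does not.
sigma_ref, snr_ref = 0.10, 10.0    # assumed 10% statistical error at SNR = 10
bias = 0.02                        # assumed 2% systematic waveform bias

snr = np.logspace(0.5, 3, 6)       # SNR from ~3 (ground) to 1000 (space)
stat = sigma_ref * snr_ref / snr
for s, e in zip(snr, stat):
    regime = "bias-dominated" if bias > e else "statistics-dominated"
    print(f"SNR {s:7.1f}: statistical {e:.4f} vs bias {bias:.4f} -> {regime}")
```

With these assumed numbers the crossover sits near SNR of 50, mirroring the qualitative picture in the abstract: ground-based detections stay statistics-dominated, space-based ones do not.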
NASA Astrophysics Data System (ADS)
Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.
2011-12-01
Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for its use in TRMM combined products, water budget studies, and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort was made to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
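A minimal sketch of the contingency-table statistics used in such satellite-versus-gauge comparisons (probability of detection, false-alarm ratio, critical success index), applied to synthetic rain fields:

```python
import numpy as np

def contingency_stats(sat, ref, threshold=0.1):
    """Detection skill of satellite QPE vs. a ground reference (mm/h fields)."""
    hit  = np.sum((sat >= threshold) & (ref >= threshold))
    fa   = np.sum((sat >= threshold) & (ref <  threshold))   # false alarms
    miss = np.sum((sat <  threshold) & (ref >= threshold))   # missed detections
    pod = hit / (hit + miss)          # probability of detection
    far = fa / (hit + fa)             # false-alarm ratio
    csi = hit / (hit + fa + miss)     # critical success index
    return pod, far, csi

rng = np.random.default_rng(0)
wet = rng.random(10_000) < 0.3
ref = np.where(wet, rng.gamma(0.5, 2.0, 10_000), 0.0)             # toy "truth"
sat = ref * rng.lognormal(0.0, 0.5, 10_000)                        # magnitude error
sat += (rng.random(10_000) < 0.05) * rng.gamma(0.3, 1.0, 10_000)   # spurious echoes
print("POD %.2f  FAR %.2f  CSI %.2f" % contingency_stats(sat, ref))
```

Splitting these scores by regime (stratiform versus convective, near-nadir versus off-nadir) is what turns them into the error characterization the abstract describes.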
Formula Budgeting for Higher Education: State Practices in 1979-80. Working Paper Series.
ERIC Educational Resources Information Center
Gross, Francis M.
Budget formulas used by states for state-supported colleges and universities are described, along with budgeting guidelines. A comparative analysis of the budget formulas in use in 1979-1980 reveals the similarities and differences in design among 19 states. Functional areas of expenditure used in the formula calculation are also compared for each…
ERIC Educational Resources Information Center
Luna, Andrew L.; Brennan, Kelly A.
2009-01-01
This study uses a regression model to determine if a significant difference exists between the actual budget allocation that an academic department received and the model's predicted budget allocation for that same department. Budget data from a Southeastern Master's/Comprehensive state university were used as the dependent variable, and the…
A New Direction: The Clinton Budget and Economic Plan.
ERIC Educational Resources Information Center
Greenstein, Robert; Leonard, Paul
This booklet offers an in-depth analysis of the Clinton Administration budget and economic plan. The initial overview explains why the budget can be regarded as both a poverty-reduction and deficit-reduction budget. Chapter 2 offers detailed descriptions of the three main components of the plan--the deficit reduction measure (i.e., spending cuts…
Global Patterns of Legacy Nitrate Storage in the Vadose Zone
NASA Astrophysics Data System (ADS)
Ascott, M.; Gooddy, D.; Wang, L.; Stuart, M.; Lewis, M.; Ward, R.; Binley, A. M.
2017-12-01
Global-scale nitrogen (N) budgets have been developed to quantify the impact of man's influence on the nitrogen cycle. However, these budgets often do not consider legacy effects such as accumulation of nitrate in the deep vadose zone. In this presentation we show that the vadose zone is an important store of nitrate which should be considered in future nitrogen budgets for effective policymaking. Using estimates of depth to groundwater and nitrate leaching for 1900-2000, we quantify for the first time the peak global storage of nitrate in the vadose zone, estimated at 605-1814 teragrams (Tg). Estimates of nitrate storage are validated using previous national and basin scale estimates of N storage and observed groundwater nitrate data for North America and Europe. Nitrate accumulation per unit area is greatest in North America, China, and Central and Eastern Europe, where thick vadose zones are present and there is an extensive history of agriculture. In these areas the long solute travel time in the vadose zone means that the anticipated impact of changes in agricultural practices on groundwater quality may be substantially delayed. We argue that in these areas use of conventional nitrogen budget approaches is inappropriate and their continued use will lead to significant errors.
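The storage logic can be summarized in a few lines: nitrate leached below the root zone stays in transit for roughly depth times water content divided by recharge, so the standing store is the leaching flux times that travel time. A sketch with illustrative values (not the study's estimates):

```python
# Minimal bookkeeping sketch of the vadose-zone nitrate store (assumed numbers)
depth_m    = 30.0    # unsaturated-zone thickness
theta      = 0.10    # volumetric water content of the vadose zone
recharge_m = 0.15    # recharge, m/yr
leach_kgha = 25.0    # nitrate-N leaching below the root zone, kg N/ha/yr

travel_time_yr = depth_m * theta / recharge_m       # piston-flow travel time
store_kgha = leach_kgha * travel_time_yr            # nitrate standing in transit
print(f"travel time ~ {travel_time_yr:.0f} yr; "
      f"standing vadose-zone store ~ {store_kgha:.0f} kg N/ha")
```

With these assumed numbers a 30 m thick vadose zone holds two decades of leaching in transit, which is why policy changes at the surface can take decades to show up in groundwater quality.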
NASA Astrophysics Data System (ADS)
McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert
2016-08-01
Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with the seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With ambient temperature set to 0°C and conditioned temperature set to 20°C, representing the worst-case temperature gradient, the spatial rms wavefront error in units of wavelength is 0.178 (88.69 nm at λ = 500 nm).
Asner, Gregory P; Joseph, Shijo
2015-01-01
Conservation and monitoring of tropical forests requires accurate information on their extent and change dynamics. Cloud cover, sensor errors and technical barriers associated with satellite remote sensing data continue to prevent many national and sub-national REDD+ initiatives from developing their reference deforestation and forest degradation emission levels. Here we present a framework for large-scale historical forest cover change analysis using free multispectral satellite imagery in an extremely cloudy tropical forest region. The CLASlite approach provided highly automated mapping of tropical forest cover, deforestation and degradation from Landsat satellite imagery. Critically, the fractional cover of forest photosynthetic vegetation, non-photosynthetic vegetation, and bare substrates calculated by CLASlite provided scene-invariant quantities for forest cover, allowing for systematic mosaicking of incomplete satellite data coverage. A synthesized satellite-based data set of forest cover was thereby created, reducing image incompleteness caused by clouds, shadows or sensor errors. This approach can readily be implemented by single operators with highly constrained budgets. We test this framework on tropical forests of the Colombian Pacific Coast (Chocó) – one of the cloudiest regions on Earth, with successful comparison to the Colombian government’s deforestation map and a global deforestation map. PMID:25678933
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
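A toy sketch in the spirit of the cost and quality models described, with defect density scaled by size and criticality and criticality mapped to an IV&V labor share; the coefficients are assumptions for illustration, not the paper's calibrated models:

```python
def expected_latent_errors(ksloc, criticality, env_factor=1.0, team_factor=1.0):
    """Toy defect-density model (not the paper's actual equations): a baseline
    defects-per-KSLOC rate scaled by criticality, development environment,
    and team process maturity."""
    base_defects_per_ksloc = 5.0                         # assumed baseline
    crit_scale = {"low": 0.8, "medium": 1.0, "high": 1.4}[criticality]
    return ksloc * base_defects_per_ksloc * crit_scale * env_factor * team_factor

def ivv_labor_fraction(criticality):
    """Assumed mapping from software criticality to the share of project labor
    spent on independent verification and validation."""
    return {"low": 0.05, "medium": 0.10, "high": 0.20}[criticality]

print(f"expected latent errors: "
      f"{expected_latent_errors(250, 'high', env_factor=0.9, team_factor=0.85):.0f}")
print(f"IV&V share of labor: {ivv_labor_fraction('high'):.0%}")
```

The point of such models is the feedback loop the abstract describes: the predicted error rate drives the IV&V labor estimate, which in turn feeds the cost and schedule forecast.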
NASA Technical Reports Server (NTRS)
Edwards, C. D., Jr.; Border, J. S.
1992-01-01
In Part 1 of this two-part article, an error budget is presented for Earth-based delta differential one-way range (delta DOR) measurements between two spacecraft. Such observations, made between a planetary orbiter (or lander) and another spacecraft approaching that planet, would provide a powerful target-relative angular tracking data type for approach navigation. Accuracies of better than 5 nrad should be possible for a pair of spacecraft with 8.4-GHz downlinks, incorporating 40-MHz DOR tone spacings, while accuracies approaching 1 nrad will be possible if the spacecraft incorporate 32-GHz downlinks with DOR tone spacing on the order of 250 MHz; these accuracies will be available for the last few weeks or months of planetary approach for typical Earth-Mars trajectories. Operational advantages of this data type are discussed, and ground system requirements needed to enable spacecraft-spacecraft delta DOR observations are outlined. This tracking technique could be demonstrated during the final approach phase of the Mars '94 mission, using Mars Observer as the in-orbit reference spacecraft, if the Russian spacecraft includes an 8.4-GHz downlink incorporating DOR tones. Part 2 of this article will present an analysis of predicted targeting accuracy for this scenario.
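The headline accuracies follow from the small-angle relation between differential delay error and target-relative angle, theta ~ c * tau_err / B. A sketch assuming a roughly 8000 km intercontinental DSN baseline (an illustrative value):

```python
C = 299_792_458.0          # m/s
baseline_m = 8.0e6         # assumed intercontinental DSN baseline, ~8000 km

def angular_error_nrad(delay_error_s):
    """Target-relative angular error from differential delay error:
    theta ~ c * tau_err / B (small-angle, single-baseline approximation)."""
    return C * delay_error_s / baseline_m * 1e9

def delay_error_for(theta_nrad):
    """Differential delay accuracy needed for a given angular accuracy."""
    return theta_nrad * 1e-9 * baseline_m / C

for theta in (5.0, 1.0):
    print(f"{theta:.0f} nrad requires ~{delay_error_for(theta) * 1e12:.0f} ps "
          f"differential delay accuracy")
```

This makes the role of the DOR tone spacing intuitive: wider spacings resolve the group delay more finely, pushing the delay error toward the tens-of-picoseconds level that nanoradian-class angles demand.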
View-Dependent Simplification of Arbitrary Polygonal Environments
2006-01-01
of backfacing nodes are not rendered [Kumar 96]. 4.3 Triangle-Budget Simplification The screen-space error threshold and silhouette test allow the user...Greg Turk, and Dinesh Manocha for their invaluable guidance and support throughout this project. Funding for this work was provided by DARPA...Proceedings Visualization 95, IEEE Computer Society Press (Atlanta, GA), 1995, pp. 296-303. [Kumar 96] Kumar, Subodh, D. Manocha, W. Garrett, M. Lin
LORAN-C Latitude-Longitude Conversion at Sea: Programming Considerations.
McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.
1985-01-01
Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to help with new computer program checkout.
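A minimal sketch of the precision comparison described, evaluating the same great-circle arc length in float64 and float32 to show how round-off grows for short baselines (haversine formula on a sphere; the coordinates off New England are illustrative):

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2, dtype):
    """Great-circle arc length (m) on a sphere, evaluated at a chosen floating
    point precision to mimic the reduced-precision comparisons discussed above."""
    R = dtype(6371000.0)
    lat1, lon1, lat2, lon2 = (dtype(np.radians(v)) for v in (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / dtype(2))**2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / dtype(2))**2)
    return dtype(2) * R * np.arcsin(np.sqrt(a))

# Short baseline: cancellation in the angle differences is where precision bites
d64 = haversine(41.50, -70.70, 41.51, -70.69, np.float64)
d32 = haversine(41.50, -70.70, 41.51, -70.69, np.float32)
print(f"float64: {d64:.3f} m   float32: {d32:.3f} m   diff: {abs(d64 - d32):.3f} m")
```

The haversine form is itself a precision choice: it avoids the catastrophic cancellation that the spherical law of cosines suffers for nearby points, which matters on the limited-precision shipboard computers the report targets.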
Social Security Fraud and Error Prevention Act of 2014
Rep. Becerra, Xavier [D-CA-34
2014-02-26
House - 02/26/2014 Referred to the Committee on Ways and Means, and in addition to the Committee on the Budget, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. Status: Introduced.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... Management and Budget ("OMB") to project aggregate offering price for purposes of the fiscal year 2010... methodology it developed in consultation with the CBO and OMB to project dollar volume for purposes of prior... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...
Sacks, Laura A.; Lee, Terrie M.; Swancar, Amy
2013-01-01
Groundwater inflow to a subtropical seepage lake was estimated using a transient isotope-balance approach for a decade (2001–2011) with wet and dry climatic extremes. Lake water δ18O ranged from +0.80 to +3.48 ‰, reflecting the 4 m range in stage. The transient δ18O analysis discerned large differences in semiannual groundwater inflow, and the overall patterns of low and high groundwater inflow were consistent with an independent water budget. Despite simplifying assumptions that the isotopic composition of precipitation (δP), groundwater inflow, and atmospheric moisture (δA) were constant, groundwater inflow was within the water-budget error for 12 of the 19 semiannual calculation periods. The magnitude of inflow was over- or underpredicted during periods of climatic extremes. During periods of high net precipitation from tropical cyclones and El Niño conditions, δP values were considerably more depleted in 18O than assumed. During an extreme dry period, δA values were likely more enriched in 18O than assumed due to the influence of local lake evaporate. Isotope-balance results were most sensitive to uncertainties in relative humidity, evaporation, and δ18O of lake water, which can limit precise quantification of groundwater inflow. Nonetheless, the consistency between isotope-balance and water-budget results indicates that this is a viable approach for lakes in similar settings, allowing the magnitude of groundwater inflow to be estimated over less-than-annual time periods. Because lake-water δ18O is a good indicator of climatic conditions, these data could be useful in ground-truthing paleoclimatic reconstructions using isotopic data from lake cores in similar settings.
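A steady-state simplification of the isotope balance makes the inflow calculation concrete: combining the water balance P + G = E + O with the isotope balance P·δP + G·δG = E·δE + O·δL and eliminating the outflow O (which leaves at lake composition δL) solves for G. A sketch with illustrative values; in practice δE comes from a Craig-Gordon evaporation model:

```python
def groundwater_inflow(P, E, dP, dG, dL, dE):
    """Steady-state isotope mass balance for a seepage lake (a simplification
    of the transient analysis above). Water: P + G = E + O. Isotopes:
    P*dP + G*dG = E*dE + O*dL. Eliminating O gives:
    G = [E*(dE - dL) + P*(dL - dP)] / (dG - dL)."""
    return (E * (dE - dL) + P * (dL - dP)) / (dG - dL)

# Illustrative per-mil d18O values and m/yr fluxes (not the study's data)
G = groundwater_inflow(P=1.3, E=1.5, dP=-4.0, dG=-4.5, dL=+2.0, dE=-14.0)
print(f"groundwater inflow ~ {G:.2f} m/yr over the lake area")
```

The sensitivity noted in the abstract is visible in the algebra: G depends on the small difference (δG - δL) in the denominator, so modest errors in δL or in the evaporation terms propagate strongly into the inflow estimate.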
Slattery, Richard N.; Asquith, William H.; Gordon, John D.
2017-02-15
Introduction: In 2016, the U.S. Geological Survey (USGS), in cooperation with the San Antonio Water System, began a study to refine previously derived estimates of groundwater outflows from Medina and Diversion Lakes in south-central Texas near San Antonio. When full, Medina and Diversion Lakes (hereinafter referred to as the Medina/Diversion Lake system) (fig. 1) impound approximately 255,000 acre-feet and 2,555 acre-feet of water, respectively. Most recharge to the Edwards aquifer occurs as seepage from streams as they cross the outcrop (recharge zone) of the aquifer (Slattery and Miller, 2017). Groundwater outflows from the Medina/Diversion Lake system have also long been recognized as a potentially important additional source of recharge. Puente (1978) published methods for developing monthly and annual estimates of the potential recharge to the Edwards aquifer from the Medina/Diversion Lake system. During October 1995–September 1996, the USGS conducted a study to better define short-term rates of recharge and to reduce the error and uncertainty associated with estimates of monthly recharge from the Medina/Diversion Lake system (Lambert and others, 2000). As a follow-up to that study, Slattery and Miller (2017) published estimates of groundwater outflows from detailed water budgets for the Medina/Diversion Lake system during 1955–1964, 1995–1996, and 2001–2002. The water budgets were compiled for selected periods during which the water-budget components were inferred to be relatively stable and the influence of precipitation, stormwater runoff, and changes in storage were presumably minimal. Linear regression analysis techniques were used by Slattery and Miller (2017) to assess the relation between the stage in Medina Lake and groundwater outflows from the Medina/Diversion Lake system.
A Study of Wake Development and Structure in Constant Pressure Gradients
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.; Liu, Xiaofeng
2000-01-01
Motivated by the application to high-lift aerodynamics for commercial transport aircraft, a systematic investigation into the response of symmetric/asymmetric planar turbulent wake development to constant adverse, zero, and favorable pressure gradients has been conducted. The experiments are performed at a Reynolds number of 2.4 million based on the chord of the wake generator. A unique feature of this wake study is that the pressure gradients imposed on the wake flow field are held constant. The experimental measurements involve both conventional LDV and hot wire flow field surveys of mean and turbulent quantities including the turbulent kinetic energy budget. In addition, similarity analysis and numerical simulation have also been conducted for this wake study. A focus of the research has been to isolate the effects of both pressure gradient and initial wake asymmetry on the wake development. Experimental results reveal that the pressure gradient has a tremendous influence on the wake development, despite the relatively modest pressure gradients imposed. For a given pressure gradient, the development of an initially asymmetric wake is different from the initially symmetric wake. An explicit similarity solution for the shape parameters of the symmetric wake is obtained and agrees with the experimental results. The turbulent kinetic energy budget measurements of the symmetric wake demonstrate that except for the convection term, the imposed pressure gradient does not change the fundamental flow physics of turbulent kinetic energy transport. Based on the turbulent kinetic energy budget measurements, an approach to correct the bias error associated with the notoriously difficult dissipation estimate is proposed and validated through the comparison of the experimental estimate with a direct numerical simulation result.
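The dissipation correction mentioned at the end rests on budget closure: if production, turbulent transport, diffusion, and convection are all measured, dissipation follows as the residual of the turbulent kinetic energy budget. A toy sketch with assumed cross-wake profiles (shapes are illustrative, not the measured data):

```python
import numpy as np

# TKE budget closure: convection = production + transport + diffusion - dissipation,
# so dissipation can be recovered as the residual when the other terms are known.
y = np.linspace(-1, 1, 81)                        # cross-wake coordinate (assumed)
production = 0.8 * np.exp(-8 * y**2)              # assumed production profile
transport  = -0.2 * np.gradient(np.exp(-4 * y**2), y)   # assumed transport term
convection = 0.1 * np.exp(-8 * y**2)              # assumed mean-flow convection
diffusion  = 0.05 * np.exp(-8 * y**2)             # assumed diffusion term

dissipation_residual = production + transport + diffusion - convection
print(f"peak dissipation (residual estimate): {dissipation_residual.max():.3f}")
```

The appeal of the residual route is that it sidesteps directly measuring the fine-scale velocity gradients, which is where the bias in conventional dissipation estimates originates.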
Data analysis and software support for the Earth radiation budget experiment
NASA Technical Reports Server (NTRS)
Edmonds, W.; Natarajan, S.
1987-01-01
Computer programming and data analysis efforts were performed in support of the Earth Radiation Budget Experiment (ERBE) at NASA/Langley. A brief description of the ERBE followed by sections describing software development and data analysis for both prelaunch and postlaunch instrument data are presented.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
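A minimal sketch of the model's two ingredients as described: vectorial composition of per-axis length errors along the measurement direction, plus a quadrature uncertainty budget that absorbs the variability the model leaves unexplained. The composition rule and all values are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def length_error(dx, dy, dz, direction):
    """Compose per-axis length errors along a measurement direction (unit
    vector). A simple projection model, assumed for illustration."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return np.array([dx, dy, dz]) @ np.abs(d)

# Assumed per-axis length errors (um) along a body-diagonal measurement
e = length_error(dx=1.2, dy=0.8, dz=1.5, direction=[1, 1, 1])

# Uncertainty budget: model uncertainty plus unexplained variability, in quadrature
u_model, u_unexplained = 0.4, 0.7
u_total = np.hypot(u_model, u_unexplained)
print(f"composed length error: {e:.2f} um, expanded budget: +/-{2 * u_total:.2f} um (k=2)")
```

Folding the unexplained variability into the budget rather than the correction is the key design choice: the compensation stays simple while the uncertainty statement stays honest.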
Lizarraga, Joy S.; Ockerman, Darwin J.
2010-01-01
The U.S. Geological Survey (USGS), in cooperation with the San Antonio River Authority, the Evergreen Underground Water Conservation District, and the Goliad County Groundwater Conservation District, configured, calibrated, and tested a watershed model for a study area consisting of about 2,150 square miles of the lower San Antonio River watershed in Bexar, Guadalupe, Wilson, Karnes, DeWitt, Goliad, Victoria, and Refugio Counties in south-central Texas. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge using rainfall, potential ET, and upstream discharge data obtained from National Weather Service meteorological stations and USGS streamflow-gaging stations. Additional time-series inputs to the model include wastewater treatment-plant discharges, withdrawals for cropland irrigation, and estimated inflows from springs. Model simulations of streamflow, ET, and groundwater recharge were done for 2000-2007. Because of the complexity of the study area, the lower San Antonio River watershed was divided into four subwatersheds; separate HSPF models were developed for each subwatershed. Simulation of the overall study area involved running simulations of the three upstream models, then running the downstream model. The surficial geology was simplified as nine contiguous water-budget zones to meet model computational limitations and also to define zones for which ET, recharge, and other water-budget information would be output by the model. The model was calibrated and tested using streamflow data from 10 streamflow-gaging stations; additionally, simulated ET was compared with measured ET from a meteorological station west of the study area. The model calibration is considered very good; streamflow volumes were calibrated to within 10 percent of measured streamflow volumes. During 2000-2007, the estimated annual mean rainfall for the water-budget zones ranged from 33.7 to 38.5 inches per year; the estimated annual mean rainfall for the entire watershed was 34.3 inches. Using the HSPF model it was estimated that for 2000-2007, less than 10 percent of the annual mean rainfall on the study watershed exited the watershed as streamflow, whereas about 82 percent, or an average of 28.2 inches per year, exited the watershed as ET. Estimated annual mean groundwater recharge for the entire study area was 3.0 inches, or about 9 percent of annual mean rainfall. Estimated annual mean recharge was largest in water-budget zone 3, the zone where the Carrizo Sand outcrops. In water-budget zone 3, the estimated annual mean recharge was 5.1 inches or about 15 percent of annual mean rainfall. Estimated annual mean recharge was smallest in water-budget zone 6, about 1.1 inches or about 3 percent of annual mean rainfall. The Cibolo Creek subwatershed and the subwatershed of the San Antonio River upstream from Cibolo Creek had the largest and smallest basin yields, about 4.8 inches and 1.2 inches, respectively. Estimated annual ET and annual recharge generally increased with increasing annual rainfall. Also, ET was larger in zones 8 and 9, the most downstream zones in the watershed. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error.
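The quoted fractions can be checked by closing the budget directly; the residual absorbs storage change and model error. Streamflow is stated only as "less than 10 percent," so an assumed 9.5% is used for the check:

```python
# Closing the 2000-2007 mean annual water budget from the figures quoted above
# (inches per year over the watershed).
rainfall   = 34.3
et         = 28.2               # ~82% of rainfall
recharge   = 3.0                # ~9% of rainfall
streamflow = 0.095 * rainfall   # "less than 10 percent"; 9.5% assumed here

residual = rainfall - (et + recharge + streamflow)
for name, v in [("ET", et), ("recharge", recharge),
                ("streamflow", streamflow), ("residual", residual)]:
    print(f"{name:10s} {v:6.1f} in/yr  ({v / rainfall:6.1%})")
```

The residual comes out near zero, within rounding of the reported components, which is the internal consistency one expects from a calibrated HSPF water budget.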
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation used.
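Under the independence assumption, the decomposition reduces to variances adding in quadrature, so each source's share of the budget follows directly. A sketch with illustrative component errors (not the paper's percentages):

```python
import numpy as np

# Three independent stochastic error components: with independence, variances
# add, and each source's share of the total error budget follows directly.
components = {"model/inversion": 0.012,
              "sensor noise": 0.006,
              "atmospheric correction": 0.020}   # assumed std devs, e.g. m^-1

total_var = sum(s**2 for s in components.values())
print(f"total error (rss): {np.sqrt(total_var):.4f}")
for name, s in components.items():
    print(f"{name:>22s}: {s**2 / total_var:5.1%} of total error variance")
```

Reporting shares of variance rather than of standard deviation is deliberate: variances are what actually add, so the shares sum to 100% and rank the sources unambiguously.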
78 FR 6140 - Discount Rates for Cost-Effectiveness Analysis of Federal Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-29
... OFFICE OF MANAGEMENT AND BUDGET Discount Rates for Cost-Effectiveness Analysis of Federal Programs AGENCY: Office of Management and Budget. ACTION: Revisions to Appendix C of OMB Circular A-94.
Secondary Forest Age and Tropical Forest Biomass Estimation Using TM
NASA Technical Reports Server (NTRS)
Nelson, R. F.; Kimes, D. S.; Salas, W. A.; Routhier, M.
1999-01-01
The age of secondary forests in the Amazon will become more critical with respect to the estimation of biomass and carbon budgets as tropical forest conversion continues. Multitemporal Thematic Mapper data were used to develop land cover histories for a 33,000 square km area near Ariquemes, Rondonia over a 7 year period from 1989-1995. The age of the secondary forest, a surrogate for the amount of biomass (or carbon) stored above-ground, was found to be unimportant in terms of biomass budget error rates in a forested TM scene which had undergone a 20% conversion to nonforest/agricultural cover types. In such a situation, the 80% of the scene still covered by primary forest accounted for over 98% of the scene biomass. The difference between secondary forest biomass estimates developed with and without age information was inconsequential relative to the estimate of biomass for the entire scene. However, in futuristic scenarios where all of the primary forest has been converted to agriculture and secondary forest (55% and 42%, respectively), the ability to age secondary forest becomes critical. Depending on biomass accumulation rate assumptions, scene biomass budget errors on the order of -10% to +30% are likely if the ages of the secondary forests are not taken into account. Single-date TM imagery cannot be used to accurately age secondary forests into single-year classes. A neural network utilizing TM band 2 and three TM spectral-texture measures (bands 3 and 5) predicted secondary forest age over a range of 0-7 years with an RMSE of 1.59 years and an R² (actual vs. predicted) of 0.37. A proposal is made, based on a literature review, to use satellite imagery to identify general secondary forest age groups which, within group, exhibit relatively constant biomass accumulation rates.
NASA Technical Reports Server (NTRS)
Yang, R.; Houser, P.; Joiner, J.
1998-01-01
The surface ground temperature (Tg) is an important meteorological variable because it represents an integrated thermal state of the land surface determined by a complex surface energy budget. Furthermore, Tg affects both the surface sensible and latent heat fluxes. Through these fluxes, the surface budget is coupled with the atmosphere above. Accurate Tg data are useful for estimating the surface radiation budget and fluxes, as well as soil moisture. Tg is not included in conventional synoptic weather station reports. Currently, satellites provide Tg estimates globally. It is necessary to carefully consider appropriate methods of using these satellite data in a data assimilation system. Recently, an Off-line Land surface GEOS Assimilation (OLGA) system was implemented at the Data Assimilation Office at NASA-GSFC. One of the goals of OLGA is to assimilate satellite-derived Tg data. Prior to the Tg assimilation, a thorough investigation of satellite- and model-derived Tg, including error estimates, is required. In this study we examine the Tg from the International Satellite Cloud Climatology Project (ISCCP D1) data and the OLGA simulations. The ISCCP data used here are 3-hourly D1 data (2.5x2.5 degree resolution) for the 1992 summer months (June, July, and August) and winter months (January and February). The model Tg for the same periods were generated by OLGA. The forcing data for this OLGA 1992 simulation were generated from the GEOS-1 Data Assimilation System (DAS) at the Data Assimilation Office, NASA-GSFC. We examine the discrepancies between ISCCP and OLGA Tg with a focus on their spatial and temporal characteristics, particularly on the diurnal cycle. The error statistics in both data sets, including bias, will be estimated. The impact of surface properties, including vegetation cover and type, topography, etc., on the discrepancies will be addressed.
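A minimal sketch of the planned discrepancy analysis: bias and RMSE between satellite and model Tg, plus a 3-hourly composite diurnal cycle of the difference, applied here to synthetic data (the +1 K bias is an illustrative assumption):

```python
import numpy as np

def tg_discrepancy_stats(tg_sat, tg_model, hours):
    """Bias/RMSE between satellite and model ground temperature, plus a
    composite diurnal cycle of the difference in 3-hourly bins (as in ISCCP D1)."""
    diff = tg_sat - tg_model
    diurnal = {h: diff[hours == h].mean() for h in range(0, 24, 3)}
    return diff.mean(), np.sqrt((diff**2).mean()), diurnal

rng = np.random.default_rng(3)
hours = np.tile(np.arange(0, 24, 3), 100)
truth = 290 + 8 * np.sin((hours - 9) / 24 * 2 * np.pi)   # toy diurnal cycle, K
tg_sat = truth + rng.normal(1.0, 2.0, hours.size)        # assumed +1 K satellite bias
tg_mod = truth + rng.normal(0.0, 1.5, hours.size)
bias, rmse, diurnal = tg_discrepancy_stats(tg_sat, tg_mod, hours)
print(f"bias {bias:+.2f} K, rmse {rmse:.2f} K, midday diff {diurnal[12]:+.2f} K")
```

Compositing by time of day is the key step: a bias that averages out over 24 hours can still be large at midday, which matters for assimilating Tg into a system with its own diurnal cycle.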
Cost-effectiveness of the US Geological Survey stream-gaging program in Arkansas
Darling, M.E.; Lamb, T.E.
1984-01-01
This report documents the results of a cost-effectiveness analysis of the stream-gaging program in Arkansas. Data uses and funding sources were identified for the daily-discharge stations. All daily-discharge stations were found to be in one or more data-use categories, and none were candidates for alternate methods that would result in discontinuation or conversion to a partial-record station. The cost of operating the daily-discharge stations, and the routing costs to partial-record stations, crest gages, and pollution control stations, as well as seven recording ground-water stations, were evaluated in the Kalman-Filtering for Cost-Effective Resource Allocation (K-CERA) analysis. This operation under current practices requires a budget of $292,150. The average standard error of estimate of streamflow record for the Arkansas District was analyzed at 33 percent.
Wright, Scott A.; Grams, Paul E.
2010-01-01
This report describes numerical modeling simulations of sand transport and sand budgets for reaches of the Colorado River below Glen Canyon Dam. Two hypothetical Water Year 2011 annual release volumes were each evaluated with six hypothetical operational scenarios. The six operational scenarios include the current operation, scenarios with modifications to the monthly distribution of releases, and scenarios with modifications to daily flow fluctuations. Uncertainties in model predictions were evaluated by conducting simulations with error estimates for tributary inputs and mainstem transport rates. The modeling results illustrate the dependence of sand transport rates and sand budgets on the annual release volumes as well as the within year operating rules. The six operational scenarios were ranked with respect to the predicted annual sand budgets for Marble Canyon and eastern Grand Canyon reaches. While the actual WY 2011 annual release volume and levels of tributary inputs are unknown, the hypothetical conditions simulated and reported herein provide reasonable comparisons between the operational scenarios, in a relative sense, that may be used by decision makers within the Glen Canyon Dam Adaptive Management Program.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-02-01
In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to accurately identify the minimal investment risk of the portfolio optimization problem.
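A numerical counterpart of the problem (not the replica analysis itself) can be set up directly: minimize the portfolio risk w'Cw subject to the budget constraint (weights summing to N) and a cap on the sum of squared weights. A sketch using scipy with assumed problem sizes:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, tau = 10, 1.5                     # assets and concentration cap (assumed)
X = rng.normal(size=(50, N))
C = X.T @ X / 50                     # sample covariance of returns

risk = lambda w: w @ C @ w / N       # per-asset risk objective
budget = {"type": "eq",   "fun": lambda w: w.sum() - N}        # sum w_i = N
concen = {"type": "ineq", "fun": lambda w: tau * N - w @ w}    # sum w_i^2 <= tau*N

res  = minimize(risk, x0=np.ones(N), constraints=[budget, concen])
res0 = minimize(risk, x0=np.ones(N), constraints=[budget])
print(f"minimal risk with concentration cap: {res.fun:.4f}")
print(f"minimal risk, budget constraint only: {res0.fun:.4f}")
```

Whenever the cap binds, the constrained minimum can only be at least as large as the budget-only minimum, which is exactly the ordering the abstract calls intuitive.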
Colorado Children's Budget 2013
ERIC Educational Resources Information Center
Buck, Beverly; Baker, Robin
2013-01-01
The "Colorado Children's Budget" presents and analyzes investments and spending trends during the past five state fiscal years on services that benefit children. The "Children's Budget" focuses mainly on state investment and spending, with some analysis of federal investments and spending to provide broader context of state…
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudoinverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for a given Sun-scene-spacecraft geometry. It is found that the pseudoinverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
EUV local CDU healing performance and modeling capability towards 5nm node
NASA Astrophysics Data System (ADS)
Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn
2017-10-01
Both local variability and optical proximity correction (OPC) errors are big contributors to the edge placement error (EPE) budget, which is closely related to device yield. The post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low-dose (30 mJ/cm2 dose-to-size), positive-tone-developed (PTD) resist with throughput relevant to high-volume manufacturing (HVM). The total local variability of the 5nm node (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. The CD healing process has complex etch proximity effects, so meeting the N5 EPE requirements with sufficient OPC prediction accuracy is challenging. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using the ASML Tachyon OPC model.
A Self-Referencing Intensity-Based Fiber Optic Sensor with Multipoint Sensing Characteristics
Choi, Sang-Jin; Kim, Young-Chon; Song, Minho; Pan, Jae-Kyung
2014-01-01
A self-referencing, intensity-based fiber optic sensor (FOS) is proposed and demonstrated. The theoretical analysis for the proposed design is given, and the validity of the theoretical analysis is confirmed via experiments. We define the measurement parameter, X, and the calibration factor, β, to find the transfer function, Hm,n, of the intensity-based FOS head. The self-referencing and multipoint sensing characteristics of the proposed system are validated by showing the measured Hm,n^2 and relative error versus the optical power attenuation of the sensor head for four cases: optical source fluctuation, various remote sensing point distances, fiber Bragg gratings (FBGs) with different characteristics, and multiple sensor heads in cascade and/or parallel forms. The power-budget analysis and limitations of the measurement rates are discussed, and the measurement results of fiber-reinforced plastic (FRP) coupon strain using the proposed FOS are given as an actual measurement. The proposed FOS has several benefits, including a self-referencing characteristic, the flexibility to determine FBGs, and a simple structure in terms of the number of devices and measuring procedure. PMID:25046010
Program budgeting and marginal analysis: a case study in chronic airflow limitation.
Crockett, A; Cranston, J; Moss, J; Scown, P; Mooney, G; Alpers, J
1999-01-01
Program budgeting and marginal analysis is a method of priority-setting in health care. This article describes how this method was applied to the management of a disease-specific group, chronic airflow limitation. A sub-program flow chart clarified the major cost drivers. After assessment of the technical efficiency of the sub-programs and careful and detailed analysis, incremental and decremental wish lists of activities were established. Program budgeting and marginal analysis provides a framework for rational resource allocation. The nurturing of a vigorous program management group, with members representing all participants in the process (including patients/consumers), is the key to a successful outcome.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2014-10-01
A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is, raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2015-03-01
A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is, raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new generation of sensors that future satellite missions will carry. Therefore we have created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off system errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and their consistent stochastic modeling within the adjustment process.
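The round-off issue can be illustrated without the full simulator: sequentially accumulating a million float32 terms drifts badly, while compensated (Kahan) summation recovers near-float64 accuracy. This is the spirit, not the implementation, of the enhanced-precision processing described:

```python
import numpy as np

def naive_sum(values):
    """Sequential float32 accumulation: round-off grows with the number of terms."""
    s = np.float32(0.0)
    for v in values:
        s += np.float32(v)
    return s

def kahan_sum(values):
    """Compensated summation: carries the lost low-order bits along, recovering
    most of the accuracy of wider arithmetic at float32 storage cost."""
    s = c = np.float32(0.0)
    for v in values:
        y = np.float32(v) - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

x = [0.1] * 1_000_000                    # exact answer: 100000
print(f"naive float32: {naive_sum(x):.2f}")
print(f"kahan float32: {kahan_sum(x):.2f}")
print(f"float64      : {sum(x):.2f}")
```

When the ranging observable is accurate to nanometers, accumulation errors of this kind are no longer negligible relative to the sensor noise, which is why the error-free closed-loop test is such a useful diagnostic.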
Army Industrial Fund Analytical Study (AIFAS).
1984-08-01
Directorate. b. Team Members: Mr. Joel S. Gordon; Mr. Charles Weber. c. Other Contributors: Mr. Carl B. Bates, Analysis Support Directorate. 2. PRODUCT REVIEW... (Figure: OSD/OMB program and budget decision flowchart; remaining text not recoverable.)
32 CFR 989.6 - Budgeting and funding.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 6 2011-07-01 2011-07-01 false Budgeting and funding. 989.6 Section 989.6... ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.6 Budgeting and funding. Contract EIAP efforts are proponent... unforeseen requirements, the proponent offices must provide the remaining funding. ...
32 CFR 989.6 - Budgeting and funding.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 6 2013-07-01 2013-07-01 false Budgeting and funding. 989.6 Section 989.6... ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.6 Budgeting and funding. Contract EIAP efforts are proponent... unforeseen requirements, the proponent offices must provide the remaining funding. ...
Earth radiation budget experiment software development
NASA Technical Reports Server (NTRS)
Edmonds, W. L.
1985-01-01
Computer programming and analysis efforts were carried out in support of the Earth Radiation Budget Experiment (ERBE) at NASA/Langley. The Earth Radiation Budget Experiment is described as well as data acquisition, analysis and modeling support for the testing of ERBE instruments. Also included are descriptions of the programs developed to analyze, format and display data collected during testing of the various ERBE instruments. Listings of the major programs developed under this contract are located in an appendix.
[Guidelines for budget impact analysis of health technologies in Brazil].
Ferreira-Da-Silva, Andre Luis; Ribeiro, Rodrigo Antonini; Santos, Vânia Cristina Canuto; Elias, Flávia Tavares Silva; d'Oliveira, Alexandre Lemgruber Portugal; Polanczyk, Carisi Anne
2012-07-01
Budget impact analysis (BIA) provides operational financial forecasts to implement new technologies in healthcare systems. There were no previous specific recommendations to conduct such analyses in Brazil. This paper reviews BIA methods for health technologies and proposes BIA guidelines for the public and private Brazilian healthcare system. The following recommendations were made: adopt the budget administrator's perspective; use a timeframe of 1 to 5 years; compare reference and alternative scenarios; consider the technology's rate of incorporation; estimate the target population by either an epidemiological approach or measured demand; consider restrictions on technologies' indication or factors that increase the demand for them; consider direct and averted costs; do not adjust for inflation or apply discounting; preferably, integrate information on a spreadsheet; calculate the incremental budget impact between scenarios; and summarize information in a budget impact report.
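The core arithmetic behind such a budget impact comparison is straightforward. A minimal sketch follows, comparing a reference scenario with an alternative scenario over a five-year horizon; the population, uptake, and cost figures are hypothetical placeholders, not values from the guidelines.

    # Incremental budget impact sketch: reference vs. alternative scenario
    # over a five-year horizon. All figures are hypothetical placeholders.
    years = range(1, 6)
    target_pop = {y: 5000 + 100 * y for y in years}   # epidemiological estimate
    uptake = {y: 0.02 * y for y in years}             # rate of incorporation
    cost_new, cost_old = 1200.0, 800.0                # annual cost per patient

    for y in years:
        n_new = target_pop[y] * uptake[y]
        reference = target_pop[y] * cost_old
        alternative = (target_pop[y] - n_new) * cost_old + n_new * cost_new
        print(f"year {y}: incremental budget impact = {alternative - reference:,.0f}")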
NASA Astrophysics Data System (ADS)
Maurer, Edwin P.; O'Donnell, Greg M.; Lettenmaier, Dennis P.; Roads, John O.
2001-08-01
The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2) to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model, and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow, and by closure is constrained to balance other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988-1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent exists. NRA1 is subject to errors in the surface water budget due to nudging of modeled soil moisture to an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of persistence of surface water anomalies in much of the basin. Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than the coupled models. However, the persistence of the residual evapotranspiration is much less than is predicted by the hydrological model or the climate models.
Mullan, F; Bartlett, D; Austin, R S
2017-06-01
To investigate the measurement performance of a chromatic confocal profilometer for quantification of surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural and unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y, z, and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the combined standard uncertainty (u_c), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of -0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols, the profilometer successfully characterized surface texture changes in enamel from erosive wear in vitro.
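The Type B combination step described above follows the usual GUM pattern of adding independent standard uncertainty components in quadrature. A minimal sketch, with placeholder component values rather than the paper's actual budget entries:

    import math

    # Type B budget sketch (ISO GUM): independent standard uncertainties
    # combined in quadrature. Component values are placeholders, not the
    # paper's budget entries.
    components_um = {
        "stage flatness deviation": 0.26,
        "measurement noise": 0.03,
        "x, y, z non-linearity": 0.04,
        "sample dimensional instability": 0.03,
    }
    u_c = math.sqrt(sum(u ** 2 for u in components_um.values()))
    print(f"combined standard uncertainty u_c = ±{u_c:.2f} µm")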
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Xin, Xiaozhou; Peng, Zhiqing; Zhang, Hailong; Li, Li; Shao, Shanshan; Liu, Qinhuo
2017-10-01
Evapotranspiration (ET) plays an important role in surface-atmosphere interactions and can be monitored using remote sensing data. The visible infrared imaging radiometer suite (VIIRS) sensor belongs to a new generation of optical satellite sensors that provide daily global coverage at 375- to 750-m spatial resolutions with 22 spectral channels (0.412 to 12.05 μm), and it is capable of monitoring ET from regional to global scales. However, few studies have focused on methods of acquiring ET from VIIRS images. The objective of this study is to introduce an algorithm that uses VIIRS data and meteorological variables to estimate the energy budgets of land surfaces, including the net radiation, soil heat flux, sensible heat flux, and latent heat flux. A single-source model based on the surface energy balance equation is used to obtain surface heat fluxes within the Zhangye oasis in China. The results were validated using observations collected during the HiWATER (Heihe Watershed Allied Telemetry Experimental Research) project. To facilitate comparison, we also used moderate resolution imaging spectrometer (MODIS) data to retrieve the regional surface heat fluxes. The validation results show that it is feasible to estimate the turbulent heat flux based on the VIIRS sensor and that these data have certain advantages (i.e., the mean bias error of sensible heat flux is 15.23 W m-2) compared with MODIS data (i.e., the mean bias error of sensible heat flux is -29.36 W m-2). Error analysis indicates that, in our model, the accuracies of the estimated sensible heat fluxes depend on the errors in the retrieved surface temperatures and the canopy heights.
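In single-source energy balance models of this kind, the latent heat flux is typically obtained as the residual of the surface energy balance once the other three terms are estimated. A minimal sketch, assuming the standard closure Rn = G + H + LE and hypothetical flux values:

    # Energy balance residual sketch: with net radiation Rn, soil heat
    # flux G, and sensible heat flux H estimated from satellite data and
    # meteorology, the latent heat flux LE closes the budget.
    # Inputs are hypothetical values in W m^-2.
    def latent_heat_flux(Rn, G, H):
        return Rn - G - H

    print(latent_heat_flux(Rn=520.0, G=80.0, H=150.0))  # -> 290.0 W m^-2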
AFGL Atmospheric Constituent Profiles (0.120km)
1986-05-15
compilations and (d) individual constituents. Each species is followed by the set of journal references which contributed either directly or indirectly to...enced materials; those publications that can be associated with particular molecules are so identified. 3. ERROR ESTIMATES/VARIABILITY The practical...budgets, J. Geophys. Res., 88, 10785-10807. [NO, NO2, HNO3, NO3] Louisnard, N., Fergant, G., Girard, A., Gramont, L., Lado-Bordowsky, O., Laurent, J.
In Situ Metrology for the Corrective Polishing of Replicating Mandrels
2010-06-08
Presented at Mirror Technology Days, Boulder, Colorado, USA, 7-9 June 2010. The International X-ray Observatory (IXO) will require mandrel metrology with extremely tight tolerances on mirrors with up to 1.6 meter radii...ideal. Error budgets for the IXO mirror segments are presented. A potential solution is presented that uses a voice-coil controlled gauging head, air...
Simoens, Steven
2011-01-01
This study aims to compute the budget impact of lacosamide, a new adjunctive therapy for partial-onset seizures in epilepsy patients from 16 years of age who are uncontrolled and having previously used at least three anti-epileptic drugs from a Belgian healthcare payer perspective. The budget impact analysis compared the 'world with lacosamide' to the 'world without lacosamide' and calculated how a change in the mix of anti-epileptic drugs used to treat uncontrolled epilepsy would impact drug spending from 2008 to 2013. Data on the number of patients and on the market shares of anti-epileptic drugs were taken from Belgian sources and from the literature. Unit costs of anti-epileptic drugs originated from Belgian sources. The budget impact was calculated from two scenarios about the market uptake of lacosamide. The Belgian target population is expected to increase from 5333 patients in 2008 to 5522 patients in 2013. Assuming that the market share of lacosamide increases linearly over time and is taken evenly from all other anti-epileptic drugs (AEDs), the budget impact of adopting adjunctive therapy with lacosamide increases from €5249 (0.1% of reference drug budget) in 2008 to €242,700 (4.7% of reference drug budget) in 2013. Assuming that 10% of patients use standard AED therapy plus lacosamide, the budget impact of adopting adjunctive therapy with lacosamide is around €800,000-900,000 per year (or 16.7% of the reference drug budget). Adjunctive therapy with lacosamide would raise drug spending for this patient population by as much as 16.7% per year. However, this budget impact analysis did not consider the fact that lacosamide reduces costs of seizure management and withdrawal. The literature suggests that, if savings in other healthcare costs are taken into account, adjunctive therapy with lacosamide may be cost saving.
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
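The paper derives its own slippage formula; the sketch below implements one simple illustrative variant, assuming units are infested independently with rate p, each inspected infested unit is detected with probability d, and a lot is rejected on any detection.

    def expected_slippage(N, n, p, d):
        """Expected accepted infested units for one lot.

        Illustrative model only: units infested independently with rate p,
        each inspected infested unit detected with probability d, lot
        rejected on any detection.
        """
        p_accept = (1.0 - p * d) ** n                 # no detections in sample
        infested_unsampled = (N - n) * p              # uninspected units
        infested_sampled = n * p * (1.0 - d) / (1.0 - p * d)  # missed in sample
        return p_accept * (infested_unsampled + infested_sampled)

    # e.g. a 1000-unit lot, 60 units sampled, 2% infestation, 80% detection
    print(expected_slippage(N=1000, n=60, p=0.02, d=0.8))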
Analyses and forecasts with LAWS winds
NASA Technical Reports Server (NTRS)
Wang, Muyin; Paegle, Jan
1994-01-01
Horizontal fluxes of atmospheric water vapor are studied for summer months during 1989 and 1992 over North and South America based on analyses from the European Center for Medium Range Weather Forecasts, the US National Meteorological Center, and the United Kingdom Meteorological Office. The calculations are performed over 20 deg by 20 deg box-shaped midlatitude domains located to the east of the Rocky Mountains in North America and to the east of the Andes Mountains in South America. The fluxes are determined from operational center gridded analyses of wind and moisture. Differences in the monthly mean moisture flux divergence determined from these analyses are as large as 7 cm/month precipitable water equivalent over South America, and 3 cm/month over North America. Gridded analyses at higher spatial and temporal resolution exhibit better agreement in the moisture budget study. However, significant discrepancies in the moisture flux divergence computed from the different gridded analyses still exist. This conclusion is more pessimistic than Rasmusson's estimate based on station data. Further analysis reveals that the most significant sources of error result from model surface elevation fields, gaps in the data archive, and uncertainties in the wind and specific humidity analyses. Uncertainties in the wind analyses are the most important problem. The low-level jets, in particular, are substantially different in the different data archives. Part of the reason for this may be the way the different analysis models parameterized physical processes affecting low-level jets. The results support the inference that the noise/signal ratio of the moisture budget may be improved more rapidly by providing better wind observations and analyses than by providing better moisture data.
Solution of Stochastic Capital Budgeting Problems in a Multidivisional Firm.
1980-06-01
linear programming with simple recourse (see, for example, Dantzig (9) or Ziemba (35)) and has been applied to capital budgeting problems with... New York, 1972. 34. Weingartner, H.M., Mathematical Programming and Analysis of Capital Budgeting Problems, Markham Pub. Co., Chicago, 1967. 35. Ziemba
Break-even Analysis: Tool for Budget Planning
ERIC Educational Resources Information Center
Lohmann, Roger A.
1976-01-01
Multiple funding creates special management problems for the administrator of a human service agency. This article presents a useful analytic technique adapted from business practice that can help the administrator draw up and balance a unified budget. Such a budget also affords a reliable overview of the agency's financial status. (Author)
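The break-even technique referred to rests on a single relation: the volume at which revenue covers fixed plus variable costs. A minimal sketch with hypothetical agency figures:

    # Break-even sketch: the service volume at which program revenue
    # covers fixed plus variable costs. Figures are hypothetical.
    fixed_costs = 120_000.0          # annual overhead
    fee_per_client = 75.0            # unit revenue
    variable_cost_per_client = 45.0  # unit cost

    break_even = fixed_costs / (fee_per_client - variable_cost_per_client)
    print(f"break-even volume: {break_even:.0f} clients per year")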
CAUT Analysis of Federal Budget 2013
ERIC Educational Resources Information Center
Canadian Association of University Teachers, 2013
2013-01-01
The 2013 federal Budget was, ironically, delivered the same day the Parliamentary Budget Officer was in court seeking more information about the impact of the government's $5.2 billion in spending cuts announced last year. The lack of budgetary transparency and accountability has become a hallmark of the Conservative government. Anyone expecting…
Analysis of surface energy budget data over varying land-cover conditions.
USDA-ARS?s Scientific Manuscript database
The surface energy budget plays an important role in boundary-layer meteorology and quantifying these budgets over varying land surface types is important in studying land-atmosphere interactions. In late April 2007, eddy covariance towers were erected at four sites in the Little Washita Watershed i...
NASA Astrophysics Data System (ADS)
Conley, Stephen A.
Energy exchange between the tropical oceans and the atmosphere plays an important role in the climate of the planet. By far the most abundant form of this transfer occurs in regions of shallow (generally non-precipitating) convection that takes place underneath the gentle lid of the trade wind inversion. Understanding the atmospheric dynamics and exchange of chemical species between the ocean and atmosphere in this region is a critical step on the path to accurate modeling of the earth's climate. This work focuses on dimethyl sulfide (DMS), ozone (O3), and the boundary layer dynamics of the region. In the marine boundary layer (MBL), DMS and O3 both exhibited the well-known diurnal cycle of buildup at night followed by daytime destruction. DMS ranged from 50-95 pptv in the daytime to 90-110 pptv at night, and O3 from 16-18 ppb during the daytime to 17-21 ppb at night. Contributions from horizontal advection are included using a multivariate regression of the observed mixing ratio as a function of time and space within the MBL to estimate the mean gradients and trends. With this technique we can use the residual term in the budget as an estimate of overall photochemical oxidation. Error analysis of the various terms in the DMS budget indicates that chemical losses acting on time scales of up to 110 hours can be inferred with this technique. On average, photochemistry accounted for a ~7.4 ppt hr-1 loss rate for the seven daytime flights, with an estimated error of 0.6 ppt hr-1. The loss rate due to expected OH oxidation is sufficient to explain the net DMS destruction without invoking the action of additional oxidants (e.g., reactive halogens). The observed ocean flux of DMS averaged 3.1 (+/- 1.5) μmol m-2 d-1, and generally decreased throughout the sunlit hours. Averaged over the mission, horizontal advection was negligible in the DMS budget but was significant in the budgets of individual flights. The ozone budget included the same dynamical terms as the DMS budget but also included loss to photolysis, OH, and HO2. Photolysis is the dominant chemical sink (~0.29 ppb hr-1). Horizontal advection and vertical flux divergence contribute similar amounts to the budget (0.08 and 0.06 ppb hr-1, respectively). The advective source is consistent with the picture from the Total Ozone Mapping Spectrometer (TOMS), indicating higher levels of ozone upwind from the PASE region. The entrainment flux from the FT to the BuL was estimated at 0.07 ppb m s-1. A budget of turbulent kinetic energy (TKE) exhibited evenly distributed shear production throughout the MBL along with an expected linear profile of buoyancy production. Two loci of approximately equal parts shear production, transport, and buoyancy production sustain TKE in the BuL at levels of ~70% of that within the MBL. A mean cloud fraction profile from the experiment confirms a bimodal distribution of trade wind cumuli with a major peak at the top and a secondary peak in the lower third of the BuL, consistent with the picture of shallow convection supplying the bulk of the TKE to this layer, but not uniformly in the vertical. Surface latent heat fluxes were measured by eddy covariance and were on average found to be 30% less than the standard NOAA bulk model. The Bowen ratio averaged 0.05 with very little flight-to-flight variability (+/-0.03). The observed east-southeasterly winds averaged 8 m s-1 (at 10 meters) in this region feeding into the ITCZ located at approximately 10 degrees N. On most flights a low-level jet was observed either within or just above the BuL.
During the four-week mission, SST over the entire region decreased by 1.5°C as a tropical instability wave brought colder water to the equatorial mid-Pacific, with surface winds increasing by 0.5 m s-1 during the experiment. The shape of the cospectra between vertical wind speed and potential temperature exhibited the traditional Kaimal form; however, the water vapor and DMS cospectra exhibited less power at the highest frequencies, with their cospectral peaks shifted toward larger scales. (Abstract shortened by UMI.)
High-frequency variations in Earth rotation and the planetary momentum budget
NASA Technical Reports Server (NTRS)
Rosen, Richard D.
1995-01-01
The major focus of the subject contract was on helping to resolve one of the more notable discrepancies still existing in the axial momentum budget of the solid Earth-atmosphere system, namely the disappearance of coherence between length-of-day (l.o.d.) and atmospheric angular momentum (AAM) at periods shorter than about a fortnight. Recognizing the importance of identifying the source of the high-frequency momentum budget anomaly, the scientific community organized two special measurement campaigns (SEARCH '92 and CONT '94) to obtain the best possible determinations of l.o.d. and AAM. An additional goal was to analyze newly developed estimates of the torques that transfer momentum between the atmosphere and its underlying surface to determine whether the ocean might be a reservoir of momentum on short time scales. Discrepancies between AAM and l.o.d. at sub-fortnightly periods have been attributed to either measurement errors in these quantities or the need to incorporate oceanic angular momentum into the planetary budget. Results from the SEARCH '92 and CONT '94 campaigns suggest that when special attention is paid to the quality of the measurements, better agreement between l.o.d. and AAM at high frequencies can be obtained. The mechanism most responsible for the high-frequency changes observed in AAM during these campaigns involves a direct coupling to the solid Earth, i.e., the mountain torque, thereby obviating a significant oceanic role.
An Analysis of the President’s 2014 Budget
2013-05-01
Administration's, and incorporates estimates by the staff of the Joint Committee on Taxation (JCT) for the President's tax proposals. ...consequences for the budget. (For more details about the President's tax proposals, see Joint Committee on Taxation, Estimated Budget Effects of the Revenue...) Sources: Congressional Budget Office; staff of the Joint Committee on Taxation. Note: n.a. = not applicable; GDP = gross domestic product.
Belgian guidelines for budget impact analyses.
Neyt, M; Cleemput, I; Sande, S Van De; Thiry, N
2015-06-01
To develop methodological guidelines for budget impact analyses submitted to the Belgian health authorities as part of a reimbursement request. A review of the literature was performed and provided the basis for preliminary budget impact guidelines. These guidelines were improved after discussion with health economists from the Belgian Health Care Knowledge Centre (KCE) and different Belgian stakeholders from both government and industry. The preliminary guidelines were also discussed in a workshop with health economists from the German Institute for Quality and Efficiency in Healthcare. Finally, the guidelines were externally validated by three external experts. The guidelines give explicit guidance for the following components of a budget impact analysis: perspective of the evaluation, target population, comparator, costs, time horizon, modeling, handling of uncertainty, and discount rate. Special attention is given to handling varying target population sizes over time, applying a time horizon up to the steady state instead of short-term predictions, and similarities and differences between budget impact analysis and economic evaluations. The guidelines provide a framework for both researchers and assessors to set up budget impact analyses that are transparent, relevant, of high quality, and apply a consistent methodology. This might improve the extent to which such evaluations can reliably and consistently be used in the reimbursement decision-making process.
[Use of medical inpatient services by heavy users: a case of hypochondriasis].
Höfer, Peter; Ossege, Michael; Aigner, Martin
2012-01-01
Hypochondriasis is defined in ICD-10 and DSM-IV by the persistent preoccupation with the possibility of having one or more serious and progressive physical disorders. Patients suffering from hypochondriasis can be responsible for high utilization of mental health system services. Data have shown that "heavy users" require a disproportionate share of inpatient admissions and mental health budget costs. We assume that a psychotherapeutic approach targeting a cognitive behavioral model, in combination with neuropsychopharmacological treatment, is useful. In our case report we present the "heavy using" phenomenon based on a patient hospitalized predominantly in neurological inpatient care facilities. From a medical point of view, we want to point out possible treatment errors; on the other hand, we want to raise awareness of the financial and socioeconomic factors leading to a massive burden on the global mental health budget.
Improvements in lake water budget computations using Landsat data
NASA Technical Reports Server (NTRS)
Gervin, J. C.; Shih, S. F.
1979-01-01
A supervised multispectral classification was performed on Landsat data for Lake Okeechobee's extensive littoral zone to provide two types of information. First, the acreage of a given plant species as measured by satellite was combined with a more accurate transpiration rate to give a better estimate of evapotranspiration from the littoral zone. Second, the surface area covered by plant communities was used to develop a better estimate of the water surface as a function of lake stage. Based on this information, more detailed representations of evapotranspiration and total water surface (and hence total lake volume) were provided to the water balance budget model for lake volume predictions. The model results based on information derived from the satellite data demonstrated a 94 percent reduction in cumulative lake stage error and a 70 percent reduction in the maximum deviation of the lake stage.
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
The future X-ray observatory missions, such as the International X-ray Observatory, require grazing incidence replicated optics of extremely large collecting area (3 m2) in combination with an angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrels from which the shells are replicated. Mid-spatial-frequency axial figure error is a dominant contributor to the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, a model for localized polishing based on a dwell-time approach, driven by the surface error profile, was developed. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
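Dwell-time polishing is commonly posed as a deconvolution problem: predicted removal is the convolution of a tool influence function (TIF) with the dwell-time map, which is then solved for subject to non-negative dwell. The paper's actual model is not reproduced here; the following is a one-dimensional illustrative sketch with hypothetical parameters.

    import numpy as np
    from scipy.optimize import nnls

    # 1-D dwell-time sketch: removal = TIF (*) dwell, solved for dwell >= 0.
    n = 200
    x = np.linspace(0.0, 1.0, n)
    target_removal = 1e-6 * (1.0 + np.sin(12 * np.pi * x))   # figure error, m

    offsets = np.arange(-10, 11)
    tif = np.exp(-0.5 * (offsets / 4.0) ** 2)                # Gaussian TIF
    tif *= 1e-7 / tif.sum()                                  # removal per unit dwell

    # Convolution matrix A so that A @ dwell approximates the removal.
    A = np.zeros((n, n))
    for off, t in zip(offsets, tif):
        for i in range(n):
            j = i - off
            if 0 <= j < n:
                A[i, j] = t

    dwell, resid_norm = nnls(A, target_removal)              # non-negative dwell
    print("rms residual figure error:", resid_norm / np.sqrt(n))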
Detecting and Understanding Changing Arctic Carbon Emissions
NASA Astrophysics Data System (ADS)
Bruhwiler, L.
2017-12-01
Warming in the Arctic has proceeded faster than anywhere else on Earth. Our current understanding of biogeochemistry suggests that we can expect feedbacks between climate and carbon in the Arctic. Changes in terrestrial fluxes of carbon can be expected as the Arctic warms, and the vast stores of organic carbon frozen in Arctic soils could be mobilized to the atmosphere, with possible significant impacts on global climate. Quantifying trends in Arctic carbon exchanges is important for policymaking because greater reductions in anthropogenic emissions may be required to meet climate goals. Observations of greenhouse gases in the Arctic and globally have been collected for several decades. Analysis of these data does not currently support significantly changed Arctic emissions of CH4; however, it is difficult to detect changes in Arctic emissions because of transport from lower latitudes and large inter-annual variability. Unfortunately, current space-based remote sensing systems have limitations at Arctic latitudes. Modeling systems can help untangle the Arctic budget of greenhouse gases, but they are dependent on underlying prior fluxes, wetland distributions, and global anthropogenic emissions. Also, atmospheric transport models may have significant biases and errors; for example, unrealistic near-surface stability can lead to underestimation of emissions in atmospheric inversions. We discuss our current understanding of the Arctic carbon budget from both top-down and bottom-up approaches. We show that current atmospheric inversions agree well on the CH4 budget. On the other hand, bottom-up models vary widely in their predictions of natural emissions, with some models predicting emissions too large to be accommodated by the budget implied by global observations. Large emissions from the shallow Arctic Ocean are also inconsistent with atmospheric observations. We also discuss the sensitivity of the current atmospheric network to what are likely small, gradual increases in emissions over time by examining modeled and observed spatial and seasonal variability. A final issue we consider is whether well-mixed background atmospheric records are more likely to detect changing Arctic emissions than the stronger but more variable signals from local sources.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Metz, P. A.
2014-12-01
Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
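The general pattern of the approach described (Latin Hypercube sampling via pyDOE, distributions from Scipy.stats, and the SWGW exchange computed as the residual of the sampled budget terms) can be sketched as follows; the distributions and magnitudes are placeholders, not the study's calibrated inputs.

    import numpy as np
    from scipy import stats
    from pyDOE import lhs

    # LHS Monte Carlo sketch of a monthly budget residual:
    # SWGW = P - ET - Qout (storage-change terms omitted for brevity).
    # Distributions and magnitudes are placeholders.
    n_samples = 10_000
    design = lhs(3, samples=n_samples)        # uniform [0, 1] LHS design

    P    = stats.norm(loc=120.0, scale=10.0).ppf(design[:, 0])    # rainfall, mm
    ET   = stats.uniform(loc=60.0, scale=40.0).ppf(design[:, 1])  # ET, mm
    Qout = stats.norm(loc=25.0, scale=5.0).ppf(design[:, 2])      # canal flow, mm

    swgw = P - ET - Qout
    lo, hi = np.percentile(swgw, [2.5, 97.5])
    print(f"SWGW exchange: mean {swgw.mean():.1f} mm, 95% interval [{lo:.1f}, {hi:.1f}] mm")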
Budget impact analysis of drugs for ultra-orphan non-oncological diseases in Europe.
Schlander, Michael; Adarkwah, Charles Christian; Gandjour, Afschin
2015-02-01
Ultra-orphan diseases (UODs) have been defined by a prevalence of less than 1 per 50,000 persons. However, little is known about the budget impact of ultra-orphan drugs. The budget impact analysis (BIA) had a time horizon of 10 years (2012-2021) and a pan-European payer's perspective, based on prevalence data for UODs for which patented drugs are available and/or for which drugs are in clinical development. A total of 18 drugs under patent protection or orphan drug designation for non-oncological UODs were identified. Furthermore, 29 ultra-orphan drugs for non-oncological diseases under development that have the potential of reaching the market by 2021 were found. Total budget impact over 10 years was estimated to be €15,660 million and €4965 million for approved and pipeline ultra-orphan drugs, respectively (total: €20,625 million). The analysis does not support concerns regarding an uncontrolled growth in expenditures for drugs for UODs.
Chiggiato, Jacopo; Zavatarelli, Marco; Castellari, Sergio; Deserti, Marco
2005-12-15
Surface heat fluxes of the Adriatic Sea are estimated for the period 1998-2001 through bulk formulae, with the goal of assessing the uncertainties related to their estimation and describing their interannual variability. In addition, a comparison to observations is conducted. We computed the components of the sea surface heat budget by using two different operational meteorological data sets as inputs: the ECMWF operational analysis and the regional limited-area model LAMBO operational forecast. Both results are consistent with previous long-term climatology and short-term analyses present in the literature. In both cases we found that the Adriatic Sea loses 26 W/m2 on average, which is consistent with the assessments found in the literature. We then conducted a comparison with observations of the radiative components of the heat budget collected on offshore platforms and one coastal station. In the case of shortwave radiation, the results show a slight overestimation on an annual basis: 172 W/m2 when using ECMWF data and 169 W/m2 when using LAMBO data. The use of either Schiano's or Gilman and Garrett's corrections helps bring the values even closer. The comparison is more difficult to assess in the case of longwave radiation, with relative errors on the order of 10-20%.
Patterned wafer geometry grouping for improved overlay control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.
2017-03-01
Process-induced overlay errors originating outside the litho cell, including non-uniform wafer stress, have become a significant contributor to the overlay error budget. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. A key challenge of volume semiconductor manufacturing is to improve not only the magnitude of these signatures but also the wafer-to-wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process-induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.
CAUT Analysis of Federal Budget 2012
ERIC Educational Resources Information Center
Canadian Association of University Teachers, 2012
2012-01-01
The 2012 federal Budget marks the beginning of a painful and unnecessary fiscal retrenchment. Despite boasting one of the lowest debt-to-GDP ratios amongst industrialized countries, the Conservative government is pressing ahead with deep cuts of more than $5 billion across departmental budgets by 2014-15. For post-secondary education and research,…
Budget Brief: 2015 Proposed Budget Milwaukee Public Schools
ERIC Educational Resources Information Center
Allen, Vanessa; Chapman, Anne; Henken, Rob
2014-01-01
In this report, the authors provide a detailed analysis of the major changes in revenue and expenditures in the Milwaukee Public Schools (MPS) 2015 proposed budget, and the manner in which MPS has responded to recent legislative changes and turbulent workforce challenges. The objective is to provide an independent assessment of the district's…
Daly, Rich
2011-11-21
Providers say the administration's growing emphasis on billing audits is pushing them to the limit and threatens to increase their costs. Many billing problems stem from simple errors, not fraud, they say. "When you get into the nuts and bolts of some of these programs you realize it's not as easy as taking the overpayment line out of the budget," says Michael Regier, of VHA.
2015-01-01
emissivity and the radiative intensity of the gas over a spectral band. The temperature is then calculated from the Planck function. The technique does not... A pressure budget for cooling channels reduces pump horsepower and turbine inlet temperature... Status of modeling and simulation: the existing data set for film cooling effectiveness consists of wall heat flux measurements; CFD...
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1982-01-01
The improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines with GPS geodetic receivers would be only about 1 cm.
Simultaneous orbit determination
NASA Technical Reports Server (NTRS)
Wright, J. R.
1988-01-01
Simultaneous orbit determination is demonstrated using live range and Doppler data for the NASA/Goddard tracking configuration defined by the White Sands Ground Terminal (WSGT), the Tracking and Data Relay Satellite (TDRS), and the Earth Radiation Budget Satellite (ERBS). A physically connected sequential filter-smoother was developed for this demonstration. Rigorous necessary conditions are used to show that the state error covariance functions are realistic; and this enables the assessment of orbit estimation accuracies for both TDRS and ERBS.
Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing
2006-09-01
Ranganathan, "Power Management – Guest Lecture for CS4135, NPS," Naval Postgraduate School, Nov 2004. [32] R. L. Phelps, "Operational Experiences with the..." ... Faults, while undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant
Estimating terrestrial aboveground biomass using lidar remote sensing: a meta-analysis
NASA Astrophysics Data System (ADS)
Zolkos, S. G.; Goetz, S. J.; Dubayah, R.
2012-12-01
Estimating the biomass of terrestrial vegetation is a rapidly expanding research area, but also a subject of tremendous interest for reducing carbon emissions associated with deforestation and forest degradation (REDD). The accuracy of biomass estimates is important in the context of carbon markets emerging under REDD, since areas with more accurate estimates command higher prices, but also for characterizing uncertainty in estimates of carbon cycling and the global carbon budget. There is particular interest in mapping biomass so that carbon stocks and stock changes can be monitored consistently across a range of scales - from relatively small projects (tens of hectares) to national or continental scales - and so that other benefits of forest conservation can be factored into decision making (e.g., biodiversity and habitat corridors). We conducted an analysis of reported biomass accuracy estimates from more than 60 refereed articles using different remote sensing platforms (aircraft and satellite) and sensor types (optical, radar, lidar), with a particular focus on lidar, since those papers reported the greatest efficacy (lowest errors) when lidar was used in a synergistic manner with other coincident multi-sensor measurements. We show systematic differences in accuracy between different types of lidar systems flown on different platforms but, perhaps more importantly, differences between forest types (biomes) and plot sizes used for field calibration and assessment. We discuss these findings in relation to monitoring, reporting and verification under REDD, and also in the context of more systematic assessment of factors that influence accuracy and error estimation.
A Multi-Objective Decision-Making Model for Resources Allocation in Humanitarian Relief
2007-03-01
Applied Mathematics and Computation 163, 2005, p. 756. 19. Malczewski, J., GIS and Multicriteria Decision Analysis, John Wiley and Sons, New York... used when interpreting the results of the analysis (Raimo et al. 2002). (7) Sensitivity analysis: sensitivity analysis in a DA process answers... Budget scenario analysis: the MILP is solved (using LINDO 6.1) for high, medium, and low budget scenarios at both damage degree levels. Tables 17 and
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
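A link budget of the kind mentioned reduces to decibel bookkeeping: received Eb/No is the EIRP minus path and miscellaneous losses plus the terminal figure of merit, referenced to the data rate. The sketch below uses hypothetical values, not the Inmarsat figures from the study, except for the 3.75 dB Eb/No requirement quoted above.

    import math

    # Generic satellite link budget in dB. Values are hypothetical
    # placeholders except the 3.75 dB Eb/No requirement quoted above.
    eirp_dbw     = 25.0                    # satellite EIRP
    path_loss_db = 188.0                   # free-space loss, L-band GEO
    misc_loss_db = 2.0                     # atmosphere, pointing, polarization
    g_over_t_dbk = -13.0                   # receive terminal figure of merit
    k_dbw        = -228.6                  # Boltzmann constant, dBW/(K Hz)
    rate_db      = 10 * math.log10(20500)  # 20.5 kbit/s channel rate

    ebno_db = eirp_dbw - path_loss_db - misc_loss_db + g_over_t_dbk - k_dbw - rate_db
    print(f"Eb/No = {ebno_db:.2f} dB, margin = {ebno_db - 3.75:.2f} dB")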
Energy budgets of animals: behavioral and ecological implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, W P
1979-01-01
This year's progress has been: (1) to extend the general microclimate model in two ways: (a) to incorporate wet ground surfaces (bogs), and (b) to incorporate slope effects. Tests of the model in a Michigan bog and the Galapagos Islands show temperature accuracies to within 4°C at worst at any soil or air location, which is about a 2% error in the estimation of metabolism. (2) To add to ectotherm modeling an analysis of: (a) reproduction in heterogeneous and uncertain environments; (b) prediction of distribution limits due to egg incubation requirements; (c) appendage-torso modeling and tests on large ectotherms; and (d) social system interactions with environmental and physiological variables. (3) To continue the endotherm (deer mouse) experimental research and extend the growth and reproduction studies to include the entire reproductive and growth cycle in the deer mouse.
NASA Technical Reports Server (NTRS)
Weber, C. L.; Udalov, S.; Alem, W.
1977-01-01
The performance of the space shuttle orbiter's Ku-Band integrated radar and communications equipment is analyzed for the radar mode of operation. The block diagram of the rendezvous radar subsystem is described. Power budgets for passive target detection are calculated, based on the estimated values of system losses. Requirements for processing of radar signals in the search and track modes are examined. Time multiplexed, single-channel, angle tracking of passive scintillating targets is analyzed. Radar performance in the presence of main lobe ground clutter is considered and candidate techniques for clutter suppression are discussed. Principal system parameter drivers are examined for the case of stationkeeping at ranges comparable to target dimension. Candidate ranging waveforms for short range operation are analyzed and compared. The logarithmic error discriminant utilized for range, range rate and angle tracking is formulated and applied to the quantitative analysis of radar subsystem tracking loops.
The Phoretic Motion Experiment (PME) definition phase
NASA Technical Reports Server (NTRS)
Eaton, L. R.; Neste, S. L. (Editor)
1982-01-01
The aerosol generator and the charge flow device (CFD) chamber, which were designed for zero-gravity operation, were analyzed. Characteristics of the CFD chamber and aerosol generator that would be useful for cloud physics experimentation in a one-g as well as a zero-g environment are documented. The Collison type of aerosol generator is addressed. Relationships among the various input and output parameters are derived and subsequently used to determine the requirements on the controls of the input parameters to assure a given error budget for an output parameter. The CFD chamber operation in a zero-g environment is assessed using a computer simulation program. Low nuclei critical supersaturations and high experiment accuracies are emphasized, which lead to droplet growth times extending into hundreds of seconds. The analysis was extended to assess the performance constraints of the CFD chamber in a one-g environment operating in the horizontal mode.
NASA Astrophysics Data System (ADS)
Takenaka, Hideki; Koyama, Yoshisada; Akioka, Maki; Kolev, Dimitar; Iwakiri, Naohiko; Kunimori, Hiroo; Carrasco-Casado, Alberto; Munemasa, Yasushi; Okamoto, Eiji; Toyoshima, Morio
2016-03-01
Research and development of space optical communications is conducted at the National Institute of Information and Communications Technology (NICT). NICT developed the Small Optical TrAnsponder (SOTA), which was embarked on a 50 kg-class satellite and launched into low Earth orbit (LEO). Space-to-ground laser communication experiments have been conducted with SOTA. Atmospheric turbulence causes signal fading and is an issue to be solved in satellite-to-ground laser communication links. Therefore, as error-correcting functions, a Reed-Solomon (RS) code and a Low-Density Generator Matrix (LDGM) code are implemented in the communication system onboard SOTA. In this paper, we present the in-orbit verification results of SOTA, including the characteristics of these functions, the communication performance with the LDGM code over satellite-to-ground atmospheric paths, the link budget analysis, and the comparison between theoretical and experimental results.
A Statistical Theory of Bidirectionality
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2013-01-01
Original concepts related to the quantification and assessment of bidirectionality in strain-gage balances were introduced by Ulbrich in 2012. These concepts are extended here in three ways: 1) the metric originally proposed by Ulbrich is normalized, 2) a categorical variable is introduced in the regression analysis to account for load polarity, and 3) the uncertainty in both normalized and non-normalized bidirectionality metrics is quantified. These extensions are applied to four representative balances to assess the bidirectionality characteristics of each. The paper is tutorial in nature, featuring reviews of certain elements of regression and formal inference. Principal findings are that bidirectionality appears to be a common characteristic of most balance outputs and that unless it is taken into account, it is likely to consume the entire error budget of a typical balance calibration experiment. Data volume and correlation among calibration loads are shown to have a significant impact on the precision with which bidirectionality metrics can be assessed.
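One common way to introduce a categorical load-polarity variable into a regression, as described above, is via an indicator column in an ordinary least-squares fit. The sketch below uses synthetic data and is not Ulbrich's actual balance calibration model.

    import numpy as np

    # Regression with a categorical polarity variable: output modeled as
    # b0 + b1*load + b2*indicator(load < 0). Synthetic data, not the
    # paper's balance calibration model.
    rng = np.random.default_rng(0)
    load = rng.uniform(-100.0, 100.0, 500)
    polarity = (load < 0).astype(float)
    output = 2.0 * load + 0.8 * polarity + rng.normal(0.0, 0.1, load.size)

    X = np.column_stack([np.ones_like(load), load, polarity])
    coef, *_ = np.linalg.lstsq(X, output, rcond=None)
    print("intercept, slope, polarity shift:", np.round(coef, 3))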
NASA Technical Reports Server (NTRS)
Tighe, R. J.; Shen, M. Y. H.
1984-01-01
The Nimbus 7 ERB MATRIX Tape is produced by a computer program in which radiances and irradiances are converted into fluxes, which are used to compute the basic scientific output parameters: emitted flux, albedo, and net radiation. These are spatially averaged and presented as time averages over one-day, six-day, and monthly periods. MATRIX data for the period November 16, 1978 through October 31, 1979 are presented. Described are the Earth Radiation Budget experiment, the Science Quality Control Report, the items checked by the MATRIX Science Quality Control Program, and the Science Quality Control Data Analysis Report. Additional material from the detailed scientific quality control of the tapes, which may be very useful to a user of the MATRIX tapes, is included. Known errors and data problems and some suggestions on how to use the data for further climatologic and atmospheric physics studies are also discussed.
Link performance optimization for digital satellite broadcasting systems
NASA Astrophysics Data System (ADS)
de Gaudenzi, R.; Elia, C.; Viola, R.
The authors introduce the concept of digital direct satellite broadcasting (D-DBS), which allows unprecedented flexibility by providing a large number of audiovisual services. The concept assumes an information rate of 40 Mb/s, which is compatible with practically all present-day transponders. After discussion of the general system concept, the results of transmission system optimization are presented. Channel and interference effects are taken into account. Numerical results show that the scheme with the best performance is trellis-coded 8-PSK (phase shift keying) modulation concatenated with a Reed-Solomon block code. For a net data rate of 40 Mb/s, a bit error rate of 10^-10 can be achieved with an equivalent bit-energy-to-noise-density ratio (Eb/No) of 9.5 dB, including channel, interference, and demodulator impairments. A link budget analysis shows how a medium-power direct-to-home TV satellite can provide multimedia services to users equipped with small (60-cm) dish antennas.
Uncertainty Budget Analysis for Dimensional Inspection Processes (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdez, Lucas M.
2012-07-26
This paper is intended to provide guidance and describe how to prepare an uncertainty analysis of a dimensional inspection process through the utilization of an uncertainty budget analysis. The uncertainty analysis follows the same methodology as the ISO GUM standard for calibration and testing. A specific distinction is made between how Type A and Type B uncertainty analyses are used in a general and a specific process. All theory and applications are utilized to represent both a generalized approach to estimating measurement uncertainty and how to report and present these estimations for dimensional measurements in a dimensional inspection process. The analysis of this uncertainty budget shows that a well-controlled dimensional inspection process produces a conservative process uncertainty, which can be attributed to the necessary assumptions in place for best possible results.
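A minimal sketch of the ISO GUM-style roll-up such an uncertainty budget performs: sensitivity-weighted standard uncertainties combined in quadrature, then expanded at a coverage factor k = 2. The entries are placeholders, not values from the report.

    import math

    # GUM-style roll-up: sensitivity-weighted standard uncertainties
    # combined in quadrature, then expanded at k = 2. Placeholder entries.
    budget = [
        # (source,                type, sensitivity c_i, u_i in µm)
        ("gauge repeatability",   "A",  1.0, 0.15),
        ("master artifact cal.",  "B",  1.0, 0.10),
        ("thermal expansion",     "B",  1.0, 0.08),
        ("operator / fixturing",  "B",  1.0, 0.12),
    ]
    u_c = math.sqrt(sum((c * u) ** 2 for _source, _type, c, u in budget))
    print(f"u_c = {u_c:.3f} µm, expanded U(k=2) = {2 * u_c:.3f} µm")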
A blinded determination of H0 from low-redshift Type Ia supernovae, calibrated by Cepheid variables
NASA Astrophysics Data System (ADS)
Zhang, Bonnie R.; Childress, Michael J.; Davis, Tamara M.; Karpenka, Natallia V.; Lidman, Chris; Schmidt, Brian P.; Smith, Mathew
2017-10-01
Presently, a >3σ tension exists between values of the Hubble constant H0 derived from analysis of fluctuations in the cosmic microwave background by Planck, and local measurements of the expansion using calibrators of Type Ia supernovae (SNe Ia). We perform a blinded re-analysis of Riess et al. (2011) to measure H0 from low-redshift SNe Ia, calibrated by Cepheid variables and geometric distances including to NGC 4258. This paper is a demonstration of techniques to be applied to the Riess et al. (2016) data. Our end-to-end analysis starts from available Harvard-Smithsonian Center for Astrophysics (CfA3) and Lick Observatory Supernova Search (LOSS) photometries, providing an independent validation of Riess et al. (2011). We obscure the value of H0 throughout our analysis and the first stage of the referee process, because calibration of SNe Ia requires a series of often subtle choices, and the potential for results to be affected by human bias is significant. Our analysis departs from that of Riess et al. (2011) by incorporating the covariance matrix method adopted in the Supernova Legacy Survey and the Joint Lightcurve Analysis to quantify SN Ia systematics, and by including a simultaneous fit of all SN Ia and Cepheid data. We find H0 = 72.5 ± 3.1 (stat) ± 0.77 (sys) km s-1 Mpc-1 with a three-galaxy (NGC 4258+LMC+MW) anchor. The relative uncertainties are 4.3 per cent statistical, 1.1 per cent systematic, and 4.4 per cent total, larger than in Riess et al. (2011) (3.3 per cent total) and the Efstathiou (2014) re-analysis (3.4 per cent total). Our error budget for H0 is dominated by statistical errors due to the small size of the SN sample, whilst the systematic contribution is dominated by variation in the Cepheid fits and, for the SNe Ia, by uncertainties in the host galaxy mass dependence and Malmquist bias.
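The quoted 4.4 per cent total follows from combining the statistical and systematic parts in quadrature:

    \sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{stat}}^2 + \sigma_{\mathrm{sys}}^2}
                          = \sqrt{(4.3\,\%)^2 + (1.1\,\%)^2} \approx 4.4\,\%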
NASA Astrophysics Data System (ADS)
Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; Song, Yang; Karion, Anna; Oda, Tomohiro; Patarasuk, Risa; Razlivanov, Igor; Sarmiento, Daniel; Shepson, Paul; Sweeney, Colm; Turnbull, Jocelyn; Wu, Kai
2016-05-01
Based on a uniquely dense network of surface towers continuously measuring the atmospheric concentrations of greenhouse gases (GHGs), we developed the first comprehensive monitoring system of CO2 emissions at high resolution over the city of Indianapolis. The urban inversion evaluated over the 2012-2013 dormant season showed a statistically significant increase of about 20% (from 4.5 to 5.7 MtC ± 0.23 MtC) compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product. Spatial structures in the prior emission errors, mostly undetermined, appeared to affect the spatial pattern in the inverse solution and the total carbon budget over the entire area by up to 15%, while the inverse solution remains fairly insensitive to the CO2 boundary inflow and to the different prior emissions (i.e., ODIAC). Preceding the surface emission optimization, we improved the atmospheric simulations using a meteorological data assimilation system that also informs our Bayesian inversion system through updated observation error variances. Finally, we estimated the uncertainties associated with undetermined parameters using an ensemble of inversions. The total CO2 emissions based on the ensemble mean and quartiles (5.26-5.91 MtC) were statistically different from the prior total emissions (4.1 to 4.5 MtC). Considering the relatively small sensitivity to the different parameters, we conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHGs over the city, but additional information on prior emission error structures is required to determine the spatial structures of urban emissions at high resolution.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Robertson, Franklin R.; Chen, Junye
2008-01-01
The Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis has produced several years of data, on the way to completing the 1979-present modern satellite era. Here, we present a preliminary evaluation of those years currently available, including comparisons with the existing long reanalyses (ERA40, JRA25, and NCEP I and II) as well as with global data sets for the water and energy cycles. Time series show that the MERRA budgets can change with some of the variations in observing systems. We will present all terms of the budgets in MERRA, including the time rates of change and analysis increments (tendency due to the analysis of observations).
Aerocapture Performance Analysis of A Venus Exploration Mission
NASA Technical Reports Server (NTRS)
Starr, Brett R.; Westhelle, Carlos H.
2005-01-01
A performance analysis of a Discovery-class Venus exploration mission, in which aerocapture is used to capture a spacecraft into a 300 km polar orbit for a two-year science mission, has been conducted to quantify its performance. A preliminary performance assessment determined that a high-heritage 70-degree sphere-cone rigid aeroshell with a 0.25 lift-to-drag ratio has adequate control authority to provide an entry flight path angle corridor large enough for the mission's aerocapture maneuver. A reference vehicle with a ballistic coefficient of 114 kilograms per square meter was developed from the science requirements and the preliminary assessment's heating indicators and deceleration loads. Performance analyses were conducted for the reference vehicle and for sensitivity studies on vehicle ballistic coefficient and maximum bank rate. The performance analyses used a high-fidelity flight simulation within a Monte Carlo executive to define the aerocapture heating environment and deceleration loads and to determine mission success statistics. The simulation utilized the Program to Optimize Simulated Trajectories (POST), which was modified to include Venus-specific atmospheric and planet models, aerodynamic characteristics, and interplanetary trajectory models. In addition to the Venus-specific models, an autonomous guidance system, HYPAS, and a pseudo flight controller were incorporated in the simulation. The Monte Carlo analyses incorporated a reference set of approach trajectory delivery errors, aerodynamic uncertainties, and atmospheric density variations. The reference performance analysis determined that the reference vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 90 meters per second delta-V budget for post-aerocapture orbital adjustments. A ballistic coefficient trade study conducted with the reference uncertainties determined that the 0.25 L/D vehicle can achieve 100% successful capture with a ballistic coefficient of 228 kilograms per square meter, and that the increased ballistic coefficient increases the post-aerocapture delta-V budget to 134 meters per second for a 99.87% probability of attaining the science orbit. A trade study on vehicle bank rate determined that the 0.25 L/D vehicle can achieve 100% successful capture when the maximum bank rate is decreased from 30 deg/s to 20 deg/s. The decreased bank rate increases the post-aerocapture delta-V budget to 102 meters per second for a 99.87% probability of attaining the science orbit.
ERIC Educational Resources Information Center
LaFaive, Michael D.
2004-01-01
As the debate rages in Lansing over the size and scope of the 2004-2005 Fiscal Year state budget, the Mackinac Center for Public Policy is republishing and updating budget cutting ideas from its March 2003 study, "Recommendations to Strengthen Civil Society and Balance Michigan's Budget." The 2003 study made over 200 recommendations…
Automatic performance budget: towards a risk reduction
NASA Astrophysics Data System (ADS)
Laporte, Philippe; Blake, Simon; Schmoll, Jürgen; Rulten, Cameron; Savoie, Denis
2014-08-01
In this paper, we discuss the performance matrix of the SST-GATE telescope, developed to allow us to partition and allocate the important characteristics to the various subsystems as well as to describe the process used to verify that the current design will deliver the required performance. Due to the integrated nature of the telescope, a large number of parameters have to be controlled, and effective calculation tools, such as an automatic performance budget, must be developed. Its main advantages consist in alleviating the work of the system engineer when changes occur in the design, in avoiding errors during any re-allocation process, and in automatically recalculating the scientific performance of the instrument. We explain in this paper the method used to convert the ensquared energy (EE) and the signal-to-noise ratio (SNR) required by the science cases into allocations for the "as designed" instrument. To ensure successful design, integration and verification of the next generation of instruments, it is of the utmost importance to have methods to control and manage the instrument's critical performance characteristics from its very early design steps, to limit technical and cost risks in the project development. Such a performance budget is a tool towards this goal.
Effects of Energy Needs and Expenditures on U.S. Public Schools. Statistical Analysis Report.
ERIC Educational Resources Information Center
Smith, Timothy; Porch, Rebecca; Farris, Elizabeth; Fowler, William
This report provides national estimates on energy needs and expenditures of U.S. public school districts. The survey provides estimates of Fiscal Year (FY) 2000 energy expenditures, FY 2001 energy budgets and expenditures, and FY 2002 energy budgets; methods used to cover energy budget shortfalls in FY 2001; and possible reasons for those…
Summary and Analysis of President Obama's Education Budget Request, Fiscal Year 2012: Issue Brief
ERIC Educational Resources Information Center
New America Foundation, 2011
2011-01-01
President Barack Obama submitted his third budget request to Congress on February 14th, 2011. The detailed budget request includes proposed funding levels for federal programs and agencies in aggregate for the upcoming 10 fiscal years, and specific fiscal year 2012 funding levels for individual programs subject to appropriations. Congress will use…
The Budget Enforcement Act: Implications for Children and Families.
ERIC Educational Resources Information Center
Baehler, Karen
This analysis of the Budget Enforcement Act of 1990 (BEA) and its implications for public financing of education and other children's services notes that voters want more and better education and related services, and at the same time want to pay less in taxes and balance budgets at every governmental level. The first section details recent…
Analysis of President Bush's Education Budget Request: Fiscal Year 2009
ERIC Educational Resources Information Center
New America Foundation, 2009
2009-01-01
President George W. Bush submitted his eighth and final budget request to the Congress on Monday. Under the proposal, fiscal year 2009 discretionary spending--spending subject to annual appropriations--would be at the same level as in the prior year for domestic programs and agencies not involved in homeland security efforts. The budget request…
ERIC Educational Resources Information Center
Lyons, Lucy Eleonore; Blosser, John
2012-01-01
The "Comprehensive Allocation Process" (CAP) is a reproducible decision-making structure for the allocation of new collections funds, for the reallocation of funds within stagnant budgets, and for budget cuts in the face of reduced funding levels. This system was designed to overcome common shortcomings of current methods. Its philosophical…
Design of an environmental field observatory for quantifying the urban water budget
Claire Welty; Andrew J. Miller; Kenneth T. Belt; James A. Smith; Lawrence E. Band; Peter M. Groffman; Todd M. Scanlon; Juying Warner; Robert J. Ryan; Robert J. Shedlock; Michael P. McGuire
2007-01-01
Quantifying the water budget of urban areas presents special challenges, owing to the influence of subsurface infrastructure that can cause short-circuiting of natural flowpaths. In this paper we review some considerations for data collection and analysis in support of determining urban water budget components, with a particular emphasis on groundwater, using Baltimore...
The DOE water cycle pilot study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. L.; King, A. W.; Miller, M. A.
In 1999, the U.S. Global Change Research Program (USGCRP) formed a Water Cycle Study Group (Hornberger et al. 2001) to organize research efforts in regional hydrologic variability, the extent to which this variability is caused by human activity, and the influence of ecosystems. The USGCRP Water Cycle Study Group was followed by a U.S. Department of Energy (DOE) Water Cycle Research Plan (Department of Energy 2002) that outlined an approach toward improving seasonal-to-interannual hydroclimate predictability and closing a regional water budget. The DOE Water Cycle Research Plan identified key research areas, including a comprehensive long-term observational database to support model development, and to develop a better understanding of the relationship between the components of local water budgets and large-scale processes. In response to this plan, a multilaboratory DOE Water Cycle Pilot Study (WCPS) demonstration project began with a focus on studying the water budget and its variability at multiple spatial scales. Previous studies have highlighted the need for continued efforts to observationally close a local water budget, develop a numerical model closure scheme, and further quantify the scales at which predictive accuracy is optimal. A concerted effort within the National Oceanic and Atmospheric Administration (NOAA)-funded Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) put forth a strategy to understand various hydrometeorological processes and phenomena with an aim toward closing the water and energy budgets of regional watersheds (Lawford 1999, 2001). The GCIP focus on such regional budgets includes the measurement of all components and the reduction of the error in the budgets to near zero. To approach this goal, quantification of the uncertainties in both measurements and modeling is required. Model uncertainties within regional climate models continue to be evaluated within the Program to Intercompare Regional Climate Simulations (Takle et al. 1999), and model uncertainties within land surface models are being evaluated within the Program to Intercompare Land Surface Schemes (e.g., Henderson-Sellers 1993; Wood et al. 1998; Lohmann et al. 1998). In the context of understanding the water budget at watershed scales, the following two research questions, which highlight DOE's unique water isotope analysis and high-performance modeling capabilities, were posed as the foci of this pilot study: (1) Can the predictability of the regional water budget be improved using high-resolution model simulations that are constrained and validated with new hydrospheric water measurements? (2) Can water isotopic tracers be used to segregate different pathways through the water cycle and predict a change in regional climate patterns? To address these questions, numerical studies using regional atmospheric-land surface models and multiscale land surface hydrologic models were generated and, to the extent possible, the results were evaluated with observations. While the number of potential processes that may be important in the local water budget is large, several key processes were examined in detail. Most importantly, a concerted effort was made to understand water cycle processes and feedbacks at the land surface-atmosphere interface at spatial scales ranging from 30 m to hundreds of kilometers.
A simple expression for the land surface water budget at the watershed scale is ΔS = P + G_in - ET - Q - G_out, where ΔS is the change in water storage, P is precipitation, ET is evapotranspiration, Q is streamflow, G_in is groundwater entering the watershed, and G_out is groundwater leaving the watershed, per unit time. The WCPS project identified data gaps and necessary model improvements that will lead to a more accurate representation of the terms in this equation. Table 1 summarizes the components of this water cycle pilot study and the respective participants. The following section provides a description of the surface observation and modeling sites. This is followed by a section on model analyses, and then the summary and concluding remarks.
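As a worked example of the budget equation above, the following sketch evaluates the storage-change residual for one set of hypothetical monthly fluxes (all numbers invented):

```python
def storage_change(P, G_in, ET, Q, G_out):
    """Watershed water budget per unit time, dS = P + G_in - ET - Q - G_out.
    All terms must share one unit, e.g. mm of water per month."""
    return P + G_in - ET - Q - G_out

# Hypothetical monthly values in mm; a residual that persists beyond
# measurement error is exactly the closure problem WCPS set out to quantify.
dS = storage_change(P=80.0, G_in=5.0, ET=55.0, Q=20.0, G_out=4.0)
print(f"storage change: {dS:+.1f} mm/month")  # +6.0 mm/month
```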
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed, and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses for chronic conditions. In addition, the majority of the models did not capture all drug-related costs. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
NASA Astrophysics Data System (ADS)
Lv, M.; Ma, Z.; Yuan, X.
2017-12-01
It is important to evaluate the water budget closure on the basis of the currently available data, including precipitation, evapotranspiration (ET), runoff, and GRACE-derived terrestrial water storage change (TWSC), before using them to resolve water-related issues. However, it remains challenging to achieve the balance without the consideration of human water use (e.g., inter-basin water diversion and irrigation) in the estimation of other water budget terms such as the ET. In this study, the terrestrial water budget closure is tested over the Yellow River Basin (YRB) and Changjiang River Basin (CJB, Yangtze River Basin) of China. First, the actual ET is reconstructed by using the GLDAS-1 land surface models, high-quality observation-based precipitation, naturalized streamflow, and the irrigation water (hereafter, ETrecon). The ETrecon, evaluated using the mean annual water-balance equation, is of good quality, with absolute relative errors of less than 1.9% over the two studied basins. The total basin discharge (Rtotal) is calculated as the residual of the water budget among the observation-based precipitation, ETrecon, and the GRACE-TWSC. The value of Rtotal minus the observed total basin discharge is used to evaluate the budget closure, with the consideration of inter-basin water diversion. After the ET reconstruction, the mean absolute imbalance value was reduced from 3.31 cm/year to 1.69 cm/year and from 15.40 cm/year to 1.96 cm/year over the YRB and CJB, respectively. The estimation-to-observation ratios of total basin discharge improved from 180.8% to 86.8% over the YRB, and from 67.0% to 101.1% over the CJB. The proposed ET reconstruction method is applicable to other human-managed river basins to provide an alternative estimation.
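The closure test described above amounts to recovering discharge as the budget residual and comparing it with observations. The sketch below reproduces that logic on invented annual series (none of the paper's numbers are used):

```python
import numpy as np

# Hypothetical annual values in cm/yr for one basin.
P        = np.array([65.0, 70.0, 62.0])   # precipitation
ET_recon = np.array([45.0, 47.0, 44.0])   # reconstructed evapotranspiration
dTWS     = np.array([1.0, -2.0, 0.5])     # GRACE-derived storage change
R_obs    = np.array([18.0, 24.0, 17.0])   # observed total basin discharge

# Discharge recovered as the budget residual: R_total = P - ET_recon - dTWS.
R_total = P - ET_recon - dTWS
imbalance = np.mean(np.abs(R_total - R_obs))      # cm/yr
ratio = 100.0 * R_total.sum() / R_obs.sum()       # estimation-to-observation, %
print(f"mean absolute imbalance: {imbalance:.2f} cm/yr, ratio: {ratio:.1f}%")
```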
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Pérez, Rafael
2017-04-01
The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability) - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method - Quantification of the volume error uncertainty as a function of the key factors. In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy, given the cross-sectional variability of a gully and the measurement method applied. References: Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
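The stages listed above map directly onto a short Monte Carlo loop. The sketch below (in Python rather than the authors' Matlab, with illustrative parameter values throughout) shows how cross-section variability, survey intensity, and measurement precision combine into a volume error distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def volume_error(n_sections=200, spacing=1.0, survey_step=10,
                 meas_cv=0.05, variability=0.4, n_trials=2000):
    """Monte Carlo estimate of the relative volume error of a 2D gully
    survey. Cross-section areas vary along the reach (lognormal, controlled
    by `variability`); the survey measures every `survey_step`-th section
    with multiplicative noise `meas_cv`. All parameters are illustrative."""
    errors = np.empty(n_trials)
    for i in range(n_trials):
        areas = np.exp(rng.normal(np.log(2.0), variability, n_sections))  # m^2
        v_true = areas.sum() * spacing
        measured = areas[::survey_step] * rng.normal(
            1.0, meas_cv, areas[::survey_step].size)
        v_est = measured.mean() * n_sections * spacing
        errors[i] = (v_est - v_true) / v_true
    return np.percentile(np.abs(errors), 95)

for step in (5, 10, 20):  # denser to sparser surveys
    print(f"step {step:>2}: 95th-pct |volume error| = "
          f"{volume_error(survey_step=step):.1%}")
```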
Mehrotra, Sanjay; Kim, Kibaek
2011-12-01
We consider the problem of outcomes-based budget allocations to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) of the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and inverse linear programming techniques are presented and used to study the actual 2009 budget allocation by the CDC. Our results show that the CDC budget allocation process for the DPCPs is not likely model based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions to reduce disparity across the US.
NASA Technical Reports Server (NTRS)
Wier, C. E.; Wobber, F. J.; Russell, O. R.; Martin, K. R. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Mined land reclamation analysis procedures developed within the Indiana portion of the Illinois Coal Basin were independently tested in Ohio utilizing 1:80,000 scale enlargements of ERTS-1 image 1029-15361-7 (dated August 21, 1972). An area in Belmont County was selected for analysis due to the extensive surface mining and the different degrees of reclamation occurring in this area. Contour mining in this area provided the opportunity to extend techniques developed for analysis of relatively flat mining areas in Indiana to areas of rolling topography in Ohio. The analysts had no previous experience in the area. Field investigations largely confirmed office analysis results although in a few areas estimates of vegetation percentages were found to be too high. In one area this error approximated 25%. These results suggest that systematic ERTS-1 analysis in combination with selective field sampling can provide reliable vegetation percentage estimates in excess of 25% accuracy with minimum equipment investment and training. The utility of ERTS-1 for practical and reasonably reliable update of mined lands information for groups with budget limitations is suggested. Many states can benefit from low cost updates using ERTS-1 imagery from public sources.
40-Gb/s PAM4 with low-complexity equalizers for next-generation PON systems
NASA Astrophysics Data System (ADS)
Tang, Xizi; Zhou, Ji; Guo, Mengqi; Qi, Jia; Hu, Fan; Qiao, Yaojun; Lu, Yueming
2018-01-01
In this paper, we demonstrate 40-Gb/s four-level pulse amplitude modulation (PAM4) transmission with 10 GHz devices and low-complexity equalizers for next-generation passive optical network (PON) systems. A simple feed-forward equalizer (FFE) and decision feedback equalizer (DFE) enable 20 km fiber transmission, while a high-complexity Volterra algorithm in combination with FFE and DFE can extend the transmission distance to 40 km. A simplified Volterra algorithm is proposed to reduce computational complexity. Simulation results show that the simplified Volterra algorithm reduces computational complexity by up to ~75% at a cost of only 0.4 dB in power budget. At a forward error correction (FEC) threshold of 10^-3, we achieve 31.2 dB and 30.8 dB power budget over 40 km fiber transmission using the traditional FFE-DFE-Volterra and our simplified FFE-DFE-Volterra, respectively.
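For readers unfamiliar with the equalizers named above, the following sketch trains a generic linear FFE by least squares on a toy dispersive channel. It is illustrative only (invented channel taps, lengths, and noise level) and is not the paper's FFE-DFE-Volterra scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

levels = np.array([-3.0, -1.0, 1.0, 3.0])
symbols = rng.choice(levels, 5000)             # known PAM4 training sequence
channel = np.array([0.1, 0.8, 0.25, 0.1])      # toy dispersive channel (ISI)
rx = np.convolve(symbols, channel, mode="same")
rx += rng.normal(0.0, 0.05, rx.size)           # additive noise

# Train FFE taps by least squares: each received window should map to the
# transmitted symbol at the chosen delay.
n_taps, delay = 9, 4
X = np.array([rx[i:i + n_taps] for i in range(rx.size - n_taps)])
d = symbols[delay:delay + X.shape[0]]
w, *_ = np.linalg.lstsq(X, d, rcond=None)

# Equalize and slice to the nearest PAM4 level.
eq = X @ w
decisions = levels[np.abs(eq[:, None] - levels).argmin(axis=1)]
print(f"symbol error rate after FFE: {np.mean(decisions != d):.4f}")
```

A DFE would extend this by feeding past decisions back through a second filter; the Volterra stage adds nonlinear (product) terms to the regression, which is where the complexity the paper tries to reduce comes from.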
Pellicle transmission uniformity requirements
NASA Astrophysics Data System (ADS)
Brown, Thomas L.; Ito, Kunihiro
1998-12-01
Controlling critical dimensions of devices is a constant battle for the photolithography engineer. Current DUV lithographic process exposure latitude is typically 12 to 15% of the total dose. A third of this exposure latitude budget may be used up by a variable related to masking that has not previously received much attention. The emphasis on pellicle transmission has been focused on increasing the average transmission; much less attention has been paid to transmission uniformity. This paper explores the total demand on the photospeed latitude budget and the causes of pellicle transmission nonuniformity, and examines reasonable expectations for pellicle performance. Modeling is used to examine how the two primary errors in pellicle manufacturing contribute to nonuniformity in transmission. World-class pellicle transmission uniformity standards are discussed, and a comparison is made with the specifications of other components in the photolithographic process. Specifications for other materials or parameters are used as benchmarks to develop a proposed industry standard for pellicle transmission uniformity.
The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System's Error Budget Revisited.
1985-05-08
[Fragmentary abstract] Systematic biases can also be induced by equipment not associated with the system; a systematic bias of 68 μgal was observed by the Istituto di Metrologia "G. Colonnetti" (IMGC), Torino, Italy. The source also references the Joint Institute for Laboratory Astrophysics, Univ. of Colo., Boulder, Colo., and a table of absolute gravity values. Comparison measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters, operated by several participating agencies.
Geostationary Operational Environmental Satellite (GOES-N report). Volume 2: Technical appendix
NASA Technical Reports Server (NTRS)
1992-01-01
The contents include: operation with inclinations up to 3.5 deg to extend life; earth sensor improvements to reduce noise; sensor configurations studied; momentum management system design; reaction wheel induced dynamic interaction; controller design; spacecraft motion compensation; analog filtering; GFRP servo design (modern control approach); feedforward compensation as applied to the GOES-1 sounder; discussion of the allocation of the navigation, in-frame registration, and image-to-image error budget; and a spatial response and cloud smearing study.
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission key goals include enabling observation of high-accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar (RS) spectroradiometer, an emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm over a full field-of-view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements, including use of integrating spheres, transfer radiometers and spectral standards, combined with field-based solar and lunar acquisitions, are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.
The next generation in optical transport semiconductors: IC solutions at the system level
NASA Astrophysics Data System (ADS)
Gomatam, Badri N.
2005-02-01
In this tutorial overview, we survey some of the challenging problems facing optical transport and their solutions using new semiconductor-based technologies. Advances in 0.13um CMOS, SiGe/HBT and InP/HBT IC process technologies and mixed-signal design strategies are the fundamental breakthroughs that have made these solutions possible. In combination with innovative packaging and transponder/transceiver architectures, IC approaches have clearly demonstrated enhanced optical link budgets with simultaneously lower (perhaps the lowest to date) cost and manufacturability tradeoffs. This paper will describe: *Electronic Dispersion Compensation, broadly viewed as overcoming dispersion-based limits to OC-192 links and extending link budgets, *Error Control/Coding, also known as Forward Error Correction (FEC), *Adaptive Receivers for signal quality monitoring, i.e., real-time estimation of Q/OSNR, eye pattern, signal BER and related temporal statistics (such as jitter). We will discuss the theoretical underpinnings of these receiver and transmitter architectures, provide examples of system performance and conclude with general market trends. These physical-layer IC solutions represent a fundamental new toolbox of options for equipment designers in addressing system-level problems. With unmatched cost and yield/performance tradeoffs, it is expected that IC approaches will in turn provide significant flexibility for carriers and service providers who must ultimately manage the network and assure acceptable quality of service under stringent cost constraints.
Optomechanical design of the vacuum compatible EXCEDE's mission testbed
NASA Astrophysics Data System (ADS)
Bendek, Eduardo A.; Belikov, Ruslan; Lozi, Julien; Schneider, Glenn; Thomas, Sandrine; Pluzhnik, Eugene; Lynch, Dana
2014-08-01
In this paper we describe the opto-mechanical design, tolerance error budget and alignment strategies used to build the Starlight Suppression System (SSS) for NASA's Exoplanetary Circumstellar Environments and Disk Explorer (EXCEDE) mission. EXCEDE is a highly efficient 0.7 m space telescope concept designed to directly image and spatially resolve circumstellar disks with as little as 10 zodis of circumstellar dust, as well as large planets. The main focus of this work was the design of a vacuum-compatible opto-mechanical system that allows remote alignment and operation of the main components of the EXCEDE SSS, which are: a Phase Induced Amplitude Apodization (PIAA) coronagraph to provide high throughput and high contrast at an inner working angle (IWA) equal to the diffraction limit (IWA = 1.2 λ/D), a wavefront (WF) control system based on a Micro-Electro-Mechanical-System deformable mirror (MEMS DM), and a low-order wavefront sensor (LOWFS) for fine pointing and centering. We describe the alignment strategy and tolerance error budget for this system, which is especially relevant to achieving the theoretical performance that the PIAA coronagraph can offer. We also discuss the vacuum cabling design for the actuators, cameras and the deformable mirror. This design has been implemented at the vacuum chamber facility at Lockheed Martin (LM), and builds on successful technology development at the Ames Coronagraph Experiment (ACE) facility.
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2017-12-01
Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered as powerful tools in predicting future hydrological conditions in watershed systems in the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation as the former is the most commonly-used indicator of the overall water budget estimation, and the latter is the second biggest component of water budget (biggest outflow from the system). One of the main concerns in watershed scale hydrological modeling is the uncertainties associated with model prediction, which could arise from errors in model parameters and input meteorological data, or errors in model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties are vital to water resources managers for proper decision making based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on the future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically-calibrated, physically-based, semi-distributed hydrological model. The results of this study could provide valuable insights in applying hydrological models in large scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on interested hydrological process variables under different climate change scenarios.
NASA Astrophysics Data System (ADS)
Fathrio, Ibnu; Manda, Atsuyoshi; Iizuka, Satoshi; Kodama, Yasu-Masa; Ishida, Sachinobu
2018-05-01
This study presents an ocean heat budget analysis of sea surface temperature (SST) anomalies during strong and weak Asian summer (southwest) monsoons. As discussed in previous studies, there is a close relationship between variations of the Asian summer monsoon and the SST anomaly in the western Indian Ocean. In this study we utilized ocean heat budget analysis to elucidate the dominant mechanism responsible for generating the SST anomaly during weak and strong boreal summer monsoons. Our results showed that ocean advection plays a more important role in initiating the SST anomaly than the atmospheric process (surface heat flux). Scatterplot analysis showed that vertical advection initiated the SST anomaly in the western Arabian Sea and southwestern Indian Ocean, while zonal advection initiated the SST anomaly in the western equatorial Indian Ocean.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Robertson, F. R.; Chen, J.
2010-01-01
The Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis has completed 27 years of data, soon to be caught up to the present. Here, we present an evaluation of those years currently available, including comparisons with the existing long reanalyses (ERA40, JRA25, and NCEP I and II) as well as with global data sets for the water and energy cycle. Time series show that the MERRA budgets can change with some of the variations in observing systems, but that the magnitude of the energy imbalance in the system is improved with more observations. We will present all terms of the budgets in MERRA, including the time rates of change and the analysis increments (the tendency due to the analysis of observations).
Engineering Considerations Applied to Starshade Repointing
NASA Technical Reports Server (NTRS)
Rioux, Norman; Dichmann, Donald; Domagal-Goldman, Shawn; Mandell, Avi; Roberge, Aki; Starke, Chris; Stoneking, Eric; Willis, Dewey
2016-01-01
Engineering analysis has been carried out on the orbit dynamics that drive the delta-v budget for repointing a free-flying starshade occulter for viewing exoplanets with a space telescope. This analysis has application to the design of starshade spacecraft and to yield calculations for observations of exoplanets using a space telescope and a starshade. Analysis was carried out to determine whether there may be some advantage for the global delta-v budget if the telescope performs orbit-changing delta-v maneuvers as part of the telescope-starshade alignment for observing exoplanets. Analysis of the orbit environmental forces at play found no significant advantage in having the telescope participate in delta-v maneuvers for exoplanet observation repointing. A separate analysis of starshade delta-v for repointing found that the orbit dynamics of the starshade are driven by multiple simultaneous variables that need to be considered together in order to create an effective estimate of delta-v over an exoplanet observation campaign. These include the area of the starshade, the dry mass of the starshade spacecraft, and the propellant mass of the starshade spacecraft. Solar radiation pressure (SRP) has the potential to play a dominant role in the orbit dynamics and delta-v budget. SRP effects are driven by the differences in the mass, area, and coefficients of reflectivity of the observing telescope and the starshade. The propellant budget cannot be effectively estimated without a conceptual design of a starshade spacecraft including the propulsion system. The varying propellant mass over the mission is a complexity that makes calculating the propellant budget less straightforward.
In-die photomask registration and overlay metrology with PROVE using 2D correlation methods
NASA Astrophysics Data System (ADS)
Seidel, D.; Arnz, M.; Beyer, D.
2011-11-01
According to the ITRS roadmap, the semiconductor industry is driving 193 nm lithography to its limits, using techniques like double exposure, double patterning, mask-source optimization and inverse lithography. For photomask metrology this translates to full in-die measurement capability for registration and critical dimension, together with challenging specifications for repeatability and accuracy. In particular, overlay becomes more and more critical and must be ensured on every die. For this, Carl Zeiss SMS has developed the next-generation photomask registration and overlay metrology tool PROVE®, which serves the 32nm node and below and which is already well established in the market. PROVE® features highly stable hardware components for the stage and environmental control. To ensure in-die measurement capability, sophisticated image analysis methods based on 2D correlations have been developed. In this paper we demonstrate the in-die capability of PROVE® and present corresponding measurement results for short-term and long-term measurements, as well as the attainable accuracy for feature sizes down to 85nm using different illumination modes and mask types. Standard measurement methods based on threshold criteria are compared with the new 2D correlation methods to demonstrate the performance gain of the latter. In addition, mask-to-mask overlay results of typical box-in-frame structures down to 200nm feature size are presented. It is shown that from overlay measurements a reproducibility budget can be derived that takes into account stage, image analysis and global effects like mask loading and environmental control. The parts of the budget are quantified from measurement results to identify critical error contributions and to focus on the corresponding improvement strategies.
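As background to the 2D correlation methods mentioned above (this is the generic idea, not the PROVE® algorithm), the sketch below registers a noisy, shifted copy of an image by locating the peak of the FFT-based cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(3)

img = rng.normal(size=(128, 128))
img[60:68, 40:48] += 5.0                    # a bright square "feature"
true_shift = (7, -3)
shifted = np.roll(img, true_shift, axis=(0, 1))
shifted += rng.normal(0.0, 0.2, img.shape)  # measurement noise

# Cross-correlation via FFT: its peak sits at the relative shift.
xc = np.fft.ifft2(np.fft.fft2(shifted) * np.conj(np.fft.fft2(img))).real
peak = np.unravel_index(xc.argmax(), xc.shape)
# Map peak indices to signed shifts (wrap-around beyond half the size).
est = tuple(int(p) - s if p > s // 2 else int(p)
            for p, s in zip(peak, xc.shape))
print(f"true shift {true_shift}, estimated shift {est}")
```

Production metrology tools refine the integer peak with sub-pixel interpolation of the correlation surface; the threshold-based methods compared in the paper instead locate edges directly in the image.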
ERIC Educational Resources Information Center
Nieb, Sharon Lynn
2014-01-01
This single-site qualitative study sought to identify the characteristics that contribute to the self sustainability of technology transfer services at universities with small research budgets through a case study analysis of a small research budget university that has been operating a financially self-sustainable technology transfer service for…
Proposed U.S. Space Weather Budget for Fiscal Year 2011 Would Fund Key Programs
NASA Astrophysics Data System (ADS)
Showstack, Randy
2010-09-01
The proposed U.S. federal budget for space weather research for fiscal year (FY) 2011 would provide funding for key space weather programs within several U.S. agencies, including NASA, NOAA, the National Science Foundation (NSF), and the Air Force. Funding for the programs comes ahead of the upcoming solar maximum, a period of the solar cycle with heightened solar activity, projected for 2013. Several officials indicated that while funding is not tied to a particular solar maximum or minimum, available assets could help with studying and preparing for the solar maximum. The proposed FY 2011 budget for the Heliophysics Division within NASA's Science Mission Directorate is $641.9 million, compared with the FY 2010 enacted budget of $627.4 million. Within the proposed budget is $166.9 million for heliophysics research, down slightly from $173 million for FY 2010. The proposed budget would include $31.7 million for heliophysics research and analysis (compared with $31 million for FY 2010); $66.7 million for “other missions and data analysis,” including Cluster II, the Advanced Composition Explorer (ACE), and the Time History of Events and Macroscale Interactions during Substorms (THEMIS) mission; and $48.9 million for sounding rockets.
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Cess, Robert D.; Charlock, Thomas P.; Coakley, James A.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 1 provides both summarized and detailed overviews of the CERES Release 1 data analysis system. CERES will produce global shortwave and longwave radiative fluxes at the top of the atmosphere, at the surface, and within the atmosphere by using a combination of a large variety of measurements and models. The CERES processing system includes radiance observations from CERES scanning radiometers, cloud properties derived from coincident satellite imaging radiometers, temperature and humidity fields from meteorological analysis models, and high-temporal-resolution geostationary satellite radiances to account for unobserved times. CERES will provide a continuation of the ERBE record and the lowest-error climatology of consistent cloud properties and radiation fields. CERES will also substantially improve our knowledge of the Earth's surface radiation budget.
... In 1999, President Clinton announced the Leadership and Investment in Fighting an Epidemic (LIFE) Initiative to address ... Foundation analysis of data from the Office of Management and Budget, Agency Congressional Budget Justifications, and Congressional ...
SHARK-NIR system design analysis overview
NASA Astrophysics Data System (ADS)
Viotto, Valentina; Farinato, Jacopo; Greggio, Davide; Vassallo, Daniele; Carolo, Elena; Baruffolo, Andrea; Bergomi, Maria; Carlotti, Alexis; De Pascale, Marco; D'Orazi, Valentina; Fantinel, Daniela; Magrin, Demetrio; Marafatto, Luca; Mohr, Lars; Ragazzoni, Roberto; Salasnich, Bernardo; Verinaud, Christophe
2016-08-01
In this paper, we present an overview of the System Design Analysis carried out for SHARK-NIR, the coronagraphic camera designed to take advantage of the outstanding performance that can be obtained with the FLAO facility at the LBT in the near-infrared regime. Born as a fast-track project, the system now foresees both coronagraphic direct imaging and spectroscopic observing modes, together with a first-order wavefront correction tool. The analysis we report here includes several trade-offs for the selection of the baseline design, in terms of optical and mechanical engineering, and the choice of the coronagraphic techniques to be implemented, to satisfy both the main scientific drivers and the technical requirements set at the level of the telescope. Further attention has been given to the possible exploitation of synergies with other LBT instrumentation, like LBTI. A set of system specifications is then flowed down from the upper-level requirements to finally ensure the fulfillment of the science drivers. The preliminary performance budgets are presented, both in terms of the stability of the main optical planes and of the image quality, including the contributions of the main error sources in different observing modes.
NASA Astrophysics Data System (ADS)
Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi
2018-03-01
One of the main components of an On-Product Overlay (OPO) error budget is the process-induced wafer error. This necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution to improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool, and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described, and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.
Astrometry for New Reductions: The ANR method
NASA Astrophysics Data System (ADS)
Robert, Vincent; Le Poncin-Lafitte, Christophe
2018-04-01
Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of the planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog, and its complete release afterward, the methods for ground-based astrometry have become outdated: their formal accuracy now lags behind that of the catalog used. Systematic and zonal errors of the reference stars are eliminated, and the astrometric process now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system, and define the transformation between theoretical reference data and ground-based astrometric observables.
NASA Astrophysics Data System (ADS)
Sofyan, Hizir; Maulia, Eva; Miftahuddin
2017-11-01
A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator of the economic problems faced by that country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh using alpha 0.01 is the VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is the VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh, while the VECM model with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these models, two IRF structural analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
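A minimal sketch of this modeling step, using the statsmodels VECM implementation on synthetic cointegrated series standing in for the tax and inflation data (the actual Banda Aceh series are not reproduced; the lag and rank choices are illustrative):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(4)

# Two series sharing one stochastic trend, so they are cointegrated.
n = 200
common = np.cumsum(rng.normal(size=n))
tax = 1.0 * common + rng.normal(0.0, 0.5, n)
inflation = 0.6 * common + rng.normal(0.0, 0.5, n)
data = np.column_stack([tax, inflation])

# Johansen trace test for the cointegration rank at alpha = 0.05.
rank = select_coint_rank(data, det_order=0, k_ar_diff=2,
                         method="trace", signif=0.05)

# VECM with two lagged differences, cf. the "optimal lag 2" model above.
res = VECM(data, k_ar_diff=2, coint_rank=rank.rank,
           deterministic="ci").fit()
print("cointegration rank:", rank.rank)
print("adjustment coefficients (alpha):\n", res.alpha)
print("cointegrating vector (beta):\n", res.beta)
```

The IRF step would then follow from the fitted model; statsmodels exposes impulse-response tools for fitted VAR/VECM results, though the exact call may vary by version.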
Estimating the budget impact of orphan drugs in Sweden and France 2013-2020.
Hutchings, Adam; Schey, Carina; Dutton, Richard; Achana, Felix; Antonov, Karolina
2014-02-13
The growth in expenditure on orphan medicinal products (OMP) across Europe has been identified as a concern. Estimates of future expenditure in Europe have suggested that OMPs could account for a significant proportion of total pharmaceutical expenditure in some countries, but few of these forecasts have been well validated. This analysis aims to establish a robust forecast of the future budget impact of OMPs on the healthcare systems in Sweden and France. A dynamic forecasting model was created to estimate the budget impact of OMPs in Sweden and France between 2013 and 2020. The model used historical data on OMP designation and approval rates to predict the number of new OMPs coming to the market. Average OMP sales were estimated for each year post-launch by regression analysis of historical sales data. Total forecast sales were compared with expected sales of all pharmaceuticals in each country to quantify the relative budget impact. The model predicts that by 2020, 152 OMPs will have marketing authorization in Europe. The base case OMP budget impacts are forecast to grow from 2.7% in Sweden and 3.2% in France of total drug expenditure in 2013 to 4.1% in Sweden and 4.9% in France by 2020. The principal driver of expenditure growth is the number of new OMPs obtaining OMP designation. This is tempered by the slowing success rate for new approvals and the loss of intellectual property protection on existing orphan medicines. Given the forward-looking nature of the analysis, uncertainty exists around model parameters and sensitivity analysis found peak year budget impact varying between 2% and 11%. The budget impact of OMPs in Sweden and France is likely to remain sustainable over time and a relatively small proportion of total pharmaceutical expenditure. This forecast could be affected by changes in the success rate for OMP approvals, average cost of OMPs, and the type of companies developing OMPs.
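The cohort logic of such a dynamic forecast is compact enough to sketch. The inputs below (launches per year, a sales curve by year post-launch, and total drug expenditure) are invented placeholders, not the paper's estimates:

```python
import numpy as np

years = np.arange(2013, 2021)
new_omps = np.array([8, 9, 9, 10, 10, 11, 11, 12])        # launches per year
sales_by_age = np.array([5, 12, 20, 26, 30, 32, 33, 33])  # MEUR per OMP by age
total_drug_spend = 30_000.0                               # MEUR per year

# Each launch cohort contributes its age-specific sales in every later year.
impact = np.zeros(years.size)
for i, cohort in enumerate(new_omps):
    for age in range(years.size - i):
        impact[i + age] += cohort * sales_by_age[age]

share = 100.0 * impact / total_drug_spend
for y, s in zip(years, share):
    print(f"{y}: OMP share of drug budget = {s:.1f}%")
```

The paper's sensitivity range (2-11% at peak) corresponds to varying exactly these inputs: the designation and approval rates behind new_omps and the regression-estimated sales curve.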
Forward to the Future: Estimating River Discharge with McFLI
NASA Astrophysics Data System (ADS)
Gleason, C. J.; Durand, M. T.; Garambois, P. A.
2016-12-01
The global surface water budget is still poorly understood, and improving our understanding of freshwater budgets requires coordination between in situ observations, models, and remote sensing. The upcoming launch of the NASA/CNES Surface Water and Ocean Topography (SWOT) satellite has generated considerable excitement as a new tool enabling hydrologists to tackle some of the most pressing questions facing their discipline. One question in particular that SWOT seems well suited to answer is river discharge (flow rate) estimation in ungauged basins: SWOT's anticipated measurements of river surface height and area have ushered in a new technique in hydrology, what we here call Mass-conserved Flow Law Inversions, or McFLI. McFLI algorithms leverage classic hydraulic flow expressions (e.g., Manning's equation, hydraulic geometry) within mass-conserved river reaches to construct a simplified but still underconstrained system of equations to be solved for an unknown discharge. Most existing McFLI techniques have been designed to take advantage of SWOT's measurements and Manning's equation: SWOT will observe changes in cross-sectional area and river surface slope over time, so the McFLI need only solve for baseflow area and Manning's roughness coefficient. Recently published preliminary results have indicated that McFLI can be a viable tool in a global hydrologist's toolbox (discharge errors of less than 30% as compared to gauges are possible in most cases). We therefore outline the progress to date for McFLI techniques, and highlight three key areas for future development: 1) maximize the accuracy and robustness of McFLI by incorporating ancillary data from satellites, models, and in situ observations; 2) develop new McFLI techniques using novel or underutilized flow laws; and 3) systematically test McFLI to define different inversion classes of rivers with well-defined error budgets based on geography and available data, for use in gauged and ungauged basins alike.
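In the spirit of Manning-based McFLI (this toy is not any published algorithm, and real implementations add priors and more careful optimization), the sketch below inverts baseflow area and roughness for three mass-conserved reaches by penalizing disagreement among their Manning discharges. The wide-channel form Q = (1/n) A^(5/3) W^(-2/3) S^(1/2) is assumed, with dA, W, and S playing the role of SWOT observables:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

n_reach, n_t = 3, 30
A0_true = np.array([300.0, 320.0, 280.0])               # baseflow areas, m^2
n_true = np.array([0.030, 0.035, 0.028])                # Manning roughness
Q_true = 400.0 + 150.0 * np.sin(np.linspace(0, 3, n_t)) # shared discharge, m^3/s
W = rng.uniform(80.0, 120.0, (n_reach, 1)) * np.ones((1, n_t))   # widths
S = rng.uniform(1e-4, 3e-4, (n_reach, 1)) * np.ones((1, n_t))    # slopes
A = (Q_true * n_true[:, None] * W**(2/3) / np.sqrt(S))**(3/5)    # forward model
dA = A - A0_true[:, None]                               # the SWOT-like observable

def cost(params):
    # Penalize violation of mass conservation: all reaches must carry
    # the same discharge at each time step.
    A0, n = params[:n_reach], params[n_reach:]
    if np.any(A0[:, None] + dA <= 0) or np.any(n <= 0):
        return np.inf
    Q = (A0[:, None] + dA)**(5/3) * np.sqrt(S) / (n[:, None] * W**(2/3))
    return np.sum(np.var(np.log(Q), axis=0))

x0 = np.concatenate([dA.max(axis=1) + 50.0, np.full(n_reach, 0.03)])
res = minimize(cost, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-12})

A0_est, n_est = res.x[:n_reach], res.x[n_reach:]
Q_est = (A0_est[:, None] + dA)**(5/3) * np.sqrt(S) / (n_est[:, None] * W**(2/3))
err = np.median(np.abs(Q_est.mean(axis=0) - Q_true) / Q_true)
# The error may be sizeable: without priors the inversion is only weakly
# identified, which is exactly why point 1) above stresses ancillary data.
print(f"median relative discharge error: {err:.1%}")
```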
Iskrov, G; Jessop, E; Miteva-Katrandzhieva, T; Stefanov, R
2015-05-01
This study aimed to estimate the impact of rare disease (RD) drugs on Bulgaria's National Health Insurance Fund (NHIF) total drug budget for 2011-2014. While standard budget impact analysis is usually used in a prospective way, assessing the impact of new health technologies on the health system's sustainability, we adopted a retrospective approach instead. Budget impact was quantified from the NHIF perspective. Descriptive statistics were used to analyse cost details, while dynamics were studied using chain-linked growth rates (each period preceding the accounting period serves as the base). NHIF costs for RD therapies were expected to increase up to 74.5 million BGN in 2014 (7.8% of the NHIF's total pharmaceutical expenditure). The greatest increase in cost per patient and number of patients treated was observed in conditions for which therapies had been newly approved for funding. While the simple cost drivers are well known - the number of patients treated and the mean cost per patient - in real-world settings these two factors are likely to depend on the availability and accessibility of effective innovative therapies. As RD were historically underdiagnosed, undertreated and underfunded in Bulgaria, improved access to RD drugs will inevitably lead to an increasing budget burden for payers. Based on the evidence from this study, we propose a theoretical framework for a budget impact study for RD. First, a retrospective analysis could provide essential health policy insights in terms of impact on accessibility and population health, which are significant benchmarks in shaping funding decisions in healthcare. We suggest combining the classical prospective BIA with retrospective analysis in order to optimise health policy decision-making. Second, we recommend that budget impact studies focus on RD rather than orphan drugs (OD). In the policy context, RD are the public health priority; OD are just one of the tools to address the complex issues of RD. Moreover, OD status is a dynamic characteristic, which compromises the consistency and comparability of the calculated budget indicators.
GCIP water and energy budget synthesis (WEBS)
Roads, J.; Lawford, R.; Bainto, E.; Berbery, E.; Chen, S.; Fekete, B.; Gallo, K.; Grundstein, A.; Higgins, W.; Kanamitsu, M.; Krajewski, W.; Lakshmi, V.; Leathers, D.; Lettenmaier, D.; Luo, L.; Maurer, E.; Meyers, T.; Miller, D.; Mitchell, Ken; Mote, T.; Pinker, R.; Reichler, T.; Robinson, D.; Robock, A.; Smith, J.; Srinivasan, G.; Verdin, K.; Vinnikov, K.; Vonder, Haar T.; Vorosmarty, C.; Williams, S.; Yarosh, E.
2003-01-01
As part of the World Climate Research Program's (WCRP's) Global Energy and Water-Cycle Experiment (GEWEX) Continental-scale International Project (GCIP), a preliminary water and energy budget synthesis (WEBS) was developed for the period 1996-1999 from the "best available" observations and models. Besides this summary paper, a companion CD-ROM with more extensive discussion, figures, tables, and raw data is available to the interested researcher from the GEWEX project office, the GAPP project office, or the first author. An updated online version of the CD-ROM is also available at http://ecpc.ucsd.edu/gcip/webs.htm/. Observations cannot adequately characterize or "close" budgets since too many fundamental processes are missing. Models that properly represent the many complicated atmospheric and near-surface interactions are also required. This preliminary synthesis therefore included a representative global general circulation model, a regional climate model, and a macroscale hydrologic model, as well as a global reanalysis and a regional analysis. Judging by the qualitative agreement among the models and available observations, it did appear that we now qualitatively understand the water and energy budgets of the Mississippi River Basin. However, there is still much quantitative uncertainty. In that regard, there did appear to be a clear advantage to using a regional analysis over a global analysis, or a regional simulation over a global simulation, to describe the Mississippi River Basin water and energy budgets. There also appeared to be some advantage to using a macroscale hydrologic model for at least the surface water budgets. Copyright 2003 by the American Geophysical Union.
ERIC Educational Resources Information Center
Public Affairs Counseling, San Francisco, CA.
To develop organizational recommendations, propose new management procedures, and assist in budget preparation, a management improvement analysis was undertaken at the Seattle Public Library. Discussions were held with library board and supervising staff on all aspects of organization, operations, and services; a trial budget procedure was…
Uncertainty of the 20th century sea-level rise due to vertical land motion errors
NASA Astrophysics Data System (ADS)
Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy
2017-09-01
Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking, and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found that the terrestrial reference frame and the estimated VLM uncertainty may contribute to the global SLC rate error by ±0.2 mm/yr. In addition, a spurious global SLC acceleration may be introduced, up to ±4.8 × 10^-3 mm/yr^2. Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global ones. The difference in VLM from two independent glacio-isostatic adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm/yr and 2.8 × 10^-3 mm/yr^2, increasing up to 0.5 mm/yr and 9 × 10^-3 mm/yr^2 for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.
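The way a VLM error propagates into a reconstructed rate and acceleration can be shown in a few lines: fit y = a + b*t + (c/2)*t^2 to a synthetic century-long record, then refit after adding a constant 0.2 mm/yr bias (all values invented):

```python
import numpy as np

rng = np.random.default_rng(6)

t = np.arange(1900, 2001, dtype=float)
tc = t - t.mean()                                 # centered time, years
truth_rate, truth_acc = 1.7, 0.01                 # mm/yr, mm/yr^2
y = truth_rate * tc + 0.5 * truth_acc * tc**2 + rng.normal(0.0, 5.0, t.size)

# Least-squares fit of offset, rate, and acceleration.
X = np.column_stack([np.ones_like(tc), tc, 0.5 * tc**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
coef_bias, *_ = np.linalg.lstsq(X, y + 0.2 * tc, rcond=None)  # VLM-like bias

print(f"rate: {coef[1]:.2f} -> {coef_bias[1]:.2f} mm/yr "
      f"(shift {coef_bias[1] - coef[1]:+.2f})")
print(f"acceleration: {coef[2]:.4f} vs {coef_bias[2]:.4f} mm/yr^2")
```

A constant-rate bias maps one-to-one into the estimated rate and leaves the acceleration untouched; it is time-varying VLM and reference-frame errors that generate spurious accelerations of the kind quoted above.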
Experimental demonstration of laser tomographic adaptive optics on a 30-meter telescope at 800 nm
NASA Astrophysics Data System (ADS)
Ammons, S. Mark; Johnson, Luke; Kupke, Renate; Gavel, Donald T.; Max, Claire E.
2010-07-01
A critical goal in the next decade is to develop techniques that will extend Adaptive Optics correction to visible wavelengths on Extremely Large Telescopes (ELTs). We demonstrate in the laboratory the highly accurate atmospheric tomography necessary to defeat the cone effect on ELTs, an essential milestone on the path to this capability. We simulate a high-order Laser Tomographic AO system for a 30-meter telescope with the LTAO/MOAO testbed at UCSC. Eight sodium Laser Guide Stars (LGSs) are sensed by 99x99 Shack-Hartmann wavefront sensors over 75". The AO system is diffraction-limited at a science wavelength of 800 nm (S ~ 6-9%) over a field of regard of 20" diameter. Open-loop WFS systematic error is observed to be proportional to the total input atmospheric disturbance and is nearly the dominant error budget term (81 nm RMS), exceeded only by tomographic wavefront estimation error (92 nm RMS). The total residual wavefront error for this experiment is comparable to that expected for wide-field tomographic adaptive optics systems of similar wavefront sensor order and LGS constellation geometry planned for Extremely Large Telescopes.
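The two largest budget terms quoted above combine, under the standard assumption that error sources are statistically independent, as a root sum of squares. A minimal sketch; the independence assumption and the omission of the smaller budget terms are mine, not the paper's:

```python
import math

# Error-budget terms quoted in the abstract, in nm RMS.
terms = {
    "open-loop WFS systematic error": 81.0,
    "tomographic estimation error":   92.0,
}

# Independent error sources combine in quadrature (root sum of squares).
total_rms = math.sqrt(sum(v ** 2 for v in terms.values()))
print(f"combined residual from these two terms: {total_rms:.0f} nm RMS")
# ~123 nm RMS; any further budget terms would be RSS'ed in the same way.
```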
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
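As a minimal illustration of the grab-sampling-with-continuous-flow approach evaluated above, the sketch below applies one monthly grab-sample concentration to a continuous flow record to estimate a monthly mass load. The flow series, concentration, and time step are hypothetical placeholders rather than data from the study:

```python
import numpy as np

# Hypothetical 15-minute flow record over 30 days (m^3/s), with a mild
# oscillation standing in for flashy karst behavior.
flow = 2.0 + 0.5 * np.sin(np.linspace(0.0, 6.0 * np.pi, 2880))
dt = 900.0                 # seconds per record step
conc_grab = 12.0           # single grab-sample concentration, mg/L

# Load estimate: grab concentration applied to the full flow volume.
volume_m3 = np.sum(flow) * dt           # total discharge volume, m^3
load_kg = conc_grab * volume_m3 * 1e-3  # mg/L * m^3 = g, then g -> kg
print(f"estimated monthly load: {load_kg:.0f} kg")

# Note: if conc_grab is drawn when the diurnal cycle is far from the
# daily mean, this estimate inherits that bias, as the study cautions.
```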
Evaluation of the Analysis Influence on Transport in Reanalysis Regional Water Cycles
NASA Technical Reports Server (NTRS)
Bosilovich, M. G.; Chen, J.; Robertson, F. R.
2011-01-01
Regional water cycles of reanalyses do not follow theoretical assumptions applicable to pure simulated budgets. The data analysis changes the wind, temperature, and moisture, perturbing the theoretical balance. Of course, the analysis is correcting the model forecast error, so that the state fields should be more aligned with observations. Recently, it has been reported that the moisture convergence over continental regions, even those with significant quantities of radiosonde profiles present, can produce long-term values not consistent with theoretical bounds. Specifically, long averages over continents produce some regions of moisture divergence. This implies that the observational analysis leads to a source of water in the region. One such region is the United States Great Plains, where many radiosonde and lidar wind observations are assimilated. We will utilize a new ancillary data set from the MERRA reanalysis called the Gridded Innovations and Observations (GIO), which provides the assimilated observations on MERRA's native grid, allowing more thorough consideration of their impact on regional and global climatology. Included with the GIO data are the observation minus forecast (OmF) and observation minus analysis (OmA) statistics. Using OmF and OmA, we can identify the bias of the analysis against each observing system and gain a better understanding of the observations that are controlling the regional analysis. In this study we will focus on the wind and moisture assimilation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2007-03-01
This document summarizes the results of the benefits analysis of EERE's programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and for each of its nine Research, Development, Demonstration, and Deployment (RD3) programs. Benefits for the FY 2008 budget request are estimated for the midterm (2008-2030) and long term (2030-2050).
Saruta, Yuko; Puig-Junoy, Jaume
2016-06-01
Conventional intraoperative sentinel lymph node biopsy (SLNB) in breast cancer (BC) has limitations in establishing a definitive diagnosis of metastasis intraoperatively, leading to an unnecessary second operation. The one-step nucleic acid amplification (OSNA) assay provides accurate intraoperative diagnosis and avoids further testing. Only five articles have researched the cost and cost effectiveness of this diagnostic tool, although many hospitals have adopted it, and economic evaluation is needed for budget holders. We aimed to measure the budget impact in Japanese BC patients after the introduction of OSNA, and to assess the certainty of the results. We performed a budget impact analysis of OSNA on Japanese healthcare expenditure from 2015 to 2020. Local governments, society-managed health insurers, and Japan health insurance associations were the budget holders. In order to assess the cost gap between the gold standard (GS) and OSNA in intraoperative SLNB, a two-scenario comparative model structured around the clinical pathway of a BC patient group who received SLNB was applied. Clinical practice guidelines for BC were cited for cost estimation. The total estimated cost of all BC patients diagnosed by GS was US$1,023,313,850. The budget impact of OSNA in total health expenditure was -US$24,413,153 (-US$346 per patient). Two-way sensitivity analysis between the survival rate (SR) of the GS and OSNA was performed by illustrating a cost-saving threshold: y ≅ 1.14x - 0.16 in positive patients, and y ≅ 0.96x + 0.029 in negative patients (x = SR-GS, y = SR-OSNA). Base inputs of the variables in these formulas demonstrated a cost saving. OSNA reduces healthcare costs, as confirmed by sensitivity analysis.
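The two cost-saving threshold lines quoted above can be applied directly to a candidate pair of survival rates. A small sketch; reading values on or above each line as cost-saving is my interpretation of the abstract's statement that the base inputs demonstrated a cost saving:

```python
def cost_saving_positive(sr_gs: float, sr_osna: float) -> bool:
    """Node-positive patients: threshold y ~= 1.14x - 0.16."""
    return sr_osna >= 1.14 * sr_gs - 0.16

def cost_saving_negative(sr_gs: float, sr_osna: float) -> bool:
    """Node-negative patients: threshold y ~= 0.96x + 0.029."""
    return sr_osna >= 0.96 * sr_gs + 0.029

# Hypothetical equal survival rates under both strategies:
print(cost_saving_positive(0.85, 0.85))  # True: above the threshold line
print(cost_saving_negative(0.95, 0.95))  # True: above the threshold line
```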
Global Energy and Water Budgets in MERRA
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Robertson, Franklin R.; Chen, Junye
2010-01-01
Reanalyses, retrospectively analyzing observations over climatological time scales, represent a merger between satellite observations and models to provide globally continuous data, and have improved over several generations. Balancing the Earth's global water and energy budgets has been a focus of research for more than two decades. Models tend toward their own climate, while remotely sensed observations have had varying degrees of uncertainty. This study evaluates the latest NASA reanalysis, the Modern Era Retrospective-analysis for Research and Applications (MERRA), from a global water and energy cycles perspective. MERRA was configured to provide complete budgets in its output diagnostics, including the Incremental Analysis Update (IAU), the term that represents the observations' influence on the analyzed states, alongside the physical flux terms. Precipitation in reanalyses is typically sensitive to the observational analysis. For MERRA, the global mean precipitation bias and spatial variability are more comparable to merged satellite observations (GPCP and CMAP) than previous generations of reanalyses. Ocean evaporation also has a much lower value, comparable to observed data sets. The global energy budget shows that MERRA cloud effects may be generally weak, leading to excess shortwave radiation reaching the ocean surface. In the MERRA time series of budget terms, a significant change occurs that does not appear to be represented in observations. In 1999, the global analysis increment of water vapor changes sign from negative to positive, leading primarily to more oceanic precipitation. This change is coincident with the beginning of AMSU radiance assimilation. Previous and current reanalyses all exhibit some sensitivity to perturbations in the observation record, and this remains a significant research topic for reanalysis development. The effect of the changing observing system is evaluated for MERRA water and energy budget terms.
Dos Santos, Mauro Augusto; Santos, Marisa Silva; Tura, Bernardo Rangel; Félix, Renata; Brito, Adriana Soares X; De Lorenzo, Andrea
2016-10-01
Myocardial perfusion imaging (MPI) is widely used for the risk stratification of coronary artery disease. In view of its cost, besides radiation issues, judicious evaluation of the appropriateness of its indications is essential to prevent an unnecessary economic burden on the health system. We evaluated, at a tertiary-care, public Brazilian hospital, the appropriateness of myocardial perfusion scintigraphy indications, and estimated the budget impact of applying appropriateness criteria. An observational, cross-sectional study of 190 patients with suspected or known coronary artery disease referred for MPI was conducted. The appropriateness of MPI indications was evaluated with the Appropriate Use Criteria for Cardiac Radionuclide Imaging published in 2009. Budget impact analysis was performed with a deterministic model. The prevalence of appropriate requests was 78%; of inappropriate indications, 12%; and of uncertain indications, 10%. Budget impact analysis showed that the use of appropriateness criteria, applied to the population referred for myocardial perfusion scintigraphy within 1 year, could generate savings of $64,252.04. The 12% inappropriate requests for myocardial perfusion scintigraphy at a tertiary-care hospital suggest that a reappraisal of MPI indications is needed. Budget impact analysis estimated resource savings of 18.6% with the establishment of appropriateness criteria for MPI.
Terrestrial Planet Finder Interferometer Technology Status and Plans
NASA Technical Reports Server (NTRS)
Lawson, Peter R.; Ahmed, A.; Gappinger, R. O.; Ksendzov, A.; Lay, O. P.; Martin, S. R.; Peters, R. D.; Scharf, D. P.; Wallace, J. K.; Ware, B.
2006-01-01
A viewgraph presentation on the technology status and plans for Terrestrial Planet Finder Interferometer is shown. The topics include: 1) The Navigator Program; 2) TPF-I Project Overview; 3) Project Organization; 4) Technology Plan for TPF-I; 5) TPF-I Testbeds; 6) Nulling Error Budget; 7) Nulling Testbeds; 8) Nulling Requirements; 9) Achromatic Nulling Testbed; 10) Single Mode Spatial Filter Technology; 11) Adaptive Nuller Testbed; 12) TPF-I: Planet Detection Testbed (PDT); 13) Planet Detection Testbed Phase Modulation Experiment; and 14) Formation Control Testbed.
Development of a sub-miniature rubidium oscillator for SEEKTALK application
NASA Technical Reports Server (NTRS)
Fruehauf, H.; Weidemann, W.; Jechart, E.
1981-01-01
Warm-up and size challenges to oscillator construction are presented, as well as the problems involved in these tasks. The performance of the M-100 military rubidium oscillator is compared to that of a subminiature rubidium oscillator (M-1000). Methods of achieving a 1.5-minute warm-up are discussed, as well as improvements in performance under adverse environmental conditions, including temperature, vibration, and magnetics. An attempt is made to construct an oscillator error budget under a set of arbitrary mission conditions.
Implementing DRGs at Silas B. Hays Army Community Hospital: Enhancement of Utilization Review
1990-12-01
…valuable assistance in creating this WordPerfect document from both ASCII and ENABLE files. I thank them for their patience. Lastly, I wish to thank COL Jack… The "error" predicate is called from a trap. A long menu should eventually be used to assist in locating the RCMAS file. rcmas-file :- not(existfile… Silas B. Hays U.S. Army Community Hospital, Fort Ord, California has the potential to lose over $900 thousand in the supply budget category starting in…
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2014-01-01
Based on 30 years of optical testing experience, a lot of mistakes, a lot of learning, and a lot of experience, I have defined seven guiding principles for optical testing, regardless of how small or how large the optical testing or metrology task: Fully Understand the Task, Develop an Error Budget, Continuous Metrology Coverage, Know Where You Are, Test Like You Fly, Independent Cross-Checks, Understand All Anomalies. These rules have been applied with great success to the in-process optical testing and final specification compliance testing of the JWST mirrors.
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1983-01-01
The improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605.
Lago-Peñas, Carlos; Sampaio, Jaime
2015-01-01
The aim of the current study was (i) to identify how important a good season start is to elite soccer teams' performance and (ii) to examine whether this impact is related to the clubs' financial budget. The match performances and annual budgets of all teams were collected from the English FA Premier League, French Ligue 1, Spanish La Liga, Italian Serie A and German Bundesliga for three consecutive seasons (2010-2011 to 2012-2013). A k-means cluster analysis classified the clubs according to their budget as High Range Budget Clubs, Upper-Mid Range Budget Clubs, Lower-Mid Range Budget Clubs and Low Range Budget Clubs. Data were examined through linear regression models. Overall, the results suggested that the better the team performance at the beginning of the season, the better the ranking at the end of the season. However, the size of the effect depended on the clubs' annual budget, with lower budgets being associated with a greater importance of having a good season start (P < 0.01). Moreover, there were differences in trends across the different leagues. These variables can be used to develop accurate models to estimate final rankings. Conversely, Lower-Mid and Low Range Budget Clubs can benefit from fine-tuning preseason planning in order to accelerate the acquisition of optimal performances.
NASA Technical Reports Server (NTRS)
Randall, David A.; Fowler, Laura D.; Lin, Xin
1998-01-01
In order to improve our understanding of the interactions between clouds, radiation, and the hydrological cycle simulated in the Colorado State University General Circulation Model (CSU GCM), we focused our research on the analysis of the diurnal cycle of precipitation, top-of-the-atmosphere and surface radiation budgets, and cloudiness using 10-year-long Atmospheric Model Intercomparison Project (AMIP) simulations. Comparisons of the simulated diurnal cycle were made against the diurnal cycle of Earth Radiation Budget Experiment (ERBE) radiation budget and International Satellite Cloud Climatology Project (ISCCP) cloud products. This report summarizes our major findings over the Amazon Basin.
NASA Astrophysics Data System (ADS)
Caballero, R. N.; Lee, K. J.; Lentati, L.; Desvignes, G.; Champion, D. J.; Verbiest, J. P. W.; Janssen, G. H.; Stappers, B. W.; Kramer, M.; Lazarus, P.; Possenti, A.; Tiburzi, C.; Perrodin, D.; Osłowski, S.; Babak, S.; Bassa, C. G.; Brem, P.; Burgay, M.; Cognard, I.; Gair, J. R.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Karuppusamy, R.; Lassus, A.; Liu, K.; McKee, J.; Mingarelli, C. M. F.; Petiteau, A.; Purver, M. B.; Rosado, P. A.; Sanidas, S.; Sesana, A.; Shaifullah, G.; Smits, R.; Taylor, S. R.; Theureau, G.; van Haasteren, R.; Vecchio, A.
2016-04-01
The sensitivity of Pulsar Timing Arrays to gravitational waves (GWs) depends on the noise present in the individual pulsar timing data. Noise may be either intrinsic or extrinsic to the pulsar. Intrinsic sources of noise will include rotational instabilities, for example. Extrinsic sources of noise include contributions from physical processes which are not sufficiently well modelled, for example, dispersion and scattering effects, analysis errors and instrumental instabilities. We present the results from a noise analysis for 42 millisecond pulsars (MSPs) observed with the European Pulsar Timing Array. For characterizing the low-frequency, stochastic and achromatic noise component, or `timing noise', we employ two methods, based on Bayesian and frequentist statistics. For 25 MSPs, we achieve statistically significant measurements of their timing noise parameters and find that the two methods give consistent results. For the remaining 17 MSPs, we place upper limits on the timing noise amplitude at the 95 per cent confidence level. We additionally place an upper limit on the contribution to the pulsar noise budget from errors in the reference terrestrial time standards (below 1 per cent), and we find evidence for a noise component which is present only in the data of one of the four used telescopes. Finally, we estimate that the timing noise of individual pulsars reduces the sensitivity of this data set to an isotropic, stochastic GW background by a factor of >9.1 and by a factor of >2.3 for continuous GWs from resolvable, inspiralling supermassive black hole binaries with circular orbits.
Jelacic, Srdjan; Craddick, Karen; Nair, Bala G; Bounthavong, Mark; Yeung, Kai; Kusulos, Dolly; Knutson, Jennifer A; Somani, Shabir; Bowdle, Andrew
2017-02-01
Anesthesia drugs can be prepared by anesthesia providers, hospital pharmacies, or outsourcing facilities. The decision whether to outsource all or some anesthesia drugs is challenging, since the costs associated with different anesthesia drug preparation methods remain poorly described. The costs associated with preparation of 8 commonly used anesthesia drugs were analyzed using a budget impact analysis for 4 different syringe preparation strategies: (1) all drugs prepared by the anesthesiologist, (2) drugs prepared by the anesthesiologist and hospital pharmacy, (3) drugs prepared by the anesthesiologist and an outsourcing facility, and (4) all drugs prepared by an outsourcing facility. A strategy combining anesthesiologist- and hospital-pharmacy-prepared drugs was associated with the lowest estimated annual cost in the base-case budget impact analysis, with an annual cost of $225,592, which was lower than the other strategies by a margin of greater than $86,000. A combination of anesthesiologist- and hospital-pharmacy-prepared drugs resulted in the lowest annual cost in the budget impact analysis. However, the cost of drugs prepared by an outsourcing facility may be lower if the capital investment needed for the establishment and maintenance of a US Pharmacopeial Convention Chapter <797> compliant facility is included in the budget impact analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
An Analysis of the Organizational Structures Supporting PPBE within the Military Departments
2008-06-01
…correlation between the offices on the military side and offices on the civilian side. The top portion of the figure, the green part, is the… (Financial Management) (DASA(FIM)); Chief, Congressional Budget Liaison; Chief, Comptroller Proponency. The Military Deputy for Budget, although not directly… fall under the cognizance of the Military Deputy for Budget. The DASA(FIM) oversees the financial management systems and processes within the Army to…
An Analysis of Possible Federal Budget Process Reforms
1984-09-01
[Search-result snippet: front-matter and table fragments, including "I. Congressional Budget Timetable Under the 1974 Budget Act", "II. Summary of Economic Assumptions", and outlay and loan-guarantee tables covering the Export-Import Bank, grants to Amtrak, and foreign military sales.]
Trends in public infrastructure spending
DOT National Transportation Integrated Search
1999-05-01
This Congressional Budget Office (CBO) paper highlights trends in public spending for infrastructure over the past 42 years. The analysis of those trends is based on data supplied by the Office of Management and Budget, the Bureau of the Census…
Evaluating Observation Influence on Regional Water Budgets in Reanalyses
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Chern, Jiun-Dar; Mocko, David; Robertson, Franklin R.; daSilva, Arlindo M.
2014-01-01
The assimilation of observations in reanalyses incurs the potential for the physical terms of budgets to be balanced by a term relating the fit of the observations relative to a forecast first-guess analysis. This may indicate a limitation in the physical processes of the background model, or perhaps inconsistencies in the observing system and its assimilation. In the MERRA reanalysis, an area of long-term moisture flux divergence over land has been identified over the Central United States. Here, we evaluate the water vapor budget in this region, taking advantage of two unique features of the MERRA diagnostic output: 1) a closed water budget that includes the analysis increment, and 2) a gridded diagnostic output data set of the assimilated observations and their innovations (i.e., forecast departures). In the Central United States, an anomaly occurs where the analysis adds water to the region, while precipitation decreases and moisture flux divergence increases. This is related more to a change in the observing system than to a deficiency in the model physical processes. MERRA's Gridded Innovations and Observations (GIO) data narrow the observations that influence this feature to the ATOVS and Aqua satellites during the 06Z and 18Z analysis cycles. Observing system experiments further narrow the instruments that affect the anomalous feature to AMSU-A (mainly window channels) and AIRS. This effort also shows the complexities of the observing system, and the reactions of the regional water budgets in reanalyses to the assimilated observations.
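Because the MERRA output stream carries the analysis increment explicitly, the regional vapor budget closes by construction, and the increment can be recovered as the residual of the physical terms. A minimal sketch with hypothetical monthly-mean values (mm/day), not MERRA numbers:

```python
# Closed vapor budget: dW/dt = E - P - div(Q) + ANA,
# where ANA is the analysis increment supplied in the output diagnostics.
evap, precip, moisture_div = 3.1, 2.6, 0.9   # E, P, div(Q), mm/day
dW_dt = -0.1                                 # vapor storage change, mm/day

analysis_increment = dW_dt - (evap - precip - moisture_div)
print(f"implied analysis increment: {analysis_increment:+.2f} mm/day")
# A persistent positive increment means the analysis is adding water to
# the region, the kind of anomaly discussed above.
```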
Optical Testing of Diamond Machined, Aspheric Mirrors for Groundbased, Near-IR Astronomy
NASA Technical Reports Server (NTRS)
Chambers, V. John; Mink, Ronald G.; Ohl, Raymond G.; Connelly, Joseph A.; Mentzell, J. Eric; Arnold, Steven M.; Greenhouse, Matthew A.; Winsor, Robert S.; MacKenty, John W.
2002-01-01
The Infrared Multi-Object Spectrometer (IRMOS) is a facility-class instrument for the Kitt Peak National Observatory 4 and 2.1 meter telescopes. IRMOS is a near-IR (0.8-2.5 micron) spectrometer and operates at approximately 80 K. The 6061-T651 aluminum bench and mirrors constitute an athermal design. The instrument produces simultaneous spectra at low- to mid-resolving power (R = λ/Δλ = 300-3000) of approximately 100 objects in its 2.8 x 2.0 arcmin field. We describe ambient and cryogenic optical testing of the IRMOS mirrors across a broad range in spatial frequency (figure error, mid-frequency error, and microroughness). The mirrors include three rotationally symmetric, off-axis conic sections, one off-axis biconic, and several flat fold mirrors. The symmetric mirrors include convex and concave prolate and oblate ellipsoids. They range in aperture from 94 x 86 mm to 286 x 269 mm and in f-number from 0.9 to 2.4. The biconic mirror is concave, has a 94 x 76 mm aperture, R_x = 377 mm, k_x = 0.0778, R_y = 407 mm, and k_y = 0.1265, and is decentered by -2 mm in X and 227 mm in Y. All of the mirrors have an aspect ratio of approximately 6:1. The surface error fabrication tolerances are less than 10 nm RMS microroughness, 'best effort' for mid-frequency error, and less than 63.3 nm RMS figure error. Ambient temperature (approximately 293 K) testing is performed for each of the three surface error regimes, and figure testing is also performed at approximately 80 K. Operation of the ADE Phaseshift MicroXAM white light interferometer (microroughness) and the Bauer Model 200 profilometer (mid-frequency error) is described. Both the sag and conic values of the aspheric mirrors make these tests challenging. Figure testing is performed using a Zygo GPI interferometer, custom computer generated holograms (CGH), and optomechanical alignment fiducials. Cryogenic CGH null testing is discussed in detail. We discuss complications such as the change in prescription with temperature and thermal gradients. Correction for the effect of the dewar window is also covered. We discuss the error budget for the optical test and alignment procedure. Data reduction is accomplished using commercial optical design and data analysis software packages. Results from CGH testing at cryogenic temperatures are encouraging thus far.
Discovery and New Frontiers Project Budget Analysis Tool
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn E.
2011-01-01
The Discovery and New Frontiers (D&NF) programs are multi-project, uncoupled programs that currently comprise 13 missions in phases A through F. The ability to fly frequent science missions to explore the solar system is the primary measure of program success. The program office uses a Budget Analysis Tool to perform "what-if" analyses, compare mission scenarios to the current program budget, and rapidly forecast the programs' ability to meet their launch rate requirements. The tool allows the user to specify the total mission cost (fixed year), mission development and operations profile by phase (percent of total mission cost and duration), launch vehicle, and launch date for multiple missions. The tool automatically applies inflation and rolls up the total program costs (in real-year dollars) for comparison against the available program budget. Thus, the tool allows the user to rapidly and easily explore a variety of launch rates and analyze the effect of changes in future mission or launch vehicle costs, the differing development profiles or operational durations of a future mission, or a replan of a current mission on the overall program budget. Because the tool also reports average monthly costs for the specified mission profile, the development or operations cost profile can easily be validated against program experience for similar missions. While specifically designed for predicting overall program budgets for programs that develop and operate multiple missions concurrently, the basic concept of the tool (rolling up multiple, independently budgeted lines) could easily be adapted to other applications.
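A minimal sketch of the roll-up logic described above: spread each mission's fixed-year cost over its phase profile, inflate to real-year dollars, and sum for comparison against the program budget. The mission values, profile shape, and 3% inflation rate are hypothetical, not the tool's actual inputs:

```python
from dataclasses import dataclass

@dataclass
class Mission:
    total_cost_fy: float   # total mission cost in fixed-year $M
    start_year: int
    profile: dict          # {year offset: fraction of total cost}

def real_year_costs(m: Mission, inflation: float = 0.03) -> dict:
    """Spread the fixed-year cost over years and inflate each year's slice."""
    return {
        m.start_year + off: m.total_cost_fy * frac * (1 + inflation) ** off
        for off, frac in m.profile.items()
    }

mission = Mission(total_cost_fy=450.0, start_year=2012,
                  profile={0: 0.2, 1: 0.4, 2: 0.3, 3: 0.1})
for year, cost in sorted(real_year_costs(mission).items()):
    print(year, f"${cost:.1f}M")   # sum these across missions per year
```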
Dark Energy Survey Year 1 Results: Weak Lensing Mass Calibration of redMaPPer Galaxy Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClintock, T.; et al.
We constrain the mass-richness scaling relation of redMaPPer galaxy clusters identified in the Dark Energy Survey Year 1 data using weak gravitational lensing. We split clusters into 4x3 bins of richness λ and redshift z for λ ≥ 20 and 0.2 ≤ z ≤ 0.65 and measure the mean masses of these bins using their stacked weak lensing signal. By modeling the scaling relation as ⟨M_200m | λ, z⟩ = M_0 (λ/40)^F ((1+z)/1.35)^G, we constrain the normalization of the scaling relation at the 5.0 per cent level as M_0 = [3.081 ± 0.075 (stat) ± 0.133 (sys)] × 10^14 M_⊙ at λ = 40 and z = 0.35. The richness scaling index is constrained to be F = 1.356 ± 0.051 (stat) ± 0.008 (sys) and the redshift scaling index G = -0.30 ± 0.30 (stat) ± 0.06 (sys). These are the tightest measurements of the normalization and richness scaling index made to date. We use a semi-analytic covariance matrix to characterize the statistical errors in the recovered weak lensing profiles. Our analysis accounts for the following sources of systematic error: shear and photometric redshift errors, cluster miscentering, cluster member dilution of the source sample, systematic uncertainties in the modeling of the halo-mass correlation function, halo triaxiality, and projection effects. We discuss prospects for reducing this systematic error budget, which dominates the uncertainty on M_0. Our result is in excellent agreement with, but has significantly smaller uncertainties than, previous measurements in the literature, and augurs well for the power of the DES cluster survey as a tool for precision cosmology and upcoming galaxy surveys such as LSST, Euclid and WFIRST.
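The quoted best-fit relation can be evaluated directly. A short sketch using only the central values from the abstract (statistical and systematic uncertainties ignored):

```python
# <M_200m | lambda, z> = M0 * (lambda/40)^F * ((1+z)/1.35)^G
M0 = 3.081e14            # Msun, normalization at lambda=40, z=0.35
F, G = 1.356, -0.30      # richness and redshift scaling indices

def mean_mass(richness: float, z: float) -> float:
    return M0 * (richness / 40.0) ** F * ((1.0 + z) / 1.35) ** G

print(f"lambda=40, z=0.35: {mean_mass(40, 0.35):.3e} Msun")  # recovers M0
print(f"lambda=80, z=0.50: {mean_mass(80, 0.50):.3e} Msun")
```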
Zhang, Xiaoyu; Li, Lingling
2016-03-21
Net surface shortwave radiation (NSSR) significantly affects regional and global climate change, and is an important aspect of research on the surface radiation budget balance. Many previous studies have proposed methods for estimating NSSR. This study proposes a method to calculate NSSR using FY-2D shortwave channel data. Firstly, a linear regression model is established between the top-of-atmosphere (TOA) broadband albedo (r) and the narrowband reflectivity (ρ1), based on data simulated with MODTRAN 4.2. Secondly, the relationship between the surface absorption coefficient (as) and broadband albedo (r) is determined by dividing the surface type into land, sea, or snow&ice, and NSSR can then be calculated. Thirdly, sensitivity analysis is performed for errors associated with sensor noise, vertically integrated atmospheric water content, view zenith angle, and solar zenith angle. Finally, validation against ground measurements is performed. Results show that the root mean square error (RMSE) between the estimated and actual r is less than 0.011 for all conditions, and the RMSEs between estimated and real NSSR are 26.60 W/m2, 9.99 W/m2, and 23.40 W/m2, using simulated data for land, sea, and snow&ice surfaces, respectively. This indicates that the proposed method can be used to adequately estimate NSSR. Additionally, we compare field measurements from TaiYuan and ChangWu ecological stations with estimates using corresponding FY-2D data acquired from January to April 2012, on cloud-free days. Results show that the RMSE between the estimated and actual NSSR is 48.56 W/m2, with a mean error of -2.23 W/m2. Error sources also include measurement accuracy and estimates of vertically integrated atmospheric water content. This method is only suitable for cloudless conditions.
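The first step of the method, the linear regression between TOA broadband albedo r and narrowband reflectivity ρ1, can be sketched as follows; the sample values are hypothetical stand-ins for MODTRAN 4.2 simulation output:

```python
import numpy as np

# Hypothetical (rho1, r) pairs standing in for MODTRAN-simulated data.
rho1 = np.array([0.05, 0.12, 0.20, 0.31, 0.42, 0.55])
r    = np.array([0.06, 0.14, 0.23, 0.34, 0.46, 0.60])

# Fit r = slope * rho1 + intercept and report the in-sample RMSE.
slope, intercept = np.polyfit(rho1, r, 1)
r_hat = slope * rho1 + intercept
rmse = float(np.sqrt(np.mean((r_hat - r) ** 2)))
print(f"r ~= {slope:.3f} * rho1 + {intercept:.3f}, RMSE = {rmse:.4f}")
```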
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Chen, Junye
2009-01-01
In the summer of 2009, NASA's Modern Era Retrospective-analysis for Research and Applications (MERRA) will have completed 28 years of global satellite data analyses. Here, we characterize the global water and energy budgets of MERRA, compared with available observations and the latest reanalyses. In this analysis, the climatology of the global average components is studied, as well as the separate land and ocean averages. In addition, the time series of the global averages are evaluated. For example, the global difference of precipitation and evaporation generally shows the influence of water vapor observations on the system. Since the observing systems change in time, especially remotely sensed observations of water, significant temporal variations can occur across the 28-year record. These are also closely connected to changes in the atmospheric energy and water budgets. The net imbalance of the energy budget at the surface can be large and of different sign for different reanalyses. In MERRA, the imbalance of energy at the surface tends to improve with time, being smallest during the most recent period, when satellite observations are most abundant.
Differential tracking data types for accurate and efficient Mars planetary navigation
NASA Technical Reports Server (NTRS)
Edwards, C. D., Jr.; Kahn, R. D.; Folkner, W. M.; Border, J. S.
1991-01-01
Ways in which high-accuracy differential observations of two or more deep space vehicles can dramatically extend the power of earth-based tracking over conventional range and Doppler tracking are discussed. Two techniques, spacecraft-spacecraft differential very long baseline interferometry (S/C-S/C Delta(VLBI)) and same-beam interferometry (SBI), are discussed. The tracking and navigation capabilities of conventional range, Doppler, and quasar-relative Delta(VLBI) are reviewed, and the S/C-S/C Delta(VLBI) and SBI data types are introduced. For each data type, the formation of the observable is discussed, an error budget describing how physical error sources manifest themselves in the observable is presented, and potential applications of the technique for Space Exploration Initiative scenarios are examined. Requirements for spacecraft and ground systems needed to enable and optimize these types of observations are discussed.
Patterning control strategies for minimum edge placement error in logic devices
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim
2017-03-01
In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error contributions from the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF-to-EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle-resolved scatterometry, scanner actuator control to enable high-order overlay corrections, and computational lithography optimization to minimize imaging-induced pattern placement errors of devices and metrology targets.
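A common way to collapse such contributors into a single EPE number is a root-sum-square budget over the overlay, global CDU, and local CD/placement terms. The sketch below is a simplified illustration of that idea, not the paper's computational metrology method, and all term values are hypothetical:

```python
import math

# Hypothetical 3-sigma contributors to edge placement error, in nm.
terms_nm = {
    "ArF-to-EUV overlay":           2.5,
    "global CDU (CDU/2 per edge)":  1.2,
    "local CD + local placement":   2.0,
}

# RSS combination, assuming the contributors are independent.
epe = math.sqrt(sum(v ** 2 for v in terms_nm.values()))
print(f"combined EPE ~ {epe:.1f} nm (3-sigma)")
```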
NASA Astrophysics Data System (ADS)
Zocchi, Fabio E.
2017-10-01
One of the approaches being tested for the integration of the mirror modules of the Advanced Telescope for High-Energy Astrophysics (Athena) x-ray mission of the European Space Agency consists in aligning each module on an optical bench operated at an ultraviolet wavelength. The mirror module is illuminated by a plane wave and, in order to overcome diffraction effects, the centroid of the image produced by the module is used as a reference to assess the accuracy of the optical alignment of the mirror module itself. Among other sources of uncertainty, the wave-front error of the plane wave also introduces an error in the position of the centroid, thus affecting the quality of the mirror module alignment. The power spectral density of the position of the point spread function centroid is here derived from the power spectral density of the wave-front error of the plane wave in the framework of the scalar theory of Fourier diffraction. This allows a specification on the quality of the collimator used to generate the plane wave to be defined, starting from the error-budget contribution allocated to the uncertainty of the centroid position. The theory applies generally whenever Fourier diffraction is a valid approximation, in which case the obtained result is identical to that derived from geometrical optics considerations.
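A hedged sketch of the underlying relation, written in the geometrical-optics picture that the abstract notes coincides with the Fourier-diffraction result: the centroid displacement is the pupil-averaged wave-front slope scaled by the focal length, so the wave-front PSD maps to a centroid-position spectrum through a derivative filter and a pupil-averaging transfer function. The schematic form of H_A is an assumption here; its exact shape depends on the pupil geometry:

```latex
% Centroid displacement for focal length f, pupil area A, wave-front error W:
x_c \approx f\,\overline{\frac{\partial W}{\partial x}}
    = \frac{f}{A}\int_{\mathrm{pupil}} \frac{\partial W}{\partial x}\,dx\,dy,
% so the wave-front PSD S_W maps to the centroid PSD over spatial frequency \nu:
S_{x_c}(\nu) = f^{2}\,(2\pi\nu)^{2}\,\lvert H_A(\nu)\rvert^{2}\,S_W(\nu).
```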
An Analysis of the President’s 2013 Budget
2012-03-01
…and incorporates estimates by the staff of the Joint Committee on Taxation for the President's tax proposals. In conjunction with analyzing the… such debt would decline to 61 percent by the end of 2022. For more details about the President's revenue proposals, see Joint Committee on Taxation… the President's Budget (billions of dollars). Sources: Congressional Budget Office; staff of the Joint Committee on Taxation. Note: n.a. = not…
Cost Benefit Analysis. Community College of Vermont.
ERIC Educational Resources Information Center
Parker, Charles A.
A cost benefit analysis of the Community College of Vermont revealed that (1) the proportions of State support of the total budgets for Vermont's institutions of higher education are 22.7% at UVM, 37.2% at the VSC, and 12.7% for the Community College; (2) tuition is budgeted for FY73 to generate 27% of total cost at UVM, 29.6% at the VSC, and…
Water storage in marine sediment and implications for inferences of past global ice volume
NASA Astrophysics Data System (ADS)
Ferrier, K.; Li, Q.; Pico, T.; Austermann, J.
2017-12-01
Changes in past sea level are of wide interest because they provide information on the sensitivity of ice sheets to climate change, and thus inform predictions of future sea-level change. Sea-level changes are influenced by many processes, including the storage of water in sedimentary pore space. Here we use a recent extension of gravitationally self-consistent sea-level models to explore the effects of marine sedimentary water storage on the global seawater balance and inferences of past global ice volume. Our analysis suggests that sedimentary water storage can be a significant component of the global seawater budget over the 10⁵-year timescales associated with glacial-interglacial cycles, and an even larger component over longer timescales. Estimates of global sediment fluxes to the oceans suggest that neglecting marine sedimentary water storage may produce meter-scale errors in estimates of peak global mean sea level equivalent (GMSL) during the Last Interglacial (LIG). These calculations show that marine sedimentary water storage can be a significant contributor to the overall effects of sediment redistribution on sea-level change, and that neglecting sedimentary water storage can lead to substantial errors in inferences of global ice volume at past interglacials. This highlights the importance of accounting for the influences of sediment fluxes and sedimentary water storage on sea-level change over glacial-interglacial timescales.
Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas.
Jackson, Tracie R; Halford, Keith J; Gardner, Philip M
2018-03-06
Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. However, desert playa ETg rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are within the measurement error of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are within the measurement error of BREB and EC approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the uncertainty thresholds of BREB- and EC-based micrometeorological measurements. Discharge from desert playas likely contributes less than 2% of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas also is negligible in other basins. Simulation results also show that ETg from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values, giving confidence in the error budget for the CLARREO reflectance retrieval.
Decentralization and health resource allocation: a case study at the district level in Indonesia.
Abdullah, Asnawi; Stoelwinder, Johannes
2008-01-01
Health resource allocation has been an issue of political debate in many health systems. However, the debate has tended to concentrate on vertical allocation from the national to regional level. Allocation within regions or institutions has been largely ignored. This study was conducted to help fill this gap. The objective was to investigate health resource allocation within District Health Offices (DHOs) and to compare the trends and patterns of several budget categories before and after decentralization. The study was conducted in three districts in the Province of Nanggroe Aceh Darussalam. Six fiscal-year budgets, two before decentralization and four after, were studied. Data were collected from the Local Government Planning Office and DHOs. Results indicated that in the first year of implementing a decentralization policy, the local government budget rose sharply, particularly in the wealthiest district. In contrast, in relatively poor districts the budget was boosted only slightly. Increases in total local government budgets had a positive impact on health budgets. The absolute amount of health budgets increased significantly, but by percentage did not change very much. Budgets for several projects and budget items increased significantly, but others, such as health promotion, monitoring and evaluation, and public-goods-related activities, decreased. This study concluded that decentralization in Indonesia had made a positive impact on district government fiscal capacity and had affected DHO budgets positively. However, an imbalanced budget allocation between projects and budget items was obvious, and this needs serious attention from policy makers. Otherwise, decentralization will not significantly improve the health system in Indonesia.
Leitner, Stephan; Brauneis, Alexander; Rausch, Alexandra
2015-01-01
In this paper, we investigate the impact of inaccurate forecasting on the coordination of distributed investment decisions. In particular, by setting up a computational multi-agent model of a stylized firm, we investigate the case of investment opportunities that are mutually carried out by organizational departments. The forecasts of concern pertain to the initial amount of money necessary to launch and operate an investment opportunity, to the expected intertemporal distribution of cash flows, and to the departments' efficiency in operating the investment opportunity at hand. We propose a budget allocation mechanism for coordinating such distributed decisions. The paper provides guidance on how to set framework conditions, in terms of the number of investment opportunities considered in one round of funding and the number of departments operating one investment opportunity, so that the coordination mechanism is highly robust to forecasting errors. Furthermore, we show that, in some setups, a certain extent of misforecasting is desirable from the firm's point of view, as it supports the achievement of the corporate objective of value maximization. We then address the question of how to improve forecasting quality in the best possible way, and provide policy advice on how to sequence activities for improving forecasting quality so that the robustness of the coordination mechanism to errors increases in the best possible way. At the same time, we show that wrong decisions regarding the sequencing can lead to a decrease in robustness. Finally, we conduct a comprehensive sensitivity analysis and show that, in particular for relatively good forecasters, most of our results are robust to changes in the parameters of our multi-agent simulation model.
NASA Astrophysics Data System (ADS)
Griffiths, Ronald E.; Topping, David J.
2017-11-01
Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a river reach is in a state of sediment accumulation, deficit or stasis. Many sediment-budget studies have estimated the sediment loads of ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of loads in regions where rainfall events, contributing geology, and vegetation have large spatial and/or temporal variability. Previous estimates of the combined mean-annual sediment load of all ungaged tributaries to the Colorado River downstream from Glen Canyon Dam vary by over a factor of three; this range in estimated sediment loads has resulted in different researchers reaching opposite conclusions on the sign (accumulation or deficit) of the sediment budget for particular reaches of the Colorado River. To better evaluate the supply of fine sediment (sand, silt, and clay) from these tributaries to the Colorado River, eight gages were established on previously ungaged tributaries in Glen, Marble, and Grand canyons. Results from this sediment-monitoring network show that previous estimates of the annual sediment loads of these tributaries were too high and that the sediment budget for the Colorado River below Glen Canyon Dam is more negative than previously calculated by most researchers. As a result of locally intense rainfall events with footprints smaller than the receiving basin, floods from a single tributary in semi-arid regions can have large (≥ 10 ×) differences in sediment concentrations between equal magnitude flows. Because sediment loads do not necessarily correlate with drainage size, and may vary by two orders of magnitude on an annual basis, using techniques such as sediment-yield equations to estimate the sediment loads of ungaged tributaries may lead to large errors in sediment budgets.
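The budget arithmetic implied above is simple: a reach accumulates sediment when upstream supply plus tributary inputs exceed the export past the downstream gage. A sketch with hypothetical loads (the study's gaged values are not reproduced here):

```python
# Hypothetical mean-annual fine-sediment loads, Mt/yr.
tributary_loads = [0.10, 0.02, 0.30, 0.05, 0.01, 0.15, 0.04, 0.08]  # 8 gages
upstream_supply = 0.05         # main-stem supply entering the reach
export_downstream = 1.10       # load passing the downstream gage

budget = upstream_supply + sum(tributary_loads) - export_downstream
state = "accumulation" if budget > 0 else "deficit" if budget < 0 else "stasis"
print(f"reach sediment budget: {budget:+.2f} Mt/yr -> {state}")
# Overestimating tributary loads inflates `budget`, which is how the
# earlier, higher estimates flipped the sign of the budget for some reaches.
```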
Analysis method comparison of on-time and on-budget data.
DOT National Transportation Integrated Search
2007-02-01
New Mexico Department of Transportation (NMDOT) results for On-Time and On-Budget performance measures, as reported in the AASHTO/SCoQ NCHRP 20-24(37) project "Measuring Performance Among State DOTs" (Phase I), are lower than construction personnel kno…
ERIC Educational Resources Information Center
Dougherty, Richard M.; And Others
1989-01-01
Nine articles cover topics related to library financial resources: (1) escalating serials prices; (2) library budgeting; (3) entrepreneurship; (4) cutback management; (5) academic library budgets; (6) assessment of library effectiveness; (7) public library fund-raising; (8) capital investment; and (9) unit cost analysis at the Virginia Polytechnic…
Soto-Gordoa, Myriam; Arrospide, Arantzazu; Merino Hernández, Marisa; Mora Amengual, Joana; Fullaondo Zabala, Ane; Larrañaga, Igor; de Manuel, Esteban; Mar, Javier
2017-01-01
The aim was to develop a framework for the management of complex health care interventions within the Deming continuous improvement cycle, and to test the framework in the case of an integrated intervention for multimorbid patients in the Basque Country within the CareWell project. Statistical analysis alone, although necessary, may not always represent the practical significance of the intervention. Thus, to ascertain the true economic impact of the intervention, the statistical results can be integrated into the budget impact analysis. The intervention of the case study consisted of a comprehensive approach that integrated new provider roles and new technological infrastructure for multimorbid patients, with the aim of reducing patient decompensations by 10% over 5 years. The study period was 2012 to 2020. Given the aging of the general population, the conventional scenario predicts an increase of 21% in the health care budget for care of multimorbid patients during the study period. With a successful intervention, this figure should drop to 18%. The statistical analysis, however, showed no significant differences in costs either in primary care or in hospital care between 2012 and 2014. The real costs in 2014 were far closer to those in the conventional scenario than to the reductions expected in the objective scenario. The current implementation should be reappraised, because actual expenditure did not move closer to the objective budget. This work demonstrates the capacity of budget impact analysis to enhance the implementation of complex interventions. Its integration in the context of the continuous improvement cycle is transferable to other contexts in which implementation depth and time are important. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported, in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 × 10⁻¹⁵ sec/sec (or approx. 0.002 mm sec⁻¹) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider-bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
An Analysis of the GPS R&D Program as a Case Study (Joint Applied Project, Naval Postgraduate School, Monterey, California)
2016-09-01
Global Positioning System (GPS). • DOD R&D budget analysis • GPS case study analysis. These research areas will support the thesis on the defense... As successful as GPS has been on both the battlefield and in worldwide civilian life, the end state wasn't realized when the...
Solar adaptive optics with the DKIST: status report
NASA Astrophysics Data System (ADS)
Johnson, Luke C.; Cummings, Keith; Drobilek, Mark; Gregory, Scott; Hegwer, Steve; Johansson, Erik; Marino, Jose; Richards, Kit; Rimmele, Thomas; Sekulic, Predrag; Wöger, Friedrich
2014-08-01
The DKIST wavefront correction system will be an integral part of the telescope, providing active alignment control, wavefront correction, and jitter compensation to all DKIST instruments. The wavefront correction system will operate in four observing modes: diffraction-limited, seeing-limited on-disk, seeing-limited coronal, and limb occulting with image stabilization. Wavefront correction for DKIST includes two major components: active optics to correct low-order wavefront and alignment errors, and adaptive optics to correct wavefront errors and high-frequency jitter caused by atmospheric turbulence. The adaptive optics system is built around a fast tip-tilt mirror and a 1600-actuator deformable mirror, both of which are controlled by an FPGA-based real-time system running at 2 kHz. It is designed to achieve an on-axis Strehl of 0.3 at 500 nm in median seeing (r0 = 7 cm) and a Strehl of 0.6 at 630 nm in excellent seeing (r0 = 20 cm). We present the current status of the DKIST high-order adaptive optics, focusing on system design, hardware procurements, and error budget management.
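The Strehl targets quoted above translate directly into an RMS residual-wavefront budget through the Maréchal approximation, S ≈ exp(−σ²), with σ the residual phase error in radians. The sketch below performs that conversion and illustrates how a total budget is then allocated among terms by root-sum-square; the individual term names and values are invented for illustration and are not the DKIST error budget.

```python
import math

def strehl_to_rms_wfe_nm(strehl, wavelength_nm):
    """RMS residual wavefront error (nm) allowed for a target Strehl,
    via the Marechal approximation S ~ exp(-sigma^2)."""
    sigma_rad = math.sqrt(-math.log(strehl))
    return sigma_rad * wavelength_nm / (2.0 * math.pi)

for strehl, wavelength in [(0.3, 500.0), (0.6, 630.0)]:
    total = strehl_to_rms_wfe_nm(strehl, wavelength)
    print(f"Strehl {strehl} at {wavelength:.0f} nm -> {total:.0f} nm RMS total budget")

# Illustrative RSS allocation of a total budget into error terms
# (term names and values are assumptions, not the DKIST budget):
terms = {"fitting": 60.0, "servo lag": 40.0, "measurement noise": 35.0,
         "calibration": 25.0}  # nm RMS each
rss = math.sqrt(sum(v * v for v in terms.values()))
print(f"RSS of example terms: {rss:.0f} nm RMS")
```

The Strehl 0.3 target at 500 nm corresponds to roughly 87 nm RMS of residual wavefront error, so the example allocation (RSS ≈ 84 nm) would just fit inside it.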
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
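The core of such an estimator can be illustrated with a toy model: if commanded thruster forces F_i and measured accelerations a_i are related by a_i = F_i/m + noise, the maximum-likelihood mass under Gaussian noise follows from ordinary least squares in β = 1/m. The sketch below is a deliberately simplified stand-in; the real G-SAMPLE filter uses onboard force sensors and a full spacecraft dynamics model, and every value here is an assumed toy parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: known spacecraft dry mass plus an unknown sample mass.
m_dry = 50.0             # known spacecraft mass, kg (assumption)
m_sample_true = 1.0      # true collected sample mass, kg
m_total = m_dry + m_sample_true

# Commanded thruster force profile (N) and noisy measured accelerations (m/s^2).
F = rng.uniform(5.0, 20.0, size=200)
a_meas = F / m_total + rng.normal(0.0, 1e-4, size=F.size)  # assumed sensor noise

# ML estimate under Gaussian noise: least squares in beta = 1/m, since
# a_i = beta * F_i + noise is linear in beta.
beta_hat = np.sum(F * a_meas) / np.sum(F * F)
m_sample_hat = 1.0 / beta_hat - m_dry

print(f"estimated sample mass: {m_sample_hat * 1e3:.0f} g "
      f"(true {m_sample_true * 1e3:.0f} g)")
```

An error budget of the kind the paper develops would then propagate thrust profile error, parameter uncertainty, sensor noise, and data rate through this estimator to bound the expected mass error.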
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-06-01
The US General Accounting Office and executive agency Inspectors General have reported losses of millions of dollars in government funds resulting from fraud, waste, and error. The Administration and the Congress have initiated determined efforts to eliminate such losses from government programs and activities. Primary emphasis in this effort is on the strengthening of accounting and administrative controls. Accordingly, the Office of Management and Budget (OMB) issued Circular No. A-123, Internal Control Systems, on October 28, 1981. The campaign to improve internal controls was endorsed by the Secretary of Energy in a memorandum to Heads of Departmental Components, dated March 13, 1981, Subject: Internal Control as a Deterrent to Fraud, Waste and Error. A vulnerability assessment is a review of the susceptibility of a program or function to unauthorized use of resources, errors in reports and information, and illegal or unethical acts. It is based on considerations of the environment in which the program or function is carried out, the inherent riskiness of the program or function, and a preliminary evaluation as to whether adequate safeguards exist and are functioning.
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Schwaller, M.; Petersen, W.; Zhang, J.
2012-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem was addressed in a previous paper by comparison of the 2A25 version 6 (V6) product with reference values derived from NOAA/NSSL's ground radar-based National Mosaic and QPE system (NMQ/Q2). The primary contribution of this study is to compare the new 2A25 version 7 (V7) products that were recently released as a replacement for V6. This new version is considered superior over land areas. Several aspects of the two versions are compared and quantified, including rainfall rate distributions, systematic biases, and random errors. All analyses indicate that V7 is an improvement over V6.
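The bias/random-error decomposition used in this kind of validation can be sketched in a few lines: the systematic component is the mean (additive or multiplicative) offset between satellite and reference rain rates, and the random component is the residual scatter once that offset is removed. A minimal illustration with synthetic data follows; all distributions and bias values are invented, not TRMM/NMQ statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rain rates (mm/h): a ground reference and a satellite estimate
# with an invented multiplicative bias and random error.
ref = rng.gamma(shape=2.0, scale=3.0, size=5000)
sat = 0.9 * ref * rng.lognormal(mean=0.0, sigma=0.25, size=ref.size)

# Systematic bias: additive mean offset and overall multiplicative ratio.
additive_bias = np.mean(sat - ref)
mult_bias = np.sum(sat) / np.sum(ref)

# Random error: scatter of residuals after removing the multiplicative bias.
residual = sat / mult_bias - ref
random_error = np.std(residual)

print(f"additive bias: {additive_bias:+.2f} mm/h")
print(f"multiplicative bias: {mult_bias:.2f}")
print(f"random error (std of debiased residuals): {random_error:.2f} mm/h")
```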
NASA Astrophysics Data System (ADS)
Vitharana, V. H. P.; Chinda, T.
2018-04-01
The prevalence of lower back pain (LBP) is high among heavy equipment operators, leading to high compensation costs in the construction industry. Proper training programs have been found to reduce the chances of developing LBP. This study therefore aims to examine the safety-related budget available to support LBP-related training programs for workers in different age groups, using a system dynamics modeling approach. The simulation results show that at least 2.5% of the total budget must be allocated to the safety and health budget to reduce the incidence of LBP cases.
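A system dynamics model of this kind is, at its core, a stock-and-flow simulation in which the training-budget fraction modulates the LBP incidence rate. The deliberately simplified sketch below shows the general mechanics; the structure, parameter names, and values are all assumptions for illustration, not the model developed in the study.

```python
# Minimal stock-and-flow sketch: the fraction of budget spent on training
# lowers the LBP incidence rate among operators. All parameters are
# illustrative assumptions, not values from the study's model.

def simulate_lbp(budget_fraction, years=10, operators=1000.0):
    incidence_base = 0.08    # annual LBP incidence without training (assumed)
    training_effect = 0.5    # max fractional reduction from training (assumed)
    saturation = 0.025       # budget fraction at which the effect half-saturates
    recovery_rate = 0.3      # annual recovery outflow from the LBP stock (assumed)

    lbp_cases = 0.0          # stock: current LBP cases
    for _ in range(years):
        reduction = training_effect * budget_fraction / (budget_fraction + saturation)
        new_cases = (operators - lbp_cases) * incidence_base * (1.0 - reduction)
        lbp_cases += new_cases - recovery_rate * lbp_cases
    return lbp_cases

for frac in (0.0, 0.01, 0.025, 0.05):
    print(f"training budget {frac:.1%} -> LBP cases ~ {simulate_lbp(frac):.0f}")
```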
A measurement of CMB cluster lensing with SPT and DES year 1 data
NASA Astrophysics Data System (ADS)
Baxter, E. J.; Raghunathan, S.; Crawford, T. M.; Fosalba, P.; Hou, Z.; Holder, G. P.; Omori, Y.; Patil, S.; Rozo, E.; Abbott, T. M. C.; Annis, J.; Aylor, K.; Benoit-Lévy, A.; Benson, B. A.; Bertin, E.; Bleem, L.; Buckley-Geer, E.; Burke, D. L.; Carlstrom, J.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Chang, C. L.; Cho, H.-M.; Crites, A. T.; Crocce, M.; Cunha, C. E.; da Costa, L. N.; D'Andrea, C. B.; Davis, C.; de Haan, T.; Desai, S.; Dietrich, J. P.; Dobbs, M. A.; Dodelson, S.; Doel, P.; Drlica-Wagner, A.; Estrada, J.; Everett, W. B.; Fausti Neto, A.; Flaugher, B.; Frieman, J.; García-Bellido, J.; George, E. M.; Gaztanaga, E.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Halverson, N. W.; Harrington, N. L.; Hartley, W. G.; Holzapfel, W. L.; Honscheid, K.; Hrubes, J. D.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Knox, L.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lee, A. T.; Leitch, E. M.; Li, T. S.; Lima, M.; Luong-Van, D.; Manzotti, A.; March, M.; Marrone, D. P.; Marshall, J. L.; Martini, P.; McMahon, J. J.; Melchior, P.; Menanteau, F.; Meyer, S. S.; Miller, C. J.; Miquel, R.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Nord, B.; Ogando, R. L. C.; Padin, S.; Plazas, A. A.; Pryke, C.; Rapetti, D.; Reichardt, C. L.; Romer, A. K.; Roodman, A.; Ruhl, J. E.; Rykoff, E.; Sako, M.; Sanchez, E.; Sayre, J. T.; Scarpine, V.; Schaffer, K. K.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Shirokoff, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Staniszewski, Z.; Stark, A.; Story, K.; Suchyta, E.; Tarle, G.; Thomas, D.; Troxel, M. A.; Vanderlinde, K.; Vieira, J. D.; Walker, A. R.; Williamson, R.; Zhang, Y.; Zuntz, J.
2018-05-01
Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalogue used in this analysis contains 3697 members with a mean redshift of z̄ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17 per cent precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentring.
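When an error budget mixes a dominant statistical term with smaller systematic terms, the contributions are conventionally combined in quadrature. The sketch below illustrates this bookkeeping for a fractional constraint of the kind quoted above (~17 per cent on the mass-richness amplitude); the individual term values are invented for illustration and are not the SPT/DES budget.

```python
import math

# Hypothetical fractional error terms for a lensing-based mass constraint.
# Values are invented to illustrate quadrature bookkeeping.
terms = {
    "statistical noise": 0.145,
    "thermal SZ bias": 0.06,
    "cluster miscentring": 0.05,
}

total = math.sqrt(sum(v * v for v in terms.values()))
print(f"total fractional error: {total:.3f}")
for name, v in terms.items():
    print(f"  {name}: {v:.3f} ({(v * v) / (total * total):.0%} of variance)")
```

With these example numbers the statistical term contributes most of the variance, mirroring the structure described in the abstract.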
NASA Astrophysics Data System (ADS)
Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.
2014-02-01
An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier Transform Spectrometer (FTS). The main challenge is to find an atmospheric phenomenon that can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and from the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs are typically within 60 m of those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (Collision-Induced Absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC, CONTRAIL, and HIPPO, yielding typical biases of -1.7 ppm in the 5-13 km range; the standard error of these biases over this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable for determining the seasonal variation of the latitudinal gradient, which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.5 ± 0.7 ppm yr-1, in agreement with the currently accepted global growth rate based on ground-based measurements.
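A growth rate of this kind typically comes from fitting a linear trend plus an annual harmonic to the multi-year series. A minimal sketch of such a fit via ordinary least squares follows, using synthetic data; the trend, seasonal amplitude, and noise level are invented stand-ins for the ACE-FTS time series.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly mid-tropospheric CO2 series (ppm): invented trend,
# seasonal cycle, and noise, standing in for the ACE-FTS time series.
t = np.arange(36) / 12.0                       # years, spanning 2009-2011
co2 = 386.0 + 2.5 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.8, t.size)

# Design matrix: constant, linear trend, annual sine and cosine harmonics.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, co2, rcond=None)

print(f"fitted growth rate: {coef[1]:.2f} ppm/yr (true 2.50)")
print(f"fitted seasonal amplitude: {np.hypot(coef[2], coef[3]):.2f} ppm (true 2.00)")
```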