Advanced CD-SEM solution for edge placement error characterization of BEOL pitch 32nm metal layers
NASA Astrophysics Data System (ADS)
Charley, A.; Leray, P.; Lorusso, G.; Sutani, T.; Takemasa, Y.
2018-03-01
Metrology plays an important role in edge placement error (EPE) budgeting and control for multi-patterning applications, as new critical distances (edge to edge) need to be measured and requirements become ever tighter in terms of accuracy and precision. In this paper we focus on the imec iN7 BEOL platform, and particularly on the M2 patterning scheme using SAQP plus an EUV block for a 7.5-track logic design. Being able to characterize block-to-SAQP edge misplacement is important in a budgeting exercise (1), but it is also extremely difficult because edge detection with CD-SEM is challenging (similar materials, thin layers, short distances, 3D features). In this study we develop an advanced solution to measure block-to-SAQP placement and characterize its sensitivity, precision, and accuracy through comparison to reference metrology. In a second phase, the methodology is applied to budget local effects, and the results are compared to characterizations of the SAQP and block independently.
Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin
2018-03-01
In this paper, we discuss the metrology methods and error budget that describe edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images from a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.
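Combining optical overlay/CD data with e-beam local CD data into a single EPE number is typically done as a quadrature sum of independent contributors. The sketch below is illustrative only (the paper's actual budget breakdown is not reproduced here); the choice of terms, the factor-of-two conversion from CD (width) error to single-edge error, and the re-scaling of the local term to the requested sigma level are assumptions.

```python
import math

def epe_budget(overlay_3s, global_cdu_3s, local_cdu_3s, n_sigma=6.0):
    """Illustrative quadrature combination of EPE contributors (nm).

    Inputs are 3-sigma values. The local CD term is re-scaled to the
    requested sigma level, reflecting the beyond-6-sigma requirement on
    the local CD distribution; the factors of two convert CD (width)
    errors into single-edge placement errors.
    """
    local_at_n = local_cdu_3s / 3.0 * n_sigma
    return math.sqrt(overlay_3s ** 2
                     + (global_cdu_3s / 2.0) ** 2
                     + (local_at_n / 2.0) ** 2)
```

For example, 2.0 nm overlay, 1.0 nm global CDU, and 1.5 nm local CDU (all 3-sigma) combine to roughly 2.5 nm EPE at the 6-sigma local level, showing why the local term dominates once it is evaluated in the far tail.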
Pattern uniformity control in integrated structures
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Biesemans, Serge; Enomoto, Masashi
2017-03-01
In our previous paper dealing with multi-patterning, we proposed a new indicator to quantify the quality of final wafer pattern transfer, called interactive pattern fidelity error (IPFE). It detects patterning failures resulting from any source of variation in creating integrated patterns. IPFE is a function of overlay and edge placement error (EPE) of all layers comprising the final pattern (i.e. lower and upper layers). In this paper, we extend the use cases with a via case in addition to the bridge case (block on spacer). We propose an IPFE budget and CD budget using simple geometric and statistical models with analysis of variance (ANOVA). In addition, we validate the model with experimental data. The experimental results show that improvements in overlay, local CDU (LCDU) of contact-hole (CH) or pillar patterns (especially stochastic pattern noise (SPN)), and pitch walking are all critical to meeting budget requirements. We also provide a special note on the importance of the line length used in analyzing LWR. We find that the IPFE and CD budget requirements are consistent with the ITRS technical requirements. Therefore the IPFE concept can be adopted for a variety of integrated structures comprising digital logic circuits. Finally, we suggest how to use IPFE for yield management and for optimizing requirements for each process.
The CD control improvement by using CDSEM 2D measurement of complex OPC patterns
NASA Astrophysics Data System (ADS)
Chou, William; Cheng, Jeffrey; Lee, Adder; Cheng, James; Tzeng, Alex C.; Lu, Colbert; Yang, Ray; Lee, Hong Jen; Bandoh, Hideaki; Santo, Izumi; Zhang, Hao; Chen, Chien Kang
2016-10-01
As process nodes become more advanced, greater accuracy and precision in OPC pattern CD are required in mask manufacturing. CD-SEM is an essential tool for confirming mask quality, including CD control, CD uniformity, and CD mean-to-target (MTT). Unfortunately, for some arbitrary enclosed patterns or aggressive OPC patterns, for instance lines with tiny jogs and curvilinear SRAFs, CD variation depending on the region of interest (ROI) is a serious problem for mask CD control and can even reduce wafer yield. To overcome this, the 2-dimensional (2D) method by Holon is adopted. In this paper, we summarize comparisons of the error budget between conventional (1D) and 2D CD-SEM data, and of the CD performance between mask and wafer for complex OPC patterns including ILT features.
Propagation of resist heating mask error to wafer level
NASA Astrophysics Data System (ADS)
Babin, S. V.; Karklin, Linard
2006-10-01
As technology approaches 45 nm and below, the IC industry is experiencing a severe product-yield hit due to rapidly shrinking process windows and unavoidable manufacturing process variations. Current EDA tools are unable by their nature to deliver optimized, process-centered designs, which calls for 'post-design' localized layout-optimization DFM tools. To evaluate the impact of different manufacturing process variations on the final product, it is important to trace and evaluate all errors through the design-to-manufacturing flow. The photomask is one of the critical parts of this flow, and special attention should be paid to the mask manufacturing process, especially tight mask CD control. Electron beam lithography (EBL) is the major technique used for fabrication of high-end photomasks. During the writing process, resist heating is one source of mask CD variation. Electron energy is released in the mask body mainly as heat, leading to significant temperature fluctuations in local areas. These temperature fluctuations change resist sensitivity, which in turn leads to CD variations. The CD variations depend on mask writing speed, order of exposure, and pattern density and its distribution. Recent measurements revealed up to 45 nm of CD variation on the mask when using ZEP resist. The resist heating problem with CAR resists is significantly smaller than with other resist types, partially due to higher resist sensitivity and the lower exposure dose required. However, there are no data yet showing CD errors on the wafer induced by CAR resist heating on the mask. This effect can be amplified by high MEEF values and should be carefully evaluated at the 45-nm and smaller technology nodes, where tight CD control is required. In this paper, we simulated CD variation on the mask due to resist heating; a mask pattern with the heating error was then transferred onto the wafer. Thus, a CD error on the wafer was evaluated subject to only one term of the mask error budget: the resist-heating CD error. In the simulation of exposure with a stepper, variable MEEF was considered.
NASA Astrophysics Data System (ADS)
Sturtevant, John L.; Liubich, Vlad; Gupta, Rachit
2016-04-01
Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target for a single design layer. Strictly speaking, this quantity is not directly measurable in the fab. What is of vital importance is the relative edge placement error between different design layers and, in the era of multipatterning, between the constituent mask sublayers of a single design layer. The critical dimensions (CD) and overlay between two layers can be measured in the fab, and there has always been a strong emphasis on control of overlay between design layers. Progress in this realm has been remarkable, accelerated at least in part by the proliferation of multipatterning, which reduces the available overlay budget by coupling overlay and CD errors for the target layer. Computational lithography makes possible the full-chip assessment of two-layer edge-to-edge distances and two-layer contact overlap area. We investigate examples of via-metal model-based analysis of CD and overlay errors, for both single patterning and double patterning. For single patterning, we show the advantage of contour-to-contour simulation over contour-to-target simulation, and how adding aberrations to the optical models can provide a more realistic CD-overlay process window (PW) for edge placement errors. For double patterning, the interaction of 4-layer CD and overlay errors is very complex, but we illustrate that not only can full-chip verification identify potential two-layer hotspots, the optical proximity correction engine can also act to mitigate such hotspots and enlarge the joint CD-overlay PW.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so, since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are estimated, highlighting the need for improved metrology and awareness.
EUV local CDU healing performance and modeling capability towards 5nm node
NASA Astrophysics Data System (ADS)
Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn
2017-10-01
Both local variability and optical proximity correction (OPC) errors are major contributors to the edge placement error (EPE) budget, which is closely related to device yield. Post-litho contact-hole healing will be demonstrated to meet after-etch local variability specifications using a low-dose (30 mJ/cm2 dose-to-size), positive-tone-developed (PTD) resist with throughput relevant to high-volume manufacturing (HVM). The total local variability of node-5nm (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. Because the CD-healing process has complex etch proximity effects, OPC prediction accuracy is challenging for meeting the N5 EPE requirements. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using the ASML Tachyon OPC model.
Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.
1998-01-01
Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if those include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 μm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence the aerosol budgets were less likely than the sediment budget to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.
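The imbalance comparison described above reduces to simple arithmetic: an imbalance is the difference between the summed source estimates and the measured fluvial load, expressed as a fraction of that load. A minimal sketch with hypothetical numbers (not the study's data):

```python
def budget_imbalance(source_terms, measured_load):
    """Relative budget imbalance: (sum of estimated source contributions
    minus the measured load), as a fraction of the measured load.

    Compensating positive and negative errors among source_terms can hide
    inside a small imbalance, which is why the study cross-checks with
    independent aerosol (quartz, 137Cs) budgets.
    """
    return (sum(source_terms) - measured_load) / measured_load

# Hypothetical illustration: three source estimates summing to 125 t
# against a 100 t measured load give a 25% imbalance, matching the
# study's fine-sediment figure in magnitude.
imbalance = budget_imbalance([60.0, 40.0, 25.0], 100.0)
```

Comparing this value against the analogous aerosol-budget imbalances (19% and 34% in the study) is what bounds the size of any compensating errors.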
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are estimated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to the various model inputs associated with the mask. It is shown that the wafer simulations depend strongly on the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
ArF scanner performance improvement by using track integrated CD optimization
NASA Astrophysics Data System (ADS)
Huang, Jacky; Yu, Shinn-Sheng; Ke, Chih-Ming; Wu, Timothy; Wang, Yu-Hsi; Gau, Tsai-Sheng; Wang, Dennis; Li, Allen; Yang, Wenge; Kaoru, Araki
2006-03-01
In advanced semiconductor processing, shrinking CD is one of the main objectives when moving to the next-generation technology, and improving CD uniformity (CDU) at shrinking CD is one of the biggest challenges. ArF lithography CD error budget analysis shows that PEB (post-exposure bake) contributes more than 40% of CD variation. It turns out that hot-plate performance, such as CD matching and within-plate temperature control, plays a key role in litho-cell wafers per hour (WPH). Traditionally, wired or wireless thermal-sensor wafers were used to match and optimize hot plates. However, sensor-to-sensor matching and the dependence of sensor data quality on sensor lifetime or thermal history are still unknown. These concerns make sensor wafers more suitable for coarse mean-temperature adjustment. For precise temperature adjustment, especially within-hot-plate temperature uniformity, CD rather than sensor-wafer temperature is the better and more straightforward metrology for calibrating hot plates. In this study, we evaluated TEL clean-track integrated optical CD metrology (IM) combined with TEL CD Optimizer (CDO) software to improve 193-nm resist within-wafer and wafer-to-wafer CD uniformity. Within-wafer CD uniformity is mainly affected by temperature non-uniformity on the PEB hot plate. Based on the CD and PEB sensitivity of the photoresists, a physical model was established to control CD uniformity through fine-tuning of the PEB temperature settings. CD data collected by the track-integrated CD metrology were fed into this model, and the adjustment of the PEB settings was calculated and executed through the track's internal APC system. This automatic measurement, feed-forward, calibration, and adjustment system reduces engineer key-in errors and shortens the hot-plate calibration cycle time. The PEB auto-calibration system can easily bring hot-plate-to-hot-plate CD matching to within 0.5 nm and within-wafer CDU (3σ) to less than 1.5 nm.
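When CD responds linearly to PEB temperature, the feedback loop described above reduces to a per-zone linear correction. The sketch below is a minimal illustration under that linearity assumption; the function name, zone labels, and numbers are invented for illustration and are not TEL's CDO implementation.

```python
def peb_zone_corrections(cd_by_zone_nm, cd_target_nm, sensitivity_nm_per_degc):
    """Temperature offset (degC) for each hot-plate zone that nulls its
    CD error, assuming a linear CD-vs-PEB-temperature response with the
    given sensitivity (nm of CD change per degC)."""
    return {zone: (cd_target_nm - cd) / sensitivity_nm_per_degc
            for zone, cd in cd_by_zone_nm.items()}

# Example: at 2 nm/degC sensitivity, a zone printing 81 nm against an
# 80 nm target needs a -0.5 degC offset; one printing 79.5 nm needs +0.25.
offsets = peb_zone_corrections({"z1": 81.0, "z2": 79.5}, 80.0, 2.0)
```

In an APC loop, offsets like these would be clamped to the hot plate's adjustment range and applied iteratively as fresh CD data arrive.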
Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system
NASA Astrophysics Data System (ADS)
Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong
2010-05-01
We present the wavefront error budget and optical manufacturing tolerance analysis for a 1.8-m telescope. The error budget accounts for aberrations induced by optical design residuals, manufacturing errors, mounting effects, and misalignments. The initial error budget was generated top-down. There will also be an ongoing effort to track errors from the bottom up, which will aid in identifying critical areas of concern. Resolving conflicts involves a continual process of review and comparison of the top-down and bottom-up approaches, modifying both as needed to meet the top-level requirements. The adaptive optics system will correct some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two error budgets are presented: a non-AO top-down error budget and a with-AO system error budget. The main advantage of the method is that it simultaneously describes the final performance of the telescope and gives the optical manufacturer maximum freedom to define and, if necessary, modify its own manufacturing error budget.
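Both the top-down allocation and the bottom-up tracking rely on the same root-sum-square (RSS) combination of statistically independent wavefront error terms. A minimal sketch; the term names and values below are invented for illustration and are not the 1.8-m telescope's actual allocations.

```python
import math

def rss_nm(terms_nm):
    """Root-sum-square of independent wavefront error contributors,
    each given in nm RMS."""
    return math.sqrt(sum(t * t for t in terms_nm.values()))

# Hypothetical top-down allocation checked against a 100 nm RMS
# top-level requirement.
budget = {"design_residual": 30.0, "manufacturing": 70.0,
          "mounting": 40.0, "misalignment": 50.0}
total = rss_nm(budget)  # compare against the top-level requirement
```

Bottom-up tracking replaces the allocated values with measured or modelled ones; a term whose measured value exceeds its allocation flags a critical area of concern.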
NASA Astrophysics Data System (ADS)
Zait, Eitan; Ben-Zvi, Guy; Dmitriev, Vladimir; Oshemkov, Sergey; Pforr, Rainer; Hennig, Mario
2006-05-01
Intra-field CD variation is, besides OPC errors, a main contributor to the total CD variation budget in IC manufacturing. It is caused mainly by mask CD errors. In advanced memory device manufacturing, the minimum features are close to the resolution limit, resulting in large mask error enhancement factors and hence large intra-field CD variations. Consequently, tight CD control (CDC) of the mask features is required, which significantly increases the cost of the mask and hence of the litho process. Alternatively, techniques are sought (1) that improve intra-field CD control for a given moderate mask and scanner imaging performance. Recently a new technique (2) has been proposed that corrects the printed CD by applying shading elements generated in the substrate bulk of the mask by ultrashort-pulsed laser exposure. The blank transmittance across a feature is controlled by changing the density of light-scattering pixels. The technique has been demonstrated to be very successful in correcting intra-field CD variations caused by the mask and the projection system (2). A key application criterion for this technique in device manufacturing is the stability of the absorbing pixels against the DUV irradiation applied during mask projection in scanners. This paper describes the procedures and results of such an investigation. To do this with acceptable effort, a special experimental setup was chosen that allows an evaluation within reasonable time. A 193-nm excimer laser with a pulse duration of 25 ns was used for blank irradiation. An accumulated dose equivalent to 100,000 300-mm wafer exposures was applied to half-tone PSM mask areas with and without CDC shadowing elements, allowing discrimination of effects appearing in treated and untreated glass regions. Several intensities were investigated to define an acceptable threshold intensity that avoids glass compaction or generation of color centers in the glass. The impact of the irradiation on the mask transmittance of both areas was studied by measuring the printed CD on wafer with a wafer scanner before and after DUV irradiation.
Patterning control strategies for minimum edge placement error in logic devices
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim
2017-03-01
In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. We describe the fidelity of the final pattern in terms of EPE, defined as the relative displacement of the edges of two features from their intended target positions. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow developed by IMEC, based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography combined with line-cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It is shown that ArF-to-EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle-resolved scatterometry, scanner actuator control to enable high-order overlay corrections, and computational lithography optimization to minimize imaging-induced pattern placement errors of devices and metrology targets.
Brodszky, Valentin; Rencz, Fanni; Péntek, Márta; Baji, Petra; Lakatos, Péter L; Gulácsi, László
2016-01-01
The objective was to estimate the budget impact of introducing biosimilar infliximab for the treatment of Crohn's disease (CD) in Bulgaria, the Czech Republic, Hungary, Poland, Romania, and Slovakia. A 3-year, prevalence-based budget impact analysis for biosimilar infliximab to treat CD was developed from the third-party payers' perspective. The model included various scenarios depending on whether interchanging originator infliximab with biosimilar infliximab was allowed. Total cost savings in biosimilar scenario 1 (interchanging not allowed) and biosimilar scenario 2 (interchanging allowed in 80% of patients) were estimated at €8.0 million and €16.9 million, respectively, across the six countries. These budget savings could cover biosimilar infliximab therapy for 722-1530 additional CD patients. Introduction of biosimilar infliximab to treat CD may offset the inequity in access to biological therapy between Central and Eastern European countries.
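The "additional patients" figure follows from simple arithmetic: dividing the total savings by an annual per-patient therapy cost. The sketch below assumes a per-patient cost of about €11,000/year purely for illustration; the paper's actual country-level costs are not reproduced here.

```python
def additional_patients(total_savings_eur, annual_cost_per_patient_eur):
    """Number of extra patients whose annual therapy the savings could
    fund (floor division: partial patient-years are not counted)."""
    return int(total_savings_eur // annual_cost_per_patient_eur)

# Assuming ~11,000 EUR/patient/year, savings of 8.0M and 16.9M EUR fund
# roughly 727 and 1536 additional patients - close to the paper's
# reported 722-1530 range.
low = additional_patients(8.0e6, 11_000)
high = additional_patients(16.9e6, 11_000)
```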
Meteorological Error Budget Using Open Source Data
2016-09-01
ARL-TR-7831 ● SEP 2016 ● US Army Research Laboratory. Authors: J Cogan, J Smith, P Haines.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable, high-performance space systems.
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer that measures atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of ATMOS system performance includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.
NASA Technical Reports Server (NTRS)
Thome, K.
2016-01-01
Knowledge of uncertainties and errors is essential for comparisons of remote sensing data across time, space, and spectral domains. Vicarious radiometric calibration is used to demonstrate the need for uncertainty knowledge and to provide an example error budget. The sample error budget serves as an example of the questions and issues that need to be addressed by the calibration/validation community as accuracy requirements for imaging spectroscopy data continue to become more stringent. Error budgets will also be critical to ensure consistency across the range of imaging spectrometers expected to be launched in the next five years.
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Clean focus, dose and CD metrology for CD uniformity improvement
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, DongYoung; Oh, Eungryong; Choi, Ahlin; Kim, Nakyoon; Robinson, John C.; Mengel, Markus; Pablo, Rovira; Yoo, Sungchul; Getin, Raphael; Choi, Dongsub; Jeon, Sanghuck
2018-03-01
Lithography process control solutions require more exacting capabilities as the semiconductor industry moves to 1x-nm-node DRAM device manufacturing. To continue scaling down device feature sizes, critical dimension (CD) uniformity requires continuous improvement to meet the required CD error budget. In this study we investigate using optical measurement technology to improve over CD-SEM methods in focus, dose, and CD. One of the key challenges is measuring scanner focus on device patterns. There are focus measurement methods based on specially designed scribe-line marks; however, one issue with this approach is that it reports the focus of the scribe line, which is potentially different from that of the real device pattern. In addition, scribe-line marks require additional design and troubleshooting steps that add complexity. In this study, we investigated focus measurement directly on the device pattern. Dose control is typically based on the linear correlation between dose and CD. The noise of the CD measurement, based on CD-SEM for example, not only impacts accuracy but also makes it difficult to monitor the dose signature on product wafers. In this study we report direct dose metrology results using an optical metrology system with enhanced DUV spectral coverage to improve the signal-to-noise ratio. CD-SEM is often used to measure CD after the lithography step. This measurement approach has the advantage of easy recipe setup as well as the flexibility to measure critical feature dimensions; however, we observe that CD-SEM metrology has limitations. In this study, we demonstrate within-field CD uniformity improvement through the extraction of clean scanner slit and scan CD behavior using optical metrology.
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
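A modular performance budget of this kind is typically maintained as a table of independent error terms combined in quadrature to give the instrument measurement floor. The sketch below illustrates that structure only; the term names and values are assumptions for illustration, not NEID's actual budget.

```python
import math

def rv_error_floor_cm_s(terms_cm_s):
    """Quadrature sum of independent radial-velocity error terms (cm/s).

    Because the terms combine independently, refining or adding one
    subsystem term updates the floor without touching the rest of the
    budget - the 'modular' property noted in the abstract.
    """
    return math.sqrt(sum(v ** 2 for v in terms_cm_s.values()))

# Hypothetical subsystem terms (cm/s), combined into an overall floor.
budget = {"wavelength_calibration": 10.0, "detector_effects": 8.0,
          "fiber_illumination": 6.0, "telluric_residuals": 12.0}
floor = rv_error_floor_cm_s(budget)
```

Note how no single term sets the floor: removing the largest hypothetical term above still leaves a floor dominated by the combination of the rest, mirroring the abstract's point that overall precision is set by many contributors.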
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
The study of CD side to side error in line/space pattern caused by post-exposure bake effect
NASA Astrophysics Data System (ADS)
Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran
2016-10-01
In semiconductor manufacturing, as design rules have shrunk, the ITRS roadmap requires ever tighter critical dimension (CD) control. CD uniformity is one of the parameters necessary to assure good performance and reliable functionality of any integrated circuit (IC) [1] [2], and at advanced technology nodes it is a challenge to control CD uniformity well. Studies of CD uniformity improvement by tuning the post-exposure bake (PEB) and develop processes have made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical applications, and the error has approached or exceeded the uniformity tolerance. Detailed analysis showed that, even across several developer types, the CD side-to-side error had no significant relationship to the develop process. In addition, it is impossible to correct the CD side-to-side error by electron-beam correction, as the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed, and the PEB module process, identified as a main factor, is optimized to improve the CD side-to-side error.
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints, in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD-SEM generated contour data. We demonstrate that contacts on photolithographic masks do not only show size variations but also exhibit pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing feature variations on masks. Furthermore we show how the methods of statistical shape analysis can be used to quantify the contour variations, thus paving the way to a new understanding of mask linearity and its specification.
Cost-effectiveness of the Federal stream-gaging program in Virginia
Carpenter, D.H.
1985-01-01
Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard of error of instantaneous discharge of 7.2%.
Implementation and benefits of advanced process control for lithography CD and overlay
NASA Astrophysics Data System (ADS)
Zavyalova, Lena; Fu, Chong-Cheng; Seligman, Gary S.; Tapp, Perry A.; Pol, Victor
2003-05-01
Due to rapidly shrinking imaging process windows and increasingly stringent device overlay requirements, sub-130 nm lithography processes are more severely impacted than ever by systematic faults. Limits on critical dimension (CD) and overlay capability further challenge the operational effectiveness of a mix-and-match environment using multiple lithography tools, as such a mode additionally consumes the available error budgets. Therefore, a focus on advanced process control (APC) methodologies is key to gaining control in the lithographic modules for critical device levels, which in turn translates to accelerated yield learning, achieving time-to-market lead, and ultimately a higher return on investment. This paper describes the implementation and unique challenges of a closed-loop CD and overlay control solution in high-volume manufacturing of leading-edge devices. A particular emphasis has been placed on developing a flexible APC application capable of managing a wide range of control aspects such as process and tool drifts, single and multiple lot excursions, referential overlay control, 'special lot' handling, advanced model hierarchy, and automatic model seeding. Specific integration cases, including the multiple-reticle complementary phase-shift lithography process, are discussed. A continuous improvement in overlay and CD Cpk performance as well as the rework rate has been observed through the implementation of this system, and the results are studied.
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no showstoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
Computer-assisted uncertainty assessment of k0-NAA measurement results
NASA Astrophysics Data System (ADS)
Bučar, T.; Smodiš, B.
2008-10-01
In quantifying the measurement uncertainty of results obtained by k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result—the mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable so that it can be incorporated into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
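The combination step ERON automates follows the standard GUM approach: for a multiplicative measurement model, relative standard uncertainties weighted by sensitivity (propagation) coefficients add in quadrature. A minimal sketch of that rule; the component names, values, and coefficients are illustrative assumptions, not ERON's actual inputs.

```python
import math

# Hedged sketch of combined-uncertainty propagation in the spirit of ERON.
# Each entry: (name, relative standard uncertainty u(x)/x, sensitivity
# coefficient). All values are invented for illustration.
components = [
    ("peak area",           0.010, 1.0),
    ("k0 factor",           0.008, 1.0),
    ("flux parameter f",    0.020, 0.3),
    ("alpha",               0.050, 0.1),
    ("detector efficiency", 0.015, 1.0),
]

# Quadrature sum of (sensitivity * relative uncertainty) per component.
u_rel = math.sqrt(sum((u * c) ** 2 for _, u, c in components))
print(round(100 * u_rel, 2))  # combined relative uncertainty, ~2.12 %
```

Note how the small sensitivity coefficients damp the contributions of the poorly known flux parameters, which is exactly why the propagation factors matter as much as the raw uncertainties.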
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits, using the same budget, could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
First-order error budgeting for LUVOIR mission
NASA Astrophysics Data System (ADS)
Lightsey, Paul A.; Knight, J. Scott; Feinberg, Lee D.; Bolcar, Matthew R.; Shaklan, Stuart B.
2017-09-01
Future large astronomical telescopes in space will have architectures with complex and demanding requirements to meet the science goals. The Large UV/Optical/IR Surveyor (LUVOIR) mission concept being assessed by the NASA/Goddard Space Flight Center is expected to be 9 to 15 meters in diameter, have a segmented primary mirror, and be diffraction limited at a wavelength of 500 nanometers. The optical stability is expected to be in the picometer range for minutes to hours. Architecture studies to support the NASA Science and Technology Definition Teams (STDTs) are underway to evaluate systems performance improvements to meet the science goals. To help define the technology needs and assess performance, a first-order error budget has been developed. Like the JWST error budget, it includes the active, adaptive and passive elements in the spatial and temporal domains. JWST performance is scaled using first-order approximations where appropriate and includes technical advances in telescope control.
Geometric error characterization and error budgets. [thematic mapper
NASA Technical Reports Server (NTRS)
Beyer, E.
1982-01-01
Procedures used in characterizing geometric error sources for a spaceborne imaging system are described using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at 90%.
Cost effectiveness of the US Geological Survey's stream-gaging program in New York
Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.
1986-01-01
The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost-effective means of obtaining streamflow data. This report describes the stream-gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream-gaging program in Alabama identified data uses and funding sources for 72 surface-water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream-gaging records is lost or missing data resulting from streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by 40%. (USGS)
General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.
2011-01-01
The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. 
The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.
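The budget-evaluation step that CPEB automates, combining linear sensitivity matrices with allocated motions, can be sketched in miniature. Everything numeric below is an invented toy (a 2x3 sensitivity matrix and three motion allocations), standing in for the much larger matrices the tool imports from the ray-trace code.

```python
# Toy version of the error-budget step CPEB automates: a linear sensitivity
# matrix maps small optic motions to dark-hole field-amplitude changes, and
# the mean squared perturbation over field points gives a contrast-like
# figure. Matrix entries and motions are invented illustrative values.

# Sensitivity of field amplitude to each perturbation (per nm, toy numbers).
S = [
    [2.0e-6, 5.0e-7, 1.0e-7],   # field point 1
    [1.0e-6, 8.0e-7, 3.0e-7],   # field point 2
]
motions_nm = [0.5, 1.0, 2.0]    # allocated thermal/jitter motions per optic

def contrast(sens, motions):
    """Mean squared field perturbation over the sampled field points."""
    total = 0.0
    for row in sens:
        amp = sum(s * m for s, m in zip(row, motions))
        total += amp * amp
    return total / len(sens)

print(contrast(S, motions_nm))  # contrast-like scalar for this allocation
```

Trade studies of the kind the tool supports then amount to perturbing entries of `motions_nm` and watching how the resulting figure moves against the requirement.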
The Terrestrial Planet Finder coronagraph dynamics error budget
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Green, Joseph J.; Lay, Oliver P.
2005-01-01
The Terrestrial Planet Finder Coronagraph (TPF-C) demands extreme wave front control and stability to achieve its goal of detecting earth-like planets around nearby stars. We describe the performance models and error budget used to evaluate image plane contrast and derive engineering requirements for this challenging optical system.
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
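The burst/gap statistics the hardware is meant to capture reduce to run-length analysis of a per-byte error-flag stream. A small software analogue, with an invented flag sequence standing in for the read-channel output:

```python
from itertools import groupby

# Software analogue of the burst/gap measurement described above: given a
# per-byte error-flag stream (1 = byte in error), collect the lengths of
# error bursts and of the good-data gaps between them.
def burst_gap_stats(flags):
    bursts, gaps = [], []
    for flag, run in groupby(flags):
        (bursts if flag else gaps).append(len(list(run)))
    return bursts, gaps

# Invented flag stream for illustration.
flags = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0]
bursts, gaps = burst_gap_stats(flags)
print(bursts)  # [3, 1, 2]
print(gaps)    # [2, 4, 2, 1]
```

Histograms of these two lists are the inputs a model of CIRC decoder performance would consume, since burst length relative to interleave depth is what determines correctability.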
Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont
Smath, J.A.; Blackey, F.E.
1986-01-01
Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream-gaging stations currently being operated were found to lack the accuracy required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream-gaging program. Below this level, the gages and recorders would not receive proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)
A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.; Shaklan, Stuart B.
2009-01-01
This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
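The roll-up quoted above combines the four top-level terms in quadrature against the 187 nm observatory requirement; using the abstract's own numbers (120, 84, 74, and 61 nm), the RSS and remaining contingency can be checked directly:

```python
import math

# Cross-check of the quadrature roll-up quoted in the abstract: the four
# top-level NFIRAOS wavefront-error terms RSS to ~175 nm, leaving ~66 nm of
# contingency against the 187 nm observatory requirement.
terms_nm = [120.0, 84.0, 74.0, 61.0]  # turbulence, opto-mech, AO comp., TT

total = math.sqrt(sum(t * t for t in terms_nm))
contingency = math.sqrt(187.0 ** 2 - total ** 2)
print(round(total))        # 175 nm
print(round(contingency))  # 66 nm, matching the quoted contingency
```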
Balancing the books - a statistical theory of prospective budgets in Earth System science
NASA Astrophysics Data System (ADS)
O'Kane, J. Philip
An honest declaration of the error in a mass, momentum or energy balance, ɛ, simply raises the question of its acceptability: "At what value of ɛ is the attempted balance to be rejected?" Answering this question requires a reference quantity against which to compare ɛ. This quantity must be a mathematical function of all the data used in making the balance. To deliver this function, a theory grounded in a workable definition of acceptability is essential. A distinction must be drawn between a retrospective balance and a prospective budget in relation to any natural space-filling body. Balances look to the past; budgets look to the future. The theory is built on the application of classical sampling theory to the measurement and closure of a prospective budget. It satisfies R.A. Fisher's "vital requirement that the actual and physical conduct of experiments should govern the statistical procedure of their interpretation". It provides a test, which rejects, or fails to reject, the hypothesis that the closing error on the budget, when realised, was due to sampling error only. By increasing the number of measurements, the discrimination of the test can be improved, controlling both the precision and accuracy of the budget and its components. The cost-effective design of such measurement campaigns is discussed briefly. This analysis may also show when campaigns to close a budget on a particular space-filling body are not worth the effort for either scientific or economic reasons. Other approaches, such as those based on stochastic processes, lack this finality, because they fail to distinguish between different types of error in the mismatch between a set of realisations of the process and the measured data.
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert
2011-01-01
The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. 
The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
Earth radiation budget measurement from a spinning satellite: Conceptual design of detectors
NASA Technical Reports Server (NTRS)
Sromovsky, L. A.; Revercomb, H. E.; Suomi, V. E.
1975-01-01
The conceptual design, sensor characteristics, sensor performance and accuracy, and spacecraft and orbital requirements for a spinning wide-field-of-view earth energy budget detector were investigated. The scientific requirements for measurement of the earth's radiative energy budget are presented. Other topics discussed include the observing system concept, solar constant radiometer design, plane flux wide-FOV sensor design, fast active cavity theory, fast active cavity design and error analysis, thermopile detectors as an alternative, pre-flight and in-flight calibration plan, system error summary, and interface requirements.
77 FR 65448 - Funding Availability Under Supportive Services for Veteran Families Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-26
... electronic copy of the entire application. A budget template must be attached in Excel format on the CD...
Cost effectiveness of the stream-gaging program in Ohio
Shindel, H.L.; Bartlett, W.P.
1986-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Ohio. Data uses and funding sources were identified for 107 continuous stream gages currently being operated by the U.S. Geological Survey in Ohio with a budget of $682,000; this budget includes field work for other projects and excludes stations jointly operated with the Miami Conservancy District. No stream gages were identified as having insufficient reason to continue their operation, nor were any stations identified as having uses specific only to short-term studies. All 107 stations should be maintained in the program for the foreseeable future. The average standard error of estimation of streamflow records is 29.2 percent at the present level of funding. A minimum budget of $679,000 is required to operate the 107-gage program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 31.1 percent. The maximum budget analyzed was $1,282,000, which resulted in an average standard error of 11.1 percent. A need for additional gages has been identified by the other agencies that cooperate in the program. It is suggested that these gages be installed as funds can be made available.
Imaging phased telescope array study
NASA Technical Reports Server (NTRS)
Harvey, James E.
1989-01-01
The problems encountered in obtaining a wide field-of-view with large, space-based direct-imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources, and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.
Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.
1985-01-01
The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for 91 gaging stations being operated in the two States. Some stations in Massachusetts are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of those studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)
Gandois, L; Nicolas, M; VanderHeijden, G; Probst, A
2010-11-01
The trace metal (TM: Cd, Cu, Ni, Pb and Zn) budget (stocks and annual fluxes) was evaluated in a forest stand (silver fir, Abies alba Miller) in north-eastern France. Trace metal concentrations were measured in different tree compartments in order to assess TM partitioning and dynamics in the trees. Inputs included bulk deposition, estimated dry deposition and weathering. Outputs were leaching and biomass exportation. Atmospheric deposition was the main input flux. The estimated dry deposition accounted for about 40% of the total trace metal deposition. The relative importance of leaching (estimated by a lumped parameter water balance model, BILJOU) and net biomass uptake (harvesting) for ecosystem exportation depended on the element. Trace metal distribution between tree compartments (stem wood and bark, branches and needles) indicated that Pb was mainly stored in the stem, whereas Zn and Ni, and to a lesser extent Cd and Cu, were translocated to aerial parts of the trees and cycled in the ecosystem. For Zn and Ni, leaching was the main output flux (>95% of the total output) and the plot budget (input-output) was negative, whereas for Pb the biomass net exportation represented 60% of the outputs and the budget was balanced. Cadmium and Cu had intermediate behaviours, with 18% and 30% of the total output relative to biomass exportation, respectively, and the budgets were negative. The net uptake by biomass was particularly important for Pb budgets, less so for Cd and Cu and not very important for Zn and Ni in such forest stands. Copyright © 2010 Elsevier B.V. All rights reserved.
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
L2 Spelling Errors in Italian Children with Dyslexia.
Palladino, Paola; Cismondo, Dhebora; Ferrari, Marcella; Ballagamba, Isabella; Cornoldi, Cesare
2016-05-01
The present study aimed to investigate L2 spelling skills in Italian children by administering an English word dictation task to 13 children with dyslexia (CD), 13 control children (comparable in age, gender, schooling and IQ) and a group of 10 children with an English learning difficulty but no L1 learning disorder. Patterns of difficulty were examined for accuracy and type of errors in spelling dictated short and long words (i.e. disyllables and three syllables). Notably, CD were poor in spelling English words. Furthermore, their errors were mainly related to the phonological representation of words, as they made more 'phonologically' implausible errors than controls. In addition, CD errors were more frequent for short than for long words. Conversely, the three groups did not differ in the number of plausible ('non-phonological') errors, that is, words that were incorrectly written but whose reading could correspond to the dictated word via either Italian or English rules. Error analysis also showed syllable-position differences in the spelling patterns of CD, children with an English learning difficulty and control children. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana
Stewart, J.A.; Miller, R.L.; Butch, G.K.
1986-01-01
Analysis of the stream gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data needs and the funding source for each of the 173 surface water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate streamflow data for three gaging stations. In addition, flow-routing models were used at two of the sites. Daily discharges produced from models did not meet the established accuracy criteria and, therefore, these methods should not replace stream gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream gaging stations were visited according to phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)
Determination of cadmium in sediments by diluted HCl extraction and isotope dilution ICP-MS.
Terán-Baamonde, Javier; Soto-Ferreiro, Rosa-María; Carlosena, Alatzne; Andrade, José-Manuel; Prada, Darío
2018-08-15
Isotope dilution ICP-MS is proposed to measure the mass fraction of Cd extracted by diluted HCl in marine sediments, using a fast and simple extraction procedure based on ultrasonic probe agitation. The 111Cd isotope was added before the extraction to achieve isotope equilibration with the native Cd solubilized from the sample. The parameters affecting the trueness and precision of the isotope ratio measurements were evaluated carefully and subsequently corrected in order to minimize errors; they were: detector dead time, spectral interferences, mass discrimination factor and optimum sample/spike ratio. To validate the method, the mass fraction of Cd extracted was compared with the sum of the certified contents of the three steps of the sequential extraction procedure of the Standards, Measurements and Testing Programme (SM&T) by analysing the BCR 701 sediment. The certified and measured values agreed, giving a measured/certified mass fraction ratio of 1.05. Further, the extraction procedure itself was studied by adding the enriched isotope after the extraction step, which allowed verifying whether analyte losses occurred during this process. Two additional reference sediments with certified total cadmium contents were also analysed. The method provided very good precision (0.9% RSD) and a low detection limit, 1.8 ng g⁻¹. The procedural uncertainty budget was estimated following the EURACHEM Guide by means of the 'GUM Workbench' software, obtaining a relative expanded uncertainty of 1.5%. The procedure was applied to determine the bioaccessible mass fraction of Cd in sediments from two environmentally and economically important areas of Galicia (rias of Arousa and Vigo, NW Spain). Copyright © 2018 Elsevier B.V. All rights reserved.
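The core isotope-dilution calculation behind such measurements can be written down in a few lines. This is a generic single-spike form; the function name, abundances and numbers below are illustrative assumptions, not values from the paper:

```python
# Generic single-spike isotope dilution equation (an illustrative sketch;
# the symbols, abundances and numbers below are assumptions, not values
# taken from the paper).

def idms_amount(n_spike, a_spike, b_spike, a_sample, b_sample, r_measured):
    """Moles of analyte in the sample, from the measured blend ratio.

    a_* : abundance of the reference isotope (e.g. 114Cd) in spike/sample
    b_* : abundance of the spike isotope (e.g. 111Cd) in spike/sample
    r_measured : measured ratio (reference/spike isotope) in the blend
    """
    return n_spike * (a_spike - r_measured * b_spike) / (r_measured * b_sample - a_sample)

# Approximate natural Cd abundances and a hypothetical 111Cd-enriched spike:
A_NAT, B_NAT = 0.2873, 0.1280   # 114Cd, 111Cd in the sample
A_SPK, B_SPK = 0.02, 0.95       # 114Cd, 111Cd in the spike
r_blend = 0.4930                # hypothetical measured 114Cd/111Cd blend ratio
print(idms_amount(1.0, A_SPK, B_SPK, A_NAT, B_NAT, r_blend))
```

The equation follows directly from mole balance on the two isotopes in the blend, which is why careful correction of dead time, interferences and mass bias in `r_measured` dominates the uncertainty budget.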
Performance of the Keck Observatory adaptive-optics system.
van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A
2004-10-10
The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 µm when a bright guide star is used, and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
Evaluation of a new photomask CD metrology tool
NASA Astrophysics Data System (ADS)
Dubuque, Leonard F.; Doe, Nicholas G.; St. Cin, Patrick
1996-12-01
In the integrated circuit (IC) photomask industry today, dense IC patterns, sub-micron critical dimensions (CD), and narrow tolerances for 64 M technologies and beyond are driving increased demands to minimize and characterize all components of photomask CD variation. This places strict requirements on photomask CD metrology in order to accurately characterize the mask CD error distribution. According to the gauge-maker's rule, measurement error must not exceed 30% of the tolerance on the product dimension measured, or the gauge is not considered capable. Traditional single-point repeatability tests are a poor measure of overall measurement system error in a dynamic, leading-edge technology environment. In such an environment, measurements may be taken at different points in the field-of-view due to stage inaccuracy, pattern recognition requirements, and throughput considerations. With this in mind, a set of experiments was designed to thoroughly characterize the metrology tool's repeatability and systematic error. The original experiments provided inconclusive results and had to be extended to obtain a full characterization of the system. Tests demonstrated a performance of better than 15 nm total CD error. Using this test as a tool for further development, the authors were able to determine the effects of various system components and measure the improvement with changes in optics, electronics, and software. Optimization of the optical path, electronics, and system software has yielded a new instrument with a total system error of better than 8 nm. Good collaboration between the photomask manufacturer and the equipment supplier has led to a realistic test of system performance and an improved CD measurement instrument.
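The gauge-maker's rule invoked above (measurement error no more than 30% of the product tolerance) is simple enough to express directly. A minimal sketch; the function name and example numbers are illustrative, only the 15 nm and 8 nm system errors come from the abstract:

```python
def gauge_capable(total_measurement_error_nm, product_tolerance_nm, max_fraction=0.30):
    """Gauge-maker's rule: measurement error must not exceed a fixed
    fraction (30% here) of the tolerance on the measured dimension."""
    return total_measurement_error_nm <= max_fraction * product_tolerance_nm

# Illustrative numbers: the early 15 nm system against a generous tolerance,
# and the improved 8 nm system against tighter tolerances.
print(gauge_capable(15, 60), gauge_capable(8, 50), gauge_capable(8, 26))
# → True True False
```

The improvement from 15 nm to 8 nm total system error thus roughly halves the tightest CD tolerance the tool can be considered capable of measuring.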
The influence of monetary punishment on cognitive control in abstinent cocaine-users*
Hester, Robert; Bell, Ryan P.; Foxe, John J.; Garavan, Hugh
2013-01-01
Background Dependent drug users show a diminished neural response to punishment, in both limbic and cortical regions, though it remains unclear how such changes influence cognitive processes critical to addiction. To assess this relationship, we examined the influence of monetary punishment on inhibitory control and adaptive post-error behaviour in abstinent cocaine dependent (CD) participants. Methods 15 abstinent CD and 15 matched control participants performed a Go/No-go response inhibition task, which administered monetary fines for failed response inhibition, during collection of fMRI data. Results CD participants showed reduced inhibitory control and significantly less adaptive post-error slowing in response to punishment, when compared to controls. The diminished behavioural punishment sensitivity shown by CD participants was associated with significant hypoactive error-related BOLD responses in the dorsal anterior cingulate cortex (ACC), right insula and right prefrontal regions. Specifically, CD participants’ error-related response in these regions was not modulated by the presence of punishment, whereas control participants’ response showed a significant BOLD increase during punished errors. Conclusions CD participants showed a blunted response to failed control (errors) that was not modulated by punishment. Consistent with previous findings of reduced sensitivity to monetary loss in cocaine users, we further demonstrate that such insensitivity is associated with an inability to increase cognitive control in the face of negative consequences, a core symptom of addiction. The pattern of deficits in the CD group may have implications for interventions that attempt to improve cognitive control in drug dependent groups via positive/negative incentives. PMID:23791040
The influence of monetary punishment on cognitive control in abstinent cocaine-users.
Hester, Robert; Bell, Ryan P; Foxe, John J; Garavan, Hugh
2013-11-01
Dependent drug users show a diminished neural response to punishment, in both limbic and cortical regions, though it remains unclear how such changes influence cognitive processes critical to addiction. To assess this relationship, we examined the influence of monetary punishment on inhibitory control and adaptive post-error behavior in abstinent cocaine dependent (CD) participants. 15 abstinent CD and 15 matched control participants performed a Go/No-go response inhibition task, which administered monetary fines for failed response inhibition, during collection of fMRI data. CD participants showed reduced inhibitory control and significantly less adaptive post-error slowing in response to punishment, when compared to controls. The diminished behavioral punishment sensitivity shown by CD participants was associated with significant hypoactive error-related BOLD responses in the dorsal anterior cingulate cortex (ACC), right insula and right prefrontal regions. Specifically, CD participants' error-related response in these regions was not modulated by the presence of punishment, whereas control participants' response showed a significant BOLD increase during punished errors. CD participants showed a blunted response to failed control (errors) that was not modulated by punishment. Consistent with previous findings of reduced sensitivity to monetary loss in cocaine users, we further demonstrate that such insensitivity is associated with an inability to increase cognitive control in the face of negative consequences, a core symptom of addiction. The pattern of deficits in the CD group may have implications for interventions that attempt to improve cognitive control in drug dependent groups via positive/negative incentives. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
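The style of Monte Carlo evaluation described, in which a small residual probability of an undetected human error inflates the measurement uncertainty budget, can be sketched as follows. All probabilities and magnitudes below are invented for illustration and are not the published expert-judgment values:

```python
import random

random.seed(1)

# Hypothetical numbers for a pH measurement: instrumental standard
# uncertainty, plus a small residual risk of an undetected human error.
SIGMA = 0.02            # pH units, baseline standard uncertainty
P_HUMAN_ERROR = 0.01    # residual risk remaining after the quality system
ERROR_MAGNITUDE = 0.10  # pH units, typical bias when an error does occur

def simulate_result():
    value = random.gauss(0.0, SIGMA)
    if random.random() < P_HUMAN_ERROR:
        value += random.choice((-1, 1)) * ERROR_MAGNITUDE
    return value

results = [simulate_result() for _ in range(200_000)]
mean = sum(results) / len(results)
variance = sum((r - mean) ** 2 for r in results) / (len(results) - 1)
print(variance ** 0.5)  # combined sigma, slightly above the 0.02 baseline
```

With these numbers the human-error term contributes but does not dominate (the combined sigma tends toward sqrt(0.02² + 0.01·0.10²) ≈ 0.022), mirroring the paper's "not negligible, yet also not dominant" conclusion.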
Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation
Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.
1984-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation. Operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data. It is recommended that this sparsity be remedied as funds become available.
Comparison of direct and heterodyne detection optical intersatellite communication links
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1987-01-01
The performance of direct and heterodyne detection optical intersatellite communication links is evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For the heterodyne systems, carrier phase noise presents another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice. In that case, the heterodyne system has a slightly higher power margin because of higher receiver sensitivity.
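The dB bookkeeping behind such a link budget, where the pointing penalty enters as one line item among gains and losses, can be sketched as follows. All numerical values are invented for illustration; only the 1.6 to 3 dB direct-detection penalty range comes from the abstract:

```python
# Minimal link-margin bookkeeping in dB (all values illustrative).
# Gains add; path loss, pointing penalty and other losses subtract.
def link_margin_db(tx_power_dbw, tx_gain_db, rx_gain_db,
                   free_space_loss_db, pointing_penalty_db,
                   other_losses_db, required_power_dbw):
    received = (tx_power_dbw + tx_gain_db + rx_gain_db
                - free_space_loss_db - pointing_penalty_db - other_losses_db)
    return received - required_power_dbw

# Hypothetical direct-detection case, swept over the quoted 1.6-3 dB
# pointing-penalty range:
worst = link_margin_db(0.0, 110.0, 110.0, 270.0, 3.0, 6.0, -62.0)
best = link_margin_db(0.0, 110.0, 110.0, 270.0, 1.6, 6.0, -62.0)
print(worst, best)  # worst-case ≈ 3.0 dB, best-case ≈ 4.4 dB of margin
```

The additive dB form makes the paper's trade-off visible: a larger pointing penalty eats margin directly, while a higher receiver sensitivity lowers `required_power_dbw` and buys it back.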
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts to reach the GRACE baseline accuracy predicted by the design simulations have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two reprocessed attitude datasets that differ in pointing performance. Range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.
A Starshade Petal Error Budget for Exo-Earth Detection and Characterization
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy
2011-01-01
We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.
National Defense Budget Estimates for FY
1993-05-01
funding" policy. Under full funding, Congress approves, in the year of the request, sufficient funds to complete a given quantity of items, even though... the proposed level of general transfer authority, and a technical outlay adjustment to properly reflect the Administration's pay policies...
Oriented Scintillation Spectrometer Experiment (OSSE). Revision A. Volume 1
1988-05-19
Table-of-contents excerpts: SYSTEM-LEVEL ENVIRONMENTAL TESTS; OPERATION REPORT, PROOF MODEL STRUCTURE TESTS; PROOF MODEL MODAL SURVEY; ALIGNMENT ERROR BUDGET, FOV, A4; ALIGNMENT ERROR BUDGET, ROTATION AXIS, A4; OSSE PROOF MODEL MODAL SURVEY; OSSE PROOF MODEL STATIC LOAD TEST.
The flash memory battle: How low can we go?
NASA Astrophysics Data System (ADS)
van Setten, Eelco; Wismans, Onno; Grim, Kees; Finders, Jo; Dusa, Mircea; Birkner, Robert; Richter, Rigo; Scherübl, Thomas
2008-03-01
With the introduction of the TWINSCAN XT:1900Gi, the limit of water-based hyper-NA immersion lithography has been reached in terms of resolution. With a numerical aperture of 1.35, a single-expose resolution of 36.5nm half pitch has been demonstrated. However, the practical resolution limit in production will be closer to 40nm half pitch, without having to resort to double-patterning-like strategies. In the relentless Flash memory market, the performance of the exposure tool is stretched to the limit for a competitive advantage and a cost-effective product. In this paper we present the results of an experimental study of the resolution limit of the NAND-Flash memory gate layer for a production-worthy process on the TWINSCAN XT:1900Gi. The entire gate layer is qualified in terms of full-wafer CD uniformity, aberration sensitivities for the different wordlines, and feature-center placement errors for the 38, 39, 40 and 43nm half-pitch design rules. In this study we also compare the performance of a binary intensity mask to a 6% attenuated phase-shift mask, and look at strategies to maximize depth of focus and to desensitize the gate layer to lens aberrations and placement errors. The mask is one of the dominant contributors to the CD uniformity budget of the flash gate layer. Therefore, the wafer measurements are compared to aerial image measurements of the mask using AIMS(TM) 45-193i to separate the mask contribution from the scanner contribution to the final imaging performance.
Kinetic energy budgets in areas of intense convection
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Berecek, E. M.; Ebel, D. M.; Jedlovec, G. J.
1980-01-01
A kinetic energy budget analysis of the AVE-SESAME 1 period, which coincided with the deadly Red River Valley tornado outbreak, is presented. Horizontal flux convergence was found to be the major kinetic energy source to the region, while cross-contour destruction was the major sink. Kinetic energy transformations were dominated by processes related to strong jet intrusion into the severe storm area. A kinetic energy budget of the AVE 6 period also is presented. The effects of inherent rawinsonde data errors on widely used basic kinematic parameters, including velocity divergence, vorticity advection, and kinematic vertical motion, are described. In addition, an error analysis was performed in terms of the kinetic energy budget equation. Results obtained from downward integration of the continuity equation to obtain kinematic values of vertical motion are described. This alternate procedure shows promising results in severe storm situations.
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.
2017-01-01
In this paper, we develop an integrated inventory model considering imperfect-quality items, inspection error, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and detected during the screening process. However, two types of inspection error are possible: type I, in which a non-defective item is classified as defective, and type II, in which a defective item is classified as non-defective. The demand during the lead time is unknown and follows a normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing cost. The purposes of this research are to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Kuhn-Tucker conditions, and to apply the models. Based on the results of the application and the sensitivity analysis, the integrated model yields a lower total inventory cost than separate optimization.
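The role of the Kuhn-Tucker conditions under a budget constraint can be sketched with a much simpler stand-in than the paper's vendor-buyer model: a classic EOQ lot-size problem with a purchasing-budget cap. All parameter values below are illustrative assumptions.

```python
import math

# Minimal KKT illustration (an EOQ stand-in, not the paper's full model):
# minimize f(Q) = D*S/Q + h*Q/2   subject to   g(Q) = c*Q - B <= 0
D, S, h = 1200.0, 50.0, 4.0   # annual demand, order cost, holding cost (assumed)
c, B = 10.0, 1500.0           # unit purchase cost and budget cap (assumed)

Q_unconstrained = math.sqrt(2.0 * D * S / h)   # classic EOQ solution
if c * Q_unconstrained <= B:
    Q_opt, lam = Q_unconstrained, 0.0          # constraint slack, multiplier 0
else:
    Q_opt = B / c                              # budget constraint active
    # stationarity: -D*S/Q^2 + h/2 + lam*c = 0  ->  solve for the multiplier
    lam = (D * S / Q_opt ** 2 - h / 2.0) / c

# complementary slackness holds: lam * (c*Q_opt - B) == 0 in both branches
print(Q_opt, lam)
```

With these numbers the unconstrained EOQ exceeds the budget, so the KKT solution sits on the constraint boundary with a positive multiplier, which measures how much the budget cap costs per dollar of relaxation.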
Full-chip level MEEF analysis using model based lithography verification
NASA Astrophysics Data System (ADS)
Kim, Juhwan; Wang, Lantian; Zhang, Daniel; Tang, Zongwu
2005-11-01
MEEF (Mask Error Enhancement Factor) has become a critical factor in CD uniformity control since the optical lithography process moved into the sub-resolution era. Many studies have quantified the impact of mask CD (Critical Dimension) errors on wafer CD errors [1-2]. However, the benefits from those studies were restricted to small pattern areas of the full-chip data due to long simulation times. As fast turnaround can be achieved for complicated verifications on very large data by linearly scalable distributed-processing technology, model-based lithography verification becomes feasible for various types of applications, such as post-mask-synthesis data sign-off for mask tape-out in production and lithography process development with full-chip data [3-5]. In this study, we introduce two useful methodologies for full-chip-level verification of mask error impact on the wafer lithography patterning process. One methodology is to check the MEEF distribution in addition to the CD distribution through the process window, which can be used for RET/OPC optimization at the R&D stage. The other is to check mask error sensitivity on potential pinch and bridge hotspots through lithography process variation, where the outputs can be passed on to mask CD metrology to add CD measurements at those hotspot locations. Two different OPC data sets were compared using the two methodologies in this study.
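MEEF is defined as the derivative of the printed wafer CD with respect to the mask CD (at wafer scale), so it can be estimated by a finite difference through any printed-CD model. The transfer curve below is a hypothetical stand-in for a lithography simulator, tuned so that MEEF approaches 1 for relaxed features and grows near an assumed resolution limit; it is an illustration of the definition only.

```python
# MEEF = d(CD_wafer) / d(CD_mask at 1x), estimated by central finite difference.
# printed_cd() is a toy transfer curve (hypothetical, not a real simulator):
# response is nearly linear for large features and amplifies small mask
# errors as the CD approaches the assumed resolution scale cd_res.
def printed_cd(mask_cd, cd_res=30.0, c=100.0):
    return mask_cd - c / (mask_cd - cd_res)

def meef(mask_cd, delta=0.01):
    """Central-difference slope of printed CD vs. mask CD."""
    return (printed_cd(mask_cd + delta) - printed_cd(mask_cd - delta)) / (2.0 * delta)

print(round(meef(45.0), 2), round(meef(90.0), 2))  # tighter feature -> larger MEEF
```

A MEEF above 1 means mask CD errors are amplified on the wafer, which is why full-chip MEEF maps matter for CD-uniformity budgeting.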
Cost effectiveness of the stream-gaging program in Louisiana
Herbert, R.A.; Carlson, D.D.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Louisiana. Data uses and funding sources were identified for the 68 continuous-record stream gages currently (1984) in operation with a budget of $408,700. Three stream gages have uses specific to a short-term study with no need for continued data collection beyond the study. The remaining 65 stations should be maintained in the program for the foreseeable future. In addition to the current operation of continuous-record stations, a number of wells, flood-profile gages, crest-stage gages, and stage stations are serviced on the continuous-record station routes, increasing the current budget to $423,000. The average standard error of estimate for data collected at the stations is 34.6%. Standard errors computed in this study are one measure of streamflow errors, and can be used as guidelines in comparing the effectiveness of alternative networks. By using the routes and number of measurements prescribed by the 'Traveling Hydrographer Program,' the standard error could be reduced to 31.5% with the current budget of $423,000. If the gaging resources are redistributed, the 34.6% overall level of accuracy at the 68 continuous-record sites and the servicing of the additional wells or gages could be maintained with a budget of approximately $410,000. (USGS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Dam, M A; Mignant, D L; Macintosh, B A
In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 µm using a bright guide star and 0.19 for a magnitude 12 star.
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources; applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data; and defines a strategy for operating the program that minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Zonal average earth radiation budget measurements from satellites for climate studies
NASA Technical Reports Server (NTRS)
Ellis, J. S.; Haar, T. H. V.
1976-01-01
Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean monthly, seasonal, and annual zonally averaged meridional profiles. The individual months that comprise the 29-month set were selected as representing the best available total-flux data for compositing into large-scale statistics for climate studies. A discussion of the spatial resolution of the measurements along with an error analysis, including both the uncertainty and the standard error of the mean, are presented.
Analysis method to determine and characterize the mask mean-to-target and uniformity specification
NASA Astrophysics Data System (ADS)
Lee, Sung-Woo; Leunissen, Leonardus H. A.; Van de Kerkhove, Jeroen; Philipsen, Vicky; Jonckheere, Rik; Lee, Suk-Joo; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2006-06-01
The specification of the mask mean-to-target (MTT) and uniformity is related to factors such as the mask error enhancement factor, dose sensitivity, and critical dimension (CD) tolerances. The mask MTT shows a trade-off relationship with the uniformity. Simulations for the mask MTT and uniformity (M-U) are performed for LOGIC devices of the 45 and 37 nm nodes according to mask type, illumination condition, and illuminator polarization state. CD tolerances and after-develop inspection (ADI) target CDs in the simulation are taken from the 2004 ITRS roadmap. The simulation results allow for much smaller tolerances in the uniformity and larger offsets in the MTT than the values given in the ITRS table. Using the parameters in the ITRS table, the mask uniformity contributes nearly 95% of the total CDU budget for the 45 nm node, and is even larger than the CDU specification of the ITRS for the 37 nm node. We also compared the simulation requirements with current mask-making capabilities. The current mask manufacturing status of the mask uniformity is barely acceptable for the 45 nm node, but requires process improvements towards future nodes. In particular, for the 37 nm node, polarized illumination is necessary to meet the ITRS requirements. The current mask linearity deviates for pitches smaller than 300 nm, which is not acceptable even for the 45 nm node. More effort on the proximity correction method is required to improve the linearity behavior.
GCIP water and energy budget synthesis (WEBS)
Roads, J.; Lawford, R.; Bainto, E.; Berbery, E.; Chen, S.; Fekete, B.; Gallo, K.; Grundstein, A.; Higgins, W.; Kanamitsu, M.; Krajewski, W.; Lakshmi, V.; Leathers, D.; Lettenmaier, D.; Luo, L.; Maurer, E.; Meyers, T.; Miller, D.; Mitchell, Ken; Mote, T.; Pinker, R.; Reichler, T.; Robinson, D.; Robock, A.; Smith, J.; Srinivasan, G.; Verdin, K.; Vinnikov, K.; Vonder, Haar T.; Vorosmarty, C.; Williams, S.; Yarosh, E.
2003-01-01
As part of the World Climate Research Program's (WCRP's) Global Energy and Water-Cycle Experiment (GEWEX) Continental-scale International Project (GCIP), a preliminary water and energy budget synthesis (WEBS) was developed for the period 1996-1999 from the "best available" observations and models. Besides this summary paper, a companion CD-ROM with more extensive discussion, figures, tables, and raw data is available to the interested researcher from the GEWEX project office, the GAPP project office, or the first author. An updated online version of the CD-ROM is also available at http://ecpc.ucsd.edu/gcip/webs.htm/. Observations alone cannot adequately characterize or "close" budgets since too many fundamental processes are missing. Models that properly represent the many complicated atmospheric and near-surface interactions are also required. This preliminary synthesis therefore included a representative global general circulation model, a regional climate model, and a macroscale hydrologic model, as well as a global reanalysis and a regional analysis. From the qualitative agreement among the models and available observations, it did appear that we now qualitatively understand the water and energy budgets of the Mississippi River Basin. However, there is still much quantitative uncertainty. In that regard, there did appear to be a clear advantage to using a regional analysis over a global analysis, or a regional simulation over a global simulation, to describe the Mississippi River Basin water and energy budgets. There also appeared to be some advantage to using a macroscale hydrologic model for at least the surface water budgets. Copyright 2003 by the American Geophysical Union.
Kinetic energy budget during strong jet stream activity over the eastern United States
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Scoggins, J. R.
1980-01-01
Kinetic energy budgets are computed during a cold air outbreak in association with strong jet stream activity over the eastern United States. The period is characterized by large generation of kinetic energy due to cross-contour flow. Horizontal export and dissipation of energy to subgrid scales of motion constitute the important energy sinks. Rawinsonde data at 3 and 6 h intervals during a 36 h period are used in the analysis and reveal that energy fluctuations on a time scale of less than 12 h are generally small even though the overall energy balance does change considerably during the period in conjunction with an upper level trough which moves through the region. An error analysis of the energy budget terms suggests that this major change in the budget is not due to random errors in the input data but is caused by the changing synoptic situation. The study illustrates the need to consider the time and space scales of associated weather phenomena in interpreting energy budgets obtained through use of higher frequency data.
76 FR 55139 - Order Making Fiscal Year 2012 Annual Adjustments to Registration Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... Congressional Budget Office (``CBO'') and Office of Management and Budget (``OMB'') to project the aggregate... given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n-step...
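The exp(forecast + σ_n²/2) expression quoted in the entry above is the standard lognormal mean correction: if log Y is normal with mean μ and standard deviation σ, then E[Y] = exp(μ + σ²/2), not exp(μ). A quick simulation check (illustrative values, not the fee-rate model's):

```python
import math, random

# Verify E[exp(X)] = exp(mu + sigma^2/2) for X ~ Normal(mu, sigma^2).
# Exponentiating the point forecast exp(mu) alone would be biased low.
random.seed(42)
mu, sigma = 2.0, 0.5   # illustrative forecast mean and standard error
samples = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
analytic = math.exp(mu + sigma ** 2 / 2.0)   # lognormal mean
naive = math.exp(mu)                         # biased-low point estimate
print(round(mc_mean, 2), round(analytic, 2), round(naive, 2))
```

This is why forecasts fit in logs, like the fee-rate projection quoted above, carry the σ²/2 term when converted back to levels.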
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty has two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment answers how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. From the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the variance error is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water-surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is significant sensitivity to water-surface, slope, and width errors because the bathymetry and roughness estimates are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in the errors in bathymetry and roughness. Increasing the slope error above 1.5 cm/km leads to significant degradation through direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The two experiments above are based on AirSWOT scenarios; in addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
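The Bayesian machinery behind the discharge estimation above can be sketched with a toy random-walk Metropolis sampler: infer an unobservable channel parameter (here a single rating coefficient standing in for bathymetry/roughness) from noisy height-discharge data. This illustrates the MCMC idea only, not the actual mass/momentum-constrained SWOT algorithm; all numbers are invented.

```python
import math, random

# Toy inference problem: discharge follows q = a * (h - 1)^1.5 with unknown a;
# recover a from observations corrupted by Gaussian noise.
random.seed(7)
a_true = 40.0
heights = [2.0 + 0.2 * i for i in range(25)]
obs = [a_true * (h - 1.0) ** 1.5 + random.gauss(0.0, 2.0) for h in heights]

def log_post(a, sigma=2.0):
    """Gaussian log-likelihood with a flat prior on a > 0."""
    if a <= 0.0:
        return -math.inf
    sse = sum((q - a * (h - 1.0) ** 1.5) ** 2 for h, q in zip(heights, obs))
    return -sse / (2.0 * sigma ** 2)

a, lp, kept = 30.0, log_post(30.0), []
for it in range(10_000):
    a_new = a + random.gauss(0.0, 0.1)          # random-walk proposal
    lp_new = log_post(a_new)
    if math.log(random.random()) < lp_new - lp:  # Metropolis accept/reject
        a, lp = a_new, lp_new
    if it >= 2_000:                              # discard burn-in
        kept.append(a)

a_hat = sum(kept) / len(kept)
print(round(a_hat, 2))  # posterior mean should sit near a_true
```

The real algorithm samples bathymetry and roughness jointly and propagates the measurement errors discussed in the abstract; the same accept/reject loop is at its core.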
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Assessing and measuring wetland hydrology
Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.
2013-01-01
Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.
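The chapter's cumulative-error idea can be made concrete: the water-budget residual collects the measurement error of every component, and if component errors are independent the residual uncertainty adds in quadrature. The component values and uncertainties below are purely illustrative.

```python
import math

# Wetland water budget: residual = sum of inflows minus outflows minus
# storage change; its uncertainty is the root-sum-square of the component
# uncertainties (assuming independent errors). Units: cm/month, illustrative.
components = {                   # (signed value, measurement uncertainty)
    "precipitation":       (+12.0, 0.6),
    "surface_inflow":      (+8.0,  1.2),
    "groundwater_inflow":  (+3.0,  1.5),
    "evapotranspiration":  (-9.0,  1.0),
    "surface_outflow":     (-7.0,  0.9),
    "storage_change":      (-5.5,  0.4),
}
residual = sum(v for v, _ in components.values())
residual_sigma = math.sqrt(sum(u ** 2 for _, u in components.values()))
print(round(residual, 2), round(residual_sigma, 2))
# A residual within ~2 sigma of zero is consistent with measurement error alone.
```

Note how the hardest-to-measure terms (here groundwater and surface inflow) dominate the residual uncertainty, which is the usual finding in wetland budgets.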
Error Budgets for the Exoplanet Starshade (exo-s) Probe-Class Mission Study
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin
2015-01-01
Exo-S is a probe-class mission study that includes the Dedicated mission, a 30-meter starshade co-launched with a 1.1-meter commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34-meter starshade intended to work with a 2.4-meter telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40-meter starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.
Error budgets for the Exoplanet Starshade (Exo-S) probe-class mission study
NASA Astrophysics Data System (ADS)
Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin
2015-09-01
Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 m starshade co-launched with a 1.1 m commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 m starshade intended to work with a 2.4 m telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 m starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.
NASA Astrophysics Data System (ADS)
Wu, Guocan; Zheng, Xiaogu; Dan, Bo
2016-04-01
Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate the soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual, at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.
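A scalar sketch of adaptive forecast-error inflation: choose the inflation factor λ so that the innovation statistic matches its expectation, E[d²] = λP + R, then use λP in the analysis update. The numbers are illustrative; the paper estimates inflation by maximizing a -2log-likelihood of the innovations over CoLM soil layers, which this moment-matching version only approximates.

```python
import random

# The model claims forecast variance P_assumed, but the true forecast error
# variance is larger (P_true). Innovations d = y - xf reveal the mismatch,
# since E[d^2] = lam * P_assumed + R at the correct inflation lam.
random.seed(3)
P_assumed, P_true, R = 1.0, 4.0, 0.5
truth = 10.0
innovations = []
for _ in range(50_000):
    xf = truth + random.gauss(0.0, P_true ** 0.5)   # forecast draw
    y = truth + random.gauss(0.0, R ** 0.5)         # observation draw
    innovations.append(y - xf)

d2 = sum(d * d for d in innovations) / len(innovations)
lam = max((d2 - R) / P_assumed, 1.0)          # moment estimate of inflation
K = lam * P_assumed / (lam * P_assumed + R)   # inflated analysis (Kalman) gain
print(round(lam, 2), round(K, 2))  # lam should recover ~P_true / P_assumed
```

Without inflation the gain would be 1/(1+0.5) ≈ 0.67 and the analysis would trust a too-confident forecast; the inflated gain weights the observation appropriately.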
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making such...
Improved Calibration through SMAP RFI Change Detection
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng
2017-01-01
Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.
Cost effectiveness of stream-gaging program in Michigan
Holtschlag, D.J.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the gage/discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. 
Owing to continual changes in the composition of the network and in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. The cost of these updates needs to be considered in decisions concerning the feasibility of flexible servicing schedules.
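The Kalman-filter reasoning behind the visit-schedule tradeoff above can be sketched in a few lines: between servicing visits the gage/rating error grows like a random walk, a discharge measurement at each visit resets it, so the average standard error rises with the visit interval. All parameter values are illustrative, not Michigan's.

```python
import math

# Scalar Kalman-filter model of a station's rating error:
#   between visits, error variance grows by q per week (process noise);
#   each visit's discharge measurement (variance r) shrinks it.
q = 0.5      # variance growth per week (percent^2), assumed
r = 9.0      # variance of a single discharge measurement, assumed

def avg_std_error(visit_interval_weeks):
    # steady-state post-measurement variance: iterate the predict/update
    # cycle  pf = p + q*T ;  p = pf*r/(pf + r)  to its fixed point
    p = 1.0
    for _ in range(200):
        pf = p + q * visit_interval_weeks
        p = pf * r / (pf + r)
    # average variance over the interval = post-visit variance + mean growth
    return math.sqrt(p + q * visit_interval_weeks / 2.0)

print(round(avg_std_error(6), 2), round(avg_std_error(3), 2))
# more frequent visits (smaller interval) -> lower average standard error
```

Running this curve against per-visit cost is the essence of trading budget against average standard error in the abstract's analysis.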
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... equivalent of 10 CD-ROMs. This is estimated to cost $20 for the 10 CD-ROM spindle, and $8 to ship each group... comments should be identified with the OMB control number 0910-NEW and title ``Information Request... Request Regarding Dissolvable Tobacco Products--(OMB Control Number 0910-NEW) On June 22, 2009, the...
Budgets of divergent and rotational kinetic energy during two periods of intense convection
NASA Technical Reports Server (NTRS)
Buechler, D. E.; Fuelberg, H. E.
1986-01-01
The derivations of the energy budget equations for the divergent and rotational components of kinetic energy are provided. The intense convection periods studied are: (1) synoptic-scale data at 3- or 6-hour intervals and (2) mesoalpha-scale data every 3 hours. Composite energies and averaged budgets for the periods are presented, and the effects of random data errors on derived energy parameters are investigated. The divergent kinetic energy and rotational kinetic energy budgets are compared, and good correlation between them is observed. The kinetic energies and budget terms increase with convective development; however, the conversions of divergent and rotational energy are of opposite sign.
22 CFR 96.33 - Budget, audit, insurance, and risk assessment requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... its governing body, if applicable, for management of its funds. The budget discloses all remuneration (including perquisites) paid to the agency's or person's board of directors, managers, employees, and... determining the type and amount of professional, general, directors' and officers', errors and omissions, and...
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert
2012-01-01
The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogue and main parachutes. Therefore, it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction. Therefore, conventional error budget methods could not be applied. Instead, a high-fidelity Monte-Carlo simulation was performed and error bounds were determined based on the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. The large errors drove a change to the altitude trigger setpoint for forward bay cover (FBC) jettison.
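A minimal sketch of the Monte-Carlo approach described above: when biased, correlated error sources rule out a conventional root-sum-of-squares budget, dispersion bounds can instead be read directly from percentiles of simulated altitude errors. All distributions and numbers below are illustrative assumptions, not EFT-1 values.

```python
# Sketch: Monte-Carlo error bounds when sources are biased/correlated
# and a conventional RSS budget does not apply. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 10000
aero_bias = rng.uniform(0.0, 300.0, n)     # one-sided aerodynamic error (ft)
sensor    = rng.standard_normal(n) * 50.0  # zero-mean sensor noise (ft)
alt_error = aero_bias + sensor             # total simulated altitude error

# ~3-sigma-equivalent coverage taken straight from the dispersions
lo, hi = np.percentile(alt_error, [0.135, 99.865])
print(lo, hi)
```

Note how the one-sided aerodynamic term skews the bounds upward, which is exactly the behavior an RSS combination of symmetric 1-sigma terms would miss.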
NASA Astrophysics Data System (ADS)
Nikolsky, Peter; Strolenberg, Chris; Nielsen, Rasmus; Nooitgedacht, Tjitte; Davydova, Natalia; Yang, Greg; Lee, Shawn; Park, Chang-Min; Kim, Insung; Yeo, Jeong-Ho
2013-04-01
As the International Technology Roadmap for Semiconductors critical dimension uniformity (CDU) specification shrinks, semiconductor companies need to maintain a high yield of good wafers per day and high performance (and hence market value) of finished products. This cannot be achieved without continuous analysis and improvement of on-product CDU as one of the main drivers for process control and optimization, with better understanding of the main contributors from the litho cluster: mask, process, metrology and scanner. We will demonstrate a study of mask CDU characterization and its impact on the CDU Budget Breakdown (CDU BB) performed for advanced extreme ultraviolet (EUV) lithography with 1D (dense lines) and 2D (dense contacts) feature cases. We will show that this CDU contributor is one of the main differentiators between well-known ArFi and new EUV CDU budgeting principles. We found that reticle contribution to intrafield CDU should be characterized in a specific way: mask absorber thickness fingerprints play a role comparable with reticle CDU in the total reticle part of the CDU budget. Wafer CD fingerprints introduced by this contributor may or may not compensate variations of mask CDs and hence influence the total mask impact on intrafield CDU at the wafer level. This will be shown with 1D and 2D feature examples. Mask stack reflectivity variations should also be taken into account: these fingerprints have a visible impact on intrafield CDs at the wafer level and should be considered as another contributor to the reticle part of the EUV CDU budget. We also observed mask error enhancement factor (MEEF) through-field fingerprints in the studied EUV cases. Variations of MEEF may play a role in the total intrafield CDU and may need to be taken into account for EUV lithography. We characterized MEEF-through-field for the reviewed features, with results herein, but further analysis of this phenomenon is required.
This comprehensive approach to quantifying the mask part of the overall EUV CDU contribution helps deliver an accurate and integral CDU BB per product/process and litho tool. The better understanding of the entire CDU budget for advanced EUVL nodes achieved by Samsung and ASML helps extend the limits of Moore's Law and deliver successful implementation of smaller, faster and smarter chips in the semiconductor industry.
Artificial intelligence modeling of cadmium(II) biosorption using rice straw
NASA Astrophysics Data System (ADS)
Nasr, Mahmoud; Mahmoud, Alaa El Din; Fawzy, Manal; Radwan, Ahmed
2017-05-01
The biosorption efficiency of Cd2+ using rice straw was investigated at room temperature (25 ± 4 °C), a contact time of 2 h and an agitation rate of 5 Hz. Experiments studied the effect of three factors, biosorbent dose BD (0.1 and 0.5 g/L), pH (2 and 7) and initial Cd2+ concentration X (10 and 100 mg/L), at two levels, "low" and "high". Results showed that a variation in X from high to low produced a 31 % increase in Cd2+ biosorption, while changing pH and BD from low to high achieved 28.60 % and 23.61 % increases in the removal of Cd2+, respectively. From the 2³ factorial design, the effects of BD, pH and X yielded p values of 0.2248, 0.1881 and 0.1742, respectively, indicating that the influences are in the order X > pH > BD. Similarly, an adaptive neuro-fuzzy inference system indicated that X is the most influential factor, with training and checking errors of 10.87 and 17.94, respectively. This trend was followed by pH, with a training error of 15.80 and a checking error of 17.39, and then BD, with a training error of 16.09 and a checking error of 16.29. A feed-forward back-propagation neural network with a 3-6-1 configuration achieved correlations (R) of 0.99 (training), 0.82 (validation) and 0.97 (testing). Thus, the proposed network is capable of predicting Cd2+ biosorption with high accuracy, and the most significant variable was X.
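The 2³ factorial main-effect calculation described above can be sketched as follows. The removal percentages are invented for illustration (the paper's raw data are not reproduced here), but they are chosen so the resulting ordering |X| > |pH| > |BD|, and the sign of the X effect, mirror the reported trend.

```python
# Sketch: main effects from a 2^3 full factorial design.
# The response values are hypothetical, not the paper's measurements.
import itertools
import numpy as np

# Factors BD, pH, X coded -1 (low) / +1 (high); 8 runs.
levels = np.array(list(itertools.product([-1, 1], repeat=3)))

# Hypothetical Cd2+ removal (%) for the 8 runs, ordered as `levels`.
removal = np.array([40.0, 15.0, 60.0, 35.0, 55.0, 30.0, 75.0, 50.0])

# Main effect = mean response at +1 minus mean response at -1.
effects = {}
for name, col in zip(["BD", "pH", "X"], levels.T):
    effects[name] = removal[col == 1].mean() - removal[col == -1].mean()
print(effects)
```

With these toy numbers the effects come out as BD: +15, pH: +20, X: -25, reproducing the qualitative conclusion that X dominates and that raising X lowers removal.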
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built, depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates and other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
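A minimal sketch of the weighted root-sum-of-squares combination described above: each 1-sigma error source is scaled by a sensitivity weight and combined in quadrature, and k=1 (~67%) and k=2 (~95%) bounds follow by scaling. The source names, tolerances and weights are illustrative placeholders, not the paper's tabulated values.

```python
# Sketch of a weighted root-sum-of-squares goniometer error budget.
# Sources, tolerances and weights are illustrative, not from the paper.
import math

sources = {  # name: (1-sigma tolerance [arcsec], sensitivity weight)
    "axis_wobble":      (2.0, 1.0),
    "orthogonality":    (3.0, 0.7),
    "encoder_error":    (1.5, 1.0),
    "zeroing_residual": (2.5, 0.5),
}

rss_1sigma = math.sqrt(sum((tol * w) ** 2 for tol, w in sources.values()))
print(f"k=1 (~67%): {rss_1sigma:.2f} arcsec")
print(f"k=2 (~95%): {2 * rss_1sigma:.2f} arcsec")
```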
Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.
ERIC Educational Resources Information Center
Marcoulides, George A.
1995-01-01
A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)
Quantifying uncertainty in forest nutrient budgets
Ruth D. Yanai; Carrie R. Levine; Mark B. Green; John L. Campbell
2012-01-01
Nutrient budgets for forested ecosystems have rarely included error analysis, in spite of the importance of uncertainty to interpretation and extrapolation of the results. Uncertainty derives from natural spatial and temporal variation and also from knowledge uncertainty in measurement and models. For example, when estimating forest biomass, researchers commonly report...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-28
... Identifier: CMS-10003] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects a technical error in the notice [Document Identifier: CMS...
Sensitivity of planetary cruise navigation to earth orientation calibration errors
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Folkner, W. M.
1995-01-01
A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high, and thus their contribution to the uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades.
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
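The effect of temporally correlated random error described above can be illustrated with an AR(1) model: for the mean of n annual estimates, serial correlation inflates the uncertainty well beyond the white-noise value. The sigma and phi values below are illustrative assumptions, not the paper's estimates.

```python
# Sketch: temporally correlated (AR(1)) random error inflates the
# uncertainty of a decadal-mean flux relative to the white-noise case.
# sigma and phi are illustrative, not values from the paper.
import numpy as np

sigma, phi, n = 0.5, 0.95, 10  # Pg C/yr, lag-1 autocorrelation, years

# Variance of the mean of n AR(1) samples:
# var(mean) = sigma^2 / n^2 * sum_{i,j} phi^|i-j|
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
var_mean = sigma**2 * (phi**lags).sum() / n**2

white = sigma / np.sqrt(n)     # uncorrelated (white-noise) assumption
correlated = np.sqrt(var_mean)
print(f"2-sigma, white noise: {2*white:.2f} Pg C/yr")
print(f"2-sigma, AR(1):       {2*correlated:.2f} Pg C/yr")
```

With phi near 1 the averaging barely reduces the error, which is why treating reporting errors as independent year to year would badly understate decadal uncertainty.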
Ma, Zhaoxuan; Shiao, Stephen L; Yoshida, Emi J; Swartwood, Steven; Huang, Fangjin; Doche, Michael E; Chung, Alice P; Knudsen, Beatrice S; Gertych, Arkadiusz
2017-09-18
Immune cell infiltrates (ICI) of tumors are scored by pathologists around tumor glands. To obtain a better understanding of the immune infiltrate, individual immune cell types, their activation states and their location relative to tumor cells need to be determined. This process requires precise identification of the tumor area and enumeration of immune cell subtypes separately in the stroma and inside tumor nests. Such measurements can be accomplished in a multiplex format using immunohistochemistry (IHC). We developed a pipeline that combines IHC and digital image analysis. One slide was stained with pan-cytokeratin and CD45, and the other slide with CD8, CD4 and CD68. The tumor mask generated through pan-cytokeratin staining was transferred from one slide to the other using affine image co-registration. Bland-Altman plots and Pearson correlation were used to investigate differences between densities and counts of immune cells underneath the transferred versus manually annotated tumor masks. One-way ANOVA was used to compare the mask transfer error for tissues with solid and glandular tumor architecture. The overlap between manual and transferred tumor masks ranged from 20%-90% across all cases. The error of transferring the mask was 2- to 4-fold greater in tumor regions with glandular compared to solid growth pattern (p < 10^-6). Analyzing data from a single slide, the Pearson correlation coefficients of cell type densities outside and inside tumor regions were highest for CD4+ T-cells (r = 0.8), CD8+ T-cells (r = 0.68) and CD68+ macrophages (r = 0.79). The correlation coefficient for CD45+ T- and B-cells was only 0.45. The transfer of the mask generated an error in the measurement of intra- and extra-tumoral CD68+, CD8+ or CD4+ counts (p < 10^-10). In summary, we developed a general method to integrate data from IHC-stained slides into a single dataset.
Because of the transfer error between slides, we recommend applying the antibody for demarcation of the tumor on the same slide as the ICI antibodies.
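A minimal sketch of the Bland-Altman comparison used above to assess transferred versus manually annotated tumor masks: the method reports the mean difference (bias) and 95% limits of agreement between two paired measurements. The density values below are invented for illustration.

```python
# Sketch: Bland-Altman bias and limits of agreement for cell densities
# under transferred vs. manually annotated masks (illustrative numbers).
import numpy as np

manual      = np.array([120., 85., 200., 150., 95., 170.])  # cells/mm^2
transferred = np.array([115., 90., 185., 160., 90., 175.])

diff = transferred - manual
bias = diff.mean()              # systematic offset between the methods
loa  = 1.96 * diff.std(ddof=1)  # half-width of 95% limits of agreement
print(f"bias = {bias:.2f}, 95% LoA = [{bias - loa:.2f}, {bias + loa:.2f}]")
```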
Logging-related increases in stream density in a northern California watershed
Matthew S. Buffleben
2012-01-01
Although many sediment budgets estimate the effects of logging, few have considered the potential impact of timber harvesting on stream density. Failure to consider changes in stream density could lead to large errors in the sediment budget, particularly between the allocation of natural and anthropogenic sources of sediment.This study...
NASA Astrophysics Data System (ADS)
Kojima, Yosuke; Shirasaki, Masanori; Chiba, Kazuaki; Tanaka, Tsuyoshi; Inazuki, Yukio; Yoshikawa, Hiroki; Okazaki, Satoshi; Iwase, Kazuya; Ishikawa, Kiichi; Ozawa, Ken
2007-05-01
For the 45 nm node and beyond, the alternating phase-shift mask (alt. PSM), one of the most promising resolution enhancement technologies (RET) because of its high image contrast and small mask error enhancement factor (MEEF), and the binary mask (BIM) attract attention. Reducing CD errors, registration errors and defects is a critical issue for both. As a solution, a new blank for alt. PSM and BIM is developed. The top film of the new blank is thin Cr, and the antireflection film and shielding film, composed of MoSi, are deposited under the Cr film. The mask CD performance is evaluated for through-pitch behavior, CD linearity, CD uniformity, global loading, resolution and pattern fidelity, and the blank performance is evaluated for optical density, reflectivity, sheet resistance, flatness and defect level. It is found that the performance of the new blank is equal to or better than that of the conventional blank in all items. The mask CD performance shows significant improvement. The lithography performance of the new blank is confirmed by wafer printing and AIMS measurement. A full-dry-type alt. PSM has been used as the test plate, and the test results show that the new blank can almost meet the specifications of pi-0 CD difference, CD uniformity and process margin for the 45 nm node. Additionally, the new blank shows better pattern fidelity than the conventional blank on wafer. AIMS results are almost the same as wafer results except for the narrowest pattern. Considering the results above, this new blank can reduce the mask error factors of alt. PSM and BIM for the 45 nm node and beyond.
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low-cost consumer-grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors due to instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
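The quasi-static field detection idea can be sketched as a sliding-window variance test on the magnetometer magnitude: only windows where the field is stable are trusted for attitude and gyro-error updates. The window length, threshold and simulated data below are illustrative assumptions, not the authors' parameters.

```python
# Sketch of quasi-static field (QSF) detection: flag windows where the
# variance of the magnetometer magnitude stays below a threshold.
# Window size, threshold and data are illustrative, not from the paper.
import numpy as np

def qsf_mask(mag_xyz, win=25, var_thresh=0.05):
    """mag_xyz: (N, 3) magnetometer samples. Returns boolean mask (N,)."""
    norm = np.linalg.norm(mag_xyz, axis=1)
    mask = np.zeros(norm.size, dtype=bool)
    for i in range(norm.size - win + 1):
        if norm[i:i + win].var() < var_thresh:
            mask[i:i + win] = True  # whole stable window is usable
    return mask

rng = np.random.default_rng(0)
quiet = 50 + 0.01 * rng.standard_normal((100, 3))     # quasi-static span
perturbed = 50 + 5.0 * rng.standard_normal((100, 3))  # disturbed span
mask = qsf_mask(np.vstack([quiet, perturbed]))
# Windows straddling the boundary are ambiguous; well inside each span
# the classification is clear-cut.
print(mask[:100].all(), mask[125:].any())
```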
Ultraspectral sounding retrieval error budget and estimation
NASA Astrophysics Data System (ADS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping
2011-11-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
Overview of the TOPEX/Poseidon Platform Harvest Verification Experiment
NASA Technical Reports Server (NTRS)
Morris, Charles S.; DiNardo, Steven J.; Christensen, Edward J.
1995-01-01
An overview is given of the in situ measurement system installed on Texaco's Platform Harvest for verification of the sea level measurement from the TOPEX/Poseidon satellite. The prelaunch error budget suggested that the total root mean square (RMS) error due to measurements made at this verification site would be less than 4 cm. The actual error budget for the verification site is within these original specifications. However, evaluation of the sea level data from three measurement systems at the platform has resulted in unexpectedly large differences between the systems. Comparison of the sea level measurements from the different tide gauge systems has led to a better understanding of the problems of measuring sea level in relatively deep ocean. As of May 1994, the Platform Harvest verification site has successfully supported 60 TOPEX/Poseidon overflights.
Extended Kalman filter for attitude estimation of the earth radiation budget satellite
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.
1989-01-01
The design and testing of an Extended Kalman Filter (EKF) for ground attitude determination, misalignment estimation and sensor calibration of the Earth Radiation Budget Satellite (ERBS) are described. Attitude is represented by the quaternion of rotation and the attitude estimation error is defined as an additive error. Quaternion normalization is used for increasing the convergence rate and for minimizing the need for filter tuning. The development of the filter dynamic model, the gyro error model and the measurement models of the Sun sensors, the IR horizon scanner and the magnetometers which are used to generate vector measurements are also presented. The filter is applied to real data transmitted by ERBS sensors. Results are presented and analyzed and the EKF advantages as well as sensitivities are discussed. On the whole the filter meets the expected synergism, accuracy and robustness.
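The quaternion representation with explicit normalization, as used in the EKF above, can be sketched as follows. The propagation here is a generic first-order small-rotation update, not the ERBS filter's actual dynamic model; it simply shows why renormalizing after each step keeps the attitude quaternion on the unit sphere.

```python
# Sketch: quaternion attitude propagation with the normalization step
# that keeps |q| = 1 (generic illustration, not the ERBS filter model).
import numpy as np

def quat_mult(q, r):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = r
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def propagate(q, omega, dt):
    """First-order update with body rate omega (rad/s), then renormalize."""
    dq = np.array([*(0.5 * dt * omega), 1.0])  # small-rotation quaternion
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)       # normalization step

q = np.array([0.0, 0.0, 0.0, 1.0])             # identity attitude
for _ in range(1000):                          # 1 rad total about z
    q = propagate(q, np.array([0.0, 0.0, 0.01]), 0.1)
print(np.linalg.norm(q))
```

Without the renormalization, the first-order update slowly drifts off the unit sphere and the small-angle error model degrades; with it, |q| stays 1 to machine precision.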
NASA Astrophysics Data System (ADS)
Kleinherenbrink, Marcel; Riva, Riccardo; Sun, Yu
2016-11-01
In this study, for the first time, an attempt is made to close the sea level budget on a sub-basin scale in terms of trend and amplitude of the annual cycle. We also compare the residual time series after removing the trend, the semiannual and the annual signals. To obtain errors for altimetry and Argo, full variance-covariance matrices are computed using correlation functions, and their errors are fully propagated. For altimetry, we apply a geographically dependent intermission bias [Ablain et al. (2015)], which leads to differences in trends up to 0.8 mm yr-1. Since Argo float measurements are non-homogeneously spaced, steric sea levels are first objectively interpolated onto a grid before averaging. For Gravity Recovery And Climate Experiment (GRACE) gravity fields, full variance-covariance matrices are used to propagate errors and statistically filter the gravity fields. We use four different filtered gravity field solutions and determine which post-processing strategy is best for budget closure. As a reference, the standard 96-degree Dense Decorrelation Kernel-5 (DDK5)-filtered Center for Space Research (CSR) solution is used to compute the mass component (MC). A comparison is made with two anisotropic Wiener-filtered CSR solutions up to degree and order 60 and 96 and a Wiener-filtered 90-degree ITSG solution. Budgets are computed for 10 polygons in the North Atlantic Ocean, defined in a way that the error on the trend of the MC plus steric sea level remains within 1 mm yr-1. Using the anisotropic Wiener filter on CSR gravity fields expanded up to spherical harmonic degree 96, it is possible to close the sea level budget in 9 of 10 sub-basins in terms of trend.
Wiener-filtered Institute of Theoretical Geodesy and Satellite Geodesy (ITSG) and the standard DDK5-filtered CSR solutions also close the trend budget if a glacial isostatic adjustment (GIA) correction error of 10-20 % is applied; however, the performance of the DDK5-filtered solution strongly depends on the orientation of the polygon due to residual striping. In 7 of 10 sub-basins, the budget of the annual cycle is closed using the DDK5-filtered CSR or the Wiener-filtered ITSG solutions. The Wiener-filtered 60- and 96-degree CSR solutions, in combination with Argo, lack amplitude and suffer from what appears to be hydrological leakage in the Amazon and Sahel regions. After reducing the trend, the semiannual and the annual signals, 24-53 % of the residual variance in altimetry-derived sea level time series is explained by the combination of Argo steric sea levels and the Wiener-filtered ITSG MC. Based on this, we believe that the best overall solution for the MC of the sub-basin-scale budgets is the Wiener-filtered ITSG gravity fields. The interannual variability is primarily a steric signal in the North Atlantic Ocean, so for this the choice of filter and gravity field solution is not really significant.
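The full-covariance error propagation underlying these budgets can be sketched for a toy basin average: the variance of a weighted mean is w^T C w, and ignoring the off-diagonal covariance understates the error when neighboring cells are positively correlated. The weights and covariance below are illustrative, not values from the study.

```python
# Sketch: propagating a full variance-covariance matrix through an
# area-weighted basin average (toy 3-grid-cell example, illustrative).
import numpy as np

w = np.array([0.5, 0.3, 0.2])      # area weights, sum to 1
C = np.array([[1.00, 0.60, 0.30],  # error covariance (mm^2 yr^-2)
              [0.60, 1.00, 0.60],
              [0.30, 0.60, 1.00]])

var_uncorr = (w**2 * np.diag(C)).sum()  # diagonal only (no correlation)
var_full   = w @ C @ w                  # full propagation: w^T C w
print(np.sqrt(var_uncorr), np.sqrt(var_full))
```

With positive spatial correlation the full propagation gives a noticeably larger error on the basin mean than the diagonal-only estimate.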
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus, efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
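A minimal sketch of why temporally correlated random error matters for decadal uncertainties: persistent errors average out far more slowly than white noise. The AR(1) error model and all numbers below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, sigma, rho = 10, 0.5, 0.8  # Pg C/yr error and assumed AR(1) coefficient

def ar1_errors(n, sigma, rho, rng):
    """One AR(1) error series with stationary standard deviation `sigma`."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return e

# Monte Carlo: spread of the decadal-mean error, correlated vs. white noise.
means_corr = [ar1_errors(n_years, sigma, rho, rng).mean() for _ in range(20000)]
means_white = [rng.normal(0.0, sigma, n_years).mean() for _ in range(20000)]
print(f"decadal-mean error, AR(1): {np.std(means_corr):.3f} Pg C/yr")
print(f"decadal-mean error, white: {np.std(means_white):.3f} Pg C/yr")
```

With rho = 0.8 the decadal-mean uncertainty is roughly twice the white-noise value sigma/sqrt(n), which is the qualitative point behind treating reporting errors as temporally correlated.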
Performance analysis of next-generation lunar laser retroreflectors
NASA Astrophysics Data System (ADS)
Ciocci, Emanuele; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Mastrofini, Marco; Currie, Douglas; Delle Monache, Giovanni; Dell'Agnello, Simone
2017-09-01
Starting from 1969, Lunar Laser Ranging (LLR) to the Apollo and Lunokhod Cube Corner Retroreflectors (CCRs) has provided several tests of General Relativity (GR). When deployed, the Apollo/Lunokhod CCR design contributed only a negligible fraction of the ranging error budget. Today, improvements in the laser ground stations over the years have made the lunar libration contribution relevant, so libration now dominates the error budget, limiting the precision of experimental tests of gravitational theories. The MoonLIGHT-2 project (Moon Laser Instrumentation for General relativity High-accuracy Tests - Phase 2) is a next-generation LLR payload developed by the Satellite/lunar/GNSS laser ranging/altimetry and Cube/microsat Characterization Facilities Laboratory (SCF_Lab) at INFN-LNF in collaboration with the University of Maryland. With its unique design consisting of a single large CCR unaffected by librations, MoonLIGHT-2 can significantly reduce the error contribution of the reflectors to the measurement of the lunar geodetic precession and other GR tests compared to the Apollo/Lunokhod CCRs. This paper treats only this specific next-generation lunar laser retroreflector (MoonLIGHT-2) and is by no means intended to address other contributions to the global LLR error budget. MoonLIGHT-2 is approved to be launched with the Moon Express 1 (MEX-1) mission and will be deployed on the lunar surface in 2018. To validate and optimize MoonLIGHT-2, the SCF_Lab is carrying out a unique experimental test called the SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and a simulated space environment. The focus of this paper is to describe the SCF_Lab's specialized characterization of the performance of our next-generation LLR payload.
While this payload will reduce the contribution of the space segment (MoonLIGHT-2) to the error budget of GR tests and of constraints on new gravitational theories (such as non-minimally coupled gravity and spacetime torsion), the description of the associated physics analysis and of the global LLR error budget is outside the chosen scope of the present paper. We note that, according to Reasenberg et al. (2016), the software models used for LLR physics and lunar science cannot process residuals with an accuracy better than a few centimeters and that, in order to process millimeter-level (or better) ranging data coming from (not only) future reflectors, it is necessary to update and improve the respective models inside the software packages. The work presented here on the results of the SCF-Test thermal and optical analysis shows that good performance is expected from MoonLIGHT-2 after its deployment on the Moon. This in turn will stimulate improvements in LLR ground-segment hardware and help refine the LLR software codes and models. Without a significant improvement of the LLR space segment, the acquisition of improved ground LLR hardware and challenging LLR software refinements may languish for lack of motivation, since the librations of the old-generation LLR payloads largely dominate the global LLR error budget.
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost-effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visits. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a smaller budget would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records; however, the station was continued because of data-use requirements. (Author's abstract)
Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data
NASA Technical Reports Server (NTRS)
Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen
2006-01-01
This study presents results of comparisons between instrumental radiation data on the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are underestimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly-mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the treatment of elevation effects, while the errors in SRB LW were mainly due to significant errors in the input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.
Michael Köhl; Charles Scott; Daniel Plugge
2013-01-01
Uncertainties are a composite of errors arising from observations and the appropriateness of models. An error budget approach can be used to identify and accumulate the sources of errors to estimate change in emissions between two points in time. Various forest monitoring approaches can be used to estimate the changes in emissions due to deforestation and forest...
Modeling and analysis of pinhole occulter experiment
NASA Technical Reports Server (NTRS)
Ring, J. R.
1986-01-01
The objectives were to improve the pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).
Live, Virtual, and Constructive-Training Environment: A Vision and Strategy for the Marine Corps
2014-09-01
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; ...
2015-04-30
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
CBO’s Revenue Forecasting Record
2015-11-01
[Figure: Forecast Errors for CBO's and the Administration's Two-Year Revenue Projections, 1983-2013; CBO's mean forecast error is 1.1%.] Congress of the United States, Congressional Budget Office, November 2015.
Department of the Navy Supporting Data for FY1991 Budget Estimates Descriptive Summaries
1990-01-01
deployments of the F/A-18 aircraft. f. (U) Engineering and technical support for AAS-38 tracker and F/A-18 C/D WSSA. g. (U) Provided support to ATARS program...for preliminary testing of RECCE/ATARS common nose and associated air data computer (ADC) algorithms. h. (U) Initiated integration of full HARPOON and...to ATARS program for testing of flight control computer software. Budget Activity: 4. Program Element Title: F/A-18.
Within-wafer CD variation induced by wafer shape
NASA Astrophysics Data System (ADS)
Huang, Chi-hao; Yang, Mars; Yang, Elvis; Yang, T. H.; Chen, K. C.
2016-03-01
In order to meet the increasing storage-capacity demand and reduce the bit cost of NAND flash memories, 3D stacked vertical flash cell arrays have been proposed. In constructing 3D NAND flash memories, the bit number per unit area increases with the number of stacked layers. However, the increased number of stacked layers has made film stress control extremely important for maintaining good process quality. The residual film stress alters the wafer shape, and accordingly several process impacts have been observed across the wafer, such as film deposition non-uniformity, etch rate non-uniformity, wafer chucking error on the scanner, material coating/baking defects, overlay degradation and critical dimension (CD) non-uniformity. Residual tensile and compressive stresses on wafers result in concave and convex wafer shapes, respectively. This study investigates within-wafer CD uniformity (CDU) associated with wafer shape change induced by 3D NAND flash memory processes. Within-wafer CDU was correlated with several critical parameters, including different wafer bow heights of concave and convex wafer shapes, photoresists with different post-exposure-bake (PEB) temperature sensitivities, and DoseMapper compensation. The results indicate that within-wafer CDU remains flat for convex wafer shapes with bow heights up to +230 um and for concave wafer shapes with bow heights ranging from 0 to -70 um, while within-wafer CDU trends upward from -70 um to -246 um wafer bow heights. To minimize the within-wafer CD distribution induced by wafer warpage, carefully tailoring the film stack and thermal budget in the process flow to keep the wafer shape within a CDU-friendly range is indispensable, and using photoresist materials with lower PEB temperature sensitivity is also suggested.
In addition, DoseMapper compensation can also greatly suppress within-wafer CD non-uniformity, but the photoresist profile variation induced by across-wafer PEB temperature non-uniformity attributable to wafer warpage is uncorrectable, and this profile variation is believed to affect across-wafer etch bias uniformity to some degree.
NASA Technical Reports Server (NTRS)
De Boer, G.; Shupe, M.D.; Caldwell, P.M.; Bauer, Susanne E.; Persson, O.; Boyle, J.S.; Kelley, M.; Klein, S.A.; Tjernstrom, M.
2014-01-01
Atmospheric measurements from the Arctic Summer Cloud Ocean Study (ASCOS) are used to evaluate the performance of three atmospheric reanalyses (the European Centre for Medium-Range Weather Forecasting (ECMWF)-Interim reanalysis, the National Center for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) reanalysis, and the NCEP-DOE (Department of Energy) reanalysis) and two global climate models (CAM5 (Community Atmosphere Model 5) and NASA GISS (Goddard Institute for Space Studies) ModelE2) in simulation of the high Arctic environment. Quantities analyzed include near-surface meteorological variables such as temperature, pressure, humidity and winds, surface-based estimates of cloud and precipitation properties, the surface energy budget, and lower atmospheric temperature structure. In general, the models perform well in simulating large-scale dynamical quantities such as pressure and winds. Near-surface temperature and lower atmospheric stability, along with surface energy budget terms, are not as well represented, due largely to errors in the simulation of cloud occurrence, phase and altitude. Additionally, a development version of CAM5, which features improved handling of cloud macrophysics, has been demonstrated to improve simulation of cloud properties and liquid water amount. The ASCOS period additionally provides an excellent example of the benefits gained by evaluating individual budget terms, rather than simply the net end product, with large compensating errors between individual surface energy budget terms producing what appears to be the best net energy budget.
NASA Astrophysics Data System (ADS)
Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.
2016-11-01
Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.
NASA Technical Reports Server (NTRS)
Fisher, Brad; Wolff, David B.
2010-01-01
Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different spaceborne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands, and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
Ma, H. -Y.; Klein, S. A.; Xie, S.; ...
2018-02-27
Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis ascribes that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.
NASA Astrophysics Data System (ADS)
Ma, H.-Y.; Klein, S. A.; Xie, S.; Zhang, C.; Tang, S.; Tang, Q.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.; Ahlgrimm, M.; Berg, L. K.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Liu, Y.; Merryfield, W.; Qian, Y.; Roehrig, R.; Wang, Y.-C.
2018-03-01
Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis ascribes that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.
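The "approximate equation based upon the surface energy budget" is not reproduced in the abstract; as background, the standard surface energy balance and evaporative-fraction definition it builds on (textbook forms, not the paper's own derivation) are:

```latex
SW_{\mathrm{net}} + LW_{\downarrow} - LW_{\uparrow} = H + LE + G,
\qquad
EF = \frac{LE}{H + LE},
```

where H and LE are the sensible and latent heat fluxes and G the ground heat flux. In this framing, positive biases in net shortwave or downward longwave flux, or an underestimated EF, both route excess energy into H, warming the near-surface air and raising T2m.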
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008), which covered a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (a €194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the intensive process of manual coding by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation.
Deciphering the Adaptive Immune Response to Ovarian Cancer
2014-10-01
of 10 mg/mL. Wells containing anti-CD3/anti-CD28 coated beads (bead-to-cell ratio of 1:1) or human cytomegalovirus, Epstein-Barr virus, and...Waldenstrom's Macroglobulinemia (MYD88L265P). CONCLUSION: Overall, this study is progressing on schedule and on budget. We have developed the...apart from T-cell-mediated control of virus-induced cancers (3). More obvious in humans is the influence of the immune system on cancer progression and
Performance of the Gemini Planet Imager’s adaptive optics system
Poyneer, Lisa A.; Palmer, David W.; Macintosh, Bruce; ...
2016-01-07
The Gemini Planet Imager’s adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. We give a definitive description of the system’s algorithms and technologies as built. Ultimately, the error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term.
Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.
2004-01-01
Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
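As background for the Bowen-ratio energy-budget method referenced above, the standard working equations (textbook form; the symbols are not taken from the paper) are:

```latex
\lambda E = \frac{R_n - G}{1 + \beta},
\qquad
\beta = \frac{H}{\lambda E} \approx \gamma\,\frac{\Delta T}{\Delta e},
```

where R_n is net radiation, G the heat flux into the soil or water, H and λE the sensible and latent heat fluxes, γ the psychrometric constant, and ΔT and Δe the air-temperature and vapor-pressure differences between the two sensor heights. Fetch-induced error enters through the gradients, which near the upwind edge carry the signature of the drier upland rather than the wetland, biasing β and hence ET.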
Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang
2016-09-21
An easy but effective method is proposed to detect and quantify Pb(II) in the presence of Cd(II), based on a Bi/glassy carbon electrode (Bi/GCE) combined with a back-propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effects of Cd(II) at different concentrations on the stripping response of Pb(II) were studied. The results indicate that the presence of Cd(II) reduces the prediction precision of a direct calibration model. Therefore, a two-input, one-output BP-ANN was built to optimize the stripping voltammetric sensor; it considers the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model was tested with regard to mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, an analysis of real samples was performed to determine trace Pb(II) in soil specimens, with satisfactory results.
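As an illustration of the modeling idea (not the paper's network or data), a minimal two-input, one-output network trained by backpropagation can learn a nonlinear map from two peak currents to one concentration. The synthetic interference function, architecture, and hyperparameters below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for SWASV data: Pb(II) and Cd(II) peak currents (two
# inputs) mapped to Pb(II) concentration (one output) through a hypothetical
# nonlinear interference; NOT the paper's calibration data.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (2.0 * X[:, 0] - 0.5 * X[:, 0] * X[:, 1] + 0.1 * X[:, 1] ** 2)[:, None]

# Tiny two-input / one-output MLP with one tanh hidden layer, trained by
# full-batch gradient descent (plain backpropagation).
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output layer
    err = pred - y
    # Backpropagate mean-squared-error gradients layer by layer.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)  # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

The point of the two-input design is the same as in the abstract: a single-input calibration curve cannot represent the Cd(II) interference, while the network fits the joint current-to-concentration surface.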
Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma
NASA Technical Reports Server (NTRS)
Fisher, Brad L.
2004-01-01
The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
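The sub-sampling technique described above can be sketched in a few lines: a continuous (here synthetic) gauge record is averaged both over the full month and using only twice-daily "overpass" samples, and the relative difference isolates a sampling-error component. All numbers below are hypothetical.

```python
import math

hours = list(range(720))                                # one 30-day month
rain = [max(0.0, math.sin(h / 7.0)) for h in hours]     # synthetic rain rate

full_mean = sum(rain) / len(rain)                       # "true" gauge mean
overpass_hours = hours[::12]                            # ~2 overpasses/day
sub_mean = sum(rain[h] for h in overpass_hours) / len(overpass_hours)

# sampling error of the subsampled monthly estimate, in percent
sampling_error_pct = 100.0 * (sub_mean - full_mean) / full_mean
```

In the study's flow, this sensor-dependent subsampled gauge estimate is then compared against the satellite and full-gauge estimates to separate retrieval error from sampling error.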
van Walbeek, Corné
2014-01-01
Background The tobacco industry claims that illicit trade in cigarettes has increased sharply since the 1990s and that government has lost substantial tax revenue. Objectives (1) To determine whether cigarette excise tax revenue has been below budget in recent years, compared with previous decades. (2) To determine trends in the size of the illicit market since 1995. Methods For (1), mean percentage errors and root mean square percentage errors were calculated for budget revenue deviation for three products (cigarettes, beer and spirits), for various subperiods. For (2), predicted changes in total consumption, using actual cigarette price and GDP changes and previously published price and income elasticity estimates, were calculated and compared with changes in tax-paid consumption. Results Cigarette excise revenues were 0.7% below budget for 2000–2012 on average, compared with 3.0% below budget for beer and 4.7% below budget for spirits. There is no evidence that illicit trade in cigarettes in South Africa increased between 2002 and 2009. There is a substantial increase in illicit trade in 2010, probably peaking in 2011. In 2012 tax-paid consumption of cigarettes increased 2.6%, implying that the illicit market share decreased an estimated 0.6 percentage points. Conclusions Other than in 2010, there is no evidence that illicit trade is significantly undermining government revenue. Claims that illicit trade has consistently increased over the past 15 years, and has continued its sharp increase since 2010, are not supported. PMID:24431121
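The two budget-deviation statistics named in the methods (mean percentage error and root mean square percentage error) reduce to a few lines; the budgeted and actual revenue figures below are made up for illustration.

```python
import math

budgeted = [100.0, 110.0, 120.0, 130.0]   # hypothetical budgeted revenue
actual   = [ 99.0, 106.0, 121.0, 125.0]   # hypothetical actual revenue

pct_err = [100.0 * (a - b) / b for a, b in zip(actual, budgeted)]
mpe   = sum(pct_err) / len(pct_err)                        # signed bias
rmspe = math.sqrt(sum(e * e for e in pct_err) / len(pct_err))  # spread
```

A negative MPE here would indicate revenue persistently below budget, which is the pattern the study compares across cigarettes, beer, and spirits.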
Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage
Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José
2016-01-01
Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications (e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype was defined, an error analysis was performed to propose design improvements. Then, the metrology loop of the system was mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively. PMID:26761014
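A minimal sketch of the error-budget combination step follows, assuming independent 1-sigma source terms summed in quadrature per axis. The source names and values are placeholders for illustration, not the NanoPla's actual budget entries.

```python
import math

# hypothetical per-source 1-sigma contributions for one axis, in nm
sources_nm = {
    "interferometer":  10.0,
    "mirror_flatness": 25.0,
    "thermal_drift":   20.0,
    "abbe_offset":     15.0,
}

# root-sum-square combination, valid when sources are independent
budget_nm = math.sqrt(sum(v * v for v in sources_nm.values()))
```

A table like `sources_nm` per axis is what lets a designer rank contributors (here mirror flatness dominates) and decide where tighter control, such as the ±0.1 °C temperature requirement, pays off.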
ERIC Educational Resources Information Center
Perez, Ernest
1997-01-01
Examines the practical realities of upgrading Intel personal computers in libraries, considering budgets and technical personnel availability. Highlights include adding RAM; putting in faster processor chips, including clock multipliers; new hard disks; CD-ROM speed; motherboards and interface cards; cost limits and economic factors; and…
NASA Technical Reports Server (NTRS)
Stowe, Larry; Ardanuy, Philip; Hucek, Richard; Abel, Peter; Jacobowitz, Herbert
1991-01-01
A set of system simulations was performed to evaluate candidate scanner configurations to fly as part of the Earth Radiation Budget Instrument (ERBI) on the polar platforms during the 1990s. The simulation considered instantaneous sampling (without diurnal averaging) of the longwave and shortwave fluxes at the top of the atmosphere (TOA). After measurement and subsequent inversion to the TOA, the measured fluxes were compared to the reference fluxes for 2.5 deg lat/long resolution targets. The reference fluxes at this resolution are obtained by integrating over the 25 x 25 = 625 grid elements in each target. The differences between each of these two resultant spatially averaged sets of target measurements (errors) are taken and then statistically summarized. Five instruments are considered: (1) the Conically Scanning Radiometer (CSR); (2) the ERBE Cross Track Scanner; (3) the Nimbus-7 Biaxial Scanner; (4) the Clouds and Earth's Radiant Energy System Instrument (CERES-1); and (5) the Active Cavity Array (ACA). Identical studies of instantaneous error were completed for many days, two seasons, and several satellite equator crossing longitudes. The longwave flux errors were found to have the same space and time characteristics as the shortwave fluxes, but the errors are only about 25 percent of the shortwave errors.
Passive Ranging Using Infra-Red Atmospheric Attenuation
2010-03-01
was the Bomem MR-154 Fourier Transform Spectrometer (FTS). The FTS used both an HgCdTe and InSb detector. For this study, the primary source of data...also outfitted with an HgCdTe and InSb detector. Again, only data from the InSb detector was used. The spectral range of data collected was from...an uncertainty in transmittance of 0.01 (figure 20). This would yield an error in range of 6%. Other sources of error include detector noise or
Benhamou, Dan; Piriou, Vincent; De Vaumas, Cyrille; Albaladejo, Pierre; Malinovsky, Jean-Marc; Doz, Marianne; Lafuma, Antoine; Bouaziz, Hervé
2017-04-01
Patient safety is improved by the use of labelled, ready-to-use, pre-filled syringes (PFS) when compared to conventional methods of syringe preparation (CMP) of the same product from an ampoule. However, the PFS presentation costs more than the CMP presentation. The aim of this study was to estimate the budget impact for French hospitals of switching from atropine in ampoules to atropine PFS for anaesthesia care. A model was constructed to simulate the financial consequences of the use of atropine PFS in operating theatres, taking into account wastage and medication errors. The model tested different scenarios and a sensitivity analysis was performed. In a reference scenario, the systematic use of atropine PFS rather than atropine CMP yielded a net one-year budget saving of €5,255,304. Medication errors outweighed other cost factors relating to the use of atropine CMP (€9,425,448). Avoidance of wastage in the case of atropine CMP (prepared and unused) was a major source of savings (€1,167,323). Significant savings were also found in the other scenarios examined. The sensitivity analysis suggests that the results obtained are robust and stable for a range of parameter estimates and assumptions. The financial model was based on data obtained from the literature and expert opinions. The budget impact analysis shows that even though atropine PFS is more expensive than atropine CMP, its use would lead to significant cost savings. Savings would mainly be due to fewer medication errors and their associated consequences and the absence of wastage when atropine syringes are prepared in advance. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.
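The structure of such a budget-impact model can be sketched as simple arithmetic over unit costs, wastage, and costed medication-error events. Every figure below is an illustrative placeholder, not the study's French data.

```python
n_uses = 1_000_000                 # atropine preparations per year (invented)
cost_cmp, cost_pfs = 1.0, 2.5      # euros per syringe presentation (invented)
p_error_cmp, p_error_pfs = 0.009, 0.001   # medication-error rates (invented)
cost_per_error = 1_000.0           # mean cost of an error's consequences
p_wastage_cmp = 0.30               # CMP syringes prepared but unused

# annual cost = syringes (including wastage) + costed error events
total_cmp = (n_uses * (1 + p_wastage_cmp) * cost_cmp
             + n_uses * p_error_cmp * cost_per_error)
total_pfs = n_uses * cost_pfs + n_uses * p_error_pfs * cost_per_error

saving = total_cmp - total_pfs     # positive => PFS saves money overall
```

The point the abstract makes falls out of this arithmetic: even with a higher unit price, the avoided error costs and wastage can dominate the total.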
Determination of the carbon budget of a pasture: effect of system boundaries and flux uncertainties
NASA Astrophysics Data System (ADS)
Felber, R.; Bretscher, D.; Münger, A.; Neftel, A.; Ammann, C.
2015-12-01
Carbon (C) sequestration in the soil is considered a potentially important mechanism to mitigate greenhouse gas (GHG) emissions of the agricultural sector. It can be quantified by the net ecosystem carbon budget (NECB), describing the change of soil C as the sum of all relevant import and export fluxes. NECB was investigated here in detail for an intensively grazed dairy pasture in Switzerland. Two budget approaches with different system boundaries were applied: NECBtot for system boundaries including the grazing cows and NECBpast for system boundaries excluding the cows. CO2 and CH4 exchange induced by soil/vegetation processes as well as direct emissions by the animals were derived from eddy covariance measurements. Other C fluxes were either measured (milk yield, concentrate feeding) or derived based on animal performance data (intake, excreta). For the investigated year, both approaches resulted in a small, non-significant C loss: NECBtot -13 ± 61 g C m-2 yr-1 and NECBpast -17 ± 81 g C m-2 yr-1. The considerable uncertainties, depending on the approach, were mainly due to errors in the CO2 exchange or in the animal-related fluxes. The associated GHG budget revealed CH4 emissions from the cows to be the major contributor, but with much lower uncertainty compared to NECB. Although only one year of data limits the representativeness of the carbon budget results, the results demonstrated the important contribution of the non-CO2 fluxes depending on the chosen system boundaries and the effect of their propagated uncertainty in an exemplary way. The simultaneous application and comparison of both NECB approaches provides a useful consistency check for the carbon budget determination and can help to identify and eliminate systematic errors.
Determination of the carbon budget of a pasture: effect of system boundaries and flux uncertainties
NASA Astrophysics Data System (ADS)
Felber, Raphael; Bretscher, Daniel; Münger, Andreas; Neftel, Albrecht; Ammann, Christof
2016-05-01
Carbon (C) sequestration in the soil is considered a potentially important mechanism to mitigate greenhouse gas (GHG) emissions of the agricultural sector. It can be quantified by the net ecosystem carbon budget (NECB), describing the change of soil C as the sum of all relevant import and export fluxes. NECB was investigated here in detail for an intensively grazed dairy pasture in Switzerland. Two budget approaches with different system boundaries were applied: NECBtot for system boundaries including the grazing cows and NECBpast for system boundaries excluding the cows. CO2 and CH4 exchange induced by soil/vegetation processes as well as direct emissions by the animals were derived from eddy covariance measurements. Other C fluxes were either measured (milk yield, concentrate feeding) or derived based on animal performance data (intake, excreta). For the investigated year, both approaches resulted in a small, near-neutral C budget: NECBtot -27 ± 62 and NECBpast 23 ± 76 g C m-2 yr-1. The considerable uncertainties, depending on the approach, were mainly due to errors in the CO2 exchange or in the animal-related fluxes. The comparison of the NECB results with the annual exchange of other GHG revealed CH4 emissions from the cows to be the major contributor in terms of CO2 equivalents, but with much lower uncertainty compared to NECB. Although only one year of data limits the representativeness of the carbon budget results, the results demonstrate the important contribution of the non-CO2 fluxes depending on the chosen system boundaries and the effect of their propagated uncertainty in an exemplary way. The simultaneous application and comparison of both NECB approaches provides a useful consistency check for the carbon budget determination and can help to identify and eliminate systematic errors.
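The NECB bookkeeping and its uncertainty can be sketched as a signed flux sum with errors combined in quadrature (assuming independent flux errors). The flux values and 1-sigma terms below are illustrative placeholders, not the measured Swiss data, and the sign convention (imports positive, exports negative) is chosen for the sketch.

```python
import math

# hypothetical annual C fluxes for one system boundary, g C m-2 yr-1,
# stored as (flux, 1-sigma error)
fluxes = {
    "net CO2 uptake":   ( 90.0, 60.0),
    "feed C import":    ( 35.0,  5.0),
    "milk C export":    (-45.0,  5.0),
    "CH4-C emission":   ( -5.0,  1.0),
    "harvest C export": (-52.0, 10.0),
}

necb = sum(v for v, _ in fluxes.values())                  # signed sum
necb_err = math.sqrt(sum(e * e for _, e in fluxes.values()))  # quadrature
```

Note how a single large-uncertainty term (the eddy-covariance CO2 flux here) dominates the combined error, matching the abstract's finding that NECB uncertainty is driven by the CO2 exchange and animal-related fluxes.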
Onorbit IMU alignment error budget
NASA Technical Reports Server (NTRS)
Corson, R. W.
1980-01-01
The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses a low data rate and extensive frame averaging to achieve the high-quality SEM imaging needed for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, large probe current, and an ultra-low-noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than that of classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new capability will further improve metrology data collection speed to support the need for large volumes of metrology data for OPC model calibration of next-generation technology. The shrinking EPE (Edge Placement Error) budget places a more stringent requirement on OPC model accuracy, which is increasingly limited by metrology errors. In the current metrology data collection, data processing, and model calibration flow, CD-SEM throughput is a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large data volume and significantly reduce both systematic and random metrology errors.
The new computational software enables users to generate large quantities of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage, with up to a 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed a >2x improvement in OPC model accuracy with faster model turnaround time.
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2014 CFR
2014-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM... the daily zero (or low-level) CD or the daily high-level CD exceeds two times the limits of the... (or low-level) or high-level CD result exceeds four times the applicable drift specification in...
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2013 CFR
2013-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2010 CFR
2010-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2011 CFR
2011-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...
NASA Astrophysics Data System (ADS)
Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward
2016-01-01
We present an improvement to a previously presented algorithm that used a Bayesian Markov chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
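For reference, the Manning's-equation discharge underlying the quoted uncertainty can be sketched as below, using a rectangular channel with invented dimensions (not the Sacramento or Garonne geometry). This shows why the a priori roughness and bathymetry uncertainties named in the error budget propagate directly into discharge.

```python
import math

n = 0.035      # Manning roughness coefficient (assumed)
w = 150.0      # channel width, m (assumed)
h = 4.0        # flow depth, m (assumed; depends on unknown bathymetry)
S = 1e-4       # water-surface slope (observable from altimetry)

A = w * h                      # cross-sectional flow area, m^2
R = A / (w + 2.0 * h)          # hydraulic radius = area / wetted perimeter
Q = (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)   # discharge, m^3/s
```

Because Q scales as 1/n and as sqrt(S), a fractional error in roughness maps one-to-one into discharge, while a fractional slope error contributes only half as much, which is consistent with slope sensitivity varying between steep and flat rivers.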
How to obtain accurate resist simulations in very low-k1 era?
NASA Astrophysics Data System (ADS)
Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu
2006-03-01
A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. 
Therefore, the resist CD and process window can be confidently evaluated using the accurately calibrated resist model. One of the examples simulates the sensitivity of the mask pattern error, which is helpful to specify the mask CD control.
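A weighted RMS of the kind proposed above might look like the following, with weights taken as inverse variances of the per-gauge CD repeatability so that noisier edge points count less. The residuals and sigmas are invented for illustration.

```python
import math

# simulated-minus-measured CD residuals per gauge, nm (invented)
residuals_nm = [1.5, -2.0, 0.5, 3.0]
# per-gauge measured-CD repeatability (1-sigma), nm (invented)
cd_sigma_nm = [0.5, 0.5, 1.0, 2.0]

weights = [1.0 / (s * s) for s in cd_sigma_nm]   # inverse-variance weights

wrms = math.sqrt(sum(w * r * r for w, r in zip(weights, residuals_nm))
                 / sum(weights))
```

A CD-tolerance criterion as described in the abstract would additionally drop gauges whose measured variation exceeds a threshold before computing this statistic.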
Transfer of Cadmium from Soil to Vegetable in the Pearl River Delta area, South China
Zhang, Huihua; Chen, Junjian; Zhu, Li; Yang, Guoyi; Li, Dingqiang
2014-01-01
The purpose of this study was to investigate the regional cadmium (Cd) concentration levels in soils and in leaf vegetables across the Pearl River Delta (PRD) area, and to reveal the transfer characteristics of Cd from soils to leaf vegetable species on a regional scale. 170 paired vegetable and corresponding surface soil samples in the study area were collected for calculating the transfer factors of Cd from soils to vegetables. This investigation revealed that the Cd concentration in soils of the study area was lower (mean value 0.158 mg kg−1) compared with other countries or regions. The Cd-contaminated areas are mainly located in the western areas of the Pearl River Delta. Cd concentrations in all vegetables were lower than the national standard for Safe vegetables (0.2 mg kg−1), and 88% of vegetable samples met the standard for No-Polluted vegetables (0.05 mg kg−1). The Cd concentration in vegetables was mainly influenced by the interactions of total Cd concentration in soils, soil pH and vegetable species. The fit lines of soil-to-plant transfer factors versus total Cd concentration in soils for the various vegetable species were best described by the exponential equation (y = ax^b), and these fit lines can be divided into two parts: a sharply decreasing part with a large error range and a slowly decreasing part with a low error range, as the total Cd concentration in soils gradually increases. PMID:25247431
Transfer of cadmium from soil to vegetable in the Pearl River Delta area, South China.
Zhang, Huihua; Chen, Junjian; Zhu, Li; Yang, Guoyi; Li, Dingqiang
2014-01-01
The purpose of this study was to investigate the regional cadmium (Cd) concentration levels in soils and in leaf vegetables across the Pearl River Delta (PRD) area, and to reveal the transfer characteristics of Cd from soils to leaf vegetable species on a regional scale. 170 paired vegetable and corresponding surface soil samples in the study area were collected for calculating the transfer factors of Cd from soils to vegetables. This investigation revealed that the Cd concentration in soils of the study area was lower (mean value 0.158 mg kg(-1)) compared with other countries or regions. The Cd-contaminated areas are mainly located in the western areas of the Pearl River Delta. Cd concentrations in all vegetables were lower than the national standard for Safe vegetables (0.2 mg kg(-1)), and 88% of vegetable samples met the standard for No-Polluted vegetables (0.05 mg kg(-1)). The Cd concentration in vegetables was mainly influenced by the interactions of total Cd concentration in soils, soil pH and vegetable species. The fit lines of soil-to-plant transfer factors versus total Cd concentration in soils for the various vegetable species were best described by the exponential equation (y = ax^b), and these fit lines can be divided into two parts: a sharply decreasing part with a large error range and a slowly decreasing part with a low error range, as the total Cd concentration in soils gradually increases.
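Fitting the abstract's y = ax^b transfer-factor relation by ordinary least squares on log-transformed data can be sketched as below. The soil-Cd points are synthetic and generated exactly from a chosen power law, so the recovered a and b are known in advance; real survey data would of course scatter around the fit.

```python
import math

soil_cd = [0.05, 0.1, 0.2, 0.4, 0.8]            # total soil Cd (synthetic)
tf = [0.3 * x ** -0.5 for x in soil_cd]         # transfer factor, exact law

# y = a*x^b  =>  ln y = ln a + b * ln x  (simple linear regression)
lx = [math.log(x) for x in soil_cd]
ly = [math.log(y) for y in tf]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
     / sum((x - mx) ** 2 for x in lx))
a = math.exp(my - b * mx)
```

A negative exponent b, as recovered here, reproduces the paper's observation that transfer factors fall off as total soil Cd rises.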
Science support for the Earth radiation budget experiment
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.
1994-01-01
The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: The development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions. The comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADM's) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: the misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types. For regions containing mixtures of scene types, the errors were typically less than 5 percent, and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view zenith angle dependent bias in the radiative fluxes.
NASA Astrophysics Data System (ADS)
Nesladek, Pavel; Wiswesser, Andreas; Sass, Björn; Mauermann, Sebastian
2008-04-01
The critical dimension off-target (CDO) is a key parameter for mask-house customers, directly affecting the performance of the mask. The CDO is the difference between the feature size target and the measured feature size. The change of CD during the process is compensated either within the process or by data correction. These compensation methods are commonly called process bias and data bias, respectively. The difference between data bias and process bias in manufacturing results in a systematic CDO error; however, this systematic error does not take into account the instability of the process bias. This instability is a result of minor variations - instabilities of manufacturing processes and changes in materials and/or logistics. Using several masks, the CDO of the manufacturing line can be estimated. For a systematic investigation of the unit process contributions to CDO and analysis of the factors influencing the CDO contributors, a solid understanding of each unit process and a huge number of masks are necessary. Rough identification of contributing processes and splitting of the final CDO variation between processes can be done with approximately 50 masks of identical design, material and process. Such an amount of data allows us to identify the main contributors and estimate their effects by means of analysis of variance (ANOVA) combined with multivariate analysis. The analysis does not provide information about the root cause of the variation within a particular unit process; however, it provides a good estimate of the impact of the process on the stability of the manufacturing line. Additionally, this analysis can be used to identify possible interactions between processes, which cannot be investigated if only single processes are considered. The goal of this work is to evaluate limits for CDO budgeting models given by the precision and the number of measurements, as well as to partition the variation within the manufacturing process.
The CDO variation is split, according to the suggested model, into contributions from particular processes or process groups. Last but not least, the power of this method to determine the absolute strength of each parameter will be demonstrated. Identification of the root cause of this variation within the unit process itself is not within the scope of this work.
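The ANOVA-based variance partitioning described above can be sketched as a one-way sum-of-squares decomposition: total CDO variation splits into a between-group component (attributable to the grouping factor) and a within-group residual. The group labels and CDO values below are invented placeholders, not the paper's mask data.

```python
# hypothetical CDO measurements, nm, grouped by one process factor
groups = {
    "write_tool_A": [1.2, 1.4, 1.1, 1.3],
    "write_tool_B": [2.0, 2.2, 1.9, 2.1],
    "write_tool_C": [1.5, 1.6, 1.4, 1.5],
}

all_vals = [v for g in groups.values() for v in g]
grand = sum(all_vals) / len(all_vals)

ss_total = sum((v - grand) ** 2 for v in all_vals)
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                 for g in groups.values())
ss_within = ss_total - ss_between     # residual within-group variation
```

The ratio ss_between / ss_total estimates how much of the CDO variation the grouping factor explains; extending this to several factors at once is the multi-way ANOVA the abstract refers to.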
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
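The final CRC check mentioned above can be illustrated with a generic bitwise CRC. Note this is a stand-in using the common CRC-16/CCITT polynomial, not the CD-ROM sector's actual EDC polynomial; it shows only the principle that a residual burst left by miscorrection changes the checksum.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT (poly 0x1021, init 0xFFFF)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

sector = bytes(range(32))                  # toy "sector" payload
good_crc = crc16_ccitt(sector)

# a one-byte burst such as a miscorrection might leave behind
corrupted = bytes([sector[0] ^ 0xFF]) + sector[1:]
bad_crc = crc16_ccitt(corrupted)
```

Because the two CRCs differ, the check flags the sector even though the Reed Solomon stages reported success, which is exactly the miscorrection-detection role the abstract describes.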
NASA Astrophysics Data System (ADS)
Mehta, Sohan S.; Ganta, Lakshmi K.; Chauhan, Vikrant; Wu, Yixu; Singh, Sunil; Ann, Chia; Subramany, Lokesh; Higgins, Craig; Erenturk, Burcin; Srivastava, Ravi; Singh, Paramjit; Koh, Hui Peng; Cho, David
2015-03-01
Immersion-based lithography at the 20 nm technology node and below has become very challenging for chip design, process and integration, because multiple patterning is required to integrate a single design layer. Negative tone development (NTD) processes have been well accepted by industry experts for enabling technologies at 20 nm and below. 193i double patterning is the technology solution for pitches down to 80 nm. This imposes tight control of critical dimension (CD) variation in double patterning, where design patterns are decomposed into two different masks, as in litho-etch-litho-etch (LELE). CD bimodality has been widely studied in LELE double patterning, and a significant portion of the CD tolerance budget is consumed by CD variations in double patterning. The objective of this work is to study the process variation challenges, and their resolution, in the negative tone develop process for the 20 nm and below technology node. This paper describes the effect of dose slope on CD variation in a negative tone develop LELE process. This effect becomes even more challenging with a standalone NTD developer process due to queue-time-driven CD variation. We studied the impact of different stacks in combination with binary and attenuated phase-shift masks, and estimated the dose slope contributions of stack and mask type individually. Mask 3D simulation was carried out to understand the theoretical aspects. In order to meet the minimum insulator requirement for the worst case on wafer, the overlay and critical dimension uniformity (CDU) budget margins have slimmed. Besides litho process and tool control using enhanced metrology feedback, variation control has other dependencies too. Color balancing between the two masks in LELE helps counter effects such as iso-dense bias and pattern shifting. Dummy insertion and improved decomposition techniques [2] using multiple lower-priority constraints can help to a great extent. Innovative color-aware routing techniques [3] can also help to achieve more uniform density and color-balanced layouts.
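The CD bimodality discussed above can be summarized numerically; this sketch (with invented CD values) shows how a mean offset between the two LELE sub-populations inflates the pooled CD uniformity even when each mask's own spread is small:

```python
import statistics

# Hypothetical CDs (nm) printed by mask A and mask B of an LELE pair.
mask_a = [40.0, 40.2, 39.8, 40.1, 39.9]
mask_b = [41.0, 41.2, 40.8, 41.1, 40.9]  # ~1 nm systematic offset

sigma_a = statistics.pstdev(mask_a)
sigma_b = statistics.pstdev(mask_b)
sigma_pooled = statistics.pstdev(mask_a + mask_b)

# The pooled sigma carries the mask-to-mask offset on top of the
# per-mask spread, which is what consumes the CD tolerance budget.
print(f"{sigma_a:.3f} {sigma_b:.3f} {sigma_pooled:.3f}")
```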
The Many-Headed Hydra: Information Networking at LAA.
ERIC Educational Resources Information Center
Winzenried, Arthur P.
1997-01-01
Describes an integrated computer library system installed at Lilydale Adventist Academy (LAA) in Melbourne (Australia) in response to a limited budget, increased demand, and greater user expectations. Topics include student workstations, cost effectiveness, CD-ROMS on local area networks, and student input regarding their needs. (Author/LRW)
Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro
2008-01-01
The European In Vitro Diagnostics (IVD) directive requires traceability of analytes to reference methods and materials. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested for compliance with three criteria: trueness, i.e., no significant bias relative to the target value; between-laboratory variation; and within-laboratory variation relative to the maximum allowable error. For the lower and intermediate levels, values differed significantly from the target value in the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. The total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. The best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing need for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. The calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.
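The total-error accounting behind a study like this can be sketched with the common |bias| + z·CV check against an allowable-error budget; the z multiplier, the budget and the method figures below are illustrative, not the EC4 working group's actual numbers:

```python
def total_error(bias_pct, cv_pct, z=1.65):
    """Total analytical error as |bias| plus z times imprecision (both in %)."""
    return abs(bias_pct) + z * cv_pct

# Hypothetical creatinine methods checked against an illustrative budget.
allowable = 11.4  # % total allowable error (assumed value)
methods = {"enzymatic": (1.0, 2.5), "jaffe_uncompensated": (9.0, 3.0)}
for name, (bias, cv) in methods.items():
    te = total_error(bias, cv)
    print(name, round(te, 2), "within budget" if te <= allowable else "exceeds")
```

A method with small imprecision can still fail the budget on bias alone, which matches the abstract's observation that the budget was mainly consumed by bias.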
Rogel-Castillo, Cristian; Boulton, Roger; Opastpongkarn, Arunwong; Huang, Guangwei; Mitchell, Alyson E
2016-07-27
Concealed damage (CD) is defined as a brown discoloration of the kernel interior (nutmeat) that appears only after moderate to high heat treatment (e.g., blanching, drying, roasting, etc.). Raw almonds with CD have no visible defects before heat treatment. Currently, there are no screening methods available for detecting CD in raw almonds. Herein, the feasibility of using near-infrared (NIR) spectroscopy between 1125 and 2153 nm for the detection of CD in almonds is demonstrated. Almond kernels with CD have less NIR absorbance in the regions related to oil, protein, and carbohydrates. With the use of partial least squares discriminant analysis (PLS-DA) and selection of specific wavelengths, three classification models were developed. The calibration models have false-positive and false-negative error rates ranging between 12.4 and 16.1% and between 10.6 and 17.2%, respectively. The percent error rates ranged between 8.2 and 9.2%. Second-derivative preprocessing of the selected wavelengths resulted in the most robust predictive model.
Towards the 1 mm/y stability of the radial orbit error at regional scales
NASA Astrophysics Data System (ADS)
Couhert, Alexandre; Cerri, Luca; Legeais, Jean-François; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel
2015-01-01
An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular this study reviews orbit errors dependent on the tracking technique, with an aim to monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on Jason orbit is assessed. We also examine the impact of analysis method on the inference of Geographically Correlated Errors as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in the Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West “order-1” pattern at the 2 mm/y level (secular) and 4 mm level (seasonal), over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales and the challenge of evaluating such an improvement using in situ independent data is discussed.
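An orbit error budget of the kind constructed above is typically a root-sum-square combination of independent contributors; the contributor names and millimetre-level values in this sketch are invented for illustration, not the paper's actual budget entries:

```python
import math

# Hypothetical independent radial orbit error contributors (mm RMS).
contributors = {
    "time_variable_gravity": 3.0,
    "reference_frame":       2.0,
    "tracking_data":         1.5,
    "non_gravitational":     1.0,
}

# Root-sum-square combination assumes the error sources are uncorrelated.
total_rms = math.sqrt(sum(v ** 2 for v in contributors.values()))
print(round(total_rms, 3))
```

The quadrature form also shows why the largest term (here, time-variable gravity) dominates the total, consistent with the abstract's point that geopotential variations remain the primary POD limitation.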
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, H. -Y.; Klein, S. A.; Xie, S.
Many weather forecasting and climate models simulate a warm surface air temperature (T2m) bias over mid-latitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multi-model intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to T2m bias using a short-term hindcast approach with observations mainly from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site during the period of April to August 2011. The present study examines the contribution of surface energy budget errors to the bias. All participating models simulate higher net shortwave and longwave radiative fluxes at the surface but there is no consistency on signs of biases in latent and sensible heat fluxes over the Central U.S. and ARM SGP. Nevertheless, biases in net shortwave and downward longwave fluxes, as well as surface evaporative fraction (EF) are the main contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF is affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis suggests that radiation errors are always an important source of T2m error for long-term climate runs with EF errors either of equal or lesser importance. However, for the short-term hindcasts, EF errors are more important provided a model has a substantial EF bias.
Mask characterization for CDU budget breakdown in advanced EUV lithography
NASA Astrophysics Data System (ADS)
Nikolsky, Peter; Strolenberg, Chris; Nielsen, Rasmus; Nooitgedacht, Tjitte; Davydova, Natalia; Yang, Greg; Lee, Shawn; Park, Chang-Min; Kim, Insung; Yeo, Jeong-Ho
2012-11-01
As the ITRS Critical Dimension Uniformity (CDU) specification shrinks, semiconductor companies need to maintain a high yield of good wafers per day and a high performance (and hence market value) of finished products. This cannot be achieved without continuous analysis and improvement of on-product CDU as one of the main drivers for process control and optimization, with a better understanding of the main contributors from the litho cluster: mask, process, metrology and scanner. In this paper we demonstrate a study of mask CDU characterization and its impact on the CDU Budget Breakdown (CDU BB), performed for advanced EUV lithography with 1D and 2D feature cases. We show that this CDU contributor is one of the main differentiators between the well-known ArFi and the new EUV CDU budgeting principles. We found that the reticle contribution to intrafield CDU should be characterized in a specific way: mask absorber thickness fingerprints play a role comparable with reticle CDU in the total reticle part of the CDU budget. Wafer CD fingerprints introduced by this contributor may or may not compensate variations of mask CDs, and hence influence the total mask impact on intrafield CDU at the wafer level. This is shown with 1D and 2D feature examples in this paper. Mask stack reflectivity variations should also be taken into account: these fingerprints have a visible impact on intrafield CDs at the wafer level and should be considered another contributor to the reticle part of the EUV CDU budget. We also observed MEEF-through-field fingerprints in the studied EUV cases. Variations of MEEF may also play a role in the total intrafield CDU and may be taken into account for EUV lithography. We characterized MEEF-through-field for the reviewed features; the results are discussed in this paper, but further analysis of this phenomenon is required. This comprehensive approach to characterizing the mask part of the EUV CDU budget delivers an accurate and integral CDU Budget Breakdown per product/process and litho tool. The better understanding of the entire CDU budget for advanced EUVL nodes achieved by Samsung and ASML helps to extend the limits of Moore's Law and to deliver successful implementation of smaller, faster and smarter chips in the semiconductor industry.
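A CDU budget breakdown of the sort described is commonly expressed as a quadrature sum of contributors, from which an unknown residual can be backed out; the contributor names and 3-sigma values here are hypothetical, not the Samsung/ASML figures:

```python
import math

# Hypothetical 3-sigma CDU contributors (nm), combined in quadrature
# under the usual assumption that they are statistically independent.
mask, scanner, process, metrology = 0.8, 0.6, 0.5, 0.3

total_cdu = math.sqrt(mask**2 + scanner**2 + process**2 + metrology**2)

# Backing the non-mask residual out of the total (same quadrature model).
residual = math.sqrt(total_cdu**2 - mask**2)
print(round(total_cdu, 3), round(residual, 3))
```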
NASA Technical Reports Server (NTRS)
Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert
1994-01-01
Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid area boundaries to the number lying on them. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are examined. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation procedure, in which instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type.
The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.
ADONIS: One Library's Experience with a CD-ROM Document Delivery System.
ERIC Educational Resources Information Center
Pereira, Monica
Academic libraries have traditionally used interlibrary lending to facilitate document delivery. The trend of stagnating or dwindling serials budgets in libraries, coupled with increased journal costs, has served to increase libraries' reliance on the benefits of consortium pricing and shared costs, by utilizing interlibrary lending of journals.…
Balancing Your Database Network Licenses against Your Budget.
ERIC Educational Resources Information Center
Bauer, Benjamin F.
1995-01-01
Discussion of choosing database access to satisfy users and budgetary constraints highlights a method to make educated estimates of simultaneous usage levels. Topics include pricing; advances in networks and CD-ROM technology; and two networking scenarios, one in an academic library and one in a corporate research facility. (LRW)
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
Real-time line-width measurements: a new feature for reticle inspection systems
NASA Astrophysics Data System (ADS)
Eran, Yair; Greenberg, Gad; Joseph, Amnon; Lustig, Cornel; Mizrahi, Eyal
1997-07-01
The significance of line width control in mask production has grown as defect sizes have shrunk. There are two conventional methods for controlling line width dimensions in the manufacturing of masks for submicron devices: critical dimension (CD) measurement and the detection of edge defects. Achieving reliable and accurate control of line width errors is one of the most challenging tasks in mask production. Neither of the two methods cited above guarantees the detection of line width errors with good sensitivity over the whole mask area. This stems from the fact that CD measurement provides only statistical data on the mask features, whereas the edge defect detection method checks each edge by itself and does not supply information on the combined result of errors on two adjacent edges. For example, a combination of a small edge defect and a CD non-uniformity, each within its allowed tolerance, may yield a significant line width error that will not be detected using the conventional methods (see figure 1). A new approach to the detection of line width errors which overcomes this difficulty is presented. Based on this approach, a new sensitive line width error detector was developed and added to Orbot's RT-8000 die-to-database reticle inspection system. This innovative detector operates continuously during the mask inspection process and inspects the entire area of the reticle for line width errors. The detection is based on a comparison of line width measurements taken on both the design database and the scanned image of the reticle. In section 2, the motivation for developing this new detector is presented. The section covers an analysis of various defect types which are difficult to detect using conventional edge detection methods or, alternatively, CD measurements. In section 3, the basic concept of the new approach is introduced together with a description of the new detector and its characteristics. In section 4, the calibration process that took place in order to achieve reliable and repeatable line width measurements is presented. A description of the experiments conducted to evaluate the sensitivity of the new detector is given in section 5, followed by a report of the results of this evaluation. The conclusions are presented in section 6.
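The motivating failure mode, two in-tolerance edge deviations on adjacent edges combining into an out-of-tolerance line width, can be shown in a few lines; the tolerance and deviation values are invented for illustration:

```python
# Each edge deviation (nm) is within the per-edge tolerance, but the
# line-width error is their combination on two adjacent edges.
EDGE_TOL = 10.0   # per-edge defect tolerance (hypothetical)
WIDTH_TOL = 15.0  # line-width error tolerance (hypothetical)

left_dev, right_dev = 8.0, 9.0   # both pass an edge-only check
width_error = left_dev + right_dev

print(left_dev <= EDGE_TOL and right_dev <= EDGE_TOL)  # edge checks pass
print(width_error <= WIDTH_TOL)                        # width check fails
```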
Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun
2016-07-01
The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment error leads to low-order errors such as aspherical aberrations and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results of axisymmetric optics is presented. Analytical solutions of SAP system errors from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors, except power, with an accuracy of 0.1 μm root mean square.
NASA Astrophysics Data System (ADS)
Bacopoulos, Peter
2018-05-01
A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves over a 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency, at zero cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80s and 90s percentage ranges. Tidal currents are simulated with RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew (2016) is simulated with RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing some. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models. Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
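The index of agreement (IA) used to score the simulations has a standard (Willmott) form; this sketch uses made-up observed/modeled series, not the paper's tidal data:

```python
def index_of_agreement(obs, mod):
    """Willmott's index of agreement: 0 (no skill) to 1 (perfect match)."""
    obar = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for o, m in zip(obs, mod))
    den = sum((abs(m - obar) + abs(o - obar)) ** 2 for o, m in zip(obs, mod))
    return 1.0 - num / den

# Hypothetical observed vs. modeled water levels (m).
obs = [0.10, 0.35, 0.60, 0.35, 0.10]
mod = [0.12, 0.33, 0.58, 0.37, 0.12]
print(round(index_of_agreement(obs, mod), 3))  # close to 1 for a good fit
```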
Simultaneous overlay and CD measurement for double patterning: scatterometry and RCWA approach
NASA Astrophysics Data System (ADS)
Li, Jie; Liu, Zhuan; Rabello, Silvio; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Park, Jungchul; Singh, Lovejeet
2009-03-01
As optical lithography advances to 32 nm technology node and beyond, double patterning technology (DPT) has emerged as an attractive solution to circumvent the fundamental optical limitations. DPT poses unique demands on critical dimension (CD) uniformity and overlay control, making the tolerance decrease much faster than the rate at which critical dimension shrinks. This, in turn, makes metrology even more challenging. In the past, multi-pad diffraction-based overlay (DBO) using an empirical approach has been shown to be an effective approach to measure overlay error associated with double patterning [1]. In this method, registration errors for double patterning were extracted from specially designed diffraction targets (three or four pads for each direction); CD variation is assumed negligible within each group of adjacent pads and not addressed in the measurement. In another paper, encouraging results were reported with a first attempt at simultaneously extracting overlay and CD parameters using scatterometry [2]. In this work, we apply scatterometry with a rigorous coupled wave analysis (RCWA) approach to characterize two double-patterning processes: litho-etch-litho-etch (LELE) and litho-freeze-litho-etch (LFLE). The advantage of performing rigorous modeling is to reduce the number of pads within each measurement target, thus reducing space requirement and improving throughput, and simultaneously extract CD and overlay information. This method measures overlay errors and CDs by fitting the optical signals with spectra calculated from a model of the targets. Good correlation is obtained between the results from this method and that of several reference techniques, including empirical multi-pad DBO, CD-SEM, and IBO. We also perform total measurement uncertainty (TMU) analysis to evaluate the overall performance. We demonstrate that scatterometry provides a promising solution to meet the challenging overlay metrology requirement in DPT.
Qiu, Lefeng; Wang, Kai; Long, Wenli; Wang, Ke; Hu, Wei; Amable, Gabriel S.
2016-01-01
Soil cadmium (Cd) contamination has attracted a great deal of attention because of its detrimental effects on animals and humans. This study aimed to develop and compare the performances of stepwise linear regression (SLR), classification and regression tree (CART) and random forest (RF) models in the prediction and mapping of the spatial distribution of soil Cd and to identify likely sources of Cd accumulation in Fuyang County, eastern China. Soil Cd data from 276 topsoil (0–20 cm) samples were collected and randomly divided into calibration (222 samples) and validation datasets (54 samples). Auxiliary data, including detailed land use information, soil organic matter, soil pH, and topographic data, were incorporated into the models to simulate the soil Cd concentrations and further identify the main factors influencing soil Cd variation. The predictive models for soil Cd concentration exhibited acceptable overall accuracies (72.22% for SLR, 70.37% for CART, and 75.93% for RF). The SLR model exhibited the largest predicted deviation, with a mean error (ME) of 0.074 mg/kg, a mean absolute error (MAE) of 0.160 mg/kg, and a root mean squared error (RMSE) of 0.274 mg/kg, and the RF model produced the results closest to the observed values, with an ME of 0.002 mg/kg, an MAE of 0.132 mg/kg, and an RMSE of 0.198 mg/kg. The RF model also exhibited the greatest R2 value (0.772). The CART model predictions closely followed, with ME, MAE, RMSE, and R2 values of 0.013 mg/kg, 0.154 mg/kg, 0.230 mg/kg and 0.644, respectively. The three prediction maps generally exhibited similar and realistic spatial patterns of soil Cd contamination. The heavily Cd-affected areas were primarily located in the alluvial valley plain of the Fuchun River and its tributaries because of the dramatic industrialization and urbanization processes that have occurred there. The most important variable for explaining high levels of soil Cd accumulation was the presence of metal smelting industries. 
The good performance of the RF model was attributable to its ability to handle the non-linear and hierarchical relationships between soil Cd and environmental variables. These results confirm that the RF approach is promising for the prediction and spatial distribution mapping of soil Cd at the regional scale. PMID:26964095
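The validation statistics used to compare the models above (ME, MAE, RMSE, R²) can be sketched in a few lines; the observed and predicted Cd concentrations below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Error statistics used to compare soil-Cd prediction models:
    mean error (ME), mean absolute error (MAE), root mean squared
    error (RMSE) and coefficient of determination (R^2)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    resid = pred - obs
    me = resid.mean()
    mae = np.abs(resid).mean()
    rmse = np.sqrt((resid ** 2).mean())
    ss_res = (resid ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return {"ME": me, "MAE": mae, "RMSE": rmse, "R2": r2}

# Hypothetical observed vs. predicted Cd concentrations (mg/kg):
obs = [0.30, 0.55, 0.20, 0.90, 0.45]
pred = [0.32, 0.50, 0.25, 0.85, 0.48]
print(validation_metrics(obs, pred))
```

A model with ME near zero and the smallest MAE/RMSE (here, the RF model in the study) is the one whose predictions deviate least from the observations.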
Qiu, Lefeng; Wang, Kai; Long, Wenli; Wang, Ke; Hu, Wei; Amable, Gabriel S
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams).
G-CAT is a standalone MATLAB- based tool intended to run on any engineer's desktop computer.
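The single-run principle, propagating the statistics of the design rather than simulating individual trajectories, can be hedged into a minimal sketch. The two-state model, time step, and noise values below are assumptions chosen for illustration, not G-CAT's actual 120-state filter:

```python
import numpy as np

# Minimal sketch (not G-CAT itself): linear covariance propagation of a
# 2-state [position, velocity] descent model. One matrix recursion,
# P_{k+1} = F P_k F^T + Q, replaces thousands of Monte Carlo trajectories.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # constant-velocity state transition
Q = np.diag([0.0, 0.01])       # accelerometer-like process noise (illustrative)
P = np.diag([1.0, 0.04])       # initial knowledge covariance

for _ in range(100):           # propagate 100 s of descent
    P = F @ P @ F.T + Q

sigma_pos = np.sqrt(P[0, 0])   # 1-sigma position error -> error ellipse semi-axis
print(f"1-sigma position knowledge error after 100 s: {sigma_pos:.1f} m")
```

The diagonal of the propagated covariance gives the error ellipse axes directly; a Monte Carlo estimate of the same quantity would need enough runs to resolve the distribution's tails.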
Zhao, Guo; Wang, Hui; Liu, Gang
2017-07-03
In this study, a novel method based on a Bi/glassy carbon electrode (Bi/GCE) for quantitatively and directly detecting Cd²⁺ in the presence of Cu²⁺, without further electrode modifications, is proposed by combining square-wave anodic stripping voltammetry (SWASV) with a back-propagation artificial neural network (BP-ANN). The influence of the Cu²⁺ concentration on the stripping response to Cd²⁺ was studied. In addition, the effect of the ferrocyanide concentration on the SWASV detection of Cd²⁺ in the presence of Cu²⁺ was investigated. A BP-ANN with two inputs and one output was used to establish the nonlinear relationship between the concentration of Cd²⁺ and the stripping peak currents of Cu²⁺ and Cd²⁺. The factors affecting the SWASV detection of Cd²⁺ and the key parameters of the BP-ANN were optimized. Moreover, the direct calibration model (i.e., adding 0.1 mM ferrocyanide before detection), the BP-ANN model and other prediction models were compared to verify their prediction performance in terms of mean absolute errors (MAEs), root mean square errors (RMSEs) and correlation coefficients. The BP-ANN model exhibited higher prediction accuracy than the direct calibration model and the other prediction models. Finally, the proposed method was used to detect Cd²⁺ in soil samples with satisfactory results.
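A back-propagation network of the kind described, two inputs (the Cd and Cu stripping peak currents) and one output (Cd²⁺ concentration), can be sketched with a single hidden layer. The synthetic training target below is a stand-in for the paper's calibration data, and the layer size and learning rate are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two normalized peak currents -> concentration,
# with a nonlinear interference term (not the paper's measurements).
X = rng.uniform(0, 1, size=(200, 2))
y = (0.8 * X[:, 0] - 0.3 * X[:, 0] * X[:, 1])[:, None]

n_hidden = 8
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden-layer activations
    return h, h @ W2 + b2          # network prediction

for _ in range(2000):              # full-batch gradient descent
    h, pred = forward(X)
    err = pred - y                 # dL/dpred for 0.5 * MSE loss
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = np.sqrt(np.mean((forward(X)[1] - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

The nonlinear cross term plays the role of the Cu²⁺ interference: a single calibration curve cannot capture it, but the hidden layer can.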
Evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas
Medina, K.D.; Geiger, C.O.
1984-01-01
The results of an evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas are documented. Data uses and funding sources were identified for the 140 complete-record streamflow-gaging stations operated in Kansas during 1983 with a budget of $793,780. As a result of the evaluation of the needs and uses of data from the stream-gaging program, it was found that the 140 gaging stations were needed to meet these data requirements. The average standard error of estimation of streamflow records was 20.8 percent, assuming the 1983 budget and operating schedule of 6-week interval visitations and based on 85 of the 140 stations. It was shown that this overall level of accuracy could be improved to 18.9 percent by altering the 1983 schedule of station visitations. A minimum budget of $760,000, with a corresponding average error of estimation of 24.9 percent, is required to operate the 1983 program. None of the stations investigated were suitable for the application of alternative methods for simulating discharge records. Improved instrumentation can have a very positive impact on streamflow uncertainties by decreasing lost record. (USGS)
Dual view Geostationary Earth Radiation Budget from the Meteosat Second Generation satellites.
NASA Astrophysics Data System (ADS)
Dewitte, Steven; Clerbaux, Nicolas; Ipe, Alessandro; Baudrez, Edward; Moreels, Johan
2017-04-01
The diurnal cycle of the radiation budget is a key component of the tropical climate. The geostationary Meteosat Second Generation (MSG) satellites carrying both the broadband Geostationary Earth Radiation Budget (GERB) instrument with nadir resolution of 50 km and the multispectral Spinning Enhanced VIsible and InfraRed Imager (SEVIRI) with nadir resolution of 3 km offer a unique opportunity to observe this diurnal cycle. The geostationary orbit has the advantage of good temporal sampling but the disadvantage of fixed viewing angles, which makes the measurements of the broadband Top Of Atmosphere (TOA) radiative fluxes more sensitive to angular dependent errors. The Meteosat-10 (MSG-3) satellite observes the earth from the standard position at 0° longitude. From October 2016 onwards the Meteosat-8 (MSG-1) satellite makes observations from a new position at 41.5° East over the Indian Ocean. The dual view from Meteosat-8 and Meteosat-10 allows the assessment and correction of angular dependent systematic errors of the flux estimates. We demonstrate this capability with the validation of a new method for the estimation of the clear-sky TOA albedo from the SEVIRI instruments.
Real-time high-resolution PC-based system for measurement of errors on compact disks
NASA Astrophysics Data System (ADS)
Tehranchi, Babak; Howe, Dennis G.
1994-10-01
Hardware and software utilities are developed to directly monitor the Eight-to-Fourteen Modulation (EFM) demodulated data bytes at the input of a CD player's Cross-Interleaved Reed-Solomon Code (CIRC) block decoder. The hardware is capable of identifying erroneous data with single-byte resolution in the serial data stream read from a compact disc by a CDD 461 Philips CD-ROM drive. In addition, the system produces graphical maps that show the physical location of the measured errors on the entire disc or, via a zooming and panning feature, on user-selectable local disc regions.
NASA Astrophysics Data System (ADS)
Vanhaelewyn, Gauthier; Duchatelet, Pierre; Vigouroux, Corinne; Dils, Bart; Kumps, Nicolas; Hermans, Christian; Demoulin, Philippe; Mahieu, Emmanuel; Sussmann, Ralf; de Mazière, Martine
2010-05-01
The Fourier Transform InfraRed (FTIR) remote measurements of atmospheric constituents at the observatories at Saint-Denis (20.90°S, 55.48°E, 50 m a.s.l., Île de la Réunion) and Jungfraujoch (46.55°N, 7.98°E, 3580 m a.s.l., Switzerland) are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC). The European NDACC FTIR data for CH4 were improved and homogenized among the stations in the EU project HYMN. One important application of these data is their use for the validation of satellite products, such as the validation of SCIAMACHY or IASI CH4 columns. Therefore, it is very important that the errors and uncertainties associated with the ground-based FTIR CH4 data are well characterized. In this poster we present a comparison of errors on retrieved vertical concentration profiles of CH4 between Saint-Denis and Jungfraujoch. At both stations, we have used the same retrieval algorithm, namely SFIT2 v3.92, developed jointly at the NASA Langley Research Center, the National Center for Atmospheric Research (NCAR) and the National Institute of Water and Atmosphere Research (NIWA) at Lauder, New Zealand, and error evaluation tools developed at the Belgian Institute for Space Aeronomy (BIRA-IASB). The error components investigated in this study are: smoothing, noise, temperature, instrumental line shape (ILS) (in particular the modulation amplitude and phase), spectroscopy (in particular the pressure broadening and intensity), interfering species, and solar zenith angle (SZA) error. We will determine whether the characteristics of the sites in terms of altitude, geographic location and atmospheric conditions produce significant differences in the error budgets for the retrieved CH4 vertical profiles.
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.
2016-12-01
This study develops a novel methodology for the spatiotemporal calibration of large numbers of groundwater recharge values and parameters by coupling a specialized numerical model with analytical empirical orthogonal function (EOF) analysis. The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural-network-based response matrix combined with electrical consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e., horizontal hydraulic conductivity and vertical leakance) are calibrated by applying EOF analysis to the simulated error hydrograph of groundwater storage, in order to identify the multiple error sources and quantify the required corrections. The objective function of the optimization model minimizes the root mean square error of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the governing equation in the transient state. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance and surface-water recharge values among the four aquifers are 126, 96 and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, because of efficient filtering of the transmission induced by the estimated error and recharge across the boundary. Moreover, the average simulated error percentage of groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%.
These results indicate that the developed methodology not only effectively detects the flow tendency and error sources in all aquifers, achieving accurate spatiotemporal calibration, but also captures the peak and fluctuation of groundwater level in the shallow aquifer.
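The EOF step, extracting dominant spatial patterns from a space-by-time field of simulated-storage errors, can be sketched via the singular value decomposition of the anomaly matrix. The synthetic field below (one dominant spatial mode plus noise) is illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_months = 50, 36

# Synthetic space x time error field: a single dominant spatial pattern
# modulated in time, plus small random noise.
pattern = np.sin(np.linspace(0, np.pi, n_cells))
amplitude = np.cos(np.linspace(0, 6 * np.pi, n_months))
field = np.outer(pattern, amplitude) + 0.05 * rng.normal(size=(n_cells, n_months))

# EOFs: SVD of the time-anomaly matrix. Columns of U are spatial EOFs,
# rows of Vt are their principal-component time series.
anom = field - field.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(f"variance explained by EOF1: {explained[0]:.2%}")
```

A leading EOF explaining most of the error variance points to a single coherent error source (e.g., a mis-specified recharge pattern) whose spatial footprint is the EOF itself.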
Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty
NASA Astrophysics Data System (ADS)
Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team
2017-11-01
A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represented the particle drag history, CD(t) , using polynomials up to the third order. An analytical model for continuous particle position history was derived by integrating an equation relating CD(t) with particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the sense of least squares. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration history compared to conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.
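A simplified version of the fitting idea, integrating a polynomial acceleration model analytically and least-squares fitting the resulting position history to the eight pulse locations, might look like the following. The units and coefficients are arbitrary illustrative choices, and a polynomial acceleration stands in for the paper's CD(t) drag law:

```python
import numpy as np

t = np.linspace(0.0, 7.0, 8)   # 8 pulse times (arbitrary units)

# "True" motion generated from a(t) = -0.5 + 0.05 t, integrated twice:
x_true = 2.0 * t - 0.5 * t**2 / 2 + 0.05 * t**3 / 6

rng = np.random.default_rng(2)
x_meas = x_true + rng.normal(0.0, 0.01, size=t.size)  # particle-locating noise

# Analytic position model for a(t) = c0 + c1*t + c2*t^2:
#   x(t) = x0 + v0*t + c0*t^2/2 + c1*t^3/6 + c2*t^4/12
A = np.column_stack([np.ones_like(t), t, t**2 / 2, t**3 / 6, t**4 / 12])
coef, *_ = np.linalg.lstsq(A, x_meas, rcond=None)

accel = coef[2] + coef[3] * t + coef[4] * t**2  # recovered acceleration history
residual = np.sqrt(np.mean((A @ coef - x_meas) ** 2))
print(coef, residual)
```

Because the acceleration model is integrated before fitting, the recovered acceleration is constrained by all eight positions at once, rather than obtained by twice differentiating a position polynomial, which amplifies locating noise.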
Estimating diffusivity from the mixed layer heat and salt balances in the North Pacific
NASA Astrophysics Data System (ADS)
Cronin, M. F.; Pelland, N.; Emerson, S. R.; Crawford, W. R.
2015-12-01
Data from two National Oceanographic and Atmospheric Administration (NOAA) surface moorings in the North Pacific, in combination with data from satellite, Argo floats and gliders (when available), are used to evaluate the residual diffusive flux of heat across the base of the mixed layer from the surface mixed layer heat budget. The diffusion coefficient (i.e., diffusivity) is then computed by dividing the diffusive flux by the temperature gradient in the 20-m transition layer just below the base of the mixed layer. At Station Papa in the NE Pacific subpolar gyre, this diffusivity is 1×10⁻⁴ m²/s during summer, increasing to ~3×10⁻⁴ m²/s during fall. During late winter and early spring, diffusivity has large errors. At other times, diffusivities computed from the mixed layer salt budget at Papa correlate with those from the heat budget, giving confidence that the results are robust for all seasons except late winter-early spring and can be used for other tracers. In comparison, at the Kuroshio Extension Observatory (KEO) in the NW Pacific subtropical recirculation gyre, somewhat larger diffusivities are found based upon the mixed layer heat budget: ~3×10⁻⁴ m²/s during the warm season and more than an order of magnitude larger during the winter, although again, wintertime errors are large. These larger values at KEO appear to be due to the increased turbulence associated with the summertime typhoons, and weaker wintertime stratification.
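The diffusivity computation described, a residual diffusive heat flux divided by the transition-layer temperature gradient, reduces to a single line. The flux and gradient values below are illustrative stand-ins chosen to land in the reported range, not the mooring data:

```python
# Sketch of the residual-diffusivity estimate: the mixed layer heat budget
# leaves a residual diffusive heat flux at the mixed layer base; dividing
# by the transition-layer temperature gradient gives kappa.
rho = 1025.0          # seawater density, kg/m^3
cp = 3990.0           # specific heat of seawater, J/(kg K)
q_residual = -2.5     # residual diffusive heat flux, W/m^2 (downward; illustrative)
dT_dz = 0.06 / 20.0   # 0.06 K across the 20-m transition layer, K/m (illustrative)

# Fourier's law: F = -rho * cp * kappa * dT/dz  =>  kappa = -F / (rho * cp * dT/dz)
kappa = -q_residual / (rho * cp * dT_dz)
print(f"kappa = {kappa:.1e} m^2/s")
```

With these numbers kappa comes out near 2×10⁻⁴ m²/s, i.e., within the summer-to-fall range quoted for Station Papa; the large wintertime errors arise because both the residual flux and the weak gradient are then poorly constrained.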
Erratum: Synthesis of Cd-free InP/ZnS Quantum Dots Suitable for Biomedical Applications.
2016-02-29
A correction was made to: Synthesis of Cd-free InP/ZnS Quantum Dots Suitable for Biomedical Applications. There was an error with an author's given name. The author's name was corrected to: Katye M. Fichter from: Kathryn M. Fichter.
Investigation of scene identification algorithms for radiation budget measurements
NASA Technical Reports Server (NTRS)
Diekmann, F. J.
1986-01-01
The computation of the Earth radiation budget from satellite measurements requires identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there is generally a tendency toward underestimation of cloudiness over ocean at high cloud amounts (e.g., mostly cloudy instead of overcast, partly cloudy instead of mostly cloudy) for the ERBE relative to the AVHRR results. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate a strong dependence on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.
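The probability matrices described can be sketched as row-normalized confusion counts between paired pixel classifications. The scene categories follow the abstract, but the counts are invented for illustration:

```python
import numpy as np

# Scene-identification probability matrix from paired pixel classifications
# (rows: AVHRR scene, columns: ERBE scene). Row-normalizing the counts gives
# an estimate of P(ERBE says j | AVHRR says i). Counts are illustrative.
scenes = ["clear", "partly cloudy", "mostly cloudy", "overcast"]
counts = np.array([
    [90,  8,  2,  0],
    [10, 70, 18,  2],
    [ 1, 12, 65, 22],
    [ 0,  2, 25, 73],   # ERBE picks "mostly cloudy" for many AVHRR "overcast" pixels
], dtype=float)

prob = counts / counts.sum(axis=1, keepdims=True)
misid = 1.0 - np.trace(counts) / counts.sum()   # overall misidentification rate
print(np.round(prob, 3))
print(f"misidentification rate: {misid:.1%}")
```

Off-diagonal mass concentrated just below the diagonal in the high-cloud rows reproduces the reported tendency of ERBE to underestimate cloudiness relative to AVHRR.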
Prototype Development of a Geostationary Synthetic Thinned Aperture Radiometer, GeoSTAR
NASA Technical Reports Server (NTRS)
Tanner, Alan B.; Wilson, William J.; Kangaslahti, Pekka P.; Lambrigsten, Bjorn H.; Dinardo, Steven J.; Piepmeier, Jeffrey R.; Ruf, Christopher S.; Rogacki, Steven; Gross, S. M.; Musko, Steve
2004-01-01
Preliminary details of a 2-D synthetic aperture radiometer prototype operating from 50 to 58 GHz will be presented. The instrument is being developed as a laboratory testbed, and the goal of this work is to demonstrate the technologies needed to do atmospheric soundings with high spatial resolution from Geostationary orbit. The concept is to deploy a large sparse aperture Y-array from a geostationary satellite, and to use aperture synthesis to obtain images of the earth without the need for a large mechanically scanned antenna. The laboratory prototype consists of a Y-array of 24 horn antennas, MMIC receivers, and a digital cross-correlation sub-system. System studies are discussed, including an error budget which has been derived from numerical simulations. The error budget defines key requirements, such as null offsets, phase calibration, and antenna pattern knowledge. Details of the instrument design are discussed in the context of these requirements.
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
2015-11-24
[Briefing-slide residue: spatial resolution requirements (how well gradients are captured) and dispersion/dissipation error; FFT solution/derivative convergence examples for f(x) = sin(x), x ∈ [0, 2π], comparing CD02/CD04/CD06 central-difference schemes.]
WFIRST: Managing Telescope Wavefront Stability to Meet Coronagraph Performance
NASA Astrophysics Data System (ADS)
Noecker, Martin; Poberezhskiy, Ilya; Kern, Brian; Krist, John; WFIRST System Engineering Team
2018-01-01
The WFIRST coronagraph instrument (CGI) needs a stable telescope and active wavefront control to perform coronagraph science with an expected sensitivity of 8×10⁻⁹ in the exoplanet-star flux ratio (SNR = 10) at 200 milliarcseconds angular separation. With its subnanometer requirements on the stability of its input wavefront error (WFE), the CGI employs a combination of pointing and wavefront control loops and thermo-mechanical stability to meet budget allocations for beam-walk and low-order WFE, which enable stable starlight speckles on the science detector that can be removed by image subtraction. We describe the control strategy and the budget framework for estimating and budgeting the elements of wavefront stability, and the modeling strategy to evaluate it.
Advanced diffraction-based overlay for double patterning
NASA Astrophysics Data System (ADS)
Li, Jie; Liu, Yongdong; Dasari, Prasad; Hu, Jiangtao; Smith, Nigel; Kritsun, Oleg; Volkman, Catherine
2010-03-01
Diffraction-based overlay (DBO) technologies have been developed to address tighter overlay control challenges as the dimensions of integrated circuits continue to shrink. Several recently published studies have demonstrated that DBO technologies have the potential to meet the overlay metrology budget for the 22 nm technology node. However, several hurdles must be cleared before DBO can be used in production. One of the major hurdles is that most DBO technologies require specially designed targets consisting of multiple measurement pads, which consume too much space and increase measurement time. A more advanced spectroscopic ellipsometry (SE) technology, Mueller matrix SE (MM-SE), is developed to address this challenge. We use a double patterning sample to demonstrate the potential of MM-SE as a DBO candidate. The sample matrix (the matrix that describes the effects of the sample on the incident optical beam) obtained from MM-SE contains up to 16 elements. We show that the Mueller elements from the off-diagonal 2×2 blocks respond linearly to overlay and are zero when overlay errors are absent. This superior property enables empirical DBO (eDBO) using two pads per direction. Furthermore, the rich information in the Mueller matrix and its direct response to overlay make it feasible to extract overlay errors from only one pad per direction using a modeling approach (mDBO). We present the Mueller overlay results using both eDBO and mDBO and compare them with image-based overlay (IBO) and CD-SEM results. We also report tool-induced shift (TIS) and dynamic repeatability.
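The two-pad eDBO extraction enabled by a linear, zero-crossing overlay response can be worked through in a few lines. The sensitivity K, programmed offset d, and overlay value below are invented for illustration:

```python
# Sketch of the empirical DBO (eDBO) two-pad extraction: an off-diagonal
# Mueller element responds linearly to overlay, S = K * (OV + d), where
# +/- d are programmed pad offsets. Two pads per direction then yield the
# overlay without knowing the sensitivity K. Numbers are illustrative.
d = 20.0                      # programmed offset, nm
K = 0.013                     # unknown linear sensitivity (a.u. per nm)
OV_true = 3.2                 # actual overlay error, nm

S_plus = K * (OV_true + d)    # signal from the +d pad
S_minus = K * (OV_true - d)   # signal from the -d pad

# Sum/difference cancels K:  d * (S+ + S-) / (S+ - S-) = OV
OV = d * (S_plus + S_minus) / (S_plus - S_minus)
print(f"extracted overlay: {OV:.2f} nm")
```

Because K cancels exactly in the sum/difference ratio, the two-pad scheme is calibration-free; the modeling approach (mDBO) instead fits K from the optical model, which is what allows a single pad per direction.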
Groundwater discharge to lakes (GDL) - the disregarded component of lake nutrient budgets
NASA Astrophysics Data System (ADS)
Lewandowski, J.; Meinikmann, K.; Pöschke, F.; Nützmann, G.
2012-04-01
Eutrophication is a major threat to lakes in temperate climatic zones. It is necessary to determine the relevance of different nutrient sources in order to conduct effective management measures, to understand in-lake processes and to model future scenarios. A prerequisite for such nutrient budgets is a water budget. While most components of the water budget can be determined quite accurately, the quantification of groundwater discharge to lakes (GDL) and of surface water infiltration into the aquifer is much more difficult. For example, it is quite common to determine the groundwater component as the residual of the water and nutrient budget, which is extremely problematic since in that case all errors of the budget terms are summed up in the groundwater term. In total, we identified 10 different reasons for disregarding the groundwater path in nutrient budgets. We investigated the fate of the nutrients nitrogen and phosphorus on their pathway from the catchment through the reactive aquifer-lake interface into the lake. We reviewed the international literature and summarized the numbers reported for GDL of nutrients. Since the literature is quite sparse, we also examined the numbers reported for submarine groundwater discharge (SGD) of nutrients, for which much more literature exists and which, despite some fundamental differences, is in principle comparable to GDL.
Improvement of CD-SEM mark position measurement accuracy
NASA Astrophysics Data System (ADS)
Kasa, Kentaro; Fukuhara, Kazuya
2014-04-01
CD-SEM is now attracting attention as a tool that can accurately measure the positional error of device patterns. However, measurement accuracy can degrade due to pattern asymmetry, as in the cases of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias was proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.
The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier
2013-02-14
[Briefing-slide residue: tables of peak water level percent error and mean absolute percent error (MAPE) at LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass for baseline simulations and wave sensitivity studies; one CD formulation produced the best results.]
NASA Technical Reports Server (NTRS)
Barkstrom, B. R.
1983-01-01
The measurement of the earth's radiation budget has been chosen to illustrate the technique of objective system design. The measurement process is an approximately linear transformation of the original field of radiant exitances, so that linear statistical techniques may be employed. The combination of variability, measurement strategy, and error propagation is carried out with the help of information theory, as suggested by Kondratyev et al. (1975) and Peckham (1974). Covariance matrices furnish the quantitative statement of field variability.
Optimize of shrink process with X-Y CD bias on hole pattern
NASA Astrophysics Data System (ADS)
Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami
2017-03-01
Gridded design rules [1] are a major approach to configuring logic circuits with 193-nm immersion lithography. In grid-pattern scaling, line-and-space patterns on the order of 10 nm can be made using multiple-patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, the line cut process suffers from several error parameters, such as pattern defects, placement error, roughness and X-Y CD bias, as the scale decreases. We attempted to cure hole-pattern roughness using an additional process such as line smoothing [5]. Each smoothing process showed a different effect; as a result, the CDx shrink amount is smaller than the CDy shrink amount without an additional process. In this paper, we report a comparison of pattern controllability between EUV and 193-nm immersion lithography, and we discuss an optimum method for controlling CD bias on hole patterns.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-06
[Truncated notice excerpt: activities linked to human rights abuses, particularly in the area of natural resource development in ethnic regions. SUMMARY: The Department of State is seeking Office of Management and Budget (OMB) approval for an information collection; submissions by e-mail or by mail (paper or CD) to the U.S. Department of State, DRL/EAP Suite 7817 (Burma Human...).]
NASA Astrophysics Data System (ADS)
Rangel-Kuoppa, Victor-Tapio; Albor-Aguilera, María-de-Lourdes; Hérnandez-Vásquez, César; Flores-Márquez, José-Manuel; Jiménez-Olarte, Daniel; Sastré-Hernández, Jorge; González-Trujillo, Miguel-Ángel; Contreras-Puente, Gerardo-Silverio
2018-04-01
In this Part 2 of this series of articles, the procedure proposed in Part 1, namely a new parameter extraction technique for the shunt resistance (Rsh) and saturation current (Isat) from a current-voltage (I-V) measurement of a solar cell within the one-diode model, is applied to CdS-CdTe and CIGS-CdS solar cells. First, the Cheung method is used to obtain the series resistance (Rs) and the ideality factor n. Afterwards, procedures A and B proposed in Part 1 are used to obtain Rsh and Isat. The procedure is compared with two other commonly used procedures. Better accuracy of the I-V curves simulated with the parameters extracted by our method is obtained. Also, the integral percentage errors of the simulated I-V curves using the method proposed in this study are one order of magnitude smaller than the integral percentage errors using the other two methods.
Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas
Lindgren, R.J.
2006-01-01
A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. 
The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. 
The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
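The fit statistics quoted throughout this report (RMSE, and RMSE as a percentage of the total head difference across the model area) follow directly from their definitions; a minimal sketch with made-up head values:

```python
import math

def rmse(observed, simulated):
    """Root-mean-square error between observed and simulated heads."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

def rmse_percent_of_range(observed, simulated, total_range):
    """RMSE expressed as a percentage of the total head difference
       across the model area (the normalization used in the report)."""
    return 100.0 * rmse(observed, simulated) / total_range
```

For example, the reported steady-state RMSE of 20.9 feet being about 3 percent of the head difference implies a total head range on the order of 700 feet.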
Stability Error Budget for an Aggressive Coronagraph on a 3.8 m Telescope
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Krist, John; Rud, Mayer
2011-01-01
We evaluate in detail the stability requirements for a band-limited coronagraph with an inner working angle as small as 2 lambda/D coupled to an off-axis, 3.8-m diameter telescope. We have updated our methodologies since presenting a stability error budget for the Terrestrial Planet Finder Coronagraph mission, which worked at 4 lambda/D and employed an 8th-order mask to reduce aberration sensitivities. In the previous work, we determined the tolerances relative to the total light leaking through the coronagraph. Now, we separate the light into a radial component, which is readily separable from a planet signal, and an azimuthal component, which is easily confused with a planet signal. In the current study, throughput considerations require a 4th-order coronagraph. This, combined with the more aggressive working angle, places extraordinarily tight requirements on wavefront stability and opto-mechanical stability. We find that the requirements are driven mainly by coma, which leaks around the coronagraph mask and mimics the localized signal of a planet, and by pointing errors that scatter light into the background, decreasing SNR. We also show how the requirements would be relaxed if a low-order aberration detection system could be employed.
NASA Technical Reports Server (NTRS)
Li, Zhanqing; Whitlock, Charles H.; Charlock, Thomas P.
1995-01-01
Global sets of surface radiation budget (SRB) data have been obtained from satellite programs. These satellite-based estimates need validation with ground-truth observations. This study validates the estimates of monthly mean surface insolation contained in two satellite-based SRB datasets against surface measurements made at worldwide radiation stations from the Global Energy Balance Archive (GEBA). One dataset was developed from the Earth Radiation Budget Experiment (ERBE) using the algorithm of Li et al. (ERBE/SRB), and the other from the International Satellite Cloud Climatology Project (ISCCP) using the algorithms of Pinker and Laszlo and of Staylor (GEWEX/SRB). Since the ERBE/SRB data contain the surface net solar radiation only, the values of surface insolation were derived by making use of the surface albedo data contained in the GEWEX/SRB product. The resulting surface insolation has a bias error near zero and a root-mean-square error (RMSE) between 8 and 28 W/sq m. The RMSE is mainly associated with poor representation of surface observations within a grid cell. When the number of surface observations is sufficient, the random error is estimated to be about 5 W/sq m with present satellite-based estimates. In addition to demonstrating the strength of the retrieval method, the small random error demonstrates how well the ERBE derives the monthly mean fluxes at the top of the atmosphere (TOA). A larger scatter is found for the comparison of transmissivity than for that of insolation. Month-to-month comparison of insolation reveals a weak seasonal trend in bias error with an amplitude of about 3 W/sq m. As for the insolation data from the GEWEX/SRB, larger bias errors of 5-10 W/sq m are evident, with stronger seasonal trends and almost identical RMSEs.
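The two headline statistics of this validation, bias (mean error) and RMSE of satellite estimates against ground truth, can be computed in a few lines; the values below are illustrative, not GEBA data:

```python
import math

def bias_and_rmse(satellite, ground):
    """Mean error (bias) and root-mean-square error of satellite
       estimates against collocated ground-truth measurements."""
    errors = [s - g for s, g in zip(satellite, ground)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse
```

A near-zero bias with a nonzero RMSE, as reported for ERBE/SRB, corresponds to errors that scatter symmetrically around the truth.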
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between the true and sampled time series also decrease rapidly once sampling becomes coarser than every 4 h.
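The starting-point-spread measure of sampling uncertainty described above can be sketched as follows, here for an hourly series and a hypothetical periodic signal:

```python
def sampled_means(series, step):
    """Mean of the series sampled every `step` entries, one mean per
       possible starting offset."""
    return [sum(series[s::step]) / len(series[s::step]) for s in range(step)]

def sampling_uncertainty(series, step):
    """Spread of the sampled means across starting points: a proxy for
       the uncertainty introduced by coarse temporal sampling."""
    means = sampled_means(series, step)
    return max(means) - min(means)
```

For an hourly series with a strong 24-h cycle, 24-hourly sampling locks onto a single phase of the cycle, so the spread across starting hours is large, while 4-hourly sampling averages over the cycle and the spread collapses, consistent with the paper's finding that coarser-than-4-h sampling incurs significant error.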
Error budgeting single and two qubit gates in a superconducting qubit
NASA Astrophysics Data System (ADS)
Chen, Z.; Chiaro, B.; Dunsworth, A.; Foxen, B.; Neill, C.; Quintana, C.; Wenner, J.; Martinis, John M.; Google Quantum Hardware Team
Superconducting qubits have shown promise as a platform for both error corrected quantum information processing and demonstrations of quantum supremacy. High fidelity quantum gates are crucial to achieving both of these goals, and superconducting qubits have demonstrated two qubit gates exceeding 99% fidelity. In order to improve gate fidelity further, we must understand the remaining sources of error. In this talk, I will demonstrate techniques for quantifying the contributions of control, decoherence, and leakage to gate error, for both single and two qubit gates. I will also discuss the near term outlook for achieving quantum supremacy using a gate-based approach in superconducting qubits. This work is supported by Google Inc. and by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1605114.
Wright, S.A.; Schoellhamer, D.H.
2005-01-01
Where rivers encounter estuaries, a transition zone develops where riverine and tidal processes both affect sediment transport. One such transition zone is the Sacramento-San Joaquin River Delta, a large, complex system where several rivers meet to form an estuary (San Francisco Bay). Herein we present the results of a detailed sediment budget for this river/estuary transitional system. The primary regional goal of the study was to measure sediment transport rates and pathways in the delta in support of ecosystem restoration efforts. In addition to achieving this regional goal, the study has produced general methods to collect, edit, and analyze (including error analysis) sediment transport data at the interface of rivers and estuaries. Estimating sediment budgets for these systems is difficult because of the mixed nature of riverine versus tidal transport processes, the different timescales of transport in fluvial and tidal environments, and the sheer complexity and size of systems such as the Sacramento-San Joaquin River Delta. Sediment budgets also require error estimates in order to assess whether differences in inflows and outflows, which could be small compared to overall fluxes, are indeed distinguishable from zero. Over the 4 year period of this study, water years 1999-2002, 6.6 ± 0.9 Mt of sediment entered the delta and 2.2 ± 0.7 Mt exited, resulting in 4.4 ± 1.1 Mt (67 ± 17%) of deposition. The estimated deposition rate corresponding to this mass of sediment compares favorably with measured inorganic sediment accumulation on vegetated wetlands in the delta.
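The budget closure above, inflow minus outflow with independent errors combined in quadrature, reproduces the quoted deposition figure:

```python
import math

def deposition_with_error(inflow, inflow_err, outflow, outflow_err):
    """Deposition = inflow - outflow; assuming the two fluxes have
       independent errors, the uncertainty adds in quadrature."""
    deposition = inflow - outflow
    error = math.sqrt(inflow_err ** 2 + outflow_err ** 2)
    return deposition, error

# Values from the abstract, in Mt over water years 1999-2002:
dep, err = deposition_with_error(6.6, 0.9, 2.2, 0.7)   # 4.4 Mt, ~1.1 Mt
```

Because the deposition uncertainty (±1.1 Mt) is well below the deposition itself (4.4 Mt), the budget residual is distinguishable from zero, which is exactly the test the abstract emphasizes.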
Characterizing biospheric carbon balance using CO2 observations from the OCO-2 satellite
NASA Astrophysics Data System (ADS)
Miller, Scot M.; Michalak, Anna M.; Yadav, Vineet; Tadić, Jovan M.
2018-05-01
NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite launched in summer of 2014. Its observations could allow scientists to constrain CO2 fluxes across regions or continents that were previously difficult to monitor. This study explores an initial step toward that goal; we evaluate the extent to which current OCO-2 observations can detect patterns in biospheric CO2 fluxes and constrain monthly CO2 budgets. Our goal is to guide top-down, inverse modeling studies and identify areas for future improvement. We find that uncertainties and biases in the individual OCO-2 observations are comparable to the atmospheric signal from biospheric fluxes, particularly during Northern Hemisphere winter when biospheric fluxes are small. A series of top-down experiments indicate how these errors affect our ability to constrain monthly biospheric CO2 budgets. We are able to constrain budgets for between two and four global regions using OCO-2 observations, depending on the month, and we can constrain CO2 budgets at the regional level (i.e., smaller than seven global biomes) in only a handful of cases (16 % of all regions and months). The potential of the OCO-2 observations, however, is greater than these results might imply. A set of synthetic data experiments suggests that retrieval errors have a salient effect. Advances in retrieval algorithms and to a lesser extent atmospheric transport modeling will improve the results. In the interim, top-down studies that use current satellite observations are best-equipped to constrain the biospheric carbon balance across only continental or hemispheric regions.
Weighing Rocky Exoplanets with Improved Radial Velocimetry
NASA Astrophysics Data System (ADS)
Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium
2016-01-01
The synergy between Kepler and the ground-based radial velocity (RV) surveys has yielded numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets that are smaller than 3 Earth radii do not have their masses constrained to better than 20%, limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or Super-Earth planets in the Habitable Zone. Removing them will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.
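Root-sum-square combination of independent error-budget terms ("RMS in quadrature" above), and the corresponding benefit of removing a term, can be sketched as:

```python
import math

def rss(terms):
    """Combine independent error terms in quadrature (root-sum-square)."""
    return math.sqrt(sum(t * t for t in terms))

def after_removal(total, term):
    """Residual RMS once one independent term is removed from the budget."""
    return math.sqrt(total ** 2 - term ** 2)

# The two dominant Keck terms from the abstract, in m/s:
combined = rss([1.0, 0.6])               # about 1.17 m/s together
residual = after_removal(combined, 1.0)  # 0.6 m/s left if deconvolution errors are fixed
```

Because terms add in quadrature, eliminating the largest term (1 m/s) cuts the combined error nearly in half, while eliminating the smaller one alone would barely move it.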
Preventing Marketing Efforts That Bomb.
ERIC Educational Resources Information Center
Sevier, Robert A.
2000-01-01
In a marketplace overwhelmed with messages, too many institutions waste money on ineffective marketing. Highlights five common marketing errors: limited definition of marketing; unwillingness to address strategic issues; no supporting data; fuzzy goals and directions; and unrealistic expectations, time lines, and budgets. Though trustees are not…
A manual to identify sources of fluvial sediment
Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph
2016-01-01
Sediment is an important pollutant of concern that can degrade and alter aquatic habitat. A sediment budget is an accounting of the sources, storage, and export of sediment over a defined spatial and temporal scale. This manual focuses on field approaches to estimate a sediment budget. We also highlight the sediment fingerprinting approach to attribute sediment to different watershed sources. Determining the sources and sinks of sediment is important in developing strategies to reduce sediment loads to water bodies impaired by sediment. Therefore, this manual can be used when developing a sediment TMDL requiring identification of sediment sources. The manual takes the user through the steps necessary to construct a sediment budget:
- Deciding on the watershed scale and time period of interest
- Familiarization with the watershed by conducting a literature review, compiling background information and maps relevant to the study questions, and conducting a reconnaissance of the watershed
- Developing partnerships with landowners and jurisdictions
- Characterization of the watershed geomorphic setting
- Development of a sediment budget design
- Data collection
- Interpretation and construction of the sediment budget
- Generating products (maps, reports, and presentations) to communicate findings
Sediment budget construction begins with examining the question(s) being asked and whether a sediment budget is necessary to answer them. If undertaking a sediment budget analysis is a viable option, the next step is to define the spatial scale of the watershed and the time scale needed to answer the question(s). Of course, we understand that monetary constraints play a big role in any decision. Early in the sediment budget development process, we suggest getting to know your watershed by conducting a reconnaissance and meeting with local stakeholders. The reconnaissance aids in understanding the geomorphic setting of the watershed and the potential sources of sediment.
Identifying the potential sediment sources early in the design of the sediment budget will help later in deciding which tools are necessary to monitor erosion and/or deposition at these sources. Tools range from rapid inventories that estimate the sediment budget to more rigorous field monitoring that quantifies sediment erosion, deposition, and export. In either approach, data are gathered, erosion and deposition are calculated and compared to the sediment export, and the error uncertainty is described. Findings are presented to local stakeholders and management officials. Sediment fingerprinting is a technique that apportions the sources of fine-grained sediment in a watershed using tracers or fingerprints. Because of different geologic and anthropogenic histories, the chemical and physical properties of sediment in a watershed may vary and often represent a unique signature (or fingerprint) for each source within the watershed. Fluvial sediment samples (the target sediment) are also collected and exhibit a composite of the source properties that can be apportioned through various statistical techniques. Using an unmixing model and error analysis, the final apportioned sediment is determined.
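For the two-source case, the unmixing step reduces to a one-parameter least-squares fit; a minimal sketch with hypothetical tracer concentrations (the manual's actual statistical treatment is more elaborate and includes error analysis):

```python
def two_source_fractions(tracer_A, tracer_B, tracer_target):
    """Least-squares apportionment between two sources: find the fraction f
       minimizing sum((f*a + (1-f)*b - t)^2) over the tracers, then clip f
       to the physical range [0, 1]."""
    num = sum((t - b) * (a - b) for a, b, t in zip(tracer_A, tracer_B, tracer_target))
    den = sum((a - b) ** 2 for a, b in zip(tracer_A, tracer_B))
    f = num / den
    return max(0.0, min(1.0, f))
```

A target sample that is an exact 25/75 mixture of the two source signatures is recovered exactly; real fluvial samples scatter around the mixing line, which is where the error analysis comes in.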
Skutan, Stefan; Aschenbrenner, Philipp
2012-12-01
Components with extraordinarily high analyte contents, for example copper metal from wires or plastics stabilized with heavy metal compounds, are presumed to be a crucial source of errors in refuse-derived fuel (RDF) analysis. In order to study the error generation of those 'analyte carrier components', synthetic samples spiked with defined amounts of carrier materials were mixed, milled in a high speed rotor mill to particle sizes <1 mm, <0.5 mm and <0.2 mm, respectively, and analyzed repeatedly. Copper (Cu) metal and brass were used as Cu carriers, three kinds of polyvinylchloride (PVC) materials as lead (Pb) and cadmium (Cd) carriers, and paper and polyethylene as bulk components. In most cases, samples <0.2 mm delivered good recovery rates (rec), and low or moderate relative standard deviations (rsd), i.e. metallic Cu 87-91% rec, 14-35% rsd, Cd from flexible PVC yellow 90-92% rec, 8-10% rsd and Pb from rigid PVC 92-96% rec, 3-4% rsd. Cu from brass was overestimated (138-150% rec, 13-42% rsd), Cd from flexible PVC grey underestimated (72-75% rec, 4-7% rsd) in <0.2 mm samples. Samples <0.5 mm and <1 mm spiked with Cu or brass produced errors of up to 220% rsd (<0.5 mm) and 370% rsd (<1 mm). In the case of Pb from rigid PVC, poor recoveries (54-75%) were observed in spite of moderate variations (rsd 11-29%). In conclusion, time-consuming milling to <0.2 mm can reduce variation to acceptable levels, even given the presence of analyte carrier materials. Yet, the sources of systematic errors observed (likely segregation effects) remain uncertain.
Anderson, Mark T.
1995-01-01
The study of ground-water and surface-water interactions often employs streamflow-gaging records and hydrologic budgets to determine ground-water seepage. Because ground-water seepage usually is computed as a residual in the hydrologic budget approach, all uncertainty in the measurement and estimation of budget components is assigned to the ground-water seepage. This uncertainty can exceed the estimate itself, especially when streamflow, and its associated error of measurement, is large relative to other budget components. In a study of Rapid Creek in western South Dakota, the hydrologic budget approach was combined with hydrochemistry to determine ground-water seepage. The City of Rapid City obtains most of its municipal water from three infiltration galleries (Jackson Springs, Meadowbrook, and Girl Scout) constructed in the near-stream alluvium along Rapid Creek. The reach of Rapid Creek between Pactola Reservoir and Rapid City, and in particular the two subreaches containing the galleries, was studied intensively to identify the sources of water to each gallery. Jackson Springs Gallery was found to pump predominantly ground water with a minor component of surface water. Meadowbrook and Girl Scout Galleries induce infiltration of surface water from Rapid Creek but also have a significant component of ground water.
Vanos, J K; Warland, J S; Gillespie, T J; Kenny, N A
2012-11-01
The purpose of this paper is to implement current and novel research techniques in human energy budget estimations to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimations of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly by -173 and -123 W m^-2, respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement in the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were found with a refined equation accounting for the angle of wind to body movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind to body angle moved away from 90°. Under moderate microclimate conditions, and wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m^-2 and 1.7°C higher when using the improved v_r equation. These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. Application is possible for other similar energy budget models, and within various urban and rural environments.
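One plausible vector form for an angle-dependent relative-velocity equation treats wind and body motion as vectors separated by the wind-to-body angle. This is an illustrative assumption; the paper's exact refined equation may differ:

```python
import math

def relative_velocity(v_wind, v_body, angle_deg):
    """Wind speed relative to a moving person, combining wind and body
       motion as vectors separated by angle_deg (0 deg = wind from behind,
       aligned with the direction of travel)."""
    th = math.radians(angle_deg)
    return math.sqrt(v_wind ** 2 + v_body ** 2
                     - 2.0 * v_wind * v_body * math.cos(th))
```

With wind from behind (0°) the speeds partly cancel, with a headwind (180°) they add, and at 90° the result is the vector hypotenuse; a scalar formula that ignores the angle would overestimate v_r, and hence convective loss, for tailwinds.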
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
77 FR 55240 - Order Making Fiscal Year 2013 Annual Adjustments to Registration Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... Management and Budget (``OMB'') to project the aggregate offering price for purposes of the fiscal year 2012... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...
Cost effectiveness of the stream-gaging program in Pennsylvania
Flippo, H.N.; Behrendt, T.E.
1985-01-01
This report documents a cost-effectiveness study of the stream-gaging program in Pennsylvania. Data uses and funding were identified for 223 continuous-record stream gages operated in 1983; four are planned for discontinuance at the close of water-year 1985; two are suggested for conversion, at the beginning of the 1985 water year, to the collection of only continuous stage records. Two of 11 special-purpose short-term gages are recommended for continuation when the supporting project ends; eight of these gages are to be discontinued and the other will be converted to a partial-record type. The 1983 cost of operating the 212 stations recommended for continued operation was $1,199,000 per year. The average standard error of estimation for instantaneous streamflow is 15.2%. An overall average standard error of 9.8% could be attained on a budget of $1,271,000, which is 6% greater than the 1983 budget, by adopting cost-effective stream-gaging operations. (USGS)
Multidisciplinary Analysis of the NEXUS Precursor Space Telescope
NASA Astrophysics Data System (ADS)
de Weck, Olivier L.; Miller, David W.; Mosier, Gary E.
2002-12-01
A multidisciplinary analysis is demonstrated for the NEXUS space telescope precursor mission. This mission was originally designed as an in-space technology testbed for the Next Generation Space Telescope (NGST). One of the main challenges is to achieve a very tight pointing accuracy with a sub-pixel line-of-sight (LOS) jitter budget and a root-mean-square (RMS) wavefront error smaller than λ/50 despite the presence of electronic and mechanical disturbance sources. The analysis starts with the assessment of the performance of an initial design, which turns out not to meet the requirements. Twenty-five design parameters from structures, optics, dynamics and controls are then computed in a sensitivity and isoperformance analysis, in search of better designs. Isoperformance allows finding an acceptable design that is well "balanced" and does not place undue burden on a single subsystem. An error budget analysis shows the contributions of individual disturbance sources. This paper might be helpful in analyzing similar, innovative space telescope systems in the future.
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chow, Chi-Wai; Chiang, Ming-Feng; Shih, Fu-Yuan; Pan, Ci-Ling
2011-09-01
In a wavelength division multiplexed-passive optical network (WDM-PON), different fiber lengths and optical components introduce different power budgets at different optical networking units (ONUs). In addition, decay of the distributed optical carrier power from the optical line terminal, owing to aging of the optical transmitter, can also reduce the power injected into the ONU. In this work, we propose and demonstrate a carrier-distributed WDM-PON using a reflective semiconductor optical amplifier-based ONU that can adjust its upstream data rate to accommodate different injected optical powers. The WDM-PON is evaluated at standard reach (25 km) and long reach (100 km). Bit-error rate measurements at different injected optical powers and transmission lengths show that by adjusting the upstream data rate of the system (622 Mb/s, 1.25 Gb/s, and 2.5 Gb/s), error-free (<10^-9) operation can still be achieved when the power budget drops.
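The power-budget bookkeeping behind such a link can be sketched in dB terms. The launch power, receiver sensitivity, and loss figures below are illustrative assumptions, not the paper's measured values:

```python
def link_margin_db(tx_power_dbm, rx_sensitivity_dbm, losses_db):
    """Link margin = transmit power - receiver sensitivity - total losses,
       all in dB/dBm. A positive margin means the receiver still sees
       enough power for the target bit-error rate."""
    return tx_power_dbm - rx_sensitivity_dbm - sum(losses_db)

# e.g. 0 dBm launch, -28 dBm sensitivity at the chosen upstream rate,
# 0.2 dB/km over 25 km of fiber plus 3 dB of component loss:
margin = link_margin_db(0.0, -28.0, [0.2 * 25, 3.0])   # 20 dB of margin
```

Dropping to a lower upstream rate improves the receiver sensitivity (a more negative dBm figure), which is exactly how the proposed ONU trades data rate for power budget when the injected power falls.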
Error Budgeting and Tolerancing of Starshades for Exoplanet Detection
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Noecker, M. Charley; Glassman, Tiffany; Lo, Amy S.; Dumont, Philip J.; Kasdin, N. Jeremy; Cady, Eric J.; Vanderbei, Robert; Lawson, Peter R.
2010-01-01
A flower-like starshade positioned between a star and a space telescope is an attractive option for blocking the starlight to reveal the faint reflected light of an orbiting Earth-like planet. Planet light passes around the petals and directly enters the telescope where it is seen along with a background of scattered light due to starshade imperfections. We list the major perturbations that are expected to impact the performance of a starshade system and show that independent models at NGAS and JPL yield nearly identical optical sensitivities. We give the major sensitivities in the image plane for a design consisting of a 34-m diameter starshade, and a 2-m diameter telescope separated by 39,000 km, operating between 0.25 and 0.55 um. These sensitivities include individual petal and global shape terms evaluated at the inner working angle. Following a discussion of the combination of individual perturbation terms, we then present an error budget that is consistent with detection of an Earth-like planet 26 magnitudes fainter than its host star.
Optical signal monitoring in phase modulated optical fiber transmission systems
NASA Astrophysics Data System (ADS)
Zhao, Jian
Optical performance monitoring (OPM) is one of the essential functions for future high-speed optical networks. Among the parameters to be monitored, chromatic dispersion (CD) is especially important since it has a significant impact on overall system performance. In this thesis, effective CD monitoring approaches for phase-shift keying (PSK) based optical transmission systems are investigated. A number of monitoring schemes based on radio frequency (RF) spectrum analysis and delay-tap sampling are proposed and their performance evaluated. A method for dispersion monitoring of differential phase-shift keying (DPSK) signals based on RF power detection is studied. The RF power is found to increase with increasing CD and decrease with polarization mode dispersion (PMD). The spectral power density dependence on CD is studied theoretically and then verified through simulations and experiments. The monitoring sensitivity for nonreturn-to-zero differential phase-shift keying (NRZ-DPSK) and return-to-zero differential phase-shift keying (RZ-DPSK) based systems can reach 80ps/nm/dB and 34ps/nm/dB, respectively. The scheme enables the monitoring of differential group delay (DGD) and CD simultaneously. The monitoring sensitivity of CD and DGD can reach 56.7ps/nm/dB and 3.1ps/dB using a bandpass filter. The effects of optical signal-to-noise ratio (OSNR), DGD, fiber nonlinearity and chirp on the monitoring results are investigated. Two RF pilot tones are employed for CD monitoring of DPSK signals. Specially selected pilot-tone frequencies enable good monitoring sensitivity with minimal influence on the received signals. A dynamic range exceeding 35dB and a monitoring sensitivity up to 9.5ps/nm/dB are achieved. An asynchronous sampling technique is employed for CD monitoring. A signed CD monitoring method for 10Gb/s NRZ-DPSK and RZ-DPSK systems using the asynchronous delay-tap sampling technique is studied.
The demodulated signals suffer asymmetric waveform distortion if there is a phase error (Δφ) in the delay interferometer (DI) and residual CD is present. Using delay-tap sampling, the scatter plots reflect this signal distortion through their asymmetric characteristics. A distance ratio (DR) is defined to represent the change of the scatter plots, which is directly related to the accumulated CD. The monitoring range can be up to +/-400ps/nm and +/-720ps/nm for 10Gb/s NRZ-DPSK and RZ-DPSK signals with a 45° phase error in the DI. The monitoring sensitivity reaches +/-8ps/nm and CD polarity discrimination is realized. It is found that the signal degradation is related to the increase of the absolute value of CD or phase mismatch. The effect of different polarities of phase error on CD monitoring is also analyzed. The shoulder location depends on the sign of the product D·L·Δφ: if D·L·Δφ > 0 the shoulder appears on the trailing edge, whereas if D·L·Δφ < 0 it appears on the leading edge. The analysis shows that the phase error is equivalent to a frequency offset of the optical source, so signed frequency-offset monitoring is also demonstrated. The monitoring results show that the monitoring range can reach +/-2.2GHz and the monitoring sensitivity is around 27MHz. The effect of nonlinearity, OSNR and the bandwidth of the lowpass filter on the proposed monitoring method has also been studied. Signed CD monitoring for a 100Gb/s carrier-suppressed return-to-zero differential quadrature phase-shift keying (CSRZ-DQPSK) system based on the delay-tap sampling technique is demonstrated. The monitoring range and monitoring resolution can reach +/-32ps/nm and +/-8ps/nm, respectively. A signed CD and optical carrier wavelength monitoring scheme using a cross-correlation method for on-off keying (OOK) wavelength division multiplexing (WDM) systems is proposed and demonstrated.
CD monitoring sensitivity is high and can be less than 10% of the bit period. Wavelength monitoring is implemented using the proposed approach. The monitoring results show that the sensitivity can reach up to 1.37ps/GHz.
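A monitoring sensitivity quoted in ps/nm/dB, as in the figures above, is the dispersion change corresponding to one dB of detected RF power change, i.e. the inverse slope of the power-vs-dispersion response. A minimal sketch, using synthetic data rather than the thesis measurements:

```python
import numpy as np

# Illustrative sketch: extracting a ps/nm/dB monitoring sensitivity from an
# RF-power-vs-dispersion curve via a linear fit. Sample points are synthetic.

def sensitivity_ps_nm_per_db(cd_ps_nm, rf_power_db):
    """ps/nm of accumulated CD per dB of RF power change (inverse fit slope)."""
    slope = np.polyfit(cd_ps_nm, rf_power_db, 1)[0]   # dB per (ps/nm)
    return 1.0 / slope

cd = [0, 200, 400, 600, 800]               # accumulated CD [ps/nm]
rf = [-30.0, -27.5, -25.0, -22.5, -20.0]   # detected RF power [dB]
print(f"{sensitivity_ps_nm_per_db(cd, rf):.1f} ps/nm/dB")
```

A shallower power response (smaller slope) thus means a larger ps/nm/dB number, i.e. a coarser monitor.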
Demonstrating Starshade Performance as Part of NASA's Technology Development for Exoplanet Missions
NASA Astrophysics Data System (ADS)
Kasdin, N. Jeremy; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M. W.; Walkemeyer, P. E.; Bach, V. M.; Oakes, E.; Cady, E. J.; Martin, S. R.; Marchen, L. F.; Macintosh, B.; Rudd, R.; Mikula, J. A.; Lynch, D. H.
2012-01-01
In this poster we describe the results of our project to design, manufacture, and measure a prototype starshade petal as part of the Technology Development for Exoplanet Missions (TDEM) program. An external occulter is a satellite employing a large screen, or starshade, that flies in formation with a spaceborne telescope to provide the starlight suppression needed for detecting and characterizing exoplanets. Among the advantages of using an occulter are the broad bandwidth allowed for characterization and the removal of starlight before it reaches the observatory, greatly relaxing the requirements on the telescope and instrument. In this first two-year phase we focused on the key requirement of manufacturing a precision petal with the tolerances needed to meet the overall error budget. These tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation among these error sources. We show the results of this analysis and a representative error budget. We also present the final manufactured occulter petal and the metrology on its shape demonstrating that it meets requirements. We show that a space occulter built of petals with the same measured shape would achieve better than 1e-9 contrast. We also show our progress in building and testing sample edges with the sharp radius of curvature needed for limiting solar glint. Finally, we describe our plans for the second TDEM phase.
NASA Astrophysics Data System (ADS)
Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.
2017-10-01
We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.
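The core bookkeeping of a spatial-frequency error budget is that each error term is a residual phase power spectral density (PSD), and its variance contribution is the PSD integrated over a band of spatial frequencies. A minimal sketch, where the Kolmogorov-like PSD shape and its scale are assumptions, not the paper's temporally filtered expressions:

```python
import numpy as np

# Hedged sketch of spatial-frequency error budgeting: integrate an isotropic
# 2-D residual phase PSD over an annulus of spatial frequencies to get the
# variance contribution of that band. PSD model and scale are illustrative.

def band_variance(psd_fn, f_lo, f_hi, n=2000):
    """Variance from integrating an isotropic 2-D PSD over f_lo <= |f| < f_hi."""
    f = np.linspace(f_lo, f_hi, n, endpoint=False)
    df = f[1] - f[0]
    return float(np.sum(psd_fn(f) * 2.0 * np.pi * f) * df)  # annular measure 2*pi*f df

psd = lambda f: 0.01 * f ** (-11.0 / 3.0)    # Kolmogorov-like residual, assumed scale
servo_band = band_variance(psd, 0.05, 2.0)   # within the AO-corrected band
fitting = band_variance(psd, 2.0, 20.0)      # beyond the DM cutoff: fitting error
print(f"in-band {servo_band:.3f} rad^2, fitting {fitting:.3f} rad^2")
```

Summing such band variances across error terms, and converting rad^2 to nm rms at the science wavelength, gives the roll-up the abstract refers to.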
2000-02-01
NASA Astrophysics Data System (ADS)
Milner, Darrin; Didona, Kevin; Bannon, David
2005-04-01
With the introduction of wavelength division multiplexing and dense wavelength division multiplexing, equipment manufacturers have sought to reduce design tradeoffs and costs while maintaining or increasing product performance. With the need to reduce, if not eliminate, optical losses and create an all-optical light path from source to destination, equipment manufacturers are looking to component manufacturers to provide increased performance to support configurable designs for 100, 50, and eventually 12.5GHz channel spacing. One of the most reliable, robust, and high-performance devices is the low polarization dependent loss (LPDL) diffraction grating used to disperse wavelengths for channel blocking, add/drop functionality, and real-time light-path reconfiguration. Today's networks have a variety of factors that contribute to the optical loss budget and impact system design cost, facility requirements, and maintenance or replacement costs. These factors include first- and second-order polarization mode dispersion (PMD), polarization dependent loss (PDL), wavelength dependent losses, and chromatic dispersion (CD). Network designers and equipment manufacturers have to consider each component's capability and its impact on the system's bit error rate (BER). To convey the advantages of components with low polarization dependency, we summarize the effects that interplay with these types of components.
Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium Class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter, which will provide the remaining energy to send MAP out to a lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper discusses the results of the analyses on three separate phases of the MAP mission - recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
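The Monte-Carlo idea above can be sketched as repeated trials in which each maneuver carries a random execution error that must later be corrected, with the propellant (delta-v) budget set at a high percentile of the trial distribution rather than the mean. All magnitudes and error levels below are illustrative, not MAP mission values.

```python
import random
import statistics

# Hedged Monte-Carlo sketch of folding maneuver execution errors into a
# delta-v budget. Maneuver count, nominal delta-v, and error fraction are
# assumed values for illustration only.

random.seed(1)

def total_dv(n_maneuvers=4, nominal_dv=20.0, exec_error_frac=0.05):
    """One trial: each maneuver's magnitude error is assumed to require an
    equal corrective delta-v later, added on top of the nominal budget."""
    dv = 0.0
    for _ in range(n_maneuvers):
        error = abs(random.gauss(0.0, exec_error_frac * nominal_dv))
        dv += nominal_dv + error
    return dv

trials = [total_dv() for _ in range(5000)]
p99 = statistics.quantiles(trials, n=100)[98]   # 99th-percentile requirement
print(f"mean {statistics.mean(trials):.1f} m/s, 99th percentile {p99:.1f} m/s")
```

Budgeting to a percentile is what lets a fixed propellant load cover launch-vehicle dispersions and phasing-loop errors with stated confidence.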
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes yield very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
Cervical sensorimotor control in idiopathic cervical dystonia: A cross-sectional study.
De Pauw, Joke; Mercelis, Rudy; Hallemans, Ann; Michiels, Sarah; Truijen, Steven; Cras, Patrick; De Hertogh, Willem
2017-09-01
Patients with idiopathic adult-onset cervical dystonia (CD) experience an abnormal head posture and involuntary muscle contractions. Although the exact areas affected in the central nervous system remain uncertain, impaired functions in systems stabilizing the head and neck are apparent, such as the somatosensory and sensorimotor integration systems. The aim of the study is to investigate cervical sensorimotor control dysfunction in patients with CD. Cervical sensorimotor control was assessed by a head repositioning task in 24 patients with CD and 70 asymptomatic controls. Blindfolded participants were asked to reposition their head to a previously memorized neutral head position (NHP) following an active movement (flexion, extension, left, and right rotation). The repositioning error (joint position error, JPE) was registered via 3D motion analysis with an eight-camera infrared system (VICON® T10). Disease-specific characteristics of all patients were obtained via the Tsui scale, Cervical Dystonia Impact Profile (CDIP-58), and Toronto Western Spasmodic Rating Scale. Patients with CD showed a larger JPE than controls (mean difference of 1.5°, p < .006) and systematically overshot, i.e., surpassed the NHP, whereas control subjects undershot, i.e., fell short of the NHP. The JPE did not correlate with disease-specific characteristics. Cervical sensorimotor control is impaired in patients with CD. As cervical sensorimotor control can be trained, this might be a potential treatment option, adjuvant to botulinum toxin injections.
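The overshoot/undershoot distinction above is a matter of the sign of the repositioning error relative to the direction of the preceding movement. A minimal sketch of such a signed joint position error, with synthetic angles rather than study data:

```python
# Minimal sketch of a signed joint position error (JPE) for a head
# repositioning task: a subject who passes the neutral head position (NHP,
# taken as 0 deg) has overshot; one who stops short has undershot. The sign
# convention and example angles are illustrative assumptions.

def signed_jpe(returned_angle_deg, movement_direction):
    """Positive = overshoot past the NHP, negative = undershoot.
    movement_direction is +1 or -1 for the sense of the preceding rotation."""
    return -movement_direction * returned_angle_deg

# After a rotation in the +1 direction: stopping at +3 deg falls short of the
# NHP (undershoot), while stopping at -2 deg has passed it (overshoot).
print(signed_jpe(3.0, +1))    # -3.0 (undershoot)
print(signed_jpe(-2.0, +1))   #  2.0 (overshoot)
```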
Dalton, Melinda S.; Aulenbach, Brent T.; Torak, Lynn J.
2004-01-01
Lake Seminole is a 37,600-acre impoundment formed at the confluence of the Flint and Chattahoochee Rivers along the Georgia-Florida State line. Outflow from Lake Seminole through Jim Woodruff Lock and Dam provides headwater to the Apalachicola River, which is a major supply of freshwater, nutrients, and detritus to ecosystems downstream. These rivers, together with their tributaries, are hydraulically connected to karst limestone units that constitute most of the Upper Floridan aquifer and to a chemically weathered residuum of undifferentiated overburden. The ground-water flow system near Lake Seminole consists of the Upper Floridan aquifer and undifferentiated overburden. The aquifer is confined below by low-permeability sediments of the Lisbon Formation and, generally, is semiconfined above by undifferentiated overburden. Ground-water flow within the Upper Floridan aquifer is unconfined or semiconfined and discharges at discrete points by springflow or diffuse leakage into streams and other surface-water bodies. The high degree of connectivity between the Upper Floridan aquifer and surface-water bodies is limited to the upper Eocene Ocala Limestone and younger units that are in contact with streams in the Lake Seminole area. The impoundment of Lake Seminole inundated natural stream channels and other low-lying areas near streams and raised the water-level altitude of the Upper Floridan aquifer near the lake to nearly that of the lake, about 77 feet. Surface-water inflow from the Chattahoochee and Flint Rivers and Spring Creek and outflow to the Apalachicola River through Jim Woodruff Lock and Dam dominate the water budget for Lake Seminole. About 81 percent of the total water-budget inflow consists of surface water; about 18 percent is ground water, and the remaining 1 percent is lake precipitation.
Similarly, lake outflow consists of about 89 percent surface water, as flow to the Apalachicola River through Jim Woodruff Lock and Dam, about 4 percent ground water, and about 2 percent lake evaporation. Measurement error and uncertainty in flux calculations cause a flow imbalance of about 4 percent between inflow and outflow water-budget components. Most of this error can be attributed to errors in estimating ground-water discharge from the lake, which was calculated using a ground-water model calibrated to October 1986 conditions for the entire Apalachicola-Chattahoochee-Flint River Basin and not just the area around Lake Seminole. Evaporation rates were determined using the preferred, but mathematically complex, energy budget and five empirical equations: Priestley-Taylor, Penman, DeBruin-Keijman, Papadakis, and the Priestley-Taylor used by the Georgia Automated Environmental Monitoring Network. Empirical equations require a significant amount of data but are relatively easy to calculate and compare well to long-term average annual (April 2000-March 2001) pan evaporation, which is 65 inches. Calculated annual lake evaporation, for the study period, using the energy-budget method was 67.2 inches, which overestimated long-term average annual pan evaporation by 2.2 inches. The empirical equations did not compare well with the energy-budget method during the 18-month study period, with average differences in computed evaporation using each equation ranging from 8 to 26 percent. The empirical equations also compared poorly with long-term average annual pan evaporation, with average differences in evaporation ranging from 3 to 23 percent. Energy budget and long-term average annual pan evaporation estimates did compare well, with only a 3-percent difference between estimates. Monthly evaporation estimates using all methods ranged from 0.7 to 9.5 inches and were lowest during December 2000 and highest during May 2000.
Although the energy budget is generally the preferred method, the dominance of surface water in the Lake Seminole water budget makes the method inaccurate and difficult to use, because surface water makes up m
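The Priestley-Taylor estimate named above is one of the simpler empirical equations compared in the study. A hedged sketch of the common FAO-style formulation follows; the coefficients are standard textbook values, and the sample inputs are illustrative, not Lake Seminole measurements.

```python
import math

# Sketch of the Priestley-Taylor evaporation estimate (mm/day) from net
# radiation Rn, ground heat flux G (both MJ/m^2/day), and air temperature.
# Coefficients follow the common FAO-style formulation; inputs are examples.

def priestley_taylor(rn_mj_m2_day, g_mj_m2_day, temp_c, alpha=1.26):
    """Evaporation in mm/day via the Priestley-Taylor equation."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # sat. vapor pressure, kPa
    delta = 4098.0 * es / (temp_c + 237.3) ** 2                # slope, kPa/degC
    gamma = 0.066                                              # psychrometric const., kPa/degC
    lam = 2.45                                                 # latent heat of vaporization, MJ/kg
    return alpha * (delta / (delta + gamma)) * (rn_mj_m2_day - g_mj_m2_day) / lam

print(f"{priestley_taylor(15.0, 1.0, 25.0):.1f} mm/day")
```

Equations of this family need only radiation and temperature data, which is why the study could compare them against the data-hungry energy-budget method.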
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, which is also adversely affected by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses that are appropriate for a climate assessment at regional scales, a regional spectral model has used a combination of precipitation assimilation with scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used NCEP/DOE R2 global reanalysis as the initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution, every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of a precipitation assimilation scheme on cumulus-convection parameterization, and improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
NASA Astrophysics Data System (ADS)
Evrard, Rebecca L.; Ding, Yifeng
2018-01-01
Clouds play a large role in the Earth's global energy budget, but the impact of cirrus clouds is still widely questioned and researched. Cirrus clouds reside high in the atmosphere and due to cold temperatures are comprised of ice crystals. Gaining a better understanding of ice cloud optical properties and the distribution of cirrus clouds provides an explanation for the contribution of cirrus clouds to the global energy budget. Using radiative transfer models (RTMs), accurate simulations of cirrus clouds can enhance the understanding of the global energy budget as well as improve the use of global climate models. A newer, faster RTM such as the visible infrared imaging radiometer suite (VIIRS) fast radiative transfer model (VFRTM) is compared to a rigorous RTM such as the line-by-line radiative transfer model plus the discrete ordinates radiative transfer program. By comparing brightness temperature (BT) simulations from both models, the accuracy of the VFRTM can be obtained. This study shows root-mean-square error <0.2 K for BT difference using reanalysis data for atmospheric profiles and updated ice particle habit information from the moderate-resolution imaging spectroradiometer collection 6. At a higher resolution, the simulated results of the VFRTM are compared to the observations of VIIRS resulting in a <1.5 % error from the VFRTM for all cases. The VFRTM is validated and is an appropriate RTM to use for global cloud retrievals.
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs that present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually.
The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which link directly to science requirements.
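One reason a plain root-sum-of-squares tree falls short for coupled designs is that correlated subsystems contribute a cross term the quadrature sum ignores. A minimal sketch, with illustrative wavefront-error allocations rather than NGST values:

```python
import math

# Sketch of why a plain RSS error tree can understate coupled errors: with
# correlation coefficient rho between two terms, the combined variance gains
# a cross term 2*rho*a*b. Allocations in nm RMS are illustrative assumptions.

def combined_rms(a, b, rho=0.0):
    """RMS of two error terms with correlation coefficient rho."""
    return math.sqrt(a * a + b * b + 2.0 * rho * a * b)

thermal, dynamic = 30.0, 40.0   # nm RMS, assumed allocations
print(combined_rms(thermal, dynamic, rho=0.0))  # classic RSS
print(combined_rms(thermal, dynamic, rho=0.5))  # coupled subsystems: larger
```

Integrated simulation captures such couplings directly instead of assuming independence term by term.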
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao; Snow, Patrick W.; Vaid, Alok; Solecky, Eric; Zhou, Hua; Ge, Zhenhua; Yasharzade, Shay; Shoval, Ori; Adan, Ofer; Schwarzband, Ishai; Bar-Zvi, Maayan
2015-03-01
Traditional metrology solutions face a range of challenges at the 1X node, such as three-dimensional (3D) measurement capabilities, shrinking overlay and critical dimension (CD) error budgets driven by multi-patterning, and via-in-trench CD measurements. Hybrid metrology offers promising new capabilities to address some of these challenges, but it will take some time before it is fully realized. This paper explores new capabilities currently offered on the in-line Critical Dimension Scanning Electron Microscope (CD-SEM) to address these challenges and enable the CD-SEM to move beyond measuring bottom CD using top-down imaging. Device performance is strongly correlated with fin geometry, creating an urgent need for 3D measurements. New beam-tilting capabilities enhance the ability to make 3D measurements in the front-end-of-line (FEOL) of the metal-gate FinFET process in manufacturing. We explore these new capabilities for measuring fin height and build upon the work communicated last year at SPIE [1]. Furthermore, we extend the application of the tilt beam to back-end-of-line (BEOL) trench depth measurement and demonstrate its capability in production, targeting replacement of the existing Atomic Force Microscope (AFM) measurements by including the height measurement in the existing CD-SEM recipe to reduce fab cycle time. In the BEOL, another increasingly challenging measurement for the traditional CD-SEM is the bottom CD of the self-aligned via (SAV) in a trench-first via-last (TFVL) process. Due to the extremely high aspect ratio of the structure, secondary electron (SE) collection from the via bottom is significantly reduced, requiring the use of backscattered electrons (BSE) to increase the relevant image quality. Even with this solution, the resulting images are difficult to measure at advanced technology nodes.
We explore new methods to increase measurement robustness and combine them with a novel segmentation-based measurement algorithm developed specifically for BSE images. The results are contrasted with data from previously used methods to quantify the improvement. We also compare the results to electrical test data to evaluate and quantify the measurement performance improvements. Lastly, according to the 2013 International Technology Roadmap for Semiconductors (ITRS), the overlay 3-sigma requirement will be 3.3 nm in 2015 and 2.9 nm in 2016. Advanced lithography requires in-die overlay measurement on features resembling the device geometry. However, current optical overlay measurement is performed in the scribe line on large targets due to the optical diffraction limit. In some cases, this limits the usefulness of the measurement since it does not represent the true behavior of the device. We explore using high-voltage imaging to help address this urgent need. Novel CD-SEM-based overlay targets that accommodate the restrictions of process geometry and SEM technique were designed and spread across the die. Measurements are made on these new targets both after photolithography and after etch, and correlation is drawn between the two measurements. These results are also compared to conventional optical overlay measurement approaches, and we discuss the possibility of using this capability in high-volume manufacturing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free-electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast- and slow-scale field bowing, field error level, beam-position-monitor error level, gap errors, defocusing errors, energy slew, and displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random-number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
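The seed-dependence point above can be sketched by running the same error level under several random-number seeds and reporting the spread of the performance estimate. The "gain model" below is a toy stand-in for the FELEX simulations, with wholly illustrative coefficients.

```python
import random
import statistics

# Hedged sketch: with random field errors, each RNG seed yields a different
# relative-gain estimate, so several seeds per error level expose the
# stochastic spread. The quadratic degradation model is a toy assumption.

def relative_gain(error_level, seed):
    rng = random.Random(seed)   # deterministic per seed, as in the study's reruns
    # toy model: gain degrades quadratically with error, plus seed-dependent noise
    return 1.0 - error_level ** 2 - rng.gauss(0.0, 0.2 * error_level)

level = 0.3
gains = [relative_gain(level, seed) for seed in range(10)]
print(f"mean {statistics.mean(gains):.3f}, spread {statistics.pstdev(gains):.3f}")
```

Plotting mean and spread versus error level, rather than a single trace, is what separates the systematic tolerance curve from seed-to-seed scatter.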
Cruz-Adalia, Aránzazu; Ramirez-Santiago, Guillermo; Osuna-Pérez, Jesús; Torres-Torresano, Mónica; Zorita, Virgina; Martínez-Riaño, Ana; Boccasavia, Viola; Borroto, Aldo; Martínez Del Hoyo, Gloria; González-Granado, José María; Alarcón, Balbino; Sánchez-Madrid, Francisco; Veiga, Esteban
2018-01-31
The original version of this Article contained an error in the spelling of the author José María González-Granado, which was incorrectly given as José María Gozález-Granado. This has now been corrected in both the PDF and HTML versions of the Article.
Legind, Charlotte N.; Rein, Arno; Serre, Jeanne; Brochier, Violaine; Haudin, Claire-Sophie; Cambier, Philippe; Houot, Sabine; Trapp, Stefan
2012-01-01
The water budget of soil, the uptake in plants and the leaching to groundwater of cadmium (Cd) and lead (Pb) were simulated simultaneously using a physiological plant uptake model and a tipping-bucket water and solute transport model for soil. Simulations were compared to results from a ten-year experimental field study, in which four organic amendments were applied every second year. Predicted concentrations slightly decreased (Cd) or stagnated (Pb) in control soils, but increased in amended soils by about 10% (Cd) and 6% to 18% (Pb). Estimated plant uptake was lower in amended plots, due to an increase of Kd (dry soil to water partition coefficient). Predicted concentrations in plants were close to measured levels in plant residues (straw), but higher than measured concentrations in grains. Initially, Pb was mainly predicted to deposit from air into plants (82% in 1998); in the following years, uptake from soil became dominant (30% from air in 2006), because of decreasing levels in air. For Cd, predicted uptake from air into plants was negligible (1-5%). PMID:23056555
Econometric models for predicting confusion crop ratios
NASA Technical Reports Server (NTRS)
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... be submitted to the Office of Management and Budget (OMB) for review and approval. Proposed... size to evaluate the measurement error structure of the diet and physical activity assessment... on cancer research, diagnosis, prevention and treatment. Dietary and physical activity data will be...
Semiannual Report to Congress, No. 49. April 1, 2004-September 30, 2004
ERIC Educational Resources Information Center
US Department of Education, 2004
2004-01-01
This report highlights significant work of the U.S. Department of Education's Office of Inspector General for the 6-month period ending September 30, 2004. Sections include: Activities and Accomplishments; Elimination of Fraud and Error in Student Aid Programs; Budget and Performance Integration; Financial Management; Expanded Electronic…
Prediction errors in wildland fire situation analyses.
Geoffrey H. Donovan; Peter Noordijk
2005-01-01
Wildfires consume budgets and put the heat on fire managers to justify and control suppression costs. To determine the appropriate suppression strategy, land managers must conduct a wildland fire situation analysis (WFSA) when:A wildland fire is expected to or does escape initial attack,A wildland fire managed for resource benefits...
Resource-Bounded Information Gathering for Correlation Clustering
2007-01-01
...budgeted learning [4], and active learning, for example [3]. Acknowledgments: we thank Avrim Blum, Katrina Ligett, Chris Pal, Sridhar... References: 3. N. Roy and A. McCallum, "Toward Optimal Active Learning through Sampling Estimation of Error Reduction," Proc. 18th ICML, 2001; 4. A. Kapoor, R...
1985-12-20
Approved for public dissemination. Keywords: fix estimation, statistical assumptions, error budget, unmodeled errors, coding. The report examines the underlying algorithms of the current system.
Soil Carbon Budget During Establishment of Short Rotation Woody Crops
NASA Astrophysics Data System (ADS)
Coleman, M. D.
2003-12-01
Carbon budgets were monitored following forest harvest and during re-establishment of short rotation woody crops. Soil CO2 efflux was monitored using infrared gas analyzer methods, fine-root production was estimated with minirhizotrons, aboveground litter inputs were trapped, coarse-root inputs were estimated with developed allometric relationships, and soil carbon pools were measured in loblolly pine and cottonwood plantations. Our carbon budget allows evaluation of errors, as well as quantification of pools and fluxes in developing stands during non-steady-state conditions. Soil CO2 efflux was larger than the combined inputs from aboveground litter fall and root production. Fine-root production increased during stand development; however, mortality was not yet equivalent to production, showing that the belowground carbon budget was not yet in equilibrium and root carbon standing crop was accruing. Belowground production was greater in cottonwood than in pine, but the level of pine soil CO2 efflux was equal to or greater than that of cottonwood, indicating that heterotrophic respiration was higher for pine. Comparison of unaccounted efflux with soil organic carbon changes provides verification of loss or accrual.
Cost-effectiveness of the stream-gaging program in Maryland, Delaware, and the District of Columbia
Carpenter, David H.; James, R.W.; Gillen, D.F.
1987-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maryland, Delaware, and the District of Columbia. Data uses and funding sources were identified for 99 continuously operated stream gages in Maryland, Delaware, and the District of Columbia. The current operation of the program requires a budget of $465,260/year. The average standard error of estimation of streamflow records is 11.8%. It is shown that this overall level of accuracy at the 99 sites could be maintained with a budget of $461,000, if resources were redistributed among the gages. (USGS)
Saha, Amartya K.; Moses, Christopher S.; Price, Rene M.; Engel, Victor; Smith, Thomas J.; Anderson, Gordon
2012-01-01
Water budget parameters are estimated for Shark River Slough (SRS), the main drainage within Everglades National Park (ENP) from 2002 to 2008. Inputs to the water budget include surface water inflows and precipitation while outputs consist of evapotranspiration, discharge to the Gulf of Mexico and seepage losses due to municipal wellfield extraction. The daily change in volume of SRS is equated to the difference between input and outputs yielding a residual term consisting of component errors and net groundwater exchange. Results predict significant net groundwater discharge to the SRS peaking in June and positively correlated with surface water salinity at the mangrove ecotone, lagging by 1 month. Precipitation, the largest input to the SRS, is offset by ET (the largest output); thereby highlighting the importance of increasing fresh water inflows into ENP for maintaining conditions in terrestrial, estuarine, and marine ecosystems of South Florida.
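The residual term described in this abstract can be sketched as a simple daily balance: the change in storage minus (inputs minus outputs), with the remainder lumping component errors and net groundwater exchange. The function and numbers below are hypothetical, not the study's actual values.

```python
# Sketch of a daily water-budget residual of the kind described above.
# Residual = dV - (inputs - outputs); it lumps together component errors
# and net groundwater exchange, per the study's formulation.

def daily_residual(dV, inflow, precip, et, discharge, seepage):
    """All terms in the same volumetric units (e.g., m^3/day)."""
    inputs = inflow + precip
    outputs = et + discharge + seepage
    return dV - (inputs - outputs)

# Example day: storage gain exceeds what the measured fluxes explain,
# suggesting net groundwater discharge into the slough (positive residual).
r = daily_residual(dV=5.0e5, inflow=1.2e6, precip=8.0e5, et=9.0e5,
                   discharge=7.0e5, seepage=5.0e4)
print(r)  # 1.5e5 m^3/day of unexplained input
```

A positive residual of this kind, peaking seasonally, is the signature the study interprets as net groundwater discharge.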
Observing the earth radiation budget from satellites - Past, present, and a look to the future
NASA Technical Reports Server (NTRS)
House, F. B.
1985-01-01
Satellite measurements of the radiative exchange between the planet earth and space have been the objective of many experiments since the beginning of the space age in the late 1950's. The on-going mission of the Earth Radiation Budget (ERB) experiments has been and will be to consider flight hardware, data handling and scientific analysis methods in a single design strategy. Research and development on observational data has produced an analysis model of errors associated with ERB measurement systems on polar satellites. Results show that the variability of reflected solar radiation from changing meteorology dominates measurement uncertainties. As an application, model calculations demonstrate that measurement requirements for the verification of climate models may be satisfied with observations from one polar satellite, provided there is information on diurnal variations of the radiation budget from the ERBE mission.
NASA Astrophysics Data System (ADS)
Khan, Yousaf; Afridi, Muhammad Idrees; Khan, Ahmed Mudassir; Rehman, Waheed Ur; Khan, Jahanzeb
2014-09-01
Hybrid wavelength-division multiplexed/time-division multiplexed passive optical access networks (WDM/TDM-PONs) combine the advanced features of both WDM and TDM PONs to provide a cost-effective access network solution. We demonstrate and analyze the transmission performance and power budget of a colorless hybrid WDM/TDM-PON scheme. A 10-Gb/s downstream differential phase shift keying (DPSK) signal and remodulated upstream on/off keying (OOK) data signals are transmitted over 25 km of standard single-mode fiber. Simulation results show error-free transmission with adequate power margins in both downstream and upstream directions, which demonstrates the applicability of the proposed scheme to future passive optical access networks. The power budget constrains both the PON splitting ratio and the distance between the optical line terminal (OLT) and optical network unit (ONU).
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.
2015-12-02
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
Scanner qualification with IntenCD based reticle error correction
NASA Astrophysics Data System (ADS)
Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan
2010-03-01
Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed, offering full mask coverage and simultaneous, accurate assessment of all mask-induced error sources, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.
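The map subtraction described above can be sketched as follows. All numbers are synthetic, and the magnification and MEEF values are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

# Illustrative sketch of the map-subtraction idea described above: remove the
# mask-induced CD non-uniformity (e.g., from a full-mask IntenCD-style map)
# from the printed-wafer CDU map to approximate the scanner-induced component.

MAG = 0.25   # 4x-reduction scanner (assumed)
MEEF = 1.2   # assumed mask error enhancement factor

rng = np.random.default_rng(0)
mask_cdu = rng.normal(0.0, 0.3, size=(11, 13))       # nm, mask-scale CD errors
scanner_only = rng.normal(0.0, 0.2, size=(11, 13))   # nm, component to isolate
wafer_cdu = MAG * MEEF * mask_cdu + scanner_only     # synthetic wafer measurement

scanner_est = wafer_cdu - MAG * MEEF * mask_cdu      # subtract the mask contribution
print(np.allclose(scanner_est, scanner_only))        # True in this noise-free construction
```

In practice the subtraction is only approximate, since measurement noise and mask effects beyond linewidth (transmission, phase) also enter the wafer map, which is the limitation the abstract notes for discrete techniques.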
A Bayesian approach to multisource forest area estimation
Andrew O. Finley
2007-01-01
In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...
Cost-efficient selection of a marker panel in genetic studies
Jamie S. Sanderlin; Nicole Lazar; Michael J. Conroy; Jaxk Reeves
2012-01-01
Genetic techniques are frequently used to sample and monitor wildlife populations. The goal of these studies is to maximize the ability to distinguish individuals for various genetic inference applications, a process which is often complicated by genotyping error. However, wildlife studies usually have fixed budgets, which limit the number of geneticmarkers available...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Submission for OMB Review... Office of Management and Budget (OMB) a request to review and approve the information collection listed... measurement error structure of the diet and physical activity assessment instruments and the heterogeneity of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
... and by educating the public, especially young people, about tobacco products and the dangers their use... identified. When FDA receives tobacco-specific adverse event and product problem information, it will use the... quality problem, or product use error occurs. This risk identification process is the first necessary step...
ERIC Educational Resources Information Center
Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.
2014-01-01
Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…
Matthews, Grant
2004-12-01
The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. Using a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground-calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied to construct the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. The developed deconvolution method may also be highly applicable to enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
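A minimal Wiener-deconvolution sketch illustrates the kind of PSF deconvolution described above. The actual GERB processing used a custom diffraction-aberration telescope model; this generic filter and its SNR weighting are only an assumption for illustration.

```python
import numpy as np

def wiener_deconvolve(measured, psf, snr=100.0):
    """Estimate the underlying response from a PSF-blurred measurement.
    `measured` and `psf` are same-shaped 2-D arrays; `psf` is centered."""
    H = np.fft.fft2(np.fft.ifftshift(psf))        # transfer function
    G = np.fft.fft2(measured)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR) limits noise amplification
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * W))
```

Convolving a test scene with the PSF and then deconvolving should recover the scene's structure, with the 1/SNR term trading sharpness against noise amplification, the same trade-off the white-noise test-scene ensemble addresses.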
Precision VUV Spectro-Polarimetry for Solar Chromospheric Magnetic Field Measurements
NASA Astrophysics Data System (ADS)
Ishikawa, R.; Bando, T.; Hara, H.; Ishikawa, S.; Kano, R.; Kubo, M.; Katsukawa, Y.; Kobiki, T.; Narukage, N.; Suematsu, Y.; Tsuneta, S.; Aoki, K.; Miyagawa, K.; Ichimoto, K.; Kobayashi, K.; Auchère, F.; Clasp Team
2014-10-01
The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a VUV spectro-polarimeter optimized for measuring the linear polarization of the Lyman-α line (121.6 nm), to be launched in 2015 on NASA's sounding rocket (Ishikawa et al. 2011; Narukage et al. 2011; Kano et al. 2012; Kobayashi et al. 2012). With this experiment, we aim to (1) observe the scattering polarization in the Lyman-α line, (2) detect the Hanle effect, and (3) assess the magnetic fields in the upper chromosphere and transition region for the first time. The polarization measurement error consists of scale error δa (error in the amplitude of linear polarization), azimuth error Δφ (error in the direction of linear polarization), and spurious polarization ɛ (false linear polarization signals). The error ɛ should be suppressed below 0.1% in the Lyman-α core (121.567 nm ±0.02 nm) and 0.5% in the Lyman-α wing (121.567 nm ±0.05 nm), based on our scientific requirements shown in Table 2 of Kubo et al. (2014). From scientific justification, we adopt Δφ < 2° and δa < 10% as the instrument requirements. The spectro-polarimeter features a continuously rotating MgF2 waveplate (Ishikawa et al. 2013), a dual-beam spectrograph with a spherical grating working also as a beam splitter, and two polarization analyzers (Bridou et al. 2011), mounted 90 degrees from each other to measure two orthogonal polarizations simultaneously. For the optical layout of the CLASP instrument, see Figure 3 in Kubo et al. (2014). Considering the continuous rotation of the half-waveplate, the modulation efficiency is 0.64 for both Stokes Q and U. All the raw data are returned, and demodulation (successive addition or subtraction of images) is done on the ground. We control the CLASP polarization performance in the following three steps.
First, we evaluate the throughput and polarization properties of each optical component in the Lyman-α line using the Ultraviolet Synchrotron ORbital Radiation Facility (UVSOR) at the Institute for Molecular Science. The second step is polarization calibration of the spectro-polarimeter after alignment. Since the spurious polarization caused by the axisymmetric telescope is estimated to be negligibly small because of the symmetry (Ishikawa et al. 2014), we do not perform an end-to-end polarization calibration. As the final step, before the scientific observation near the limb, we make a short observation at Sun center to verify the polarization sensitivity, because the scattering polarization is expected to be close to zero at Sun center due to the symmetric geometry. To clarify whether we will be able to achieve the required polarization sensitivity and accuracy via these steps, we carry out a polarization error budget exercise, investigating all possible causes of polarization error and their magnitudes, not all of which can be verified by the polarization calibration. Based on these error budgets, we conclude that a polarization sensitivity of 0.1% in the line core, δa < 10%, and Δφ < 2° can be achieved when combined with the polarization calibration of the spectro-polarimeter and the onboard calibration at Sun center (see Ishikawa et al. 2014 for details). We are currently conducting verification tests of the flight components and developing the UV light source for the polarization calibration. From spring 2014, we will begin the integration, alignment, and calibration. We will update the error budgets throughout the course of these tests.
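An error budget exercise of the kind described above typically combines independent spurious-polarization contributions in quadrature (root-sum-square) and checks the total against the requirement. The contributor names and magnitudes below are hypothetical, not CLASP's actual budget entries.

```python
import math

# Hypothetical spurious-polarization budget entries, in percent.
contributions = {
    "waveplate retardation error": 0.04,
    "analyzer axis misalignment": 0.05,
    "detector gain drift": 0.03,
    "demodulation timing jitter": 0.02,
}

# RSS combination assumes the contributors are independent.
total = math.sqrt(sum(v ** 2 for v in contributions.values()))
print(f"RSS spurious polarization: {total:.3f}%")  # requirement: < 0.1% in the line core
```

If the RSS total exceeds the requirement, the largest contributors identify where tighter calibration or alignment is needed.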
Baumgart, Daniel C; le Claire, Marie
2016-01-01
Crohn's disease (CD) and ulcerative colitis (UC) challenge economies worldwide. Detailed health economic data on DRG-based academic inpatient care for inflammatory bowel disease (IBD) patients in Europe is unavailable. IBD was identified through the ICD-10 K50 and K51 code groups. We took an actual-costing approach, compared expenditures to G-DRG and non-DRG proceeds, and performed detailed cost center and cost type accounting to identify coverage determinants. Of all 3093 hospitalized cases at our department in 2012, 164 were CD and 157 were UC inpatients. On average, they were 44.1 (CD 44.9, UC 43.3 vs. all 58) years old, stayed 10.1 (CD 11.8, UC 8.4 vs. all 8) days, carried 5.8 (CD 6.4, UC 5.2 vs. all 6.8) secondary diagnoses, received 7.4 (CD 7.7, UC 7 vs. all 6.2) procedures, had a higher cost weight (CD 2.8, UC 2.4 vs. all 1.6), and required more intense nursing. Their care was more costly (mean total cost: IBD 8477€, CD 9051€, UC 7903€ vs. all 5078€). However, expenditures were not fully recovered by DRG proceeds (means: IBD 7413€, CD 8441€, UC 6384€ vs. all 4758€). We discovered substantial disease-specific mismatches in cost centers and types and identified the medical ward personnel and materials budgets as the most imbalanced. Non-DRG proceeds were almost double (IBD 16.1% vs. all 8.2%) but did not balance deficits in the total coverage analysis, which found that medications (antimicrobials, biologics, and blood products) and medical materials (mostly endoscopy items) contributed most to the deficit. DRGs challenge sophisticated IBD care.
Wilkinson, S N; Dougall, C; Kinsey-Henderson, A E; Searle, R D; Ellis, R J; Bartley, R
2014-01-15
The use of river basin modelling to guide mitigation of non-point source pollution of wetlands, estuaries and coastal waters has become widespread. Assessing and simulating the impacts of alternate land use or climate scenarios on river washload requires modelling techniques that represent sediment sources and transport at the time scales of system response. Building on the mean-annual SedNet model, we propose a new D-SedNet model which constructs daily budgets of fine sediment sources, transport and deposition for each link in a river network. Erosion rates (hillslope, gully and streambank erosion) and fine sediment sinks (floodplains and reservoirs) are disaggregated from mean annual rates based on daily rainfall and runoff. The model is evaluated in the Burdekin basin in tropical Australia, where policy targets have been set for reducing sediment and nutrient loads delivered to the Great Barrier Reef (GBR) lagoon from grazing and cropping land. D-SedNet predicted annual loads with performance similar to that of a sediment rating curve calibrated to monitored suspended sediment concentrations. Relative to a 22-year reference load time series at the basin outlet, derived from a dynamic generalized additive model based on monitoring data, D-SedNet had a median absolute error of 68% compared with 112% for the rating curve. RMS error was slightly higher for D-SedNet than for the rating curve due to large relative errors on small loads in several drought years. This accuracy is similar to that of existing agricultural system models used in arable or humid environments. Predicted river loads were sensitive to ground vegetation cover. We conclude that the river network sediment budget model provides some capacity for predicting load time series independent of monitoring data in ungauged basins, and for evaluating the impact of land management on river sediment load time series, which is challenging across large regions in data-poor environments.
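The two skill metrics quoted above behave differently, which explains why D-SedNet can win on median absolute error yet lose slightly on RMS error. The load values below are made up for illustration.

```python
import numpy as np

# Hypothetical predicted vs. reference annual loads (kt/yr).
reference = np.array([120.0, 80.0, 15.0, 200.0, 5.0])
predicted = np.array([150.0, 60.0, 30.0, 180.0, 12.0])

# Median absolute relative error, as a percentage of the reference load.
rel_err = np.abs(predicted - reference) / reference
median_abs_rel_err = np.median(rel_err) * 100

# RMS error, in the same units as the loads.
rmse = np.sqrt(np.mean((predicted - reference) ** 2))

print(median_abs_rel_err, rmse)
# The small-load (drought-year) entries dominate the relative-error metric,
# while the large-load years dominate RMSE, mirroring the trade-off noted above.
```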
Characterization of 193-nm resists for optical mask manufacturing
NASA Astrophysics Data System (ADS)
Fosshaug, Hans; Paulsson, Adisa; Berzinsh, Uldis; Magnusson, Helena
2004-12-01
The push for smaller linewidths and tighter critical dimension (CD) budgets forced manufacturers of optical pattern generators to move from traditional i-line to deep ultraviolet (DUV) resist processing. Entering the DUV era was not without pain. The process conditions, especially exposure times of a few hours, put very tough demands on the resist material itself. Today, however, 248nm laser writers are fully operational using a resist process that exhibits the requested resolution, CD uniformity and environmental stability. The continuing demands on CD performance led Micronic to investigate suitable resist candidate materials for the next-generation optical writer using 193nm excimer laser exposure. This paper reports on resist benchmarking of one commercial as well as several newly developed resists. The resists were investigated using a wafer scanner. The data obtained illustrate the current performance of 193nm photoresists, and further demonstrate that despite good progress in resist formulation optimization, performance still falls somewhat short of the required lithographic targets.
New Methods for Assessing and Reducing Uncertainty in Microgravity Studies
NASA Astrophysics Data System (ADS)
Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.
2017-12-01
Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, error in drift estimation and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey setup. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, including Kilauea in Hawaii and Askja in Iceland.
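One common form of the drift estimation mentioned above is a least-squares linear fit to repeated readings at a base station. The times and gravity values below are hypothetical, and a linear drift model is itself an assumption the study's alternatives refine.

```python
import numpy as np

# Repeated base-station readings over a survey day (hypothetical values).
t = np.array([0.0, 1.5, 3.2, 5.0, 6.8])            # hours since first reading
g = np.array([0.000, 0.012, 0.026, 0.041, 0.055])  # mGal, relative to first reading

# Fit g = a*t + b; the slope a is the instrument drift rate.
a, b = np.polyfit(t, g, 1)
corrected = g - a * t                              # drift-corrected readings
print(f"drift rate: {a:.4f} mGal/h")
```

The residual scatter of `corrected` about `b` gives one handle on the drift-related contribution to the error budget.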
Achievable flatness in a large microwave power transmitting antenna
NASA Technical Reports Server (NTRS)
Ried, R. C.
1980-01-01
A dual-reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line-of-sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials, measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variation in material properties, particularly part-to-part variation in the coefficient of thermal expansion, is more significant than the actual property value.
Understanding error generation in fused deposition modeling
NASA Astrophysics Data System (ADS)
Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David
2015-03-01
Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
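The accuracy/precision decomposition used in error-budget studies like the one above can be sketched for a single axis: for repeated moves to a commanded position, accuracy is the mean deviation from nominal and precision is the spread of those deviations. The measurement values below are hypothetical.

```python
import statistics

nominal = 50.0                                   # mm, commanded x position
measured = [50.18, 50.22, 50.15, 50.25, 50.20]   # mm, repeated builds (made up)

errors = [m - nominal for m in measured]
accuracy = statistics.mean(errors)               # systematic offset from nominal
precision = statistics.stdev(errors)             # repeatability (sample std. dev.)
print(accuracy, precision)
```

Repeating this per axis and per position yields the kind of axis-dependent ranges the abstract reports, and separates correctable systematic offsets from irreducible scatter.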
High resolution microwave spectrometer sounder (HIMSS), volume 1, book 1
NASA Technical Reports Server (NTRS)
1990-01-01
The following topics are presented with respect to the high resolution microwave spectrometer sounder (HIMSS) that is to be used as an instrument for NASA's Earth Observing System (EOS): (1) an instrument overview; (2) an instrument description; (3) the instrument's conceptual design; (4) technical risks and offsets; (5) instrument reliability; (6) commands and telemetry; (7) mass and power budgets; (8) integration and test program; (9) program implementation; and (10) Phase C/D schedule.
Monte Carlo simulation of edge placement error
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Estrella, Joel; Enomoto, Masashi
2018-03-01
In the discussion of edge placement error (EPE), we proposed interactive pattern fidelity error (IPFE) as an indicator to judge pass/fail of integrated patterns. IPFE consists of lower- and upper-layer EPEs (CD and center of gravity, COG) and overlay, which is determined from the combination of each maximum variation. We succeeded in obtaining the IPFE density function by Monte Carlo simulation. From the results, we also found that the standard deviation (σ) of each indicator should be controlled to 4.0σ at the semiconductor grade, such as 100 billion patterns per die. Moreover, CD, COG and overlay were analyzed by analysis of variance (ANOVA); we can discuss all variations, from wafer to wafer (WTW), pattern to pattern (PTP), line width roughness (LWR) and stochastic pattern noise (SPN), on an equal footing. From the analysis results, we can determine which process steps and tools these variations originate from. Furthermore, the measurement length for LWR is also discussed in the ANOVA. We propose that the measurement length for IPFE analysis should not be fixed at the micrometer scale (e.g., >2 μm) but should instead be chosen according to the device actually being targeted.
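A Monte Carlo analysis of this kind can be sketched as follows: draw lower-layer CD, upper-layer CD, centering (COG) and overlay errors independently, combine them into an edge-to-edge placement error, and count how often the result exceeds a multiple-of-σ limit. The distributions, weights and the 4σ limit below are assumptions for illustration, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000  # samples; the paper's failure rates need very large ensembles

# Independent error contributors, all in nm (hypothetical sigmas).
cd_lower = rng.normal(0.0, 0.8, n)   # half of a CD error maps to each edge
cd_upper = rng.normal(0.0, 0.8, n)
cog = rng.normal(0.0, 0.6, n)        # center-of-gravity placement error
overlay = rng.normal(0.0, 1.0, n)

# Edge-to-edge placement error as a linear combination of the contributors.
epe = 0.5 * cd_lower + 0.5 * cd_upper + cog + overlay
sigma = epe.std()
fail_rate = np.mean(np.abs(epe) > 4.0 * sigma)
print(sigma, fail_rate)  # excursions beyond 4σ occur at roughly the 1e-5 to 1e-4 level
```

At 100 billion patterns per die, even a ~1e-5 tail probability implies millions of failing edges, which is why the abstract argues for 4.0σ-level control of each indicator.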
Contour metrology using critical dimension atomic force microscopy
NASA Astrophysics Data System (ADS)
Orji, Ndubuisi G.; Dixson, Ronald G.; Vladár, András E.; Ming, Bin; Postek, Michael T.
2012-03-01
The critical dimension atomic force microscope (CD-AFM), which is used as a reference instrument in lithography metrology, has been proposed as a complementary instrument for contour measurement and verification. Although data from CD-AFM are inherently three dimensional, the planar two-dimensional data required for contour metrology are not easily extracted from the top-down CD-AFM data. This is largely due to the limitations of the CD-AFM method for controlling the tip position and scanning. We describe scanning techniques and profile extraction methods to obtain contours from CD-AFM data. We also describe how we validated our technique and explain some of its limitations. Potential sources of error for this approach are described, and a rigorous uncertainty model is presented. Our objective is to show which data acquisition and analysis methods could yield optimum contour information while preserving some of the strengths of CD-AFM metrology. We present a comparison of contours extracted using our technique to those obtained from the scanning electron microscope (SEM) and the helium ion microscope (HIM).
Atmospheric energetics as related to cyclogenesis over the eastern United States. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
West, P. W.
1973-01-01
A method is presented to investigate the atmospheric energy budget as related to cyclogenesis. Energy budget equations are developed that are shown to be advantageous because the individual terms represent basic physical processes which produce changes in atmospheric energy, and the equations provide a means to study the interaction of the cyclone with the larger scales of motion. The work presented represents an extension of previous studies because all of the terms of the energy budget equations were evaluated throughout the development period of the cyclone. Computations are carried out over a limited atmospheric volume which encompasses the cyclone, and boundary fluxes of energy that were ignored in most previous studies are evaluated. Two examples of cyclogenesis over the eastern United States were chosen for study. One of the cases (1-4 November 1966) represented an example of vigorous development, while the development in the other case (5-8 December 1969) was more modest. Objectively analyzed data were used in the evaluation of the energy budget terms in order to minimize computational errors, and an objective analysis scheme is described that ensures that all of the resolution contained in the rawinsonde observations is incorporated in the analyses.
Sloto, Ronald A.; Buxton, Debra E.
2005-01-01
This pilot study, done by the U.S. Geological Survey in cooperation with the Delaware River Basin Commission, developed annual water budgets using available data for five watersheds in the Delaware River Basin with different degrees of urbanization and different geological settings. A basin water budget and a water-use budget were developed for each watershed. The basin water budget describes inputs to the watershed (precipitation and imported water), outputs of water from the watershed (streamflow, exported water, leakage, consumed water, and evapotranspiration), and changes in ground-water and surface-water storage. The water-use budget describes water withdrawals in the watershed (ground-water and surface-water withdrawals), discharges of water in the watershed (discharge to surface water and ground water), and movement of water into and out of the watershed (imports, exports, and consumed water). The water-budget equations developed for this study can be applied to any watershed in the Delaware River Basin. Data used to develop the water budgets were obtained from available long-term meteorological and hydrological data-collection stations and from water-use data collected by regulatory agencies. In the Coastal Plain watersheds, net ground-water loss from unconfined to confined aquifers was determined by using ground-water-flow-model simulations. Error in the water-budget terms is caused by missing data, poor or incomplete measurements, overestimated or underestimated quantities, measurement or reporting errors, and the use of point measurements, such as precipitation and water levels, to estimate an areal quantity, particularly if the watershed is hydrologically or geologically complex or the data-collection station is outside the watershed. The complexity of the water budgets increases with increasing watershed urbanization and interbasin transfer of water.
In the Wissahickon Creek watershed, for example, some ground water is discharged to streams in the watershed, some is exported as wastewater, and some is exported for public supply. In addition, ground water withdrawn outside the watershed is imported for public supply or imported as wastewater for treatment and discharge in the watershed. A GIS analysis was necessary to quantify many of the water-budget components. The 89.9-square mile East Branch Brandywine Creek watershed in Pennsylvania is a rural watershed with reservoir storage that is underlain by fractured rock. Water budgets were developed for 1977-2001. Average annual precipitation, streamflow, and evapotranspiration were 46.89, 21.58, and 25.88 inches, respectively. Some water was imported (average of 0.68 inches) into the watershed for public-water supply and as wastewater for treatment and discharge; these imports resulted in a net gain of water to the watershed. More water was discharged to East Branch Brandywine Creek than was withdrawn from it; the net discharge resulted in an increase in streamflow. Most ground water was withdrawn (average of 0.25 inches) for public-water supply. Surface water was withdrawn (average of 0.58 inches) for public-water and industrial supply. Discharge of water by sewage-treatment plants and industries (average of 1.22 inches) and regulation by Marsh Creek Reservoir caused base flow to appear an average of 7.2 percent higher than it would have been without these additional sources. On average, 67 percent of the difference was caused by sewage-treatment-plant and industrial discharges, and 33 percent was caused by regulation of the Marsh Creek Reservoir. Water imports, withdrawals, and discharges have been increasing as the watershed becomes increasingly urbanized. The 64-square mile Wissahickon Creek watershed in Pennsylvania is an urban watershed underlain by fractured rock. Water budgets were developed for 1987-98. 
Average annual precipitation, streamflow, and evapotranspiration were 47.23, 22.24, and 23.12 inches, respectively. The watershed is highly u
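The basin water budget described in this record balances inputs against outputs and storage change. A minimal residual check can be sketched as follows, using only the East Branch Brandywine annual averages quoted above; the exports and storage-change values are assumed zero here for illustration and are not taken from the report.

```python
# Hedged sketch of a basin water-budget residual check:
#   P + Imports = Q + Exports + ET + dS + residual
# Values are the East Branch Brandywine annual averages quoted in the
# abstract (inches/year); exports and storage change are ASSUMED zero,
# so the residual lumps together all unreported terms.
def budget_residual(precip, imports, streamflow, exports, et, storage_change):
    inputs = precip + imports
    outputs = streamflow + exports + et
    return inputs - outputs - storage_change

residual = budget_residual(precip=46.89, imports=0.68, streamflow=21.58,
                           exports=0.0, et=25.88, storage_change=0.0)
# a small positive residual (~0.1 in/yr) would be attributed to leakage,
# consumptive use, and measurement error
```

The same function applies to any of the five watersheds once all components are supplied; a large residual flags the missing-data and measurement-error sources the abstract lists.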
Driving imaging and overlay performance to the limits with advanced lithography optimization
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Finders, Jo; van der Laan, Hans; Hinnen, Paul; Kubis, Michael; Beems, Marcel
2012-03-01
Immersion lithography is being extended to 22 nm and below. Next to generic scanner system improvements, application-specific solutions are needed to meet the requirements for CD control and overlay. Starting from the performance budgets, this paper discusses how to improve (in a volume manufacturing environment) CDU toward 1 nm and overlay toward 3 nm. The improvements are based on deploying the actuator capabilities of the immersion scanner. The latest generation of immersion scanners has extended the correction capabilities for overlay and imaging, offering freeform adjustments of lens, illuminator and wafer grid. To determine the needed adjustments, the recipe generation per user application is based on a combination of wafer metrology data and computational lithography methods. For overlay, focus and CD metrology we use an angle-resolved optical scatterometer.
Predicting plant uptake of cadmium: validated with long-term contaminated soils.
Lamb, Dane T; Kader, Mohammed; Ming, Hui; Wang, Liang; Abbasi, Sedigheh; Megharaj, Mallavarapu; Naidu, Ravi
2016-10-01
Cadmium accumulates in plant tissues at low soil loadings and is a concern for human health; at higher levels it is also of concern for ecological receptors. We determined Cd partitioning constants for 41 soils to examine the role of soil properties in controlling Cd partitioning and plant uptake. From a series of sorption and dose-response studies, transfer functions were developed for predicting Cd uptake in Cucumis sativus L. (cucumber). The parameter log K_f was predicted from soil pH_Ca, log CEC and log OC. Transfer of soil pore-water Cd²⁺ to shoots was described with a power function (R² = 0.73). The dataset was validated with 13 long-term contaminated soils (plus 2 control soils) ranging in Cd concentration from 0.2 to 300 mg kg⁻¹. The series of equations predicting shoot Cd from pore-water Cd²⁺ was able to predict the measured data in the independent dataset (root mean square error = 2.2). The good relationship indicated that Cd uptake to cucumber shoots could be predicted from pore-water Cd and Cd²⁺ without other pore-water parameters such as pH or Ca²⁺. The approach may be adapted to a range of plant species.
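The transfer functions described here relate pore-water Cd²⁺ to shoot Cd through a power function. A minimal sketch of fitting such a power law by linear regression in log-log space follows; the coefficients and data are invented for demonstration and are not the paper's fitted values.

```python
import math

# Fit y = a * x**b by least squares on log(y) = log(a) + b*log(x).
def fit_power_law(x, y):
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated exactly from y = 2 * x**0.7, so the fit
# recovers the coefficients; real pore-water/shoot data would scatter.
xs = [0.1, 0.5, 1.0, 5.0, 20.0]
ys = [2 * v ** 0.7 for v in xs]
a, b = fit_power_law(xs, ys)  # a ≈ 2.0, b ≈ 0.7
```

With fitted (a, b) in hand, predicted shoot Cd for a new soil is simply `a * pore_cd ** b`.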
Space shuttle post-entry and landing analysis. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Crawford, B. S.; Duiven, E. M.
1973-01-01
Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite
NASA Technical Reports Server (NTRS)
Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.
1990-01-01
It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and variability of methane sources induce high uncertainty in the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high-frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux errors when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed with the PYVAR system. Only the CTM (and the meteorological fields which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux estimates obtained for 2005 gives a good order-of-magnitude estimate of the impact of transport and modelling errors on the fluxes estimated with current and future networks.
It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of total methane emissions for 2005. At the continental scale, transport and modelling errors have a proportionally larger impact, ranging from 36 TgCH4 per year in North America to 7 TgCH4 per year in Boreal Eurasia, or from 23% to 48% of regional emissions. The contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is thus large in view of the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be represented more fully in future inversions.
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. Calibration at the 10 cm level is found to be feasible only through the use of very high elevation altimeter passes over a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, given the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F*,max^disk ≡ V*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely free of systematic errors apart from distance uncertainties.
NASA Astrophysics Data System (ADS)
Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon
2018-03-01
All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface (CAUSES) project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project (AMIP) simulations from the Coupled Model Intercomparison Project Phase 5 against long-term observations, mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budgets and the large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm-season bias is characterized by an overestimation of T2m and an underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, due to a combination of insufficient cloud reflection, insufficient clear-sky shortwave absorption by water vapor, and an underestimation of surface albedo. The cloud bias is shown to contribute most to the radiation bias. Surface-layer soil moisture affects T2m through its control on the evaporative fraction, and the error in evaporative fraction is another important contributor to the T2m bias. Similar sources of error are found in hindcasts from other CAUSES studies. In the AMIP simulations, biases in the meridional wind associated with the low-level jet and in the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budgets.
Tracing industrial heavy metal inputs to topsoils using cadmium isotopes
NASA Astrophysics Data System (ADS)
Huang, Y.; Ma, L.; Ni, S.; Lu, H.; Liu, Z.; Zhang, C.; Guo, J.; Wang, N.
2015-12-01
Anthropogenic activities have come to dominate heavy metal (e.g., Cd, Pb, and Zn) cycling in many environments. The extent and fate of these metal depositions in topsoils, however, have not been adequately evaluated. Here, we utilize cadmium (Cd) isotopes as an innovative tool to trace the sources of metal pollutants in topsoils collected around a vanadium-titanium magnetite smelting plant in Sichuan, China. Topsoil samples and possible pollution end members such as fly ashes, bottom ashes, ore materials, and coal were collected from the region surrounding the smelting plant and analyzed for Cd isotope ratios (δ114Cd relative to NIST SRM 3108 Cd). Large Cd isotope fractionation (up to 3‰) was observed among these industrial end members: fly ashes had higher δ114Cd values, ranging from +0.03 to +0.19‰; bottom ashes had lower δ114Cd values, ranging from -0.35 to -2.46‰; and unprocessed ore and coal samples had δ114Cd values around -0.40‰. This fractionation can be attributed to the smelting process, during which bottom ashes acquired lighter Cd isotope signatures while fly ashes were characterized by heavier isotope ratios relative to the unprocessed ore and coal. Indeed, δ114Cd values of topsoils in the smelting area range from +0.29 to -0.56‰, and more than half of the soils analyzed have distinct δ114Cd values > 0‰. Cd isotopes and concentrations measured in topsoils suggest that processed materials (fly and bottom ashes from the ore and coal actually used by the smelting plant) were the major source of Cd in the soils. In a δ114Cd vs. 1/Cd mixing diagram, the soils represent a mixture of three identified end members (fly ash, bottom ash and deep unaffected soil) with distinct Cd isotopic compositions and concentrations. Deep soils have the same δ114Cd range as the unprocessed ore and coal, which indicates that Cd isotope fractionation did occur during the evaporation and condensation processes inside the smelting plant.
The signature of the fly ash end member might be even heavier, given the increasing trend of topsoil δ114Cd with increasing topsoil Cd concentration. Our study suggests that δ114Cd values can be used to distinguish sources of anthropogenic Cd and to construct metal budgets in this study area.
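The δ114Cd vs. 1/Cd mixing diagram discussed here works because a binary mixture of two end members plots on a straight line in those coordinates. A minimal two-end-member mass-balance sketch follows; the end-member concentrations and δ values are invented for illustration, not the measured ones.

```python
# Two-end-member mixing of Cd concentration and delta-114Cd.
def mix(f1, c1, d1, c2, d2):
    """f1 = mass fraction of end member 1; (c, d) = concentration and
    delta of each end member. Returns (c_mix, d_mix) of the mixture;
    the delta mixes weighted by how much Cd each end member contributes."""
    c_mix = f1 * c1 + (1 - f1) * c2
    d_mix = (f1 * c1 * d1 + (1 - f1) * c2 * d2) / c_mix
    return c_mix, d_mix

# Illustrative "fly ash"-like (heavy, Cd-rich) and "deep soil"-like
# (light, Cd-poor) end members, mixed 50:50 by mass:
c, d = mix(0.5, c1=10.0, d1=0.2, c2=2.0, d2=-0.4)
```

Plotting d_mix against 1/c_mix for a range of f1 traces the straight mixing line; topsoils falling off a single line motivate the three-end-member interpretation in the abstract.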
An audit of the global carbon budget: identifying and reducing sources of uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Tans, P. P.; Marland, G.; Stocker, B. D.
2012-12-01
Uncertainties in our carbon accounting practices may limit our ability to objectively verify emission reductions on regional scales. Furthermore, uncertainties in the global C budget must be reduced to benchmark Earth System Models that incorporate carbon-climate interactions. Here we present an audit of the global C budget in which we identify sources of uncertainty for the major terms. The atmospheric growth rate of CO2 has increased significantly over the last 50 years, while the uncertainty in calculating the global atmospheric growth rate has been reduced from 0.4 ppm/yr to 0.2 ppm/yr (95% confidence). Although global CO2 growth rate uncertainties are now much reduced, there remain regions, such as the Southern Hemisphere, the Tropics and the Arctic, where changes in regional sources and sinks will remain difficult to detect without additional observations. Increases in fossil fuel (FF) emissions are the primary factor driving the increase in the global CO2 growth rate; however, our confidence in FF emission estimates has actually gone down. Based on a comparison of multiple estimates, FF emissions increased from 2.45 ± 0.12 PgC/yr in 1959 to 9.40 ± 0.66 PgC/yr in 2010. Major sources of increasing FF emission uncertainty are increased emissions from emerging economies, such as China and India, as well as subtle differences in accounting practices. Lastly, we evaluate emission estimates from Land Use Change (LUC). Although relative errors in LUC emission estimates are quite high (2 sigma ~ 50%), LUC emissions have remained fairly constant in recent decades. We evaluate the three commonly used approaches to estimating LUC emissions (Bookkeeping, Satellite Imagery, and Model Simulations) to identify their main sources of error and their ability to detect net emissions from LUC.
NASA Astrophysics Data System (ADS)
Plazas, A. A.; Shapiro, C.; Kannawadi, A.; Mandelbaum, R.; Rhodes, J.; Smith, R.
2016-10-01
Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point-spread function (PSF) of the imaging system with extreme accuracy. Future WL missions, such as NASA's Wide-Field Infrared Survey Telescope (WFIRST), will use a family of hybrid near-infrared complementary metal-oxide-semiconductor detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as the reference stars used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size (ΔR/R) and the absolute error in the PSF ellipticity (Δe) as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce errors of ΔR/R = 1 × 10⁻² and Δe₂ = 1.75 × 10⁻³ in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias in ΔR/R and Δe in the H158 band to ~10% of the estimated WFIRST error budget, the quadratic NL model parameter β must be calibrated to ~1% and ~2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
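As a rough illustration of a quadratic nonlinearity model of the kind this abstract discusses, the sketch below applies a q → q − βq² response and a first-order inverse correction. The sign convention, β value, and signal level are assumptions chosen for demonstration, not WFIRST numbers.

```python
# Quadratic detector-nonlinearity sketch (sign convention ASSUMED):
# brighter pixels are compressed more, which biases the apparent PSF
# size and ellipticity of bright reference stars.
def apply_nl(q, beta):
    """Measured charge under the quadratic NL model: q_m = q - beta*q^2."""
    return q - beta * q * q

def calibrate(q_m, beta):
    """First-order inverse of the model: q ~ q_m + beta*q_m^2."""
    return q_m + beta * q_m * q_m

q = 5.0e4                      # electrons, near full well (illustrative)
beta = 1e-6                    # per electron (illustrative)
qm = apply_nl(q, beta)         # ~2.5e3 electrons lost to NL
qr = calibrate(qm, beta)       # first-order correction recovers most of it
```

The residual after the first-order correction scales with (βq)², which is why β itself must be calibrated to the ~1% level before the bias becomes negligible against a PSF error budget.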
NASA Technical Reports Server (NTRS)
Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh;
2016-01-01
Urban emissions of greenhouse gases (GHG) represent more than 70% of global fossil fuel GHG emissions. Unless mitigation strategies are successfully implemented, the increase in urban GHG emissions is almost inevitable, as large metropolitan areas are projected to grow twice as fast as the world population in the coming 15 years. Monitoring these emissions becomes a critical need as their contribution to the global carbon budget increases rapidly. In this study, we developed the first comprehensive monitoring system of CO2 emissions at high resolution, using a dense network of CO2 atmospheric measurements over the city of Indianapolis. The inversion system was evaluated over an 8-month period and showed a 20% increase in total emissions over the area compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product (from 4.5 to 5.7 ± 0.23 Metric Megatons of Carbon). However, several key parameters of the inverse system need to be addressed to carefully characterize the spatial distribution of the emissions and the aggregated total emissions. We found that spatial structures in prior emission errors, mostly undetermined, significantly affect the spatial pattern in the inverse solution, as well as the carbon budget over the urban area. Several other parameters of the inversion were sufficiently constrained by additional observations, such as the characterization of the GHG boundary inflow and the introduction of hourly transport model errors estimated from the meteorological assimilation system. Finally, we estimated the uncertainties associated with remaining systematic errors and undetermined parameters using an ensemble of inversions. The total CO2 emissions for the Indianapolis urban area based on the ensemble mean and quartiles are 5.26 - 5.91 Metric Megatons of Carbon, i.e.
a statistically significant difference compared to the prior total emissions of 4.1 to 4.5 Metric Megatons of Carbon. We therefore conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHG over the city, but additional information on prior emissions and their associated error structures are required if we are to determine the spatial structures of urban emissions at high resolution.
Developing Novel Conjugate HIV-1 Subunit Therapeutic Vaccines.
1996-06-01
Significant CD4 binding was observed for gp120-KLH conjugates prepared using 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC). …
NASA Astrophysics Data System (ADS)
Pan, Ming; Troy, Tara; Sahoo, Alok; Sheffield, Justin; Wood, Eric
2010-05-01
Documentation of the water cycle and its evolution over time is a primary scientific goal of the Global Energy and Water Cycle Experiment (GEWEX) and fundamental to assessing global change impacts. In developed countries, observation systems that include in-situ, remote sensing and modeled data can provide long-term, consistent and generally high quality datasets of water cycle variables. The export of these technologies to less developed regions has been rare, but it is these regions where information on water availability and change is probably most needed in the face of regional environmental change due to climate, land use and water management. In these data-sparse regions, in-situ data alone are insufficient to develop a comprehensive picture of how the water cycle is changing, and strategies that merge in-situ, model and satellite observations within a framework that results in consistent water cycle records are essential. Such an approach is envisaged by the Global Earth Observation System of Systems (GEOSS), but has yet to be applied. The goal of this study is to quantify the variation and changes in the global water cycle over the past 50 years. We evaluate the global water cycle using a variety of independent large-scale datasets of hydrologic variables that bridge the gap between sparse in-situ observations, including remote-sensing-based retrievals, observation-forced hydrologic modeling, and weather model reanalyses. A data assimilation framework that blends these disparate sources of information together in a consistent fashion, with attention to budget closure, is applied to make best estimates of the global water cycle and its variation. The framework consists of a constrained Kalman filter applied to the water budget equation.
With imperfect estimates of the water budget components, the equation additionally has an error residual term that is redistributed across the budget components using error statistics, which are estimated from the uncertainties among data products. The constrained Kalman filter treats the budget closure constraint as a perfect observation within the assimilation framework. Precipitation is estimated using gauge observations, reanalysis products, and remote sensing products for latitudes below 50°N. Evapotranspiration is estimated in a number of ways: from the VIC land surface hydrologic model forced with a hybrid reanalysis-observation global forcing dataset, from remote sensing retrievals based on a suite of energy balance and process-based models, and from an atmospheric water budget approach using reanalysis products for the atmospheric convergence and storage terms and our best estimate for precipitation. Terrestrial water storage changes, including surface and subsurface changes, are estimated using both VIC and the GRACE remote sensing retrievals. From these components, discharge can then be calculated as a residual of the water budget and compared with gauge observations to evaluate the closure of the water budget. Through the use of these largely independent data products, we estimate both the mean seasonal cycle of the water budget components and their uncertainties for a set of 20 large river basins across the globe. We particularly focus on three regions of interest in global change studies: the Northern Eurasian region, which is experiencing rapid change in terrestrial processes; the Amazon, which is a central part of the global water, energy and carbon budgets; and Africa, which is predicted to face some of the most critical challenges for water and food security in the coming decades.
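The closure-as-perfect-observation idea described in this record can be sketched in a few lines: a single Kalman update against the observation P − ET − Q − dS = 0, with zero observation noise, redistributes the budget residual across the components in proportion to their error variances. The state values and variances below are illustrative, not the study's.

```python
# Minimal sketch: enforce water-budget closure as a perfect observation.
# State x = [P, ET, Q, dS] with a diagonal error covariance `var`.
def enforce_closure(x, var):
    """One Kalman update with H = [1, -1, -1, -1], observation 0, and
    zero observation noise. Components with larger variance absorb a
    larger share of the residual."""
    h = [1.0, -1.0, -1.0, -1.0]
    residual = sum(hi * xi for hi, xi in zip(h, x))   # P - ET - Q - dS
    s = sum(hi * hi * v for hi, v in zip(h, var))     # innovation variance
    gain = [v * hi / s for hi, v in zip(h, var)]      # K = P_cov H^T / S
    return [xi - gi * residual for xi, gi in zip(x, gain)]

# Illustrative monthly budget (mm) with a 5 mm non-closure residual;
# ET is given the largest variance, so it moves the most.
x = enforce_closure([100.0, 60.0, 35.0, 0.0], var=[4.0, 9.0, 4.0, 1.0])
# after the update the budget closes exactly: x[0]-x[1]-x[2]-x[3] == 0
```

With zero observation noise the post-update state satisfies the constraint exactly, which is what "perfect observation" means in this context; a small nonzero noise would enforce closure only softly.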
The Soil Sink for Nitrous Oxide: Trivial Amount but Challenging Question
NASA Astrophysics Data System (ADS)
Davidson, E. A.; Savage, K. E.; Sihi, D.
2015-12-01
Net uptake of atmospheric nitrous oxide (N2O) has been observed sporadically for many years. Such observations have often been discounted as measurement error or noise, but they were reported frequently enough to gain some acceptance as valid. The advent of fast-response field instruments with good sensitivity and precision has permitted confirmation that some soils can be small sinks of N2O. With regard to "closing the global N2O budget," the soil sink is trivial, because it is smaller than the error terms of most other budget components. Although not important from a global budget perspective, the existence of a soil sink for atmospheric N2O presents a fascinating challenge for understanding the physical, chemical, and biological processes that explain the sink. Reduction of N2O by classical biological denitrification requires reducing conditions generally found in wet soil, and yet we have measured the N2O sink in well-drained soils, where we also simultaneously measure a sink for atmospheric methane (CH4). Co-occurrence of N2O reduction and CH4 oxidation would require a broad range of microsite conditions within the soil, spanning high and low oxygen concentrations. Abiotic sinks for N2O, or other biological processes that consume N2O, could exist but have not yet been identified. We are attempting to simulate the diffusion of N2O, CH4, and O2 from the atmosphere and within a soil profile to determine whether classical biological N2O reduction and CH4 oxidation at rates consistent with measured fluxes are plausible.
The East Asian Atmospheric Water Cycle and Monsoon Circulation in the Met Office Unified Model
NASA Astrophysics Data System (ADS)
Rodríguez, José M.; Milton, Sean F.; Marzin, Charline
2017-10-01
In this study the low-level monsoon circulation and observed sources of moisture responsible for the maintenance and seasonal evolution of the East Asian monsoon are examined, studying the detailed water budget components. These observational estimates are contrasted with the Met Office Unified Model (MetUM) climate simulation performance in capturing the circulation and water cycle at a variety of model horizontal resolutions and in fully coupled ocean-atmosphere simulations. We study the role of large-scale circulation in determining the hydrological cycle by analyzing key systematic errors in the model simulations. MetUM climate simulations exhibit robust circulation errors, including a weakening of the summer west Pacific Subtropical High, which leads to an underestimation of the southwesterly monsoon flow over the region. Precipitation and implied diabatic heating biases in the South Asian monsoon and Maritime Continent region are shown, via nudging sensitivity experiments, to have an impact on the East Asian monsoon circulation. By inference, the improvement of these tropical biases with increased model horizontal resolution is hypothesized to be a factor in improvements seen over East Asia with increased resolution. Results from the annual cycle of the hydrological budget components in five domains show a good agreement between MetUM simulations and ERA-Interim reanalysis in northern and Tibetan domains. In simulations, the contribution from moisture convergence is larger than in reanalysis, and they display less precipitation recycling over land. The errors are closely linked to monsoon circulation biases.
Wafer hot spot identification through advanced photomask characterization techniques
NASA Astrophysics Data System (ADS)
Choi, Yohan; Green, Michael; McMurran, Jeff; Ham, Young; Lin, Howard; Lan, Andy; Yang, Richer; Lung, Mike
2016-10-01
As device manufacturers progress through advanced technology nodes, limitations in standard 1-dimensional (1D) mask Critical Dimension (CD) metrics are becoming apparent. Historically, 1D metrics such as Mean to Target (MTT) and CD Uniformity (CDU) have been adequate for end users to evaluate and predict the mask impact on the wafer process. However, the wafer lithographer's process margin is shrinking at advanced nodes to the point that classical mask CD metrics are no longer adequate to gauge the mask contribution to wafer process error. For example, wafer CDU error at advanced nodes is impacted by mask factors such as 3-dimensional (3D) effects and mask pattern fidelity on sub-resolution assist features (SRAFs) used in Optical Proximity Correction (OPC) models of ever-increasing complexity. These items are not quantifiable with today's 1D metrology techniques. Likewise, the mask maker needs advanced characterization methods in order to optimize the mask process to meet the wafer lithographer's needs. Such advanced characterization metrics are needed to harmonize mask and wafer processes for enhanced wafer hot spot analysis. In this paper, we study advanced mask pattern characterization techniques and their correlation with modeled wafer performance.
Radiometric Spacecraft Tracking for Deep Space Navigation
NASA Technical Reports Server (NTRS)
Lanyi, Gabor E.; Border, James S.; Shin, Dong K.
2008-01-01
Interplanetary spacecraft navigation relies on three types of terrestrial tracking observables. 1) Ranging measures the distance between the observing site and the probe. 2) The line-of-sight velocity of the probe is inferred from the Doppler shift, i.e., the offset of the received frequency from the unshifted transmitted frequency. 3) Differential angular coordinates of the probe with respect to natural radio sources are nominally obtained via the differential delay technique of ΔDOR (Delta Differential One-way Ranging). The accuracy of spacecraft coordinate determination depends on the measurement uncertainties associated with each of these three techniques. We evaluate the corresponding sources of error and present a detailed error budget.
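As an aside on observable 2 above, the line-of-sight velocity follows from the measured frequency shift via the standard one-way, non-relativistic Doppler relation. A minimal sketch (the function name, sign convention, and example numbers are illustrative assumptions, not taken from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def los_velocity(f_received_hz, f_transmitted_hz):
    """Line-of-sight velocity from the one-way Doppler shift: v = c * (f0 - f) / f0.

    Non-relativistic approximation; a positive result means the probe is
    receding (received frequency lower than transmitted).
    """
    return C * (f_transmitted_hz - f_received_hz) / f_transmitted_hz

# A probe receding at roughly 3 km/s shifts an 8.4 GHz downlink by about -84 kHz.
v = los_velocity(8.4e9 - 84_000.0, 8.4e9)
```

In practice deep-space Doppler is measured two-way (uplink plus coherent downlink), which doubles the shift; the one-way form above is just the simplest statement of the relation.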
Decoding small surface codes with feedforward neural networks
NASA Astrophysics Data System (ADS)
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
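The classification framing described above can be illustrated at the smallest possible scale. The sketch below is an assumption-laden toy, not the authors' decoder: it trains a tiny feedforward network (plain NumPy, invented layer sizes and hyperparameters) to map the two stabilizer syndromes of a distance-3 bit-flip repetition code to the most likely single-qubit correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Distance-3 bit-flip repetition code: stabilizers Z0Z1 and Z1Z2.
# "No error" and the three single-qubit X errors each produce a distinct
# 2-bit syndrome, so decoding reduces to a 4-way classification problem.
syndromes = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
labels = np.array([0, 1, 2, 3])  # identity, X on qubit 0, 1, 2
onehot = np.eye(4)[labels]

# One-hidden-layer network trained with softmax cross-entropy.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 4)); b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(2000):
    h, p = forward(syndromes)
    dlogits = (p - onehot) / len(labels)      # softmax cross-entropy gradient
    dh = (dlogits @ W2.T) * (1.0 - h ** 2)    # backpropagate through tanh
    W2 -= lr * (h.T @ dlogits); b2 -= lr * dlogits.sum(axis=0)
    W1 -= lr * (syndromes.T @ dh); b1 -= lr * dh.sum(axis=0)

_, probs = forward(syndromes)
predicted = probs.argmax(axis=1)  # one correction class per syndrome
```

Once trained, decoding is a single forward pass per syndrome, which is the source of the speed advantage the paper targets; real surface-code decoders face far larger syndrome spaces and must generalize to syndromes unseen in training.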
X-band uplink ground systems development: Part 2
NASA Technical Reports Server (NTRS)
Johns, C. E.
1987-01-01
The prototype X-band exciter testing has been completed. Stability and single-sideband phase noise measurements have been made on the X-band exciter signal (7.145-7.235 GHz) and on the coherent X- and S-band receiver test signals (8.4-8.5 GHz and 2.29-2.3 GHz) generated within the exciter equipment. Outputs are well within error budgets.
1982-04-25
the Directorate of Programs (AFLC/XRP), and the Directorate of Logistics Plans and Programs, Aircraft/Missiles Program Division of the Air Staff... (OWRM). The P-18 Exhibit/Budget Estimate Submission (BES), a document developed by AFLC/LOR, is reviewed by AFLC/XRP, and is presented to HQ USAF
NASA Astrophysics Data System (ADS)
Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.
2014-06-01
This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.; Rokni, Mohammad
1990-01-01
The testing and comparison of two Extended Kalman Filters (EKFs) developed for the Earth Radiation Budget Satellite (ERBS) is described. One EKF updates the attitude quaternion using a four component additive error quaternion. This technique is compared to that of a second EKF, which uses a multiplicative error quaternion. A brief development of the multiplicative algorithm is included. The mathematical development of the additive EKF was presented in the 1989 Flight Mechanics/Estimation Theory Symposium along with some preliminary testing results using real spacecraft data. A summary of the additive EKF algorithm is included. The convergence properties, singularity problems, and normalization techniques of the two filters are addressed. Both filters are also compared to those from the ERBS operational ground support software, which uses a batch differential correction algorithm to estimate attitude and gyro biases. Sensitivity studies are performed on the estimation of sensor calibration states. The potential application of the EKF for real time and non-real time ground attitude determination and sensor calibration for future missions such as the Gamma Ray Observatory (GRO) and the Small Explorer Mission (SMEX) is also presented.
NFIRAOS in 2015: engineering for future integration of complex subsystems
NASA Astrophysics Data System (ADS)
Atwood, Jenny; Andersen, David; Byrnes, Peter; Densmore, Adam; Fitzsimmons, Joeleff; Herriot, Glen; Hill, Alexis
2016-07-01
The Narrow Field InfraRed Adaptive Optics System (NFIRAOS) will be the first-light facility Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). NFIRAOS will be able to host three science instruments that can take advantage of this high performance system. NRC Herzberg is leading the design effort for this critical TMT subsystem. As part of the final design phase of NFIRAOS, we have identified multiple subsystems to be sub-contracted to Canadian industry. The scope of work for each subcontract is guided by the NFIRAOS Work Breakdown Structure (WBS) and is divided into two phases: the completion of the final design and the fabrication, assembly and delivery of the final product. Integration of the subsystems at NRC will require a detailed understanding of the interfaces between the subsystems, and this work has begun by defining the interface physical characteristics, stability, local coordinate systems, and alignment features. In order to maintain our stringent performance requirements, the interface parameters for each subsystem are captured in multiple performance budgets, which allow a bottom-up error estimate. In this paper we discuss our approach for defining the interfaces in a consistent manner and present an example error budget that is influenced by multiple subsystems.
NASA Astrophysics Data System (ADS)
Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong
2011-04-01
As IC design complexity keeps increasing, it is more and more difficult to ensure pattern transfer after optical proximity correction (OPC) due to the continuous reduction of layout dimensions and the lithographic limits of the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase shift masks, and OPC techniques have been developed. For model-based OPC, post-OPC verification solutions continue to be developed to cross-check the contour image against the target layout: methods for generating contours and matching them to the target structure, and methods for filtering and sorting patterns to eliminate false errors and duplicate patterns. Detecting only real errors while excluding false ones is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also overall wafer process time. In the typical case of post-OPC verification for metal-contact/via coverage (CC) checks, the verification solution reports a huge number of errors due to the borderless design, so it is impractical to review and correct all of them; this can cause OPC engineers to miss real defects and, at the least, delay time to market. In this paper, we studied methods for increasing the efficiency of post-OPC verification, especially for CC checks. For metal layers, the final CD after the etch process shows a bias that varies with the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. Through the optimization of biasing rules for different pitches and shapes of metal lines, we obtained more accurate and efficient verification results and decreased the review time needed to find real errors.
In this paper, we suggest increasing the efficiency of the OPC verification process by applying a simple biasing rule to the metal layout instead of applying an etch model.
Airmass dependence of the Dobson total ozone measurements
NASA Technical Reports Server (NTRS)
Degorska, M.; Rajewska-Wiech, B.
1994-01-01
For many years the airmass dependence of total ozone measurements at Belsk has been observed to vary noticeably from one day to another. Series of AD wavelength pair measurements taken out to high airmass were analyzed and compared with the two-parameter stray light model presented by Basher. The analysis, extended to the series of CD measurements, indicates the role of atmospheric attenuation in the appearance of the airmass dependence. A minor noon decline of total ozone has been observed in the CD measurement series, similar to that in the AD wavelength pair series. Such errors may seriously affect the accuracy of CD measurements at high latitude stations and of observations made in winter at middle latitude stations.
Accuracy optimization with wavelength tunability in overlay imaging technology
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna
2018-03-01
As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, overlay budget is accordingly being reduced. Overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect over the total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and present their quality in identifying the measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.
Investigation of hyper-NA scanner emulation for photomask CDU performance
NASA Astrophysics Data System (ADS)
Poortinga, Eric; Scheruebl, Thomas; Conley, Will; Sundermann, Frank
2007-02-01
As the semiconductor industry moves toward immersion lithography using numerical apertures above 1.0, the quality of the photomask becomes even more crucial. Photomask specifications are driven by the critical dimension (CD) metrology within the wafer fab. Knowledge of the CD values at resist level provides a reliable mechanism for the prediction of device performance. Ultimately, tolerances of device electrical properties drive the wafer linewidth specifications of the lithography group. Staying within this budget is influenced mainly by the scanner settings, resist process, and photomask quality. Tightening of photomask specifications is one mechanism for meeting the wafer CD targets. The challenge lies in determining how photomask level metrology results influence wafer level imaging performance. Can it be inferred that photomask level CD performance is the direct contributor to wafer level CD performance? With respect to phase shift masks, criteria such as phase and transmission control are generally tightened with each technology node. Are there other photomask-relevant influences that affect wafer CD performance? A comprehensive study is presented supporting the use of scanner emulation based photomask CD metrology to predict wafer level within-chip CD uniformity (CDU). Using scanner emulation with the photomask can provide more accurate wafer level prediction because it inherently includes all contributors to image formation related to the 3D topography, such as the physical CD, phase, transmission, sidewall angle, surface roughness, and other material properties. Emulated images from different photomask types were captured to provide CD values across the chip. Emulated scanner image measurements were completed using an AIMS™ 45-193i with its hyper-NA, through-pellicle data acquisition capability, including the Global CDU Map™ software option for AIMS™ tools.
The through-pellicle data acquisition capability is an essential prerequisite for capturing final CDU data (after final clean and pellicle mounting) before the photomask ships or for re-qualification at the wafer fab. Data was also collected on these photomasks using a conventional CD-SEM metrology system with the pellicles removed. A comparison was then made to wafer prints demonstrating the benefit of using scanner emulation based photomask CD metrology.
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
NASA Astrophysics Data System (ADS)
Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng
2006-12-01
This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
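The data/parity rate-budget trade-off can be illustrated with a model far simpler than the paper's allocator: assume independent packet loss and an MDS-style erasure code that decodes iff at least k of n packets arrive, then brute-force the split maximizing expected recovered data. The function names and the objective are illustrative assumptions, not the ER-UEP algorithm itself:

```python
from math import comb

def success_prob(n, k, loss_p):
    # Probability that at least k of n packets arrive, each lost independently
    # with probability loss_p (the MDS-code decoding condition).
    arrive = 1.0 - loss_p
    return sum(comb(n, r) * arrive**r * loss_p**(n - r) for r in range(k, n + 1))

def best_data_count(n_total, loss_p):
    # Choose k data packets (n_total - k parity) maximizing expected useful data.
    return max(range(1, n_total + 1),
               key=lambda k: k * success_prob(n_total, k, loss_p))
```

With no loss the whole budget goes to data; as the loss probability grows, the optimum shifts toward parity. The paper's method is richer in that the packets are scalable and protection is unequal across the bitstream, but the budget-allocation intuition is the same.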
Author Correction: Emission budgets and pathways consistent with limiting warming to 1.5 °C
NASA Astrophysics Data System (ADS)
Millar, Richard J.; Fuglestvedt, Jan S.; Friedlingstein, Pierre; Rogelj, Joeri; Grubb, Michael J.; Matthews, H. Damon; Skeie, Ragnhild B.; Forster, Piers M.; Frame, David J.; Allen, Myles R.
2018-06-01
In the version of this Article originally published, a coding error resulted in the erroneous inclusion of a subset of RCP4.5 and RCP8.5 simulations in the sets used for RCP2.6 and RCP6, respectively, leading to an incorrect depiction of the data of the latter two sets in Fig. 1b and RCP2.6 in Table 2. This coding error has now been corrected. The graphic and quantitative changes in the corrected Fig. 1b and Table 2 are contrasted with the originally published display items below. The core conclusions of the paper are not affected, but some numerical values and statements have also been updated as a result; these are listed below. All these errors have now been corrected in the online versions of this Article.
Baumgart, Daniel C.; le Claire, Marie
2016-01-01
Background Crohn’s disease (CD) and ulcerative colitis (UC) challenge economies worldwide. Detailed health economic data on DRG-based academic inpatient care for inflammatory bowel disease (IBD) patients in Europe are unavailable. Methods IBD was identified through ICD-10 K50 and K51 code groups. We took an actual costing approach, compared expenditures to G-DRG and non-DRG proceeds, and performed detailed cost center and cost type accounting to identify coverage determinants. Results Of all 3093 hospitalized cases at our department, 164 were CD and 157 UC inpatients in 2012. On average, they were 44.1 (CD 44.9, UC 43.3 vs. all 58) years old, stayed 10.1 (CD 11.8, UC 8.4 vs. all 8) days, carried 5.8 (CD 6.4, UC 5.2 vs. all 6.8) secondary diagnoses, received 7.4 (CD 7.7, UC 7 vs. all 6.2) procedures, had a higher cost weight (CD 2.8, UC 2.4 vs. all 1.6) and required more intense nursing. Their care was more costly (means: total cost IBD 8477€, CD 9051€, UC 7903€ vs. all 5078€). However, expenditures were not fully recovered by DRG proceeds (means: IBD 7413€, CD 8441€, UC 6384€ vs. all 4758€). We discovered substantial disease-specific mismatches in cost centers and types and identified the medical ward personnel and materials budgets as the most imbalanced. Non-DRG proceeds were almost double (IBD 16.1% vs. all 8.2%) but did not balance deficits in the total coverage analysis, which found medications (antimicrobials, biologics and blood products) and medical materials (mostly endoscopy items) to contribute most to the deficit. Conclusions DRGs challenge sophisticated IBD care. PMID:26784027
Assessing the Benefits of NASA Category 3, Low Cost Class C/D Missions
NASA Technical Reports Server (NTRS)
Bitten, Robert E.; Shinn, Steven A.; Mahr, Eric M.
2013-01-01
Category 3, Class C/D missions have the benefit of delivering worthwhile science at minimal cost which is increasingly important in NASA's constrained budget environment. Although higher cost Category 1 and 2 missions are necessary to achieve NASA's science objectives, Category 3 missions are shown to be an effective way to provide significant science return at a low cost. Category 3 missions, however, are often reviewed the same as the more risk averse Category 1 and 2 missions. Acknowledging that reviews are not the only aspect of a total engineering effort, reviews are still a significant concern for NASA programs. This can unnecessarily increase the cost and schedule of Category 3 missions. This paper quantifies the benefit and performance of Category 3 missions by looking at the cost vs. capability relative to Category 1 and 2 missions. Lessons learned from successful organizations that develop low cost Category 3, Class C/D missions are also investigated to help provide the basis for suggestions to streamline the review of NASA Category 3 missions.
NASA Technical Reports Server (NTRS)
Stoll, John C.
1995-01-01
The performance of an unaided attitude determination system based on GPS interferometry is examined using linear covariance analysis. The modelled system includes four GPS antennae onboard a gravity gradient stabilized spacecraft, specifically the Air Force's RADCAL satellite. The principal error sources are identified and modelled. The optimal system's sensitivities to these error sources are examined through an error budget and by varying system parameters. The effects of two satellite selection algorithms, Geometric and Attitude Dilution of Precision (GDOP and ADOP, respectively) are examined. The attitude performance of two optimal-suboptimal filters is also presented. Based on this analysis, the limiting factors in attitude accuracy are the knowledge of the relative antenna locations, the electrical path lengths from the antennae to the receiver, and the multipath environment. The performance of the system is found to be fairly insensitive to torque errors, orbital inclination, and the two satellite geometry figures-of-merit tested.
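The GDOP figure-of-merit mentioned above has a standard closed form computed from the receiver-to-satellite line-of-sight geometry; a minimal sketch (the four-satellite example geometry is invented for illustration):

```python
import numpy as np

def gdop(unit_vectors):
    """Geometric dilution of precision from receiver-to-satellite unit vectors.

    Builds the measurement geometry matrix H = [u_i | 1] (the 1 column is the
    clock-bias term) and returns sqrt(trace((H^T H)^-1)); smaller is better.
    """
    u = np.asarray(unit_vectors, dtype=float)
    H = np.hstack([u, np.ones((len(u), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Four satellites spread across the sky (toy geometry).
s = 1.0 / np.sqrt(3.0)
sats = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [-s, -s, -s]]
value = gdop(sats)
```

ADOP extends the same dilution-of-precision idea to the attitude problem; the selection algorithms in the study pick the satellite subset minimizing the chosen figure-of-merit.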
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
NASA Astrophysics Data System (ADS)
Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.
2011-12-01
Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for their use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been carried out to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
Morphology, structure and optical properties of hydrothermally synthesized CeO2/CdS nanocomposites
NASA Astrophysics Data System (ADS)
Mohanty, Biswajyoti; Nayak, J.
2018-04-01
CeO2/CdS nanocomposites were synthesized using a two-step hydrothermal technique. The effects of precursor concentration on the optical and structural properties of the CeO2/CdS nanoparticles were systematically studied. The morphology, composition and the structure of the CeO2/CdS nanocomposite powder were studied by scanning electron microscopy (SEM), energy dispersive X-ray spectrum analysis (EDXA) and X-ray diffraction (XRD), respectively. The optical properties of CeO2/CdS nanocomposites were studied by UV-vis absorption and photoluminescence (PL) spectroscopy. The optical band gaps of the CeO2/CdS nanopowders ranged from 2.34 eV to 2.39 eV as estimated from the UV-vis absorption. In the room temperature photoluminescence spectrum of CeO2/CdS nanopowder, a strong blue emission band was observed at 400 nm. Since the powder shows strong visible luminescence, it may be used as a blue phosphor in future. The original article published with this DOI was submitted in error. The correct article was inadvertently left out of the original submission. This has been rectified and the correct article was published online on 16 April 2018.
NASA Astrophysics Data System (ADS)
Schlegel, N.; Larour, E. Y.; Gardner, A. S.; Lang, C.; Miller, C. E.; van den Broeke, M. R.
2016-12-01
How Greenland ice flow may respond to future increases in surface runoff and in the frequency of extreme melt events is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification tools run within the framework of the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force distinct components of the surface radiation budget, and subsequently surface mass balance (SMB), in various regions of the ice sheet and 2) determine the dynamic response of Greenland ice flow to variations in components of the net radiation budget. The Glacier Energy and Mass Balance (GEMB) software is a column surface model (1-D) that has recently been embedded as a module within ISSM. Using the ISSM-GEMB framework, we perform sensitivity analyses to determine how perturbations in various components of the surface radiation budget affect model output; these model experiments allow us to predict where and on what spatial scale the ice sheet is likely to respond dynamically to changes in these parameters. Preliminary results suggest that SMB should be forced at a resolution of at least 23 km to properly capture the dynamic ice response. In addition, Monte Carlo style sampling analyses reveal that the areas with the largest uncertainty in mass flux are located near the equilibrium line altitude (ELA), upstream of major outlet glaciers in the north and west of the ice sheet. Sensitivity analysis indicates that these areas are also the most vulnerable on the ice sheet to persistent, far-field shifts in SMB, suggesting that continued warming, and an upstream shift in the ELA, are likely to result in increased velocities and consequently SMB-induced thinning upstream of major outlet glaciers. 
Here, we extend our investigation to consider various components of the surface radiation budget separately, in order to determine how and where errors in these fields may independently impact ice flow. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere and Interdisciplinary Research in Earth Science Programs.
New method of contour-based mask-shape compiler
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Sugiyama, Akiyuki; Onizawa, Akira; Sato, Hidetoshi; Toyoda, Yasutaka
2007-10-01
We have developed a new method of accurately profiling a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the Mask CD-SEM, adopting as its key technology the edge detection algorithm used in CD-SEM for high accuracy CD measurement. In comparison with a conventional image processing method for contour profiling, it is possible to create profiles with much higher accuracy, comparable with CD-SEM for semiconductor device CD measurement. In this report, we introduce the algorithm in general, the experimental results and the application in practice. As shrinkage of the design rule for semiconductor devices has further advanced, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask making cost have become a big concern to device manufacturers. In a sense, there is a trade-off between high accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with the problem, we propose a DFM solution in which two-dimensional data are extracted by precisely reproducing the real mask shape, enabling error-free practical simulation in addition to mask data simulation. The flow, centered around the design data, is fully automated and provides an environment where optimization and verification for fully automated model calibration with much less error are available. It also allows complete consolidation of input and output functions with an EDA system by constructing a design data oriented system structure. This method can therefore be regarded as a strategic DFM approach in semiconductor metrology.
NASA Astrophysics Data System (ADS)
Halpert, G.
1982-07-01
A 50-ampere hour nickel cadmium cell test pack was operated in a power profile simulating the orbit of the Earth Radiation Budget Satellite (ERBS). The objective was to determine the ability of the temperature compensated voltage limit (V sub T) charge control system to maintain energy balance in the half sine wave-type current profile expected of this mission. The four-cell pack (50 E) was tested at the Naval Weapons Support Center (NWSC) at Crane, Indiana. The ERBS evaluation test consisted of two distinct operating sequences, each having a specific purpose. The first phase was a parametric test involving the effect of V sub T level, temperature, and Beta angle on the charge/discharge (C/D) ratio, an indicator of the amount of overcharge. The second phase of testing made use of the C/D ratio limit to augment the V sub T charge limit control. When the C/D limit was reached, the current was switched from the taper mode to a C/67 (0.75 A) trickle charge. The use of an ampere hour integrator limiting the overcharge to a C/67 rate provided a fine tuning of the charge control technique which eliminated the sensitivity problems noted in the initial operating sequence.
Global Patterns of Legacy Nitrate Storage in the Vadose Zone
NASA Astrophysics Data System (ADS)
Ascott, M.; Gooddy, D.; Wang, L.; Stuart, M.; Lewis, M.; Ward, R.; Binley, A. M.
2017-12-01
Global-scale nitrogen (N) budgets have been developed to quantify the impact of man's influence on the nitrogen cycle. However, these budgets often do not consider legacy effects such as accumulation of nitrate in the deep vadose zone. In this presentation we show that the vadose zone is an important store of nitrate which should be considered in future nitrogen budgets for effective policymaking. Using estimates of depth to groundwater and nitrate leaching for 1900-2000, we quantify for the first time the peak global storage of nitrate in the vadose zone, estimated as 605 - 1814 Teragrams (Tg). Estimates of nitrate storage are validated using previous national and basin scale estimates of N storage and observed groundwater nitrate data for North America and Europe. Nitrate accumulation per unit area is greatest in North America, China and Central and Eastern Europe where thick vadose zones are present and there is an extensive history of agriculture. In these areas the long solute travel time in the vadose zone means that the anticipated impact of changes in agricultural practices on groundwater quality may be substantially delayed. We argue that in these areas use of conventional nitrogen budget approaches is inappropriate and their continued use will lead to significant errors.
A simple method for the computation of first neighbour frequencies of DNAs from CD spectra
Marck, Christian; Guschlbauer, Wilhelm
1978-01-01
A procedure for the computation of the first neighbour frequencies of DNAs is presented. This procedure is based on the first neighbour approximation of Gray and Tinoco. We show that knowledge of all ten elementary CD signals attached to the ten double-stranded first neighbour configurations is not necessary. One can obtain the ten frequencies of an unknown DNA with the use of eight elementary CD signals corresponding to eight linearly independent polymer sequences. These signals can be extracted very simply from any eight or more CD spectra of double-stranded DNAs of known frequencies. The ten frequencies of a DNA are obtained by least-squares fit of its CD spectrum with these elementary signals. One advantage of this procedure is that it does not require linear programming; it can be used with CD data digitized at a large number of wavelengths, thus permitting an accurate resolution of the CD spectra. In favourable cases, the ten frequencies of a DNA (not used as input data) can be determined with an average absolute error < 2%. We have also observed that certain satellite DNAs, those of Drosophila virilis and Callinectes sapidus, have CD spectra compatible with those of DNAs of quasi-random sequence; these satellite DNAs should also adopt the B-form in solution. PMID:673843
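As a sketch of the least-squares step, the following fits a spectrum as a linear combination of elementary signals via the normal equations. For brevity it uses two basis signals rather than the paper's eight, and the numbers are synthetic, not CD data.

```python
def fit_coefficients(basis, spectrum):
    """Least-squares fit of a spectrum as a linear combination of two
    elementary signals, via the closed-form 2x2 normal equations."""
    g11 = sum(b * b for b in basis[0])
    g22 = sum(b * b for b in basis[1])
    g12 = sum(a * b for a, b in zip(basis[0], basis[1]))
    r1 = sum(b * y for b, y in zip(basis[0], spectrum))
    r2 = sum(b * y for b, y in zip(basis[1], spectrum))
    det = g11 * g22 - g12 * g12
    return ((g22 * r1 - g12 * r2) / det, (g11 * r2 - g12 * r1) / det)

# Synthetic check: a spectrum built from known weights is recovered exactly.
basis = ([1.0, 2.0, 0.5, -1.0], [0.0, 1.0, 1.5, 2.0])
true = (0.3, 0.7)
spectrum = [true[0] * a + true[1] * b for a, b in zip(*basis)]
c = fit_coefficients(basis, spectrum)
```

With eight basis signals the normal equations become an 8x8 system, but the structure of the fit is the same, and it scales to spectra digitized at any number of wavelengths.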
View-Dependent Simplification of Arbitrary Polygonal Environments
2006-01-01
[Fragmentary record text; only partial content survives.] "...backfacing nodes are not rendered [Kumar 96]. 4.3 Triangle-Budget Simplification. The screenspace error threshold and silhouette test allow the user..." The remaining fragments are acknowledgements (Greg Turk and Dinesh Manocha for guidance; funding from DARPA) and a reference: [Kumar 96] Kumar, Subodh, D. Manocha, W. Garrett, M. Lin, in Proceedings Visualization 95, IEEE Computer Society Press (Atlanta, GA), 1995, pp. 296-303.
LORAN-C LATITUDE-LONGITUDE CONVERSION AT SEA: PROGRAMMING CONSIDERATIONS.
McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.
1985-01-01
Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed, with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to aid the checkout of new computer programs.
Social Security Fraud and Error Prevention Act of 2014
Rep. Becerra, Xavier [D-CA-34
2014-02-26
House - 02/26/2014: Referred to the Committee on Ways and Means, and in addition to the Committee on the Budget, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. Status: Introduced.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... Management and Budget ("OMB") to project aggregate offering price for purposes of the fiscal year 2010... methodology it developed in consultation with the CBO and OMB to project dollar volume for purposes of prior... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...
The cost effectiveness and budget impact of natalizumab for formulary inclusion.
Bakhshai, Justin; Bleu-Lainé, Raymond; Jung, Miah; Lim, Jeanne; Reyes, Christian; Sun, Linda; Rochester, Charmaine; Shaya, Fadia T
2010-03-01
Crohn's disease (CD) and multiple sclerosis (MS) are debilitating autoimmune diseases, which represent a substantial cost burden in the context of managed care. As a corollary, there is an unmet pharmacotherapeutic need in patient populations with relapsing forms of MS, in addition to populations with moderately to severely active CD with evidence of inflammation who have experienced an inadequate response to other mainstream therapies. The purpose of this study was to analyze the clinical and economic data associated with natalizumab (Tysabri) and to determine the potential impact of its formulary inclusion in a hypothetical health plan. Regarding MS, the implemented cost-effectiveness and budget-impact models demonstrated an anticipated reduction in relapse rate of 67% over 2 years, and a total therapy cost of $72,120 over 2 years, equating to a cost per relapse avoided of $56,594. With respect to the model assumptions, the market share of natalizumab would experience an increase to 8.5%, resulting in a total per-member, per-month healthcare cost increase of $0.003 ($0.002 for pharmacy costs and $0.001 for medical costs). Regarding CD, over a 2-year period outlined by the model, natalizumab produced the highest average time in remission, steroid-free remission, and remission or response in comparison to the other agents. The mean total costs associated with the initiation of natalizumab, infliximab, and adalimumab were $68,372, $62,090, and $61,796, respectively. Although natalizumab's costs were higher, the mean time spent in remission while on this medication was 4.5 months, as opposed to 2.4 months for infliximab and 2.9 months with adalimumab. This shift in market share was used to estimate the change in total costs (medical + pharmacy), and the per-member per-month change for the model's base case was calculated to be $0.035. 
The aforementioned cost-effectiveness results for natalizumab in the treatment for CD and MS were limited by the model's predetermined assumptions. These assumptions include anticipated reduction in relapse rate after 2 years of therapy and acquisition costs in the MS model, as well as assuming a certain percentage of patients were primary and secondary failures of TNFalpha inhibitor therapy in the CD model. The evidence presented here demonstrates that natalizumab provides clinical practitioners with another tool in their fight against both MS and CD, albeit by way of a different mechanism of action. After a thorough review of the evidence, the authors find that natalizumab has been shown to be relatively cost effective in the treatment of both conditions from a payer perspective; the therapy adds a new option for those patients for whom conventional treatment was unsuccessful.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
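As a minimal illustration of composing a length error from per-axis contributions (a first-order sketch only; the paper's model and its uncertainty terms are more complete):

```python
import math

def length_error(dx, dy, dz, ex, ey, ez):
    """First-order error of a measured length L = |(dx, dy, dz)| produced by
    per-axis length errors (ex, ey, ez): the vectorial composition is the
    projection of the axis errors onto the measured direction."""
    L = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx * ex + dy * ey + dz * ez) / L

# Sanity check: a uniform relative scale error s on every axis yields s * L.
s = 0.001
err = length_error(3.0, 4.0, 0.0, s * 3.0, s * 4.0, 0.0)  # equals s * 5.0
```

In the paper's framework, the residual variability that such a deterministic composition cannot explain would then be carried in the uncertainty budget rather than corrected.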
Lizarraga, Joy S.; Ockerman, Darwin J.
2010-01-01
The U.S. Geological Survey (USGS), in cooperation with the San Antonio River Authority, the Evergreen Underground Water Conservation District, and the Goliad County Groundwater Conservation District, configured, calibrated, and tested a watershed model for a study area consisting of about 2,150 square miles of the lower San Antonio River watershed in Bexar, Guadalupe, Wilson, Karnes, DeWitt, Goliad, Victoria, and Refugio Counties in south-central Texas. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge using rainfall, potential ET, and upstream discharge data obtained from National Weather Service meteorological stations and USGS streamflow-gaging stations. Additional time-series inputs to the model include wastewater treatment-plant discharges, withdrawals for cropland irrigation, and estimated inflows from springs. Model simulations of streamflow, ET, and groundwater recharge were done for 2000-2007. Because of the complexity of the study area, the lower San Antonio River watershed was divided into four subwatersheds; separate HSPF models were developed for each subwatershed. Simulation of the overall study area involved running simulations of the three upstream models, then running the downstream model. The surficial geology was simplified as nine contiguous water-budget zones to meet model computational limitations and also to define zones for which ET, recharge, and other water-budget information would be output by the model. The model was calibrated and tested using streamflow data from 10 streamflow-gaging stations; additionally, simulated ET was compared with measured ET from a meteorological station west of the study area. The model calibration is considered very good; streamflow volumes were calibrated to within 10 percent of measured streamflow volumes. 
During 2000-2007, the estimated annual mean rainfall for the water-budget zones ranged from 33.7 to 38.5 inches per year; the estimated annual mean rainfall for the entire watershed was 34.3 inches. Using the HSPF model it was estimated that for 2000-2007, less than 10 percent of the annual mean rainfall on the study watershed exited the watershed as streamflow, whereas about 82 percent, or an average of 28.2 inches per year, exited the watershed as ET. Estimated annual mean groundwater recharge for the entire study area was 3.0 inches, or about 9 percent of annual mean rainfall. Estimated annual mean recharge was largest in water-budget zone 3, the zone where the Carrizo Sand outcrops. In water-budget zone 3, the estimated annual mean recharge was 5.1 inches or about 15 percent of annual mean rainfall. Estimated annual mean recharge was smallest in water-budget zone 6, about 1.1 inches or about 3 percent of annual mean rainfall. The Cibolo Creek subwatershed and the subwatershed of the San Antonio River upstream from Cibolo Creek had the largest and smallest basin yields, about 4.8 inches and 1.2 inches, respectively. Estimated annual ET and annual recharge generally increased with increasing annual rainfall. Also, ET was larger in zones 8 and 9, the most downstream zones in the watershed. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error.
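The reported partition of rainfall can be reproduced with simple budget arithmetic. The numbers below are taken from the abstract, and the residual term lumps streamflow together with any storage change.

```python
def budget_shares(rainfall, et, recharge):
    """Partition annual mean rainfall (all in inches/yr) into ET, recharge,
    and a residual (streamflow plus storage change), as fractions of rainfall."""
    residual = rainfall - et - recharge
    return {"et": et / rainfall,
            "recharge": recharge / rainfall,
            "residual": residual / rainfall}

# Watershed-average values for 2000-2007 from the abstract, in inches/yr
shares = budget_shares(34.3, 28.2, 3.0)
```

This recovers the stated proportions: ET about 82% of rainfall, recharge about 9%, and under 10% left for streamflow.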
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
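A one-dimensional caricature of the two filter families may help: Gaussian smoothing spreads (leaks) a basin-confined signal, while a dynamic, mask-based basin filter recovers it. This is only an illustration of the leakage mechanism, not the spherical-harmonic implementation used with GRACE.

```python
import math

def gaussian_filter(field, radius):
    """1-D analogue of Gaussian smoothing applied to a gridded field."""
    n = len(field)
    out = []
    for i in range(n):
        w = [math.exp(-((i - j) / radius) ** 2) for j in range(n)]
        s = sum(w)
        out.append(sum(wj * f for wj, f in zip(w, field)) / s)
    return out

def basin_average(field, mask):
    """Dynamic basin filter analogue: average only where the basin mask is set."""
    vals = [f for f, m in zip(field, mask) if m]
    return sum(vals) / len(vals)

# A unit water-storage signal confined to a basin: Gaussian smoothing leaks
# signal out of the basin, while the mask-based average recovers it exactly.
mask = [0] * 10 + [1] * 5 + [0] * 10
field = [float(m) for m in mask]
smoothed = gaussian_filter(field, 3.0)
leaked = basin_average(smoothed, mask)  # less than 1.0 because of leakage
exact = basin_average(field, mask)      # recovers 1.0
```

The gap between `leaked` and `exact` is the 1-D analogue of the leakage error the study maps globally, and it shrinks as the basin grows relative to the smoothing radius, consistent with the reported dependence on basin size.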
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
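For independent error sources, the decomposition idea reduces to combining component variances and reporting each source's share of the total. The component values below are placeholders, not the paper's numbers.

```python
import math

def error_budget(components):
    """Combine independent error components (given as standard deviations) in
    quadrature and return the total error plus each component's share of the
    total error variance."""
    total_var = sum(s * s for s in components.values())
    total = math.sqrt(total_var)
    shares = {k: s * s / total_var for k, s in components.items()}
    return total, shares

# Illustrative component errors (arbitrary units, not values from the paper):
total, shares = error_budget({"model": 0.3, "sensor": 0.1, "atmosphere": 0.4})
```

The shares sum to one by construction, which is what lets statements like "atmospheric-induced errors are above 43% of the total" be made consistently across IOPs.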
Secondary Forest Age and Tropical Forest Biomass Estimation Using TM
NASA Technical Reports Server (NTRS)
Nelson, R. F.; Kimes, D. S.; Salas, W. A.; Routhier, M.
1999-01-01
The age of secondary forests in the Amazon will become more critical with respect to the estimation of biomass and carbon budgets as tropical forest conversion continues. Multitemporal Thematic Mapper data were used to develop land cover histories for a 33,000 square kilometer area near Ariquemes, Rondonia over a 7 year period from 1989-1995. The age of the secondary forest, a surrogate for the amount of biomass (or carbon) stored above-ground, was found to be unimportant in terms of biomass budget error rates in a forested TM scene which had undergone a 20% conversion to nonforest/agricultural cover types. In such a situation, the 80% of the scene still covered by primary forest accounted for over 98% of the scene biomass. The difference between secondary forest biomass estimates developed with and without age information was inconsequential relative to the estimate of biomass for the entire scene. However, in futuristic scenarios where all of the primary forest has been converted to agriculture and secondary forest (55% and 42%, respectively), the ability to age secondary forest becomes critical. Depending on biomass accumulation rate assumptions, scene biomass budget errors on the order of -10% to +30% are likely if the age of the secondary forests is not taken into account. Single-date TM imagery cannot be used to accurately age secondary forests into single-year classes. A neural network utilizing TM band 2 and three TM spectral-texture measures (bands 3 and 5) predicted secondary forest age over a range of 0-7 years with an RMSE of 1.59 years and an R² (actual vs. predicted) of 0.37. A proposal is made, based on a literature review, to use satellite imagery to identify general secondary forest age groups which, within group, exhibit relatively constant biomass accumulation rates.
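The dominance of primary forest in the scene biomass follows from simple area-weighted arithmetic. The biomass densities below are illustrative round numbers (not values from the study), chosen only to show why an 80% primary-forest scene holds roughly 98% of the biomass.

```python
def primary_forest_share(frac_primary, rho_primary, rho_secondary):
    """Fraction of total scene biomass held in primary forest, given the
    primary-forest area fraction and mean biomass densities (Mg/ha)."""
    primary = frac_primary * rho_primary
    secondary = (1.0 - frac_primary) * rho_secondary
    return primary / (primary + secondary)

# Hypothetical densities: 80% primary at ~300 Mg/ha vs 20% young secondary
# regrowth at ~15 Mg/ha
share = primary_forest_share(0.80, 300.0, 15.0)
```

Because young regrowth carries an order of magnitude less biomass per hectare than primary forest, even a large aging error in the 20% converted fraction barely moves the scene total, which is the abstract's point.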
NASA Technical Reports Server (NTRS)
Yang, R.; Houser, P.; Joiner, J.
1998-01-01
The surface ground temperature (Tg) is an important meteorological variable, because it represents an integrated thermal state of the land surface determined by a complex surface energy budget. Furthermore, Tg affects both the surface sensible and latent heat fluxes. Through these fluxes, the surface budget is coupled with the atmosphere above. Accurate Tg data are useful for estimating the surface radiation budget and fluxes, as well as soil moisture. Tg is not included in conventional synoptic weather station reports. Currently, satellites provide Tg estimates globally. It is necessary to carefully consider appropriate methods of using these satellite data in a data assimilation system. Recently, an Off-line Land surface GEOS Assimilation (OLGA) system was implemented at the Data Assimilation Office at NASA-GSFC. One of the goals of OLGA is to assimilate satellite-derived Tg data. Prior to the Tg assimilation, a thorough investigation of satellite- and model-derived Tg, including error estimates, is required. In this study we examine Tg from the International Satellite Cloud Climatology Project (ISCCP) D1 data and the OLGA simulations. The ISCCP data used here are 3-hourly D1 data (2.5x2.5 degree resolution) for 1992 summer months (June, July, and August) and winter months (January and February). The model Tg for the same periods were generated by OLGA. The forcing data for this OLGA 1992 simulation were generated from the GEOS-1 Data Assimilation System (DAS) at the Data Assimilation Office, NASA-GSFC. We examine the discrepancies between ISCCP and OLGA Tg with a focus on their spatial and temporal characteristics, particularly the diurnal cycle. The error statistics in both data sets, including bias, will be estimated. The impact of surface properties, including vegetation cover and type, topography, etc., on the discrepancies will be addressed.
NASA Astrophysics Data System (ADS)
Doytchinov, I.; Tonnellier, X.; Shore, P.; Nicquevert, B.; Modena, M.; Mainaud Durand, H.
2018-05-01
Micrometric assembly and alignment requirements for future particle accelerators, and especially large assemblies, create the need for accurate uncertainty budgeting of alignment measurements. Measurements and their uncertainties have to be accurately stated and traceable to international standards, at the level of tens of µm over metre-sized assemblies. Indeed, these hundreds of assemblies will be produced and measured by several suppliers around the world, and will have to be integrated into a single machine. As part of the PACMAN project at CERN, we proposed and studied a practical application of probabilistic modelling of task-specific alignment uncertainty by applying a simulation-by-constraints calibration method. Using this method, we calibrated our measurement model using available data from ISO standardised tests (10360 series) for the metrology equipment. We combined this model with reference measurements and analysis of the measured data to quantify the actual specific uncertainty of each alignment measurement procedure. Our methodology was successfully validated against a calibrated and traceable 3D artefact as part of an international inter-laboratory study. The validated models were used to study the expected alignment uncertainty and important sensitivity factors in measuring the shortest and longest of the Compact Linear Collider study assemblies, 0.54 m and 2.1 m long respectively. In both cases, the laboratory alignment uncertainty was within the targeted uncertainty budget of 12 µm (68% confidence level). It was found that the remaining uncertainty budget for any additional alignment error compensations, such as the thermal drift error due to variation in machine operation heat load conditions, must be within 8.9 µm and 9.8 µm (68% confidence level), respectively.
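If contributions are assumed independent and combined in quadrature (an assumption for illustration; the paper's budgeting may allocate differently), the budget remaining for additional compensations follows by subtraction in quadrature. The 8.05 µm laboratory uncertainty below is hypothetical, chosen only so the sketch reproduces the reported 8.9 µm remainder.

```python
import math

def remaining_budget(total_budget, achieved_uncertainty):
    """Uncertainty budget left for additional error compensations, assuming
    independent contributions that combine in quadrature (all in µm, at the
    same confidence level)."""
    return math.sqrt(total_budget ** 2 - achieved_uncertainty ** 2)

# With a 12 um total budget and a hypothetical 8.05 um laboratory alignment
# uncertainty, about 8.9 um remains for, e.g., thermal drift compensation.
left = remaining_budget(12.0, 8.05)
```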
A comparison of advanced overlay technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Smith, Nigel; Goelzer, Gary; Liu, Zhuan; Li, Jie; Tan, Asher; Koh, Chin Hwee
2010-03-01
The extension of optical lithography to 22nm and beyond by Double Patterning Technology is often challenged by CDU and overlay control. With reduced overlay measurement error budgets in the sub-nm range, relying on traditional Total Measurement Uncertainty (TMU) estimates alone is no longer sufficient. In this paper we report scatterometry overlay measurement data from a set of twelve test wafers, using four different target designs. The TMU of these measurements is under 0.4nm, within the process control requirements for the 22nm node. Comparing the measurement differences between DBO targets (using empirical and model-based analysis) and with image-based overlay data indicates the presence of systematic and random measurement errors that exceed the TMU estimate.
Wright, Scott A.; Grams, Paul E.
2010-01-01
This report describes numerical modeling simulations of sand transport and sand budgets for reaches of the Colorado River below Glen Canyon Dam. Two hypothetical Water Year 2011 annual release volumes were each evaluated with six hypothetical operational scenarios. The six operational scenarios include the current operation, scenarios with modifications to the monthly distribution of releases, and scenarios with modifications to daily flow fluctuations. Uncertainties in model predictions were evaluated by conducting simulations with error estimates for tributary inputs and mainstem transport rates. The modeling results illustrate the dependence of sand transport rates and sand budgets on the annual release volumes as well as the within year operating rules. The six operational scenarios were ranked with respect to the predicted annual sand budgets for Marble Canyon and eastern Grand Canyon reaches. While the actual WY 2011 annual release volume and levels of tributary inputs are unknown, the hypothetical conditions simulated and reported herein provide reasonable comparisons between the operational scenarios, in a relative sense, that may be used by decision makers within the Glen Canyon Dam Adaptive Management Program.
Developing an Earth system Inverse model for the Earth's energy and water budgets.
NASA Astrophysics Data System (ADS)
Haines, K.; Thomas, C.; Liu, C.; Allan, R. P.; Carneiro, D. M.
2017-12-01
The CONCEPT-Heat project aims at developing a consistent energy budget for the Earth system in order to better understand and quantify global change. We advocate a variational "Earth system inverse" solution as the best methodology to bring the necessary expertise from different disciplines together. L'Ecuyer et al (2015) and Rodell et al (2015) first used a variational approach to adjust multiple satellite data products for air-sea-land vertical fluxes of heat and freshwater, achieving closed budgets on a regional and global scale. However their treatment of horizontal energy and water redistribution and its uncertainties was limited. Following the recent work of Liu et al (2015, 2017) which used atmospheric reanalysis convergences to derive a new total surface heat flux product from top of atmosphere fluxes, we have revisited the variational budget approach introducing a more extensive analysis of the role of horizontal transports of heat and freshwater, using multiple atmospheric and ocean reanalysis products. We find considerable improvements in fluxes in regions such as the North Atlantic and Arctic, for example requiring higher atmospheric heat and water convergences over the Arctic than given by ERA-Interim, thereby allowing lower and more realistic oceanic transports. We explore using the variational uncertainty analysis to produce lower resolution corrections to higher resolution flux products and test these against in situ flux data. We also explore the covariance errors implied between component fluxes that are imposed by the regional budget constraints. Finally we propose this as a valuable methodology for developing consistent observational constraints on the energy and water budgets in climate models. We take a first look at the same regional budget quantities in CMIP5 models and consider the implications of the differences for the processes and biases active in the models. 
Many further avenues of investigation are possible focused on better valuing the uncertainties in observational flux products and setting requirement targets for future observation programs.
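The core of a single-constraint variational closure can be sketched as a minimum-weighted-change adjustment: each flux estimate moves in proportion to its error variance until the budget residual vanishes. The fluxes and uncertainties below are toy numbers; real applications such as those cited above carry many coupled regional constraints and error covariances.

```python
def closure_adjust(fluxes, sigmas):
    """One-constraint variational adjustment: nudge each flux estimate in
    proportion to its error variance so the budget residual (sum of fluxes)
    vanishes. This is the minimum weighted-squared-change solution."""
    residual = sum(fluxes)
    total_var = sum(s * s for s in sigmas)
    lam = residual / total_var
    return [x - lam * s * s for x, s in zip(fluxes, sigmas)]

# Toy surface budget whose terms should sum to zero (arbitrary units):
adj = closure_adjust([100.0, -90.0, -7.0], [5.0, 4.0, 1.0])
```

Note that the most uncertain term absorbs most of the correction, which is exactly how such schemes let well-constrained observations (e.g. top-of-atmosphere fluxes) discipline poorly known ones.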
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudoinverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudoinverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum variance and essentially unbiased radiance estimates.
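A toy version of the pseudoinverse estimator: with a 3x2 channel response matrix (columns for the shortwave and longwave components), the least-squares solution recovers the two band radiances from three channel measurements. The response values below are invented for illustration, not ERBE characterization data.

```python
def pseudoinverse_estimate(response, measurements):
    """Least-squares radiance estimate from three channel readings: solve the
    normal equations (A^T A) x = A^T m for a 3x2 response matrix A whose
    columns correspond to shortwave and longwave radiance."""
    a = response
    g11 = sum(r[0] * r[0] for r in a)
    g22 = sum(r[1] * r[1] for r in a)
    g12 = sum(r[0] * r[1] for r in a)
    b1 = sum(r[0] * m for r, m in zip(a, measurements))
    b2 = sum(r[1] * m for r, m in zip(a, measurements))
    det = g11 * g22 - g12 * g12
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

# Idealized SW-only, LW-only, and "total" channels; noiseless measurements
# built from (sw, lw) = (100, 80) are recovered exactly.
a = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.95]]
m = [100.0, 80.0, 0.9 * 100.0 + 0.95 * 80.0]
sw, lw = pseudoinverse_estimate(a, m)
```

In the real filter the response matrix is scene-type and geometry dependent, which is why the estimator consumes the IFOV scene identification and the a priori channel response models.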
Advancing Technology for Starlight Suppression via an External Occulter
NASA Technical Reports Server (NTRS)
Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.;
2011-01-01
External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for an equivalently performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date building the prototype petal.
Space shuttle entry and landing navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Crawford, B. S.
1974-01-01
A navigation system for the entry phase of a Space Shuttle mission is evaluated: an aided-inertial system that uses a Kalman filter to mix IMU data with data derived from external navigation aids. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end of blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
In-die mask registration measurement on 28nm-node and beyond
NASA Astrophysics Data System (ADS)
Chen, Shen Hung; Cheng, Yung Feng; Chen, Ming Jui
2013-09-01
As semiconductor technology moves to smaller nodes, process critical dimensions (CD) become smaller and smaller. For lithography, RET (Resolution Enhancement Technology) applications can be used for wafer printing of smaller CD/pitch at the 28nm node and beyond. SMO (Source Mask Optimization), DPT (Double Patterning Technology) and SADP (Self-Aligned Double Patterning) can provide lower k1 values for lithography. At the same time, image placement error and overlay control also become more and more important for smaller chip sizes (advanced nodes). Mask registration (image placement error) and mask overlay are important factors affecting wafer overlay control and performance, especially for DPT or SADP. In the traditional method, designed registration marks (cross type, square type) with larger CD are placed in the scribe line of the mask frame for registration and overlay measurement. However, these patterns are far from the real patterns: they do not show the registration of the real pattern directly, which makes the method unconvincing. In this study, in-die (in-chip) registration measurement is introduced. We extract dummy patterns that are close to the main pattern from the post-OPC (Optical Proximity Correction) gds according to a chosen rule, and select patterns that are distributed uniformly over the whole mask. A convergence test shows that a 100-point measurement gives reliable results.
Giblin, Jay; Syed, Muhammad; Banning, Michael T; Kuno, Masaru; Hartland, Greg
2010-01-26
Absorption cross sections (σ_abs) of single branched CdSe nanowires (NWs) have been measured by photothermal heterodyne imaging (PHI). Specifically, PHI signals from isolated gold nanoparticles (NPs) with known cross sections were compared to those of individual CdSe NWs excited at 532 nm. This allowed us to determine average NW absorption cross sections at 532 nm of σ_abs = (3.17 +/- 0.44) x 10^-11 cm^2/µm (standard error reported). This agrees well with a theoretical value obtained using a classical electromagnetic analysis (σ_abs = 5.00 x 10^-11 cm^2/µm) and also with prior ensemble estimates. Furthermore, NWs exhibit significant absorption polarization sensitivities consistent with prior NW excitation polarization anisotropy measurements. This has enabled additional estimates of the absorption cross section parallel (σ_abs,∥) and perpendicular (σ_abs,⊥) to the NW growth axis, as well as the corresponding NW absorption anisotropy (ρ_abs). Resulting values of σ_abs,∥ = (5.6 +/- 1.1) x 10^-11 cm^2/µm, σ_abs,⊥ = (1.26 +/- 0.21) x 10^-11 cm^2/µm, and ρ_abs = 0.63 +/- 0.04 (standard errors reported) are again in good agreement with theoretical predictions. These measurements all indicate sizable NW absorption cross sections and ultimately suggest the possibility of future direct single NW absorption studies.
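The reported anisotropy is consistent with the standard defining ratio, which can be checked directly using the abstract's parallel and perpendicular cross sections (both in units of 10^-11 cm^2/µm):

```python
def absorption_anisotropy(sigma_par, sigma_perp):
    """Absorption anisotropy rho = (s_par - s_perp) / (s_par + s_perp),
    the standard polarization-anisotropy ratio for nanowire absorption."""
    return (sigma_par - sigma_perp) / (sigma_par + sigma_perp)

# Cross sections from the abstract, in units of 1e-11 cm^2/um:
rho = absorption_anisotropy(5.6, 1.26)  # close to the reported 0.63
```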
NASA Technical Reports Server (NTRS)
1990-01-01
Cost estimates for phase C/D of the laser atmospheric wind sounder (LAWS) program are presented. This information provides a framework for cost, budget, and program planning estimates for LAWS. Volume 3 is divided into three sections. Section 1 details the approach taken to produce the cost figures, including the assumptions regarding the schedule for phase C/D and the methodology and rationale for costing the various work breakdown structure (WBS) elements. Section 2 shows a breakdown of the cost by WBS element, with the cost divided into non-recurring and recurring expenditures. Note that throughout this volume the cost is given in 1990 dollars, with bottom line totals also expressed in 1988 dollars (1 dollar(88) = 0.931 dollar(90)). Section 3 shows a breakdown of the cost by year. The WBS and WBS dictionary are included as an attachment to this report.
Overlay improvement by exposure map based mask registration optimization
NASA Astrophysics Data System (ADS)
Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric
2015-03-01
Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, as DPT (Double Patterning Technology) has been adopted for advanced technology nodes such as 28nm and 14nm, the corresponding overlay budget becomes even tighter. [2][3] After introducing in-die mask registration (pattern placement) measurement and analyzing the data with a KLA SOV (sources of variation) model tool, it is observed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay in a 28nm process. [4][5] Mask registration optimization can therefore substantially improve wafer overlay performance. It was reported that a laser-based registration control (RegC) process could be applied after pattern generation or after pellicle mounting, allowing fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering the mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept at a low level, the in-die mask registration residual error (3 sigma) stays below 5nm regardless of the blank type and the writer POSCOR (position correction) file applied; this indicates that the random error induced by material or equipment occupies a relatively fixed portion of the mask registration error budget. In real production, comparing mask registration differences across critical production layers reveals that the registration residual error of line/space layers with higher pattern density is consistently much larger than that of contact-hole layers with lower pattern density.
Additionally, the mask registration difference between layers with similar pattern density can also be kept below 5nm. We assume that the mask registration error, excluding the random component, is mostly induced by charge accumulation during mask writing, which may be estimated from the surrounding exposed pattern density. Registration results from a multi-loading test mask show that, with an x-direction writing sequence, registration behavior in the x direction is mainly related to the sequence direction, while registration in the y direction is strongly affected by the pattern density distribution map. This confirms that part of the mask registration error is due to charging from the nearby environment. If the exposure sequence is chip by chip, as in a typical multi-chip layout, mask registration in both the x and y directions is affected analogously, which has also been confirmed by production data. We therefore set up a simple model to predict the mask registration error based on the mask exposure map, and correct it with the given POSCOR (position correction) file before advanced mask writing when needed.
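A density-driven charging model of the kind this abstract describes might be sketched as follows. Everything here (the kernel shape, coupling constant, and map size) is a hypothetical illustration, not the authors' actual model:

```python
import numpy as np

# Hypothetical sketch: predicted placement shift at each mask site is a
# weighted sum of the exposed pattern density nearby (charging influence).
# Kernel and coupling values are invented for illustration only.
def predicted_shift(density, kernel, coupling=1.0):
    """'Same'-size 2-D convolution of an exposure-density map with a kernel."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(density, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # flip for a true convolution
    out = np.zeros_like(density, dtype=float)
    for i in range(density.shape[0]):
        for j in range(density.shape[1]):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * flipped)
    return coupling * out

# A single fully exposed chip in the middle of an otherwise empty 5x5 map:
density = np.zeros((5, 5))
density[2, 2] = 1.0
kernel = np.array([[0.0, 0.1, 0.0],
                   [0.1, 0.5, 0.1],
                   [0.0, 0.1, 0.0]])  # short-range charging influence
shift = predicted_shift(density, kernel)
print(shift[2, 2], shift[1, 2])  # centre site and one of its neighbours
```

A prediction of this form could then be inverted into a POSCOR-style correction table before writing.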
NASA Astrophysics Data System (ADS)
Maurer, Edwin P.; O'Donnell, Greg M.; Lettenmaier, Dennis P.; Roads, John O.
2001-08-01
The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2) to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model, and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow, and by closure is constrained to balance the other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988-1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent exists. NRA1 is subject to errors in the surface water budget due to nudging of modeled soil moisture to an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of persistence of surface water anomalies in much of the basin.
Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than the coupled models. However, the persistence of the residual evapotranspiration is much less than is predicted by the hydrological model or the climate models.
AFGL Atmospheric Constituent Profiles (0.120km)
1986-05-15
compilations and (d) individual constituents. Each species is followed by the set of journal references which contributed either directly or indirectly to...enced materials; those publications that can be associated with particular molecules are so identified. 3. ERROR ESTIMATES/VARIABILITY The practical...budgets, J. Geophys. Res., 88, 10785-10807. [NO, NO2, HNO3, NO3] Louisnard, N., Fergant, G., Girard, A., Gramont, L., Lado-Bordowsky, O., Laurent, J
1993-04-01
determining effective group functioning, leader-group interaction, and decision making; (2) factors that determine effective, low-error human performance...infectious disease and biological defense vaccines and drugs, vision, neurotoxins, neurochemistry, molecular neurobiology, neurodegenerative diseases...Potential Rotor/Comprehensive Analysis Model for Rotor Aerodynamics-Johnson Aeronautics (FPR/CAMRAD-JA) code to predict Blade Vortex Interaction (BVI
JASMINE: Data analysis and simulation
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Sako, Nobutada; Jasmine Working Group
JASMINE will study the structure and evolution of the Milky Way Galaxy. To accomplish these objectives JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μas at z = 14 mag. In this paper methods for data analysis and error budgets, on-board data handling such as sampling strategy and data compression, and simulation software for end-to-end simulation are presented.
In Situ Metrology for the Corrective Polishing of Replicating Mandrels
2010-06-08
Presented at Mirror Technology Days, Boulder, Colorado, USA, 7-9 June 2010. The International X-ray Observatory (IXO) will require mandrel metrology with extremely tight tolerances on mirrors with up to 1.6 meter radii...ideal. Error budgets for the IXO mirror segments are presented. A potential solution is presented that uses a voice-coil controlled gauging head, air
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors increase, with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from smoothing (blurring) of features at or below the footprint size, and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
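The decomposition into blurring and aliasing errors can be illustrated with a small Fourier-domain sketch. The power-law spectrum, Gaussian footprint transfer function, and Nyquist frequency below are assumed stand-ins for illustration, not CERES values:

```python
import numpy as np

# Illustrative 1-D radiance spectrum S(f) and a Gaussian footprint transfer
# function H(f); both are invented stand-ins, not CERES data.
f = np.linspace(0.01, 5.0, 2000)             # spatial frequency axis
S = 1.0 / f**2                               # assumed power-law radiance spectrum
footprint = 0.5                              # footprint scale (same length units)
H = np.exp(-(np.pi * footprint * f)**2 / 2)  # footprint transfer function

f_nyquist = 1.0                              # set by the data-point spacing

# Blurring: spectral energy attenuated by the transfer function below Nyquist.
blur = np.trapz(S * (1 - H)**2 * (f < f_nyquist), f)
# Aliasing: energy above Nyquist that folds back into the sampled band.
alias = np.trapz(S * H**2 * (f >= f_nyquist), f)
total = np.trapz(S, f)

print(f"blurring fraction: {blur / total:.3f}")
print(f"aliasing fraction: {alias / total:.3f}")
```

Growing the footprint with nadir angle widens the attenuation (more blurring) while the sampling spacing controls the aliased energy, which is the trade-off the paper quantifies.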
Kennedy Space Center Timing and Countdown Interface to Kennedy Ground Control Subsystem
NASA Technical Reports Server (NTRS)
Olsen, James C.
2015-01-01
Kennedy Ground Control System (KGCS) engineers at the National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC) are developing a time-tagging process to enable reconstruction of the events during a launch countdown. Such a process can be useful in the case of anomalies or other situations where it is necessary to know the exact time an event occurred. It is thus critical for the timing information to be accurate. KGCS will synchronize all items with Coordinated Universal Time (UTC) obtained from the Timing and Countdown (T&CD) organization. Network Time Protocol (NTP) is the protocol currently in place for synchronizing UTC. However, NTP has a peak error that is too high for today's standards. Precision Time Protocol (PTP) is a newer protocol with a much smaller peak error. The focus of this project has been to implement a PTP solution on the network to increase timing accuracy while introducing and configuring the implementation of a firewall between T&CD and the KGCS network.
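Both NTP and PTP derive the clock offset from a two-way exchange of four timestamps (PTP adds hardware timestamping, which is what shrinks its peak error). A minimal sketch of that shared calculation, with made-up timestamps rather than KGCS code, is:

```python
# Two-way time-transfer offset/delay estimate, as used (with different
# transports and hardware support) by both NTP and PTP.
# t1/t4 are taken on the client clock, t2/t3 on the server clock.
def offset_and_delay(t1, t2, t3, t4):
    """t1: request sent, t2: request received, t3: reply sent, t4: reply received."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # client clock error vs. server
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Illustrative numbers: client clock 5 ms ahead, symmetric 2 ms one-way path,
# 1 ms server processing time.
off, dly = offset_and_delay(t1=100.000, t2=99.997, t3=99.998, t4=100.005)
print(round(off, 6), round(dly, 6))  # -> -0.005 0.004
```

The formula assumes a symmetric path; asymmetry goes straight into the offset error, which is one reason hardware-assisted PTP outperforms software NTP.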
DOE Office of Scientific and Technical Information (OSTI.GOV)
Notz, Dirk; Jahn, Alexandra; Holland, Marika
2016-09-23
A better understanding of the role of sea ice for the changing climate of our planet is the central aim of the diagnostic Coupled Model Intercomparison Project 6 (CMIP6)-endorsed Sea-Ice Model Intercomparison Project (SIMIP). To reach this aim, SIMIP requests sea-ice-related variables from climate-model simulations that allow for a better understanding and, ultimately, improvement of biases and errors in sea-ice simulations with large-scale climate models. This then allows us to better understand to what degree CMIP6 model simulations relate to reality, thus improving our confidence in answering sea-ice-related questions based on these simulations. Furthermore, the SIMIP protocol provides a standard for sea-ice model output that will streamline and hence simplify the analysis of the simulated sea-ice evolution in research projects independent of CMIP. To reach its aims, SIMIP provides a structured list of model output that allows for an examination of the three main budgets that govern the evolution of sea ice, namely the heat budget, the momentum budget, and the mass budget. Finally, we explain the aims of SIMIP in more detail and outline how its design allows us to answer some of the most pressing questions that sea ice still poses to the international climate-research community.
High-frequency variations in Earth rotation and the planetary momentum budget
NASA Technical Reports Server (NTRS)
Rosen, Richard D.
1995-01-01
The major focus of the subject contract was on helping to resolve one of the more notable discrepancies still existing in the axial momentum budget of the solid Earth-atmosphere system, namely the disappearance of coherence between length-of-day (l.o.d.) and atmospheric angular momentum (AAM) at periods shorter than about a fortnight. Recognizing the importance of identifying the source of the high-frequency momentum budget anomaly, the scientific community organized two special measurement campaigns (SEARCH '92 and CONT '94) to obtain the best possible determinations of l.o.d. and AAM. An additional goal was to analyze newly developed estimates of the torques that transfer momentum between the atmosphere and its underlying surface to determine whether the ocean might be a reservoir of momentum on short time scales. Discrepancies between AAM and l.o.d. at sub-fortnightly periods have been attributed to either measurement errors in these quantities or the need to incorporate oceanic angular momentum into the planetary budget. Results from the SEARCH '92 and CONT '94 campaigns suggest that when special attention is paid to the quality of the measurements, better agreement between l.o.d. and AAM at high frequencies can be obtained. The mechanism most responsible for the high-frequency changes observed in AAM during these campaigns involves a direct coupling to the solid Earth, i.e., the mountain torque, thereby obviating a significant oceanic role.
NASA Astrophysics Data System (ADS)
2005-01-01
WE RECOMMEND Advancing Physics CD Quick Tour This software makes the Advancing Physics CD easier to use. From Silicon to Computer This CD on computer technology operates like an electronic textbook. Powers of Ten This documentary film gives pupils a feel for the scale of our universe. Multimedia Waves The material on this CD demonstrates various wave phenomena. Infrared thermometer This instant response, remote sensor has numerous lab applications. Magic Universe, The Oxford Guide to Modern Science A collection of short essays, this book is aimed at A-level students. Fermi Remembered A joy to read, this piece of non-fiction leaves you eager for more. Big Bang (lecture and book) Both the book and the lecture are engaging and hugely entertaining. WORTH A LOOK The Way Things Go Lasting just 30 minutes, this film will liven up any mechanics lesson. The Video Encyclopaedia of Physics Demonstrations It may blow your budget, but this DVD is a superb physics resource. Go!Link and Go!Temp Go!Link is a useful, cheap datalogger. Go!Temp seems superfluous. Cracker snaps Cheap and cheerful, cracker snaps can be used to demonstrate force. VPython This 3D animation freeware can be adapted to fit your needs. HANDLE WITH CARE Physics A-Level Presentations It might be better to generate slides yourself rather than modify these. London Planetarium and Madame Tussaud's A day out here is definitely not a worthwhile science excursion.
[Use of medical inpatient services by heavy users: a case of hypochondriasis].
Höfer, Peter; Ossege, Michael; Aigner, Martin
2012-01-01
Hypochondriasis is defined by ICD-10 and DSM-IV as the persistent preoccupation with the possibility of having one or more serious and progressive physical disorders. Patients suffering from hypochondriasis can be responsible for high utilization of mental health system services. Data have shown that "Heavy Users" require a disproportionate share of inpatient admissions and mental health budget costs. We assume that a psychotherapeutic approach targeting a cognitive behavioral model, in combination with neuropsychopharmacological treatment, is useful. In our case report we present the "Heavy Using" phenomenon based on a patient hospitalized predominantly in neurological inpatient care facilities. From a medical point of view, we point out possible treatment errors; on the other hand, we draw attention to financial and socioeconomic factors leading to a massive burden on the global mental health budget.
Improvements in lake water budget computations using Landsat data
NASA Technical Reports Server (NTRS)
Gervin, J. C.; Shih, S. F.
1979-01-01
A supervised multispectral classification was performed on Landsat data for Lake Okeechobee's extensive littoral zone to provide two types of information. First, the acreage of a given plant species as measured by satellite was combined with a more accurate transpiration rate to give a better estimate of evapotranspiration from the littoral zone. Second, the surface area covered by plant communities was used to develop a better estimate of the water surface as a function of lake stage. Based on this information, more detailed representations of evapotranspiration and total water surface (and hence total lake volume) were provided to the water balance budget model for lake volume predictions. The model results based on satellite-derived information demonstrated a 94 percent reduction in cumulative lake stage error and a 70 percent reduction in the maximum deviation of the lake stage.
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
Future X-ray observatory missions, such as the International X-ray Observatory, require grazing incidence replicated optics of extremely large collecting area (3 m2) in combination with an angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, a model for localized polishing based on a dwell-time approach, driven by the surface error profile, is developed. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
Post, Steven R; Post, Ginell R; Nikolic, Dejan; Owens, Rebecca; Insuasti-Beltran, Giovanni
2018-03-24
Despite increased usage of multiparameter flow cytometry (MFC) to assess diagnosis, prognosis, and therapeutic efficacy (minimal residual disease, MRD) in plasma cell neoplasms (PCNs), standardization of methodology and data analysis is suboptimal. We investigated the utility of using the mean and median fluorescence intensities (FI) obtained from MFC to objectively describe parameters that distinguish plasma cell (PC) phenotypes. In this retrospective study, flow cytometry results from bone marrow aspirate specimens from 570 patients referred to the Myeloma Institute at UAMS were evaluated. Mean and median FI data were obtained from 8-color MFC of non-neoplastic, malignant, and mixed PC populations using antibodies to CD38, CD138, CD19, CD20, CD27, CD45, CD56, and CD81. Of the 570 cases, 252 showed only non-neoplastic PCs, 168 showed only malignant PCs, and 150 showed mixed PC populations. Statistical analysis of median FI data for each CD marker showed no difference in expression intensity on non-neoplastic and malignant PCs between pure and mixed PC populations. ROC analysis of the median FI of CD expression in non-neoplastic and malignant PCs was used to develop an algorithm to convert quantitative FI values to qualitative assessments including "negative," "positive," "dim," and "heterogeneous" expression. FI data derived from 8-color MFC can be used to define marker expression on PCs. Translation of FI data from Infinicyt software to an Excel worksheet streamlines workflow and eliminates transcriptional errors when generating flow reports. © 2018 International Clinical Cytometry Society.
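A conversion of this kind might be sketched as follows. The cut-off values and the coefficient-of-variation rule for "heterogeneous" are hypothetical placeholders; the paper derives its actual thresholds from ROC analysis of the 570-case cohort:

```python
# Hypothetical sketch of converting a plasma-cell population's median
# fluorescence intensity (FI) into a qualitative call. The numeric
# thresholds below are illustrative placeholders, NOT the published cut-offs.
def classify_marker(median_fi, cv, neg_cutoff=100.0, dim_cutoff=1000.0,
                    hetero_cv=0.8):
    """median_fi: median FI of the population; cv: coefficient of variation."""
    if cv > hetero_cv:
        return "heterogeneous"   # broad FI spread within the population
    if median_fi < neg_cutoff:
        return "negative"
    if median_fi < dim_cutoff:
        return "dim"
    return "positive"

print(classify_marker(median_fi=50.0, cv=0.2))    # -> negative
print(classify_marker(median_fi=500.0, cv=0.2))   # -> dim
print(classify_marker(median_fi=5000.0, cv=0.2))  # -> positive
print(classify_marker(median_fi=5000.0, cv=1.5))  # -> heterogeneous
```

In practice each CD marker would get its own ROC-derived cut-offs, and the mapping would run once per marker per population when the report is generated.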
Patterned wafer geometry grouping for improved overlay control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.
2017-03-01
Process-induced overlay errors from outside the litho cell, including those from non-uniform wafer stress, have become a significant contributor to the overlay error budget. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. A key challenge of volume semiconductor manufacturing is improving not only the magnitude of these signatures but also the wafer-to-wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process-induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.
Daly, Rich
2011-11-21
Providers say the administration's growing emphasis on billing audits is pushing them to the limit and threatens to increase their costs. Many billing problems stem from simple errors, not fraud, they say. "When you get into the nuts and bolts of some of these programs you realize it's not as easy as taking the overpayment line out of the budget," says Michael Regier, of VHA.
2015-01-01
emissivity and the radiative intensity of the gas over a spectral band. The temperature is then calculated from the Planck function. The technique does not...pressure budget for cooling channels reduces pump horsepower and turbine inlet temperature...Status of Modeling and Simulation: the existing data set for film cooling effectiveness consists of wall heat flux measurements; CFD
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1982-01-01
We investigated the improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines with GPS geodetic receivers would be only about 1 cm.
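The quoted ~1 cm contribution is consistent with the usual rule of thumb that baseline error scales as orbit error times the ratio of baseline length to receiver-satellite distance; the ~20,000 km GPS range used below is our assumption, not stated in the abstract:

```python
# Rule-of-thumb for relative positioning: baseline error is roughly the
# orbit error scaled by (baseline length / receiver-satellite distance).
orbit_error_m = 0.20          # 20 cm horizontal orbit accuracy (from the study)
baseline_km = 1000.0          # baseline length considered in the study
gps_range_km = 20_000.0       # approximate receiver-to-GPS-satellite distance

baseline_error_m = orbit_error_m * (baseline_km / gps_range_km)
print(f"{baseline_error_m * 100:.0f} cm")  # -> 1 cm, matching the abstract
```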
Simultaneous orbit determination
NASA Technical Reports Server (NTRS)
Wright, J. R.
1988-01-01
Simultaneous orbit determination is demonstrated using live range and Doppler data for the NASA/Goddard tracking configuration defined by the White Sands Ground Terminal (WSGT), the Tracking and Data Relay Satellite (TDRS), and the Earth Radiation Budget Satellite (ERBS). A physically connected sequential filter-smoother was developed for this demonstration. Rigorous necessary conditions are used to show that the state error covariance functions are realistic; and this enables the assessment of orbit estimation accuracies for both TDRS and ERBS.
Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing
2006-09-01
Ranganathan, "Power Management – Guest Lecture for CS4135, NPS," Naval Postgraduate School, Nov 2004. [32] R. L. Phelps, "Operational Experiences with the...undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant
NASA Astrophysics Data System (ADS)
Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; Song, Yang; Karion, Anna; Oda, Tomohiro; Patarasuk, Risa; Razlivanov, Igor; Sarmiento, Daniel; Shepson, Paul; Sweeney, Colm; Turnbull, Jocelyn; Wu, Kai
2016-05-01
Based on a uniquely dense network of surface towers continuously measuring the atmospheric concentrations of greenhouse gases (GHGs), we developed the first comprehensive monitoring system of CO2 emissions at high resolution over the city of Indianapolis. The urban inversion evaluated over the 2012-2013 dormant season showed a statistically significant increase of about 20% (from 4.5 to 5.7 MtC ± 0.23 MtC) compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product. Spatial structures in prior emission errors, mostly undetermined, appeared to affect the spatial pattern in the inverse solution and the total carbon budget over the entire area by up to 15%, while the inverse solution remains fairly insensitive to the CO2 boundary inflow and to the different prior emissions (i.e., ODIAC). Preceding the surface emission optimization, we improved the atmospheric simulations using a meteorological data assimilation system, which also informs our Bayesian inversion system through updated observation error variances. Finally, we estimated the uncertainties associated with undetermined parameters using an ensemble of inversions. The total CO2 emissions based on the ensemble mean and quartiles (5.26-5.91 MtC) were statistically different from the prior total emissions (4.1 to 4.5 MtC). Considering the relatively small sensitivity to the different parameters, we conclude that atmospheric inversions can potentially constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHGs over the city, but additional information on prior emission error structures is required to determine the spatial structures of urban emissions at high resolution.
CD44s and CD44v6 Expression in Head and Neck Epithelia
Mack, Brigitte; Gires, Olivier
2008-01-01
Background CD44 splice variants have long been known to be associated with cell transformation. Recently, the standard form of CD44 (CD44s) was shown to be part of the signature of cancer stem cells (CSCs) in colon, breast, and head and neck squamous cell carcinomas (HNSCC). This is somewhat in contradiction to previous reports on the expression of CD44s in HNSCC. The aim of the present study was to clarify the actual pattern of CD44 expression in head and neck epithelia. Methods Expression of CD44s and CD44v6 was analysed by immunohistochemistry with specific antibodies in primary head and neck tissues. Scoring of all specimens followed a two-parameter system combining percentages of positive cells and staining intensities from − to +++ (score = % × intensity; resulting maximum score 300). In addition, cell surface expression of CD44s and CD44v6 was assessed in lymphocytes and HNSCC. Results In normal epithelia CD44s and CD44v6 were expressed in 60–95% and 50–80% of cells and yielded mean scores with a standard error of the mean (SEM) of 249.5±14.5 and 198±11.13, respectively. In oral leukoplakia and in moderately differentiated carcinomas CD44s and CD44v6 levels were slightly increased (278.9±7.16 and 242±11.7; 291.8±5.88 and 287.3±6.88). Carcinomas in situ displayed unchanged levels of both proteins, whereas poorly differentiated carcinomas consistently expressed diminished CD44s and CD44v6 levels. Lymphocytes and HNSCC lines strongly expressed CD44s but not CD44v6. Conclusion CD44s and CD44v6 expression does not distinguish normal from benign or malignant epithelia of the head and neck. CD44s and CD44v6 were abundantly present in the great majority of cells in head and neck tissues, including carcinomas. Hence, the value of CD44s as a marker for the definition of a small subset of cells (i.e. less than 10%) representing head and neck cancer stem cells may need revision. PMID:18852874
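The two-parameter score can be sketched as follows; the mapping of − to +++ onto 0-3 is inferred from the stated maximum score of 300 (100% × 3), not quoted from the paper:

```python
# Immunohistochemistry scoring sketch: score = (% positive cells) x intensity.
# The -/+/++/+++ to 0-3 mapping is an inference from the stated maximum
# score of 300, i.e. 100% x 3.
INTENSITY = {"-": 0, "+": 1, "++": 2, "+++": 3}

def ihc_score(percent_positive, intensity_grade):
    if not 0 <= percent_positive <= 100:
        raise ValueError("percent_positive must be within 0-100")
    return percent_positive * INTENSITY[intensity_grade]

print(ihc_score(95, "+++"))  # -> 285 (near the 300 maximum)
print(ihc_score(60, "++"))   # -> 120
```

The reported mean scores around 250 for normal epithelia then correspond to most cells staining at strong intensity.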
Edwards, Emily S J; Bier, Julia; Cole, Theresa S; Wong, Melanie; Hsu, Peter; Berglund, Lucinda J; Boztug, Kaan; Lau, Anthony; Gostick, Emma; Price, David A; O'Sullivan, Michael; Meyts, Isabelle; Choo, Sharon; Gray, Paul; Holland, Steven M; Deenick, Elissa K; Uzel, Gulbu; Tangye, Stuart G
2018-05-22
Germline gain-of function (GOF) mutations in PIK3CD, encoding the catalytic p110δ subunit of phosphatidylinositol-3 kinase, result in hyperactivation of the PI3K-AKT-mTOR pathway and underlie a novel inborn error of immunity. Affected individuals exhibit perturbed humoral and cellular immunity, manifesting as recurrent infections, autoimmunity, hepatosplenomegaly, uncontrolled EBV and/or CMV infection, and an increased incidence of B-cell lymphoproliferation and/or lymphoma. Mechanisms underlying disease pathogenesis remain unknown. Understanding the cellular and molecular mechanisms underpinning inefficient surveillance of EBV-infected B cells is required to understand disease in individuals with PIK3CD GOF mutations, identify key molecules required for cell mediated immunity against EBV, and develop immunotherapeutic interventions for the treatment of this as well as other EBV-opathies. We studied the consequences of PIK3CD GOF mutations on the generation, differentiation and function of CD8 + T cells and NK cells, which are implicated in host defense against infection with herpesviruses including EBV. PIK3CD GOF total and EBV-specific CD8 + T cells were skewed towards an effector phenotype, with exaggerated expression of markers associated with premature immunosenescence/exhaustion, and increased susceptibility to re-activation induced cell death. These findings were recapitulated in a novel mouse model of PI3K GOF. NK cells in PIK3CD GOF individuals also exhibited perturbed expression of differentiation-associated molecules. Both CD8 + T cells and NK cells had reduced capacity to kill EBV-infected B cells. PIK3CD GOF B cells had increased expression of CD48, PDL-1/2 and CD70. PIK3CD GOF mutations aberrantly induce exhaustion and/or senescence and impair cytotoxicity of CD8+ T and NK cells. These defects may contribute to clinical features of affected individuals, such as impaired immunity to herpesviruses and tumor surveillance. Copyright © 2018. 
Fast scattering simulation tool for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.
2015-12-01
A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.
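The validation metrics quoted above (global spatial error and maximum scatter-spectrum error) reduce to simple relative-difference calculations between simulated and reference data; a minimal sketch, with made-up spectra rather than the paper's data:

```python
import numpy as np

# Hypothetical reference (e.g., GATE) and simulated scatter spectra in counts
# per energy bin; the values are illustrative, not data from the paper.
reference = np.array([120.0, 340.0, 510.0, 430.0, 260.0, 90.0])
simulated = np.array([118.0, 335.0, 520.0, 425.0, 270.0, 88.0])

# Global (integrated) error: relative difference of total scattered counts.
global_error = abs(simulated.sum() - reference.sum()) / reference.sum()

# Maximum per-bin relative error across the scatter spectrum.
max_spectrum_error = np.max(np.abs(simulated - reference) / reference)

print(f"global error: {global_error:.2%}, max spectrum error: {max_spectrum_error:.2%}")
```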
Recreational Drug Use and T Lymphocyte Subpopulations in HIV-uninfected and HIV-infected Men
Chao, Chun; Jacobson, Lisa P; Tashkin, Donald; Martínez-Maza, Otoniel; Roth, Michael D; Margolick, Joseph B; Chmiel, Joan S; Rinaldo, Charles; Zhang, Zuo-Feng; Detels, Roger
2009-01-01
The effects of recreational drugs on CD4 and CD8 T cells in humans are not well understood. We conducted a longitudinal analysis of men who have sex with men (MSM) enrolled in the Multicenter AIDS Cohort Study to define associations between self-reported use of marijuana, cocaine, poppers and amphetamines, and CD4 and CD8 T cell parameters in both HIV-uninfected and HIV-infected MSM. For the HIV-infected MSM, we used clinical and laboratory data collected semiannually before 1996 to avoid potential effects of antiretroviral treatment. A regression model that allowed random intercepts and slopes as well as autoregressive covariance structure for within subject errors was used. Potential confounders adjusted for included length of follow-up, demographics, tobacco smoking, alcohol use, risky sexual behaviors, history of sexually transmitted infections, and antiviral therapy. We found no clinically meaningful associations between use of marijuana, cocaine, poppers, or amphetamines and CD4 and CD8 T cell counts, percentages, or rates of change in either HIV-uninfected or -infected men. The regression coefficients were of minimum magnitude despite some reaching statistical significance. No threshold effect was detected for frequent (at least weekly) or continuous substance use in the previous year. These results indicate that use of these substances does not adversely affect the numbers and percentages of circulating CD4 or CD8 T cells in either HIV-uninfected or -infected MSM. PMID:18180115
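The autoregressive covariance structure for within-subject errors mentioned above can be sketched with a small simulation: generate per-subject AR(1) error series and recover the lag-1 autocorrelation from the pooled values. Subject count, visit count and the autocorrelation are arbitrary illustration choices, not MACS data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_visits, rho, sigma = 200, 10, 0.6, 1.0

# Simulate within-subject AR(1) error series: e_t = rho * e_{t-1} + noise,
# starting each subject from the stationary distribution.
errors = np.empty((n_subjects, n_visits))
errors[:, 0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2), n_subjects)
for t in range(1, n_visits):
    errors[:, t] = rho * errors[:, t - 1] + rng.normal(0.0, sigma, n_subjects)

# Pooled lag-1 correlation across adjacent visits estimates rho.
x, y = errors[:, :-1].ravel(), errors[:, 1:].ravel()
rho_hat = np.corrcoef(x, y)[0, 1]
print(f"estimated lag-1 autocorrelation: {rho_hat:.2f}")
```

In a full analysis this error structure would sit underneath the random intercepts and slopes of the mixed model; the sketch isolates only the AR(1) piece.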
Intuitive Tools for the Design and Analysis of Communication Payloads for Satellites
NASA Technical Reports Server (NTRS)
Culver, Michael R.; Soong, Christine; Warner, Joseph D.
2014-01-01
In an effort to make future communications satellite payload design more efficient and accessible, two tools were created with intuitive graphical user interfaces (GUIs). The first tool allows payload designers to graphically design their payload by simple drag and drop of payload components onto a design area within the program. Information about each selected component is pulled from a database of common space-qualified communication components sold by commercial companies. Once a design is completed, various reports can be generated, such as the Master Equipment List. The second tool is a link budget calculator designed specifically for ease of use. Other features of this tool include access to a database of NASA ground-based apertures for near-Earth and deep-space communication, the Tracking and Data Relay Satellite System (TDRSS) base apertures, and information about the solar system relevant to link budget calculations. The link budget tool allows for over 50 different combinations of user inputs, eliminating the need for multiple spreadsheets and the user errors associated with using them. Both of the aforementioned tools increase the productivity of space communication systems designers, and are accessible enough to allow non-communication experts to design preliminary communication payloads.
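The core arithmetic behind any such link budget calculator is decibel bookkeeping around the free-space path loss; a minimal sketch, with all parameter values hypothetical rather than taken from the NASA tools:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

def link_margin_db(eirp_dbw, rx_gain_dbi, distance_m, freq_hz,
                   misc_losses_db, required_power_dbw):
    """Received power minus the required power, both in dBW."""
    received = eirp_dbw + rx_gain_dbi - fspl_db(distance_m, freq_hz) - misc_losses_db
    return received - required_power_dbw

# Hypothetical GEO downlink at 12 GHz; the numbers are illustrative only.
margin = link_margin_db(eirp_dbw=50.0, rx_gain_dbi=55.0,
                        distance_m=35_786_000.0, freq_hz=12e9,
                        misc_losses_db=3.0, required_power_dbw=-110.0)
print(f"link margin: {margin:.1f} dB")
```

Doubling the distance adds exactly 20·log10(2) ≈ 6.02 dB of path loss, which is a convenient self-check for any implementation.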
Automatic performance budget: towards a risk reduction
NASA Astrophysics Data System (ADS)
Laporte, Philippe; Blake, Simon; Schmoll, Jürgen; Rulten, Cameron; Savoie, Denis
2014-08-01
In this paper, we discuss the performance matrix of the SST-GATE telescope, developed to allow us to partition and allocate the important characteristics to the various subsystems and to describe the process used to verify that the current design will deliver the required performance. Due to the integrated nature of the telescope, a large number of parameters have to be controlled, and effective calculation tools such as an automatic performance budget must be developed. Its main advantages are alleviating the work of the system engineer when changes occur in the design, avoiding errors during any re-allocation process, and automatically recalculating the scientific performance of the instrument. We explain in this paper the method to convert the ensquared energy (EE) and the signal-to-noise ratio (SNR) required by the science cases into requirements on the "as designed" instrument. To ensure successful design, integration and verification of the next generation of instruments, it is of the utmost importance to have methods to control and manage the instrument's critical performance characteristics from the very early design steps, to limit technical and cost risks in the project development. Such a performance budget is a tool towards this goal.
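Converting an ensquared-energy figure into a delivered SNR typically follows the standard CCD signal-to-noise equation; a hedged sketch, with a simplified noise model (shot, sky and read noise only) and all values illustrative rather than SST-GATE allocations:

```python
import math

def point_source_snr(total_signal_e, ensquared_energy, n_pix,
                     sky_e_per_pix, read_noise_e):
    """SNR of a point source: only the ensquared fraction of the flux lands
    in the n_pix measurement box, while sky and read noise act per pixel."""
    signal = ensquared_energy * total_signal_e
    noise_var = signal + n_pix * (sky_e_per_pix + read_noise_e**2)
    return signal / math.sqrt(noise_var)

# Illustrative numbers: 20k photoelectrons total, 80% EE in a 3x3 pixel box.
snr = point_source_snr(total_signal_e=20000.0, ensquared_energy=0.8,
                       n_pix=9, sky_e_per_pix=50.0, read_noise_e=5.0)
print(f"SNR = {snr:.1f}")
```

This is the direction of the conversion described above: a degradation of EE anywhere in the optical budget propagates directly into a lower delivered SNR.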
Rösler, Lara; Rolfs, Martin; van der Stigchel, Stefan; Neggers, Sebastiaan F. W.; Cahn, Wiepke; Kahn, René S.
2015-01-01
Corollary discharge (CD) refers to “copies” of motor signals sent to sensory areas, allowing prediction of future sensory states. They enable the putative mechanisms supporting the distinction between self-generated and externally generated sensations. Accordingly, many authors have suggested that disturbed CD engenders psychotic symptoms of schizophrenia, which are characterized by agency distortions. CD also supports perceived visual stability across saccadic eye movements and is used to predict the postsaccadic retinal coordinates of visual stimuli, a process called remapping. We tested whether schizophrenia patients (SZP) show remapping disturbances as evidenced by systematic transsaccadic mislocalizations of visual targets. SZP and healthy controls (HC) performed a task in which a saccadic target disappeared upon saccade initiation and, after a brief delay, reappeared at a horizontally displaced position. HC judged the direction of this displacement accurately, despite spatial errors in saccade landing site, indicating that their comparison of the actual to predicted postsaccadic target location relied on accurate CD. SZP performed worse and relied more on saccade landing site as a proxy for the presaccadic target, consistent with disturbed CD. This remapping failure was strongest in patients with more severe psychotic symptoms, consistent with the theoretical link between disturbed CD and phenomenological experiences in schizophrenia. PMID:26108951
NASA Astrophysics Data System (ADS)
Lv, M.; Ma, Z.; Yuan, X.
2017-12-01
It is important to evaluate the water budget closure on the basis of the currently available data, including precipitation, evapotranspiration (ET), runoff, and GRACE-derived terrestrial water storage change (TWSC), before using them to resolve water-related issues. However, it remains challenging to achieve the balance without considering human water use (e.g., inter-basin water diversion and irrigation) in the estimation of other water budget terms such as ET. In this study, the terrestrial water budget closure is tested over the Yellow River Basin (YRB) and Changjiang River Basin (CJB, Yangtze River Basin) of China. First, the actual ET is reconstructed (hereafter, ETrecon) by using the GLDAS-1 land surface models, high-quality observation-based precipitation, naturalized streamflow, and irrigation water. The ETrecon, evaluated using the mean annual water-balance equation, is of good quality, with absolute relative errors of less than 1.9% over the two studied basins. The total basin discharge (Rtotal) is calculated as the residual of the water budget among the observation-based precipitation, ETrecon, and the GRACE-TWSC. The difference between Rtotal and the observed total basin discharge is used to evaluate the budget closure, with consideration of inter-basin water diversion. After the ET reconstruction, the mean absolute imbalance value was reduced from 3.31 cm/year to 1.69 cm/year and from 15.40 cm/year to 1.96 cm/year over the YRB and CJB, respectively. The estimation-to-observation ratios of total basin discharge improved from 180.8% to 86.8% over the YRB, and from 67.0% to 101.1% over the CJB. The proposed ET reconstruction method is applicable to other human-managed river basins to provide an alternative estimation.
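The closure test described above treats total discharge as the residual of the water balance; a minimal sketch of that arithmetic, with made-up numbers rather than the YRB/CJB estimates:

```python
# Annual water-budget terms in cm/year; values are illustrative only,
# not the basin estimates reported in the study.
precipitation = 45.0        # P, observation-based
et_reconstructed = 38.0     # reconstructed ET (ETrecon)
storage_change = 1.5        # GRACE-derived terrestrial water storage change
observed_discharge = 5.0    # observed total basin discharge

# Total discharge inferred as the water-budget residual: R_total = P - ET - dTWS.
residual_discharge = precipitation - et_reconstructed - storage_change

# Closure diagnostics: imbalance and estimation-to-observation ratio.
imbalance = residual_discharge - observed_discharge
ratio = residual_discharge / observed_discharge
print(f"R_total = {residual_discharge} cm/yr, "
      f"imbalance = {imbalance} cm/yr, ratio = {ratio:.0%}")
```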
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning leads to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms are embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
NASA Astrophysics Data System (ADS)
Xia, Youlong; Cosgrove, Brian A.; Mitchell, Kenneth E.; Peters-Lidard, Christa D.; Ek, Michael B.; Kumar, Sujay; Mocko, David; Wei, Helin
2016-01-01
This paper compares the annual and monthly components of the simulated energy budget from the North American Land Data Assimilation System phase 2 (NLDAS-2) with reference products over the domains of the 12 River Forecast Centers (RFCs) of the continental United States (CONUS). The simulations are calculated from both operational and research versions of NLDAS-2. The reference radiation components are obtained from the National Aeronautics and Space Administration Surface Radiation Budget product. The reference sensible and latent heat fluxes are obtained from a multitree ensemble method applied to gridded FLUXNET data from the Max Planck Institute, Germany. As these references are obtained from different data sources, they cannot fully close the energy budget, although the range of closure error is less than 15% for mean annual results. The analysis here demonstrates the usefulness of basin-scale surface energy budget analysis for evaluating model skill and deficiencies. The operational (i.e., Noah, Mosaic, and VIC) and research (i.e., Noah-I and VIC4.0.5) NLDAS-2 land surface models exhibit similarities and differences in depicting basin-averaged energy components. For example, the energy components of the five models have similar seasonal cycles, but with different magnitudes. Generally, Noah and VIC overestimate (underestimate) sensible (latent) heat flux over several RFCs of the eastern CONUS. In contrast, Mosaic underestimates (overestimates) sensible (latent) heat flux over almost all 12 RFCs. The research Noah-I and VIC4.0.5 versions show moderate-to-large improvements (basin and model dependent) relative to their operational versions, which indicates likely pathways for future improvements in the operational NLDAS-2 system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, B.; Boyanov, M.; Bunker, B. A.
2010-08-01
Bulk Cd adsorption isotherm experiments, thermodynamic equilibrium modeling, and Cd K edge EXAFS were used to constrain the mechanisms of proton and Cd adsorption to bacterial cells of the commonly occurring Gram-positive and Gram-negative bacteria, Bacillus subtilis and Shewanella oneidensis, respectively. Potentiometric titrations were used to characterize the functional group reactivity of the S. oneidensis cells, and we model the titration data using the same type of non-electrostatic surface complexation approach as was applied to titrations of B. subtilis suspensions by Fein et al. (2005). Similar to the results for B. subtilis, the S. oneidensis cells exhibit buffering behavior from approximately pH 3-9 that requires the presence of four distinct sites, with pKa values of 3.3 ± 0.2, 4.8 ± 0.2, 6.7 ± 0.4, and 9.4 ± 0.5, and site concentrations of 8.9(±2.6) × 10^-5, 1.3(±0.2) × 10^-4, 5.9(±3.3) × 10^-5, and 1.1(±0.6) × 10^-4 moles/g bacteria (wet mass), respectively. The bulk Cd isotherm adsorption data for both species, conducted at pH 5.9 as a function of Cd concentration at a fixed biomass concentration, were best modeled by reactions with a Cd:site stoichiometry of 1:1. EXAFS data were collected for both bacterial species as a function of Cd concentration at pH 5.9 and 10 g/L bacteria. The EXAFS results show that the same types of binding sites are responsible for Cd sorption to both bacterial species at all Cd loadings tested (1-200 ppm). Carboxyl sites are responsible for the binding at intermediate Cd loadings. Phosphoryl ligands are more important than carboxyl ligands for Cd binding at high Cd loadings. For the lowest Cd loadings studied here, a sulfhydryl site was found to dominate the bound Cd budgets for both species, in addition to the carboxyl and phosphoryl sites that dominate the higher loadings.
The EXAFS results suggest that both Gram-positive and Gram-negative bacterial cell walls have a low concentration of very high-affinity sulfhydryl sites which become masked by the more abundant carboxyl and phosphoryl sites at higher metal:bacteria ratios. This study demonstrates that metal loading plays a vital role in determining the important metal-binding reactions that occur on bacterial cell walls, and that high-affinity, low-density sites can be revealed by spectroscopy of biomass samples. Such sites may control the fate and transport of metals in realistic geologic settings, where metal concentrations are low.
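The 1:1 Cd:site stoichiometry used in the isotherm modeling above can be sketched as a single mass-action equilibrium solved against the site mass balance. The stability constant and conditions below are hypothetical illustration values, not constants fitted in the study.

```python
import math

# Non-electrostatic 1:1 surface complexation sketch for Cd binding to one
# site type (Cd + L <-> CdL) at fixed pH. K and the site density are
# hypothetical; the biomass concentration echoes the EXAFS experiments.
K = 1.0e4              # conditional stability constant, L/mol (assumed)
site_total = 1.3e-4    # mol sites per g biomass (order of the titration values)
biomass = 10.0         # g/L

def sorbed_cd(cd_total):
    """Bound Cd (mol/L) for 1:1 Cd:site stoichiometry at total Cd cd_total."""
    s_tot = site_total * biomass  # mol/L of sites
    # K = b / ((cd_total - b) * (s_tot - b)) rearranges to a quadratic in b.
    a = K
    bq = -(K * (cd_total + s_tot) + 1.0)
    c = K * cd_total * s_tot
    return (-bq - math.sqrt(bq * bq - 4.0 * a * c)) / (2.0 * a)  # physical root

bound = sorbed_cd(1.0e-5)
print(f"bound Cd: {bound:.2e} mol/L of 1e-05 mol/L total")
```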
Noordermeer, Siri D S; Luman, Marjolein; Oosterlaan, Jaap
2016-03-01
Oppositional defiant disorder (ODD) and conduct disorder (CD) are common behavioural disorders in childhood and adolescence and are associated with brain abnormalities. This systematic review and meta-analysis investigates structural (sMRI) and functional MRI (fMRI) findings in individuals with ODD/CD with and without attention-deficit hyperactivity disorder (ADHD). Online databases were searched for controlled studies, resulting in 12 sMRI and 17 fMRI studies. In line with current models on ODD/CD, studies were classified in hot and cool executive functioning (EF). Both the meta-analytic and narrative reviews showed evidence of smaller brain structures and lower brain activity in individuals with ODD/CD in mainly hot EF-related areas: bilateral amygdala, bilateral insula, right striatum, left medial/superior frontal gyrus, and left precuneus. Evidence was present in both structural and functional studies, and irrespective of the presence of ADHD comorbidity. There is strong evidence that abnormalities in the amygdala are specific for ODD/CD as compared to ADHD, and correlational studies further support the association between abnormalities in the amygdala and ODD/CD symptoms. Besides the left precuneus, there was no evidence for abnormalities in typical cool EF related structures, such as the cerebellum and dorsolateral prefrontal cortex. Resulting areas are associated with emotion-processing, error-monitoring, problem-solving and self-control; areas associated with neurocognitive and behavioural deficits implicated in ODD/CD. Our findings confirm the involvement of hot, and to a smaller extent cool, EF associated brain areas in ODD/CD, and support an integrated model for ODD/CD (e.g. Blair, Development and Psychopathology, 17(3), 865-891, 2005).
A staggered-grid convolutional differentiator for elastic wave modelling
NASA Astrophysics Data System (ADS)
Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun
2015-11-01
The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
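The construction described above can be sketched directly: the ideal band-limited staggered-grid differentiator coefficients follow from the sine-series expansion of the first-derivative spectrum, a_m = 4(-1)^(m+1)/(pi*(2m-1)^2), and the infinite stencil is truncated and tapered with a window. The Hanning-type taper below is a simplified stand-in for the paper's optimized Gaussian windows.

```python
import numpy as np

M, h = 8, 0.1                      # half-stencil length and grid spacing
m = np.arange(1, M + 1)

# Ideal band-limited staggered-grid coefficients, then a cosine taper window.
a = 4.0 * (-1.0) ** (m + 1) / (np.pi * (2 * m - 1) ** 2)
w = 0.5 * (1.0 + np.cos(np.pi * (m - 1) / M))
c = a * w

def staggered_derivative(f, x0):
    """Approximate f'(x0) from staggered samples at x0 +/- (2m-1)*h/2."""
    offs = (2 * m - 1) * h / 2.0
    return np.sum(c * (f(x0 + offs) - f(x0 - offs))) / h

# Derivative of sin at x=0 should be cos(0) = 1.
err = abs(staggered_derivative(np.sin, 0.0) - 1.0)
print(f"derivative error at x=0: {err:.2e}")
```

The taper trades a small low-wavenumber bias for suppression of the Gibbs oscillations of the truncated ideal operator; the optimized Gaussian windows in the paper tune exactly this trade-off.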
NASA Astrophysics Data System (ADS)
Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.
2017-07-01
Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height, derived from a texture analysis algorithm applied to the high-resolution Pleiades imager, in the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of AGB derived from census data on average across sites, the canopy height derived from the Pleiades product was spatially too smooth, and thus unable to accurately resolve large height (and biomass) variations within the sites considered. The error budget was evaluated in detail, and systematic errors related to the ORCHIDEE-CAN structure contributed a secondary source of error that could be overcome by using improved allometric equations.
40-Gb/s PAM4 with low-complexity equalizers for next-generation PON systems
NASA Astrophysics Data System (ADS)
Tang, Xizi; Zhou, Ji; Guo, Mengqi; Qi, Jia; Hu, Fan; Qiao, Yaojun; Lu, Yueming
2018-01-01
In this paper, we demonstrate 40-Gb/s four-level pulse amplitude modulation (PAM4) transmission with 10 GHz devices and low-complexity equalizers for next-generation passive optical network (PON) systems. A simple feed-forward equalizer (FFE) and decision feedback equalizer (DFE) enable 20 km fiber transmission, while a high-complexity Volterra algorithm in combination with FFE and DFE can extend the transmission distance to 40 km. A simplified Volterra algorithm is proposed to reduce computational complexity. Simulation results show that the simplified Volterra algorithm reduces up to ∼75% of the computational complexity at a cost of only 0.4 dB of power budget. At a forward error correction (FEC) threshold of 10^-3, we achieve 31.2 dB and 30.8 dB power budget over 40 km fiber transmission using the traditional FFE-DFE-Volterra and our simplified FFE-DFE-Volterra, respectively.
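A feed-forward equalizer of the kind mentioned above is an adaptive FIR filter; the sketch below trains one with LMS against a hypothetical ISI channel carrying PAM4 symbols. The channel taps, step size and noise level are illustrative choices, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# PAM4 symbols through a hypothetical dispersive channel with ISI plus noise.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
tx = rng.choice(levels, size=5000)
channel = np.array([0.9, 0.35, 0.15])            # illustrative ISI taps
rx = np.convolve(tx, channel)[: len(tx)] + rng.normal(0.0, 0.05, len(tx))

# Train an 11-tap FFE with LMS against known training symbols.
n_taps, mu, delay = 11, 1e-3, 5
w = np.zeros(n_taps)
for i in range(n_taps, len(tx)):
    x = rx[i - n_taps + 1 : i + 1][::-1]         # newest sample first
    e = tx[i - delay] - w @ x
    w += mu * e * x

def decide(v):
    """Nearest-level PAM4 symbol decisions."""
    return levels[np.argmin(np.abs(levels[:, None] - v[None, :]), axis=0)]

# Apply the converged taps and compare symbol error rates.
eq = np.array([w @ rx[i - n_taps + 1 : i + 1][::-1] for i in range(n_taps, len(tx))])
ser_eq = np.mean(decide(eq) != tx[n_taps - delay : len(tx) - delay])
ser_raw = np.mean(decide(rx) != tx)
print(f"SER raw = {ser_raw:.3f}, SER equalized = {ser_eq:.3f}")
```

The Volterra terms in the paper extend this same structure with products of delayed samples to capture nonlinear distortion; the simplification proposed there prunes those nonlinear terms.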
Pellicle transmission uniformity requirements
NASA Astrophysics Data System (ADS)
Brown, Thomas L.; Ito, Kunihiro
1998-12-01
Controlling critical dimensions of devices is a constant battle for the photolithography engineer. Current DUV lithographic process exposure latitude is typically 12 to 15% of the total dose. A third of this exposure latitude budget may be used up by a masking-related variable that has not previously received much attention. The emphasis on pellicle transmission has been focused on increasing the average transmission; much less attention has been paid to transmission uniformity. This paper explores the total demand on the photospeed latitude budget and the causes of pellicle transmission nonuniformity, and examines reasonable expectations for pellicle performance. Modeling is used to examine how the two primary errors in pellicle manufacturing contribute to nonuniformity in transmission. World-class pellicle transmission uniformity standards are discussed, and a comparison is made with specifications of other components in the photolithographic process. Specifications for other materials or parameters are used as benchmarks to develop a proposed industry standard for pellicle transmission uniformity.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Gouda, N.; Yano, T.; Kobayashi, Y.; Niwa, Y.; Niwa
2008-07-01
Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with a 10 μas accuracy. We use z-band CCD or K-band array detector to avoid dust absorption, and observe about 10 × 20 degrees area around the Galactic bulge region. In this poster, we show the observation strategy, reduction scheme, and error budget. We also show the basic design of the software for the end-to-end simulation of JASMINE, named JASMINE Simulator.
The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisited.
1985-05-08
also be induced by equipment not associated with the system. A systematic bias of 68 μgal was observed by the Istituto di Metrologia "G. Colonnetti...Laboratory Astrophysics, Univ. of Colo., Boulder, Colo. IMGC: Istituto di Metrologia "G. Colonnetti", Torino, Italy Table 1. Absolute Gravity Values...measurements were made with three Model D and three Model G La Coste-Romberg gravity meters. These instruments were operated by the following agencies
Geostationary Operational Environmental Satellite (GOES-N report). Volume 2: Technical appendix
NASA Technical Reports Server (NTRS)
1992-01-01
The contents include: operation with inclinations up to 3.5 deg to extend life; earth sensor improvements to reduce noise; sensor configurations studied; momentum management system design; reaction wheel induced dynamic interaction; controller design; spacecraft motion compensation; analog filtering; GFRP servo design - modern control approach; feedforward compensation as applied to GOES-1 sounder; discussion of allocation of navigation, inframe registration and image-to-image error budget overview; and spatial response and cloud smearing study.
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission key goals include enabling observation of high-accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar (RS) spectroradiometer, emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 nm and 600-2300 nm over a full field-of-view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements, including use of integrating spheres, transfer radiometers and spectral standards, combined with field-based solar and lunar acquisitions are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.
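Instrument error budgets such as the CLARREO RS budget are typically rolled up by root-sum-square (RSS) combination of independent component uncertainties and compared against the top-level requirement. The component names and values below are hypothetical placeholders, not CLARREO allocations.

```python
import math

# Hypothetical component uncertainties (percent, 1-sigma), assumed independent.
components_pct = {
    "absolute radiometric scale": 0.20,
    "detector nonlinearity": 0.10,
    "stray light": 0.10,
    "polarization sensitivity": 0.08,
    "calibration transfer": 0.10,
}

# Independent terms combine in quadrature (root-sum-square).
total_pct = math.sqrt(sum(v**2 for v in components_pct.values()))
print(f"RSS total: {total_pct:.3f}% (top-level requirement: < 0.3%)")
```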
The next generation in optical transport semiconductors: IC solutions at the system level
NASA Astrophysics Data System (ADS)
Gomatam, Badri N.
2005-02-01
In this tutorial overview, we survey some of the challenging problems facing optical transport and their solutions using new semiconductor-based technologies. Advances in 0.13 μm CMOS, SiGe/HBT and InP/HBT IC process technologies and mixed-signal design strategies are the fundamental breakthroughs that have made these solutions possible. In combination with innovative packaging and transponder/transceiver architectures, IC approaches have clearly demonstrated enhanced optical link budgets with simultaneously lower (perhaps the lowest to date) cost and manufacturability tradeoffs. This paper will describe: *Electronic Dispersion Compensation, broadly viewed as the overcoming of dispersion-based limits to OC-192 links and extending link budgets, *Error Control/Coding, also known as Forward Error Correction (FEC), *Adaptive Receivers for signal quality monitoring and real-time estimation of Q/OSNR, eye-pattern, signal BER and related temporal statistics (such as jitter). We will discuss the theoretical underpinnings of these receiver and transmitter architectures, provide examples of system performance, and conclude with general market trends. These physical layer IC solutions represent a fundamental new toolbox of options for equipment designers in addressing system-level problems. With unmatched cost and yield/performance tradeoffs, it is expected that IC approaches will provide significant flexibility, in turn, for carriers and service providers who must ultimately manage the network and assure acceptable quality of service under stringent cost constraints.
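The link between the monitored Q-factor and BER mentioned above is the standard Gaussian-noise approximation BER = ½·erfc(Q/√2); a quick sketch:

```python
import math

def ber_from_q(q: float) -> float:
    """Estimate bit error rate from the Q-factor under the Gaussian-noise
    approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q = 6 corresponds to roughly 1e-9 BER, the classic pre-FEC benchmark;
# FEC allows links to operate at much lower Q (higher raw BER).
for q in (3.0, 6.0, 7.0):
    print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
```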
Optomechanical design of the vacuum compatible EXCEDE's mission testbed
NASA Astrophysics Data System (ADS)
Bendek, Eduardo A.; Belikov, Ruslan; Lozi, Julien; Schneider, Glenn; Thomas, Sandrine; Pluzhnik, Eugene; Lynch, Dana
2014-08-01
In this paper we describe the opto-mechanical design, tolerance error budget and alignment strategies used to build the Starlight Suppression System (SSS) for NASA's Exoplanetary Circumstellar Environments and Disk Explorer (EXCEDE) mission. EXCEDE is a highly efficient 0.7 m space telescope concept designed to directly image and spatially resolve circumstellar disks with as little as 10 zodis of circumstellar dust, as well as large planets. The main focus of this work was the design of a vacuum-compatible opto-mechanical system that allows remote alignment and operation of the main components of the EXCEDE SSS, which are: a Phase Induced Amplitude Apodization (PIAA) coronagraph to provide high throughput and high contrast at an inner working angle (IWA) equal to the diffraction limit (IWA = 1.2 λ/D), a wavefront (WF) control system based on a Micro-Electro-Mechanical-System deformable mirror (MEMS DM), and a low-order wavefront sensor (LOWFS) for fine pointing and centering. We describe the alignment strategy and tolerance error budget for this system, which is especially relevant to achieving the theoretical performance that the PIAA coronagraph can offer. We also discuss the vacuum cabling design for the actuators, cameras and the deformable mirror. This design has been implemented at the vacuum chamber facility at Lockheed Martin (LM), and is based on successful technology development at the Ames Coronagraph Experiment (ACE) facility.
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2017-12-01
Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation: the former is the most commonly used indicator in overall water budget estimation, and the latter is the second biggest component of the water budget (the biggest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model predictions, which can arise from errors in model parameters and input meteorological data, or from errors in the model's representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically-calibrated, physically-based, semi-distributed hydrological model. The results of this study could provide valuable insights into applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.
NASA Astrophysics Data System (ADS)
Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi
2018-03-01
One of the main components of an On-Product Overlay (OPO) error budget is process-induced wafer error. This necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution for improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool, and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.
Astrometry for New Reductions: The ANR method
NASA Astrophysics Data System (ADS)
Robert, Vincent; Le Poncin-Lafitte, Christophe
2018-04-01
Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of the planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog, and its complete release afterward, the methods of ground-based astrometry have become outdated: their formal accuracy now lags that of the catalog used. Systematic and zonal errors of the reference stars are eliminated, and the astrometric process itself now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system, and define the transformation between theoretical reference data and ground-based astrometric observables.
Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.
Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert
2009-05-20
The manufacturing of toric mirrors for the Very Large Telescope Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on active optics and stress polishing. This figuring technique minimizes mid and high spatial frequency errors on an aspherical surface by using spherical polishing with full-size tools. In order to reach the tight precision required, the manufacturing error budget is broken down so that each parameter can be optimized. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification in the low, middle, and high spatial frequency ranges.
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Kubis, Michael; Hinnen, Paul; de Graaf, Roelof; van der Laan, Hans; Padiy, Alexander; Menchtchikov, Boris
2013-04-01
Immersion lithography is being extended to the 20-nm and 14-nm nodes, and the lithography performance requirements need to be tightened further to enable this shrink. In this paper we present an integral method to enable high-order field-to-field corrections for both imaging and overlay, and we show that this method improves performance by 20%-50%. The lithography architecture we built for these higher-order corrections connects the dynamic scanner actuators with the angle-resolved scatterometer via a separate application server. Improvements in CD uniformity are based on enabling the use of the freeform intra-field dose actuator and field-to-field control of focus. The feedback control loop uses CD and focus targets placed on the production mask. For the overlay metrology we use small in-die diffraction-based overlay targets. Improvements in overlay are based on using the high-order intra-field correction actuators on a field-to-field basis. We use this to reduce the machine matching error, extending the heating control and extending the correction capability for process-induced errors.
A, Boldt; S, Borte; S, Fricke; K, Kentouche; F, Emmrich; M, Borte; F, Kahlenberg; U, Sack
2014-01-16
Background: The heterogeneity of primary and secondary immunodeficiencies demands the development of a comprehensive flow cytometric screening system, based on reference values that support a standardized immunophenotypic characterization of most lymphocyte subpopulations. Methods: Peripheral blood samples from healthy adult volunteers (n=25) were collected and split into eight panel fractions (100 µl each). Subsequently, pre-mixed 8-color antibody cocktails were incubated per specific panel of whole blood to detect and differentiate cell subsets of: (i) a general lymphocyte overview, (ii) B-cell subpopulations, (iii) CD4+ subpopulations, (iv) CD8+ subpopulations, (v) regulatory T-cells, (vi) recent thymic emigrants, (vii) NK-cell subpopulations, (viii) NK-cell activation markers. All samples were lysed, washed and measured by flow cytometry. FACS DIVA software was used for data analysis and calculation of quadrant statistics (mean values, standard error of mean, percentile ranges). Results: Whole blood staining of lymphocytes provided the analysis of: (i) CD3+, 4+, 8+, 19+, 16/56+, and activated CD4/8 cells; (ii) immature, naïve, non-switched/switched, memory, (activated) CD21low, transitional B-cells, plasmablasts/plasma cells; (iii and iv) naïve, central memory, effector, effector memory, TH1/TH2/TH17-like and CCR5+ CD8 cells; (v) CD25+, regulatory T-cells (naïve/memory, HLA-DR+); (vi) α/β- and γ/δ-T-cells, recent thymic emigrants in CD4/CD8 cells; (vii) immature/mature CD56bright, CD94/NKG2D+ NK-cells; and (viii) NKp30, 44, 46 and CD57+ NK-cells. Clinical examples and quadrant statistics are provided. Conclusion: The present study represents a practical approach to standardizing the immunophenotyping of most T-, B- and NK-cell subpopulations. This allows one to differentiate whether abnormalities or developmental shifts observed in lymphocyte subpopulations originate from a primary or a secondary immunological disturbance. © 2014 Clinical Cytometry Society.
Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models
NASA Astrophysics Data System (ADS)
Southworth, John
2010-11-01
I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. 
An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
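The quoted sensitivity to contamination follows directly from depth dilution: blended light reduces the observed transit depth by the factor (1 - c), and the fitted radius ratio scales as the square root of the depth. A minimal check of the roughly 2 per cent figure (an illustrative calculation, not the JKTEBOP analysis; the first-order bias is c/2):

```python
import math

def radius_bias(contamination):
    """Fractional underestimate of the planet/star radius ratio when a
    fraction `contamination` of the system light comes from a blended star
    and is ignored: observed depth = (1 - c) * true depth, k = sqrt(depth)."""
    return 1.0 - math.sqrt(1.0 - contamination)

print(round(100.0 * radius_bias(0.05), 2))  # prints 2.53 (of order 2 per cent)
```

The exact size of the bias in a full light-curve fit depends on the other fitted parameters, hence "of order" 2 per cent.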
Forward to the Future: Estimating River Discharge with McFLI
NASA Astrophysics Data System (ADS)
Gleason, C. J.; Durand, M. T.; Garambois, P. A.
2016-12-01
The global surface water budget is still poorly understood, and improving our understanding of freshwater budgets requires coordination between in situ observations, models, and remote sensing. The upcoming launch of the NASA/CNES Surface Water and Ocean Topography (SWOT) satellite has generated considerable excitement as a new tool enabling hydrologists to tackle some of the most pressing questions facing their discipline. One question in particular that SWOT seems well suited to answer is river discharge (flow rate) estimation in ungauged basins: SWOT's anticipated measurements of river surface height and area have ushered in a new technique in hydrology, what we are here calling Mass-conserved Flow Law Inversions, or McFLI. McFLI algorithms leverage classic hydraulic flow expressions (e.g. Manning's equation, hydraulic geometry) within mass-conserved river reaches to construct a simplified but still underconstrained system of equations to be solved for an unknown discharge. Most existing McFLI techniques have been designed to take advantage of SWOT's measurements and Manning's equation: SWOT will observe changes in cross-sectional area and river surface slope over time, so the McFLI need only solve for baseflow area and Manning's roughness coefficient. Recently published preliminary results have indicated that McFLI can be a viable tool in a global hydrologist's toolbox (discharge errors of less than 30% compared to gauges are possible in most cases). Therefore, we here outline the progress to date for McFLI techniques, and highlight three key areas for future development: 1) Maximize the accuracy and robustness of McFLI by incorporating ancillary data from satellites, models, and in situ observations. 2) Develop new McFLI techniques using novel or underutilized flow laws. 3) Systematically test McFLI to define different inversion classes of rivers with well-defined error budgets based on geography and available data, for use in gauged and ungauged basins alike.
NASA Astrophysics Data System (ADS)
Van-Wierts, S.; Bernatchez, P.
2012-04-01
Coastal erosion is an important issue within the St. Lawrence estuary and gulf, especially in zones of unconsolidated material. Wide beaches are important coastal environments; they act as a buffer against breaking waves by absorbing and dissipating their energy, thus reducing the rate of coastal erosion. They also offer protection to humans and nearby ecosystems, providing habitat for plants, animals and lifeforms such as algae and microfauna. Conventional methods, such as aerial photograph analysis, fail to adequately quantify the morphosedimentary behavior of beaches at the scale of a hydrosedimentary cell. The lack of reliable and quantitative data leads to considerable overestimation and underestimation errors in sediment budgets. To address these gaps and to minimize the acquisition costs posed by airborne LiDAR surveys, a mobile terrestrial LiDAR system has been set up to acquire topographic data of the coastal zone. The acquisition system includes a LiDAR sensor, a high-precision navigation system (GPS-INS) and a video camera. Comparison of the LiDAR data with 1050 DGPS control points shows a vertical mean absolute error of 0.1 m in beach areas. The extracted data are used to calculate sediment volumes, widths, slopes, and a sediment budget index. A high-accuracy coastal characterization is achieved through the integration of laser data and video. The main objective of this first project using the system is to quantify the impact of rigid coastal protective structures on sediment budget and beach morphology. Results show that the average sediment volume of beaches fronting a rock armour barrier (12 m3/m) was roughly one third that of natural beaches (35.5 m3/m). Natural beaches were also found to have twice the width (25.4 m) of the beaches bordering inhabited areas (12.7 m). The sediment budget index developed for beach areas is an excellent proxy to quickly identify deficit areas and therefore the coastal segments most at risk of erosion.
The obtained LiDAR coverage also revealed that beach profiles made at an interval of more than 200 m on diversified coasts lead to results significantly different from reality. However, profile intervals have little impact on long uniform beaches.
NASA Astrophysics Data System (ADS)
Blyth, E.; Martinez-de la Torre, A.; Ellis, R.; Robinson, E.
2017-12-01
The fresh-water budget of the Arctic region has a diverse range of impacts: the ecosystems of the region, the ocean circulation response to Arctic freshwater, methane emissions through changing wetland extent, as well as the available fresh water for human consumption. But many processes control the budget, including seasonal snow packs building and thawing, freezing soils and permafrost, extensive organic soils and large wetland systems. All these processes interact to create a complex hydrological system. In this study we examine a suite of 10 models that bring all those processes together in a 25-year reanalysis of the global water budget, and we assess their performance in the Arctic region. There are two approaches to modelling fresh-water flows at large scales, referred to here as `Hydrological' and `Land Surface' models. While both approaches include a physically based model of the water stores and fluxes, the Land Surface models link the water flows to an energy-based model for processes such as snow melt and soil freezing. This study analyses the impact of that basic difference on the regional patterns of evapotranspiration, runoff generation and terrestrial water storage. For evapotranspiration, the Hydrological models tend to have a bigger spatial range in the model bias (difference from observations), implying greater errors compared to the Land Surface models; for instance, some regions such as Eastern Siberia have consistently lower evapotranspiration in the Hydrological models than in the Land Surface models. For runoff, however, the results are the other way round, with a slightly higher spatial range in bias for the Land Surface models, implying greater errors than the Hydrological models. A simple reading would suggest that Hydrological models are designed to get the runoff right, while Land Surface models are designed to get the evapotranspiration right.
Tracing the source of the difference suggests that it comes from the treatment of snow and evapotranspiration. The study reveals that expertise in the role of snow in runoff generation and evapotranspiration in Hydrological and Land Surface models could be combined to improve the representation of fresh-water flows in the Arctic in both approaches. Improved observations are essential to make these modelling advances possible.
Uncertainty of the 20th century sea-level rise due to vertical land motion errors
NASA Astrophysics Data System (ADS)
Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy
2017-09-01
Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking, and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found that the terrestrial reference frame and the estimated VLM uncertainty may contribute to the global SLC rate error by ±0.2 mm yr^-1. In addition, a spurious global SLC acceleration of up to ±4.8 × 10^-3 mm yr^-2 may be introduced. Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global values. The difference of VLM from two independent glacio-isostatic adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm yr^-1 and 2.8 × 10^-3 mm yr^-2, increasing up to 0.5 mm yr^-1 and 9 × 10^-3 mm yr^-2 for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.
Experimental demonstration of laser tomographic adaptive optics on a 30-meter telescope at 800 nm
NASA Astrophysics Data System (ADS)
Ammons, S., Mark; Johnson, Luke; Kupke, Renate; Gavel, Donald T.; Max, Claire E.
2010-07-01
A critical goal in the next decade is to develop techniques that will extend adaptive optics correction to visible wavelengths on Extremely Large Telescopes (ELTs). We demonstrate in the laboratory the highly accurate atmospheric tomography necessary to defeat the cone effect on ELTs, an essential milestone on the path to this capability. We simulate a high-order Laser Tomographic AO system for a 30-meter telescope with the LTAO/MOAO testbed at UCSC. Eight sodium Laser Guide Stars (LGSs) are sensed by 99x99 Shack-Hartmann wavefront sensors over 75". The AO system is diffraction-limited at a science wavelength of 800 nm (S ~ 6-9%) over a field of regard of 20" diameter. Open-loop WFS systematic error is observed to be proportional to the total input atmospheric disturbance and is nearly the dominant error budget term (81 nm RMS), exceeded only by tomographic wavefront estimation error (92 nm RMS). The total residual wavefront error for this experiment is comparable to that expected for wide-field tomographic adaptive optics systems of similar wavefront sensor order and LGS constellation geometry planned for Extremely Large Telescopes.
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
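The diurnal-timing effect described above can be illustrated with a toy calculation. The sinusoidal concentration signal, constant flow, and all numbers below are assumptions for clarity, not the study watershed's data:

```python
import math

# Toy diurnal signal: hourly concentrations (mg/L) oscillating around a
# daily mean of 10 mg/L; flow held constant for clarity.
HOURS = range(24)
Q = 50.0  # discharge, L/s
C = [10.0 + 3.0 * math.sin(2 * math.pi * (h - 6) / 24) for h in HOURS]

# "True" daily load from the continuous record: sum of hourly loads.
true_load = sum(c * Q * 3600 for c in C)  # mg/day

def grab_estimate(hour):
    """Load estimate from one grab sample scaled by the continuous flow record."""
    return C[hour] * Q * 86400

errors_pct = {h: 100 * (grab_estimate(h) - true_load) / true_load for h in HOURS}
# Sampling when the diurnal cycle crosses the daily mean (~06:00 here) gives
# near-zero error; sampling at the peak (~12:00) biases the estimate +30%.
```

This is the mechanism behind the paper's recommendation to collect the sample when the diurnal cycle is nearest the daily mean.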
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. Moreover, given sufficiently detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance already exist. In terms of AO system performance improvements, however, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the `waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite its importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of the error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay-class assignment, using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs, which serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards and Technology. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
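A Monte Carlo propagation of this kind can be sketched for a single log's carbon pool. Every distribution and value below is an illustrative assumption, not the study's data; Smalian's formula is one of the four volume models named above:

```python
import math
import random
import statistics

random.seed(42)

def smalian_volume(d1, d2, length):
    """Smalian's formula: mean of the two end cross-sections times length (m^3)."""
    a1 = math.pi * (d1 / 2.0) ** 2
    a2 = math.pi * (d2 / 2.0) ** 2
    return 0.5 * (a1 + a2) * length

# One illustrative log; the distributions below are assumed stand-ins.
D1, D2, LENGTH = 0.30, 0.22, 4.0  # end diameters and length, m
draws = []
for _ in range(20000):
    vol = smalian_volume(D1, D2, LENGTH) * random.gauss(1.00, 0.08)  # volume-model error
    density = random.gauss(380.0, 60.0)  # dry density, kg/m^3 (decay-class specific)
    c_frac = random.gauss(0.48, 0.02)    # carbon fraction of dry mass
    draws.append(vol * density * c_frac)  # kg C in this log

mean_c = statistics.mean(draws)
ci95_halfwidth = 1.96 * statistics.stdev(draws)  # spread of the pool estimate
```

Because the error terms multiply, their relative variances add approximately, which is why a single poorly constrained term (here density, as a stand-in for decay-class effects) can dominate the combined uncertainty.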
Line-edge roughness performance targets for EUV lithography
NASA Astrophysics Data System (ADS)
Brunner, Timothy A.; Chen, Xuemei; Gabor, Allen; Higgins, Craig; Sun, Lei; Mack, Chris A.
2017-03-01
Our paper will use stochastic simulations to explore how EUV pattern roughness can cause device failure through rare events, so-called "black swans". We examine the impact of stochastic noise on the yield of simple wiring patterns with 36 nm pitch, corresponding to 7 nm node logic, using a local Critical Dimension (CD)-based fail criterion. Contact hole failures are examined in a similar way. For our nominal EUV process, local CD uniformity variation and local Pattern Placement Error variation were observed, but no pattern failures were seen in the modest number (a few thousand) of features simulated. We degraded the image quality by incorporating Moving Standard Deviation (MSD) blurring to degrade the Image Log-Slope (ILS), and were able to find conditions where pattern failures were observed. We determined the Line Width Roughness (LWR) value as a function of the ILS. By use of an artificial "step function" image degraded by various MSD blurs, we were able to extend the LWR vs. ILS curve into regimes that might be available for future EUV imagery. As we decreased the image quality, we observed LWR grow and also began to see pattern failures. For high image quality, we saw CD distributions that were symmetrical and close to Gaussian in shape. Lower image quality produced CD distributions that were asymmetric, with "fat tails" on the low-CD side (under-exposed) that were associated with pattern failures. Similar non-Gaussian CD distributions were associated with image conditions that caused missing contact holes, i.e. CD = 0.
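The link between ILS, roughness and rare failures can be sketched with a toy Monte Carlo. The edge-noise model, nominal CD and failure threshold below are our assumptions, not the paper's simulator:

```python
import random
import statistics

random.seed(7)

def simulate_cd(ils, n_lines=5000, cd_nominal=18.0, dose_noise=1.2):
    """Toy stochastic model: each line edge lands with a Gaussian position
    error whose sigma is the dose noise divided by the image log-slope (ILS).
    Returns (LWR as 3*sigma of the CD distribution, number of failed lines)."""
    sigma_edge = dose_noise / ils
    cds = [cd_nominal
           + random.gauss(0.0, sigma_edge)   # left-edge placement error
           - random.gauss(0.0, sigma_edge)   # right-edge placement error
           for _ in range(n_lines)]
    lwr = 3.0 * statistics.stdev(cds)
    fails = sum(cd < 0.5 * cd_nominal for cd in cds)  # local-CD fail criterion
    return lwr, fails

lwr_good, fails_good = simulate_cd(ils=2.0)  # healthy image: low LWR, no fails
lwr_bad, fails_bad = simulate_cd(ils=0.2)    # degraded image: LWR grows, fails appear
```

Even this crude Gaussian model reproduces the qualitative behavior: roughness scales inversely with ILS, and failures stay invisible in small samples until the image quality degrades enough to pull the distribution tail past the fail threshold.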
Terrestrial Planet Finder Interferometer Technology Status and Plans
NASA Technical Reports Server (NTRS)
Lawson, Perter R.; Ahmed, A.; Gappinger, R. O.; Ksendzov, A.; Lay, O. P.; Martin, S. R.; Peters, R. D.; Scharf, D. P.; Wallace, J. K.; Ware, B.
2006-01-01
A viewgraph presentation on the technology status and plans for Terrestrial Planet Finder Interferometer is shown. The topics include: 1) The Navigator Program; 2) TPF-I Project Overview; 3) Project Organization; 4) Technology Plan for TPF-I; 5) TPF-I Testbeds; 6) Nulling Error Budget; 7) Nulling Testbeds; 8) Nulling Requirements; 9) Achromatic Nulling Testbed; 10) Single Mode Spatial Filter Technology; 11) Adaptive Nuller Testbed; 12) TPF-I: Planet Detection Testbed (PDT); 13) Planet Detection Testbed Phase Modulation Experiment; and 14) Formation Control Testbed.
Budget Studies of a Prefrontal Convective Rainband in Northern Taiwan Determined from TAMEX Data
1993-06-01
storm top accumulates less error in w calculation than an upward integration from the surface. Other Doppler studies, e.g., Chong and Testud (1983), Lin... contribute to the uncertainty of w is a result of the advection problem (Gal-Chen, 1982; Chong and Testud, 1983). Parsons et al. (1983) employed an... Boulder, CO., 95-102. Chong, M., and J. Testud, 1983: Three-dimensional wind field analysis from dual-Doppler radar data. Part III: The boundary condition
Development of a sub-miniature rubidium oscillator for SEEKTALK application
NASA Technical Reports Server (NTRS)
Fruehauf, H.; Weidemann, W.; Jechart, E.
1981-01-01
Warm-up and size challenges in oscillator construction are presented, as well as the problems involved in these tasks. The performance of the M-100 military rubidium oscillator is compared to that of a subminiature rubidium oscillator (M-1000). Methods of achieving a 1.5-minute warm-up are discussed, as well as improvements in performance under adverse environmental conditions, including temperature, vibration, and magnetics. An attempt is made to construct an oscillator error budget under a set of arbitrary mission conditions.
Improving the Cost Estimation of Space Systems. Past Lessons and Future Recommendations
2008-01-01
a reasonable gauge for the relative proportions of cost growth attributable to errors, decisions, and other causes in any MDAP. Analysis of the... program. The program offices visited were the Defense Meteorological Satellite Program (DMSP), Evolved Expendable Launch Vehicle (EELV), Advanced... 3 years 1.8 0.9 3-8 years 1.8 0.9 8+ years 3.7 1.8 Staffing Requirement 7.4 3.7 areas represent earned value and budget drills; the tan area on top
Implementing DRGs at Silas B. Hays Army Community Hospital: Enhancement of Utilization Review
1990-12-01
valuable assistance in creating this WordPerfect document from both ASCII and ENABLE files. I thank them for their patience. Lastly, I wish to thank COL Jack... "error" predicate is called from a trap. A longmenu should eventually be used to assist in locating the RCMAS file. rcmas-file:-not(existfile... B. Hays U.S. Army Community Hospital, Fort Ord, California has the potential to lose over $900 thousand in the supply budget category starting in
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2014-01-01
Based on 30 years of optical testing experience, a lot of mistakes, a lot of learning and a lot of experience, I have defined seven guiding principles for optical testing, regardless of how small or how large the optical testing or metrology task: Fully Understand the Task; Develop an Error Budget; Continuous Metrology Coverage; Know Where You Are; Test Like You Fly; Independent Cross-Checks; Understand All Anomalies. These rules have been applied with great success to the in-process optical testing and final specification compliance testing of the JWST mirrors.
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1983-01-01
The improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines with GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605.
Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao
2017-12-26
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows. Data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSEs for As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSEs for As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
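The cross-validation comparison described in this abstract can be sketched as follows. Since the study's samples and fitted models are not available, this uses synthetic data and inverse-distance weighting as a simple stand-in for Kriging; the leave-one-out MSE is the same error measure the study reports:

```python
import math
import random

def idw_predict(train, x, y, power=2):
    # Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples.
    # A simple stand-in for Kriging: nearer samples get larger weights.
    num = den = 0.0
    for xi, yi, v in train:
        d = math.hypot(x - xi, y - yi)
        if d < 1e-12:
            return v  # exact hit: return the sample value
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loocv_mse(samples):
    # Leave-one-out cross-validation: predict each sample from all the others.
    errs = []
    for i, (x, y, v) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errs.append((idw_predict(rest, x, y) - v) ** 2)
    return sum(errs) / len(errs)

random.seed(0)
# Synthetic "pollution index" field: a smooth spatial trend plus noise,
# with 126 samples to mirror the study's sample count.
samples = []
for _ in range(126):
    x, y = random.random(), random.random()
    samples.append((x, y, 0.5 * x + 0.3 * y + random.gauss(0, 0.05)))

print(round(loocv_mse(samples), 4))
```

The same `loocv_mse` harness can score any interpolator, which is how the two methods in the study can be compared on equal footing.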
NASA Astrophysics Data System (ADS)
Ke, Chih-Ming; Hu, Jimmy; Wang, Willie; Huang, Jacky; Chung, H. L.; Liang, C. R.; Shih, Victor; Liu, H. H.; Lee, H. J.; Lin, John; Fan, Y. D.; Yen, Tony; Wright, Noelle; Alvarez Sanchez, Ruben; Coene, Wim; Noot, Marc; Yuan, Kiwi; Wang, Vivien; Bhattacharyya, Kaustuve; van der Mast, Karel
2009-03-01
Developing a CD metrology technique that can address the need for accuracy, precision, and speed in near-future lithography is probably one of the most challenging tasks. CD-SEMs have served this need for a long time; however, a change of, or an addition to, this traditional approach is inevitable as the need for better precision (tight CDU budgets) and speed (driven by the demand for increased sampling) continues to grow at advanced nodes. The success of CD measurement with scatterometry rests on the capability to model the resist grating, namely its CD and shape (sidewall angle), as well as the underlying layers (thickness and material properties). The cases with isotropic underlying layers (a single refractive index and absorption index) are comparatively easy; a real challenge becomes evident when one or more of the underlying layers are anisotropic. In this technical presentation the authors evaluate such a CD reconstruction technology, a new scatterometry-based platform under development at ASML, which can handle birefringent non-patterned layers with uniaxial anisotropy in the underlying stack. In the RCWA code for the birefringent case, the elegant formalism of the enhanced transmittance matrix can still be used. Measurement methods and data are discussed for several complex production stacks. With the inclusion of birefringent modeling, the in-plane and perpendicular n and k values can be treated as floating parameters for the birefringent layer, so that very robust CD reconstruction is achieved with low reconstruction residuals. As a function of position over the wafer, significant variations of the perpendicular n and k values are observed, with a typical radial fingerprint, whereas the variations in the in-plane n and k values are considerably lower.
Matching OPC and masks on 300-mm lithography tools utilizing variable illumination settings
NASA Astrophysics Data System (ADS)
Palitzsch, Katrin; Kubis, Michael; Schroeder, Uwe P.; Schumacher, Karl; Frangen, Andreas
2004-05-01
CD control is crucial to maximize product yields on 300mm wafers. This is particularly true for DRAM frontend lithography layers, like gate level and deep trench (capacitor) level. In the DRAM process, large areas of the chip are taken up by array structures, which are difficult to structure due to aggressive pitch requirements. Consequently, the lithography process is centered such that the array structures are printed on target. Optical proximity correction is applied to print gate level structures in the periphery circuitry on target. Even slight differences in the Zernike terms can cause rather large variations of the proximity curves, resulting in a difference between isolated and semi-isolated lines printed on different tools. If the deviations are too large, tool-specific OPC is needed. The same is true for deep trench level, where the length-to-width ratio of elongated contact-like structures is an important parameter to adjust the electrical properties of the chip. Again, masks with specific biases for tools with different Zernikes are needed to optimize product yield. Additionally, mask making contributes to the CD variation of the process. Theoretically, the CD deviation caused by an off-centered mask process can easily eat up the majority of the CD budget of a lithography process. In practice, masks are very often distributed intelligently among production tools, such that lens and mask effects cancel each other. However, dose adjustment and mask allocation alone may still result in a high CD variation with large systematic contributions. By adjusting the illumination settings, we have successfully implemented a method to reduce CD variation on our advanced processes. In particular, the inner and outer sigma for annular illumination, and the numerical aperture, can be optimized to match mask and stepper properties. This process will be shown to overcome slight lens and mask differences effectively. Nonetheless, the effects on lithography process windows have to be considered.
[Budget impact analysis of antiretroviral therapy. A reflection based on the GESIDA guidelines].
2012-01-01
The latest version of the Spanish clinical practice guidelines on antiretroviral therapy (ART) in HIV-infected adults, developed by the Spanish AIDS Study Group (GESIDA) and the National AIDS Plan, recommends initiating ART early in certain circumstances. The aim of this study was to estimate the budget impact of this recommendation by using data from the VACH cohort. We considered a scenario in which all naïve asymptomatic patients would initiate ART if they had a CD4 count <500 cells/μL, or a CD4 count >500 cells/μL if they were older than 55 years or had a high viral load, liver disease, chronic kidney disease, or high cardiovascular risk. The study was designed as a cost analysis in terms of annual pharmaceutical expenditure. The only costs included were those relating to the ART combinations analyzed. To estimate these costs, we assumed that this guideline had a penetration of 80%, an adherence of 95%, and 12% dropouts. A total of 12,500 patients were reviewed. Of these, 1,127 (10%) had not initiated ART; the CD4 lymphocyte count was 350-500/μL in 294 (26.1%) and >500/μL in 685 (60.8%). If the new clinical practice guideline were applied, 45.2% of naïve patients (95% CI: 42.4%-48.2%) would be advised to start ART. Carrying out this recommendation in hospitals of the VACH cohort would require an additional annual investment of €3,270,975 and would increase the overall cost of antiretroviral drugs by 3%. In the framework of health economics, incorporating economic impact estimates, such as those performed in this study, into clinical practice guidelines would be advisable to increase their feasibility.
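The budget-impact arithmetic described in this abstract can be illustrated as a back-of-envelope calculation; the per-patient annual ART cost below is a HYPOTHETICAL placeholder (the abstract does not report one), and treating adherence as a simple cost multiplier is likewise an assumption:

```python
# Back-of-envelope budget-impact sketch using the abstract's reported rates.
# annual_cost_per_patient is HYPOTHETICAL; the other figures are from the text.
naive_patients = 1127              # patients who had not initiated ART
start_fraction = 0.452             # 45.2% advised to start under the guideline
penetration = 0.80                 # assumed guideline penetration
dropouts = 0.12                    # assumed dropout rate
adherence = 0.95                   # assumed adherence (applied to cost here)
annual_cost_per_patient = 8000.0   # euros per patient-year, HYPOTHETICAL

effective_starters = naive_patients * start_fraction * penetration * (1 - dropouts)
extra_annual_cost = effective_starters * adherence * annual_cost_per_patient
print(round(effective_starters), round(extra_annual_cost))
```

Plugging in the cohort's actual drug costs in place of the placeholder would reproduce the kind of €3.3M figure the study reports.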
Cost and availability of gluten-free food in the UK: in store and online.
Burden, Mitchell; Mooney, Peter D; Blanshard, Rebecca J; White, William L; Cambray-Deakin, David R; Sanders, David S
2015-11-01
Coeliac disease (CD) is a lifelong condition requiring strict adherence to a gluten-free (GF) diet and good availability of GF foods is critical to this. Patients with CD from lower socioeconomic groups are recognised to have higher treatment burden and higher food costs may impact this. Therefore, we aimed to assess the availability and cost of GF food in supermarkets and via the internet. Supermarkets and internet shops delivering to homes in a single city (UK) were analysed between February and March 2014. Stores were identified with comprehensive internet searches. Ten commonly purchased items were analysed for cost and compared with standard non-GF alternatives. Direct measurement of the number of GF foods available was compared between stores which were categorised according to previously published work. Supermarkets covering the whole of Sheffield, UK. None of the budget supermarkets surveyed stocked any GF foods. Quality and regular supermarkets stocked the greatest range, each stocking a median of 22 (IQR 39) items (p<0.0001). All GF foods were at least four times more expensive than non-GF alternatives (p<0.0001). GF products are prevalent online, but 5/10 of the surveyed products were significantly more expensive than equivalents in supermarkets. There is good availability of GF food in regular and quality supermarkets as well as online, but it remains significantly more expensive. Budget supermarkets which tend to be frequented by patients from lower socioeconomic classes stocked no GF foods. This poor availability and added cost is likely to impact on adherence in deprived groups.
Hanekom, W A; Hussey, G D; Hughes, E J; Potgieter, S; Yogev, R; Check, I J
1999-03-01
Plasma-soluble CD30 (sCD30) is the result of proteolytic splicing from the membrane-bound form of CD30, a putative marker of type 2 cytokine-producing cells. We measured sCD30 levels in children with tuberculosis, a disease characterized by prominent type 1 lymphocyte cytokine responses. We postulated that disease severity and nutritional status would alter cytokine responses and therefore sCD30 levels. Samples from South African children enrolled prospectively at the time of diagnosis of tuberculosis were analyzed. (Patients were originally enrolled in a randomized, double-blind placebo-controlled study of the effects of oral vitamin A supplementation on prognosis of tuberculosis.) Plasma samples collected at the time of diagnosis and 6 and 12 weeks later (during antituberculosis therapy) were analyzed. sCD30 levels were measured by enzyme immunoassay. The 91 children included in the study demonstrated high levels of sCD30 at diagnosis (median, 98 U/liter; range, 11 to 1,569 U/liter). Although there was a trend toward higher sCD30 levels in more severe disease (e.g., culture-positive disease or miliary disease), this was not statistically significant. Significantly higher sCD30 levels were demonstrated in the presence of nutritional compromise: the sCD30 level was higher in patients with a weight below the third percentile for age, in those with clinical signs of kwashiorkor, and in those with a low hemoglobin content. There was minimal change in the sCD30 level after 12 weeks of therapy, even though patients improved clinically. However, changes in sCD30 after 12 weeks differed significantly when 46 patients (51%) who received vitamin A were compared with those who had received a placebo. Vitamin A-supplemented children demonstrated a mean (± standard error of the mean) decrease in sCD30 by a factor of 0.99 ± 0.02 over 12 weeks, whereas a factor increase of 1.05 ± 0.02 was demonstrated in the placebo group (P = 0.02).
We conclude that children with tuberculosis had high sCD30 levels, which may reflect the presence of a type 2 cytokine response. Nutritional compromise was associated with higher sCD30 levels. Vitamin A therapy resulted in modulation of sCD30 levels over time.
EUV via hole pattern fidelity enhancement through novel resist and post-litho plasma treatment
NASA Astrophysics Data System (ADS)
Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Yamashita, Fumiko; Kaushik, Kumar; Morikita, Shinya; Ito, Kiyohito; Yoshimura, Shota; Timoshkov, Vadim; Maslow, Mark; Jee, Tae Kwon; Reijnen, Liesbeth; Choi, Peter; Feng, Mu; Spence, Chris; Schoofs, Stijn
2018-03-01
Extreme ultraviolet (EUV) lithography is a potential solution for sustained scaling, and its adoption in high-volume manufacturing (HVM) is becoming increasingly realistic. The technology has a wide capability to mitigate the problems of 193-i multi-patterning (LELELE) for via hole patterning, which induces local pattern fidelity errors such as CDU, CER, and pattern placement error. EUV is thus a desirable scaling driver; however, a specific technical issue, the resolution-LER-sensitivity (RLS) triangle, remains. In this work, we examined hole patterning at reduced dose (a lower-dose approach), utilizing a post-litho hole pattern restoration technique named "CD-Healing".
Harada, Kazuto; Dong, Xiaochuan; Estrella, Jeannelyn S; Correa, Arlene M; Xu, Yan; Hofstetter, Wayne L; Sudo, Kazuki; Onodera, Hisashi; Suzuki, Koyu; Suzuki, Akihiro; Johnson, Randy L; Wang, Zhenning; Song, Shumei; Ajani, Jaffer A
2018-01-01
Programmed death ligand 1 (PD-L1) is a key protein upregulated by tumor cells to suppress immune responses. Tumor-associated macrophages (TAMs) play a major role in this immunosuppression, but the relationship between PD-L1 expression and TAMs remains unclear in gastric adenocarcinoma (GAC). We simultaneously examined expression of PD-L1 and TAMs in GAC. We performed immunohistochemical staining for PD-L1, CD68 (pan-macrophage), and CD163 (M2-like macrophage) in 217 GAC samples using a tissue microarray. Expression of PD-L1 and CD68- and CD163-positive cells was evaluated using the Cytoplasmic V2.0 algorithm in Aperio ImageScope software, and logistic regression analysis was used to compare expression patterns between groups. Thirty-one samples (14%) were positive for PD-L1 expression. The mean (± standard error) rates of infiltration were 6.83 ± 0.38% for CD68-positive cells and 6.16 ± 0.29% for CD163-positive cells. The mean rate of CD163-positive cell infiltration was significantly higher in diffuse GAC than in intestinal GAC (diffuse n = 111, 6.91%; intestinal n = 91, 5.26%; p = 0.006), but the mean rate of CD68-positive cell infiltration was similar between these types (p = 0.38). The mean infiltration rates of CD68- and CD163-positive cells in PD-L1-positive GAC were significantly higher than in PD-L1-negative GAC (CD68 p = 0.0002; CD163 p < 0.0001). In multivariate logistic regression analyses, CD163-positive cell infiltration was associated with PD-L1 expression (odds ratio 1.13; 95% confidence interval 1.02-1.25; p = 0.021). M2-like macrophage infiltration is highly associated with PD-L1 expression in GAC cells, suggesting that macrophage infiltration can serve as a potential therapeutic target.
Murphy, Karen E; Vetter, Thomas W
2013-05-01
The potential effect of spectral interference on the accurate measurement of the cadmium (Cd) mass fraction in fortified breakfast cereal and a variety of dietary supplement materials using inductively coupled plasma quadrupole mass spectrometry was studied. The materials were two new standard reference materials (SRMs), SRM 3233 Fortified Breakfast Cereal and SRM 3532 Calcium Dietary Supplement, as well as several existing materials: SRM 3258 Bitter Orange Fruit, SRM 3259 Bitter Orange Extract, SRM 3260 Bitter Orange-containing Solid Oral Dosage Form, and SRM 3280 Multivitamin/Multielement Tablets. Samples were prepared for analysis using the method of isotope dilution and measured using various operating and sample introduction configurations including standard mode, collision cell with kinetic energy discrimination mode, and standard mode with sample introduction via a desolvating nebulizer system. Three isotope pairs, (112)Cd/(111)Cd, (113)Cd/(111)Cd, and (114)Cd/(111)Cd, were measured. Cadmium mass fraction results for the unseparated samples of each material, measured using the three instrument configurations and isotope pairs, were compared to the results obtained after the matrix was removed via chemical separation using anion exchange chromatography. In four of the six materials studied, measurements using the standard mode with sample introduction via the desolvating nebulizer gave results for the unseparated samples quantified with the (112)Cd/(111)Cd isotope pair that showed a positive bias relative to the matrix-separated samples, indicating a persistent interference at m/z 112 with this configuration. Use of the standard mode, without the desolvating nebulizer, also gave results that showed a positive bias for the unseparated samples quantified with the (112)Cd/(111)Cd isotope pair in three of the materials studied.
Collision cell/kinetic energy discrimination mode, however, was very effective for reducing spectral interference for Cd in all of the materials and isotope pairs studied, except in the multivitamin/multielement matrix (SRM 3280) where the large corrections for known isobaric interferences or unidentified interferences compromised the accuracy. For SRM 3280, matrix separation provided the best method to achieve accurate measurement of Cd.
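The isotope-dilution quantification used throughout this study can be sketched with a schematic single-ratio calculation for the (112)Cd/(111)Cd pair. The spike abundances below are HYPOTHETICAL values for an enriched 111Cd spike, the natural abundances are accepted IUPAC values, and mass-bias, blank, and molar-mass corrections are deliberately omitted:

```python
# Schematic isotope-dilution calculation for the 112Cd/111Cd isotope pair.
# A111_SP / A112_SP (enriched 111Cd spike) are HYPOTHETICAL; natural
# abundances are IUPAC values. Real IDMS adds mass-bias, blank, and
# molar-mass corrections, all omitted here for clarity.
A111_NAT, A112_NAT = 0.1280, 0.2413   # natural Cd isotopic abundances
A111_SP, A112_SP = 0.95, 0.02         # assumed spike isotopic abundances

def natural_cd_amount(n_spike, r_measured):
    """Moles of natural Cd in a sample-spike blend, from the measured
    112/111 ratio of the blend, via the isotope-balance equations."""
    return n_spike * (r_measured * A111_SP - A112_SP) / \
           (A112_NAT - r_measured * A111_NAT)

# A measured blend ratio must lie between the spike ratio (~0.021)
# and the natural ratio (~1.885); any unresolved interference at m/z 112
# inflates r_measured and hence biases the result high, as seen in the study.
n_x = natural_cd_amount(n_spike=1.0e-9, r_measured=0.5)
print(f"{n_x:.3e} mol")
```

This also makes the study's central point concrete: since the amount depends directly on the measured ratio, a spectral interference at m/z 112 propagates straight into a positive bias on the Cd mass fraction.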
Differential tracking data types for accurate and efficient Mars planetary navigation
NASA Technical Reports Server (NTRS)
Edwards, C. D., Jr.; Kahn, R. D.; Folkner, W. M.; Border, J. S.
1991-01-01
Ways in which high-accuracy differential observations of two or more deep space vehicles can dramatically extend the power of earth-based tracking over conventional range and Doppler tracking are discussed. Two techniques, spacecraft-spacecraft differential very long baseline interferometry (S/C-S/C ΔVLBI) and same-beam interferometry (SBI), are discussed. The tracking and navigation capabilities of conventional range, Doppler, and quasar-relative ΔVLBI are reviewed, and the S/C-S/C ΔVLBI and SBI data types are introduced. For each data type, the formation of the observable is discussed, an error budget describing how physical error sources manifest themselves in the observable is presented, and potential applications of the technique for Space Exploration Initiative scenarios are examined. Requirements for spacecraft and ground systems needed to enable and optimize these types of observations are discussed.
Improving global CD uniformity by optimizing post-exposure bake and develop sequences
NASA Astrophysics Data System (ADS)
Osborne, Stephen P.; Mueller, Mark; Lem, Homer; Reyland, David; Baik, KiHo
2003-12-01
Improvements in the final uniformity of masks can be obscured by error contributions from many sources. The final Global CD Uniformity (GCDU) of a mask is degraded by individual contributions of the writing tool, the Post Applied Bake (PAB), the Post Exposure Bake (PEB), the Develop sequence, and the Etch step. Final global uniformity will improve by isolating and minimizing the variability of the PEB and Develop. We achieved this decoupling of the PEB and Develop process from the whole process stream by using "dark loss", which is the loss of unexposed resist during the develop process. We confirmed a correspondence between angstroms of dark loss and nanometer-scale deviations in the chrome CD. A plate with a distinctive dark loss pattern was related to a nearly identical pattern in the chrome CD. This pattern was verified to have originated during the PEB process and displayed a [Δ(Final CD)/Δ(Dark Loss)] ratio of 6 for TOK REAP200 resist. Previous papers have reported a sensitive linkage between angstroms of dark loss and nanometers in the final uniformity of the written plate. These initial studies reported using this method to improve the PAB of resists for greater uniformity of sensitivity and contrast. Similarly, this paper demonstrates an outstanding optimization of the PEB and Develop processes.
NASA Astrophysics Data System (ADS)
Zocchi, Fabio E.
2017-10-01
One of the approaches being tested for the integration of the mirror modules of the Advanced Telescope for High-Energy Astrophysics x-ray mission of the European Space Agency consists in aligning each module on an optical bench operated at an ultraviolet wavelength. The mirror module is illuminated by a plane wave and, in order to overcome diffraction effects, the centroid of the image produced by the module is used as a reference to assess the accuracy of the optical alignment of the mirror module itself. Among other sources of uncertainty, the wave-front error of the plane wave also introduces an error in the position of the centroid, thus affecting the quality of the mirror module alignment. The power spectral density of the position of the point spread function centroid is here derived from the power spectral density of the wave-front error of the plane wave in the framework of the scalar theory of Fourier diffraction. This allows a specification on the quality of the collimator used to generate the plane wave to be derived from the contribution to the error budget allocated to the uncertainty of the centroid position. The theory applies generally whenever Fourier diffraction is a valid approximation, in which case the obtained result is identical to that derived by geometrical optics considerations.
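The geometrical-optics result with which, as the abstract notes, the Fourier-diffraction derivation coincides can be stated compactly; the symbols ($f$ for focal length, $W$ for the wave-front error, $A_p$ for the pupil area) are introduced here for illustration and are not the paper's notation:

```latex
% Centroid displacement x_c produced by a wave-front error W(x, y)
% averaged over the pupil A_p, in the geometrical-optics limit:
x_c \;=\; \frac{f}{A_p} \iint_{A_p} \frac{\partial W}{\partial x}\, \mathrm{d}x\, \mathrm{d}y
% Because x_c is linear in W, the power spectral density of the centroid
% position follows directly from that of the wave-front error, which is
% what allows a collimator-quality specification to be derived from the
% centroid-position error allocation.
```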
Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.
1993-01-01
The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978 until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have in-flight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations, 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/m²) higher in the active Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as ±0.005 percent. The worst-case error estimate is ±0.03 percent.
Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas.
Jackson, Tracie R; Halford, Keith J; Gardner, Philip M
2018-03-06
Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. However, desert playa ETg rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are within the measurement error of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are within the measurement error of BREB and EC approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the uncertainty thresholds of BREB- and EC-based micrometeorological measurements. Discharge from desert playas likely contributes less than 2% of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas also is negligible in other basins. Simulation results also show that ETg from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.
NASA Astrophysics Data System (ADS)
Griffiths, Ronald E.; Topping, David J.
2017-11-01
Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a river reach is in a state of sediment accumulation, deficit or stasis. Many sediment-budget studies have estimated the sediment loads of ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of loads in regions where rainfall events, contributing geology, and vegetation have large spatial and/or temporal variability. Previous estimates of the combined mean-annual sediment load of all ungaged tributaries to the Colorado River downstream from Glen Canyon Dam vary by over a factor of three; this range in estimated sediment loads has resulted in different researchers reaching opposite conclusions on the sign (accumulation or deficit) of the sediment budget for particular reaches of the Colorado River. To better evaluate the supply of fine sediment (sand, silt, and clay) from these tributaries to the Colorado River, eight gages were established on previously ungaged tributaries in Glen, Marble, and Grand canyons. Results from this sediment-monitoring network show that previous estimates of the annual sediment loads of these tributaries were too high and that the sediment budget for the Colorado River below Glen Canyon Dam is more negative than previously calculated by most researchers. As a result of locally intense rainfall events with footprints smaller than the receiving basin, floods from a single tributary in semi-arid regions can have large (≥ 10 ×) differences in sediment concentrations between equal magnitude flows. 
Because sediment loads do not necessarily correlate with drainage size, and may vary by two orders of magnitude on an annual basis, using techniques such as sediment-yield equations to estimate the sediment loads of ungaged tributaries may lead to large errors in sediment budgets.
Griffiths, Ronald; Topping, David
2017-01-01
Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a river reach is in a state of sediment accumulation, deficit or stasis. Many sediment-budget studies have estimated the sediment loads of ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of loads in regions where rainfall events, contributing geology, and vegetation have large spatial and/or temporal variability.Previous estimates of the combined mean-annual sediment load of all ungaged tributaries to the Colorado River downstream from Glen Canyon Dam vary by over a factor of three; this range in estimated sediment loads has resulted in different researchers reaching opposite conclusions on the sign (accumulation or deficit) of the sediment budget for particular reaches of the Colorado River. To better evaluate the supply of fine sediment (sand, silt, and clay) from these tributaries to the Colorado River, eight gages were established on previously ungaged tributaries in Glen, Marble, and Grand canyons. Results from this sediment-monitoring network show that previous estimates of the annual sediment loads of these tributaries were too high and that the sediment budget for the Colorado River below Glen Canyon Dam is more negative than previously calculated by most researchers. As a result of locally intense rainfall events with footprints smaller than the receiving basin, floods from a single tributary in semi-arid regions can have large (≥ 10 ×) differences in sediment concentrations between equal magnitude flows. 
Because sediment loads do not necessarily correlate with drainage size, and may vary by two orders of magnitude on an annual basis, using techniques such as sediment-yield equations to estimate the sediment loads of ungaged tributaries may lead to large errors in sediment budgets.
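The accumulation/deficit/stasis framing above reduces to a signed mass balance over a reach; a minimal sketch, using hypothetical annual loads rather than values from this study:

```python
# Minimal sketch of a reach-scale fine-sediment budget. All loads are
# hypothetical (metric tons per year), not data from the Colorado River study.
def sediment_budget(inputs_t, exports_t):
    """Return (net storage change, qualitative state) for a river reach."""
    net = sum(inputs_t) - sum(exports_t)
    if net > 0:
        state = "accumulation"
    elif net < 0:
        state = "deficit"
    else:
        state = "stasis"
    return net, state

# Gauged tributary supply vs. load exported past the downstream gage
net, state = sediment_budget(inputs_t=[120_000, 45_000], exports_t=[210_000])
print(net, state)  # -45000 deficit
```

Because the sign of `net` can flip within the plausible range of tributary-load estimates, a factor-of-three uncertainty in the inputs is enough to reverse the qualitative conclusion, which is the crux of the disagreement the abstract describes.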
NASA Astrophysics Data System (ADS)
Huang, K.; Oppo, D.; Curry, W. B.
2012-12-01
Reconstruction of changes in Antarctic Intermediate Water (AAIW) circulation across the last deglaciation is critical in constraining the links between AAIW and Atlantic Meridional Overturning Circulation (AMOC) and understanding how AAIW influences oceanic heat transport and carbon budget across abrupt climate events. Here we systematically establish in situ calibrations for carbonate saturation state (B/Ca), nutrient (Cd/Ca and δ13C) and watermass proxies (ɛNd) in foraminifera using multicore tops and ambient seawater samples collected from the Demerara Rise, western tropical Atlantic. Through the multi-proxy reconstructions, deglacial variability of intermediate water circulation in the western tropical Atlantic can be further constrained. The reconstructed seawater Cd record from the Demerara Rise sediment core (KNR197-3-46CDH, at 947 m water depth) over the last 21 kyrs suggests reduced presence of AAIW during the cold intervals (LGM, H1 and YD) when AMOC was reduced. Down-core B/Ca record shows elevated intermediate water Δ[CO32-] during these cold intervals, further indicating a weaker influence of AAIW in the western tropical Atlantic. The δ13C record exhibits a pronounced deglacial minimum and a clear decoupling between δ13C and Cd/Ca after the AMOC completely recovered at around 8 kyr BP. This could be due to the carbonate ion effect on benthic Cd/Ca or the influence of organic matter remineralization on benthic δ13C. A new ɛNd record for the last deglaciation will be provided to evaluate the relative proportions of southern and northern waters at this intermediate site in the western tropical Atlantic.
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 × 10^-15 s s^-1 (or approx. 0.002 mm s^-1) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
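The delay-to-position mapping invoked above is simple geometry, Δθ ≈ cΔτ/B; a sketch using a representative DSN intercontinental baseline length (the 8000 km figure is illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def delay_to_angle_nrad(delay_scatter_s, baseline_m):
    """Geometric bound on angular position error: delta_theta ~ c*delta_tau/B.

    The paper cautions that structure errors do not necessarily map this
    directly into position errors; this is only the worst-case conversion."""
    return C * delay_scatter_s / baseline_m * 1e9  # radians -> nrad

# 100 ps of structure-induced delay scatter on an ~8000 km baseline
print(round(delay_to_angle_nrad(100e-12, 8.0e6), 2))  # ~3.75 nrad
```

This lands inside the 2 to 5 nrad range quoted in the abstract, which is why structure scatter at the 50 to 100 psec level matters once other error sources shrink.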
Solar adaptive optics with the DKIST: status report
NASA Astrophysics Data System (ADS)
Johnson, Luke C.; Cummings, Keith; Drobilek, Mark; Gregory, Scott; Hegwer, Steve; Johansson, Erik; Marino, Jose; Richards, Kit; Rimmele, Thomas; Sekulic, Predrag; Wöger, Friedrich
2014-08-01
The DKIST wavefront correction system will be an integral part of the telescope, providing active alignment control, wavefront correction, and jitter compensation to all DKIST instruments. The wavefront correction system will operate in four observing modes: diffraction-limited, seeing-limited on-disk, seeing-limited coronal, and limb occulting with image stabilization. Wavefront correction for DKIST includes two major components: active optics to correct low-order wavefront and alignment errors, and adaptive optics to correct wavefront errors and high-frequency jitter caused by atmospheric turbulence. The adaptive optics system is built around a fast tip-tilt mirror and a 1600-actuator deformable mirror, both of which are controlled by an FPGA-based real-time system running at 2 kHz. It is designed to achieve on-axis Strehl of 0.3 at 500 nm in median seeing (r0 = 7 cm) and Strehl of 0.6 at 630 nm in excellent seeing (r0 = 20 cm). We present the current status of the DKIST high-order adaptive optics, focusing on system design, hardware procurements, and error budget management.
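A common way such an error budget feeds a Strehl prediction is the Maréchal approximation, S ≈ exp(-(2πσ/λ)²), with independent error terms combined in quadrature; the budget terms below are invented for illustration and are not DKIST's actual allocation:

```python
import math

def strehl(rms_errors_nm, wavelength_nm):
    """Marechal approximation: combine independent wavefront error terms
    in quadrature, then convert total RMS wavefront error to Strehl ratio.
    The individual terms passed in are illustrative, not DKIST's budget."""
    total_nm = math.sqrt(sum(e ** 2 for e in rms_errors_nm))
    return math.exp(-((2 * math.pi * total_nm / wavelength_nm) ** 2))

# Hypothetical terms: fitting, bandwidth, measurement, calibration (nm RMS)
s = strehl([60.0, 45.0, 30.0, 25.0], wavelength_nm=500.0)
# s comes out near 0.32, comparable to the median-seeing on-axis target
```

The quadrature sum is why error budget management matters: one oversized term (e.g. fitting error in poor seeing) dominates the total and collapses the Strehl regardless of how small the other terms are.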
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
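The core idea, stripped of the spacecraft dynamics model, is that under Gaussian sensor noise the maximum-likelihood mass estimate from force/acceleration data reduces to least squares on F = ma. This toy sketch is not the actual G-SAMPLE estimator, and all numbers are invented:

```python
# Toy version of the idea behind G-SAMPLE: with i.i.d. Gaussian force-sensor
# noise, the maximum-likelihood estimate of total mass from (force,
# acceleration) pairs is the least-squares solution of F = m*a.
def estimate_mass(forces_n, accels_ms2):
    num = sum(f * a for f, a in zip(forces_n, accels_ms2))
    den = sum(a * a for a in accels_ms2)
    return num / den  # ML estimate of m (kg)

dry_mass = 500.0                  # known spacecraft mass, kg (hypothetical)
forces = [501.2, 499.6, 500.9]    # measured thruster force, N (hypothetical)
accels = [1.0, 0.998, 1.001]      # measured acceleration, m/s^2 (hypothetical)
m_total = estimate_mass(forces, accels)
sample_kg = m_total - dry_mass    # inferred collected-sample mass
```

This is the sense in which "sample mass identification becomes a computation": the estimate comes from sensors the spacecraft already carries, with accuracy set by thruster profile error, parameter uncertainty, and sensor noise rather than by dedicated hardware.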
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-06-01
The US General Accounting Office and executive agency Inspectors General have reported losses of millions of dollars in government funds resulting from fraud, waste and error. The Administration and the Congress have initiated determined efforts to eliminate such losses from government programs and activities. Primary emphasis in this effort is on the strengthening of accounting and administrative controls. Accordingly, the Office of Management and Budget (OMB) issued Circular No. A-123, Internal Control Systems, on October 28, 1981. The campaign to improve internal controls was endorsed by the Secretary of Energy in a memorandum to Heads of Departmental Components, dated March 13, 1981, Subject: Internal Control as a Deterrent to Fraud, Waste and Error. A vulnerability assessment is a review of the susceptibility of a program or function to unauthorized use of resources, errors in reports and information, and illegal or unethical acts. It is based on considerations of the environment in which the program or function is carried out, the inherent riskiness of the program or function, and a preliminary evaluation as to whether adequate safeguards exist and are functioning.
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Schwaller, M.; Petersen, W; Zhang, J.
2012-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground level. The problem was addressed in a previous paper by comparison of the 2A25 version 6 (V6) product with reference values derived from NOAA/NSSL's ground radar-based National Mosaic and QPE system (NMQ/Q2). The primary contribution of this study is to compare the new 2A25 version 7 (V7) products that were recently released as a replacement for V6. This new version is considered superior over land areas. Several aspects of the two versions are compared and quantified, including rainfall rate distributions, systematic biases, and random errors. All analyses indicate V7 is an improvement over V6.
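The systematic-bias / random-error decomposition mentioned above amounts to a mean and standard deviation of satellite-minus-reference differences; a sketch with invented sample values (not TRMM or NMQ/Q2 data):

```python
# Decompose satellite-vs-reference rain-rate differences into a systematic
# bias (mean difference) and a random component (std of differences).
# All rain rates below are made up for illustration, mm/h.
def error_stats(sat, ref):
    diffs = [s - r for s, r in zip(sat, ref)]
    n = len(diffs)
    bias = sum(diffs) / n
    var = sum((d - bias) ** 2 for d in diffs) / (n - 1)  # sample variance
    return bias, var ** 0.5  # (systematic bias, random error std)

bias, rand = error_stats(sat=[4.1, 9.8, 1.2, 6.0], ref=[4.5, 10.5, 1.0, 6.4])
# here bias < 0: the "satellite" underestimates relative to the reference
```

Comparing V6 and V7 on both numbers separately matters because a product revision can shrink the bias while leaving the random scatter (or the rain-rate distribution shape) unchanged, or vice versa.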
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction.
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
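The negative-water clipping fixer described above can be illustrated with a toy mass-conserving clip: negatives are zeroed and the positive values rescaled so the total is unchanged. This is a simplified stand-in for the fixers in the paper, not EAM's actual implementation:

```python
# Sketch of a mass-conserving fixer for negative water concentrations:
# clip negatives to zero, then scale the remaining positive values so the
# total (here, a single column's mixing ratios) is preserved exactly.
def conserving_clip(q):
    total = sum(q)
    clipped = [max(v, 0.0) for v in q]
    pos_sum = sum(clipped)
    if pos_sum == 0.0:
        return clipped  # nothing positive left to redistribute
    scale = total / pos_sum  # assumes total >= 0; a real fixer handles more cases
    return [v * scale for v in clipped]

q_fixed = conserving_clip([3.0e-3, -1.0e-4, 2.0e-3])
# sum(q_fixed) equals the original total (to rounding), with no negatives left
```

Plain clipping without the rescaling step silently creates water, which is exactly the kind of small per-step error that, as the abstract notes, accumulates into systematic drift over century-long runs.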
NASA Astrophysics Data System (ADS)
Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.
2014-02-01
An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier Transform Spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs are typically within 60 m of those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (Collision-Induced Absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 retrievals for the 2009-2011 time frame are validated against aircraft measurements from CARIBIC, CONTRAIL, and HIPPO, yielding typical biases of -1.7 ppm in the 5-13 km range. The standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable in determining the seasonal variation of the latitudinal gradient, which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.5 ± 0.7 ppm yr^-1, in agreement with the currently accepted global growth rate based on ground-based measurements.
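An annual growth rate like the 2.5 ppm yr^-1 figure is typically obtained as the ordinary-least-squares slope of mean mixing ratio against time; a sketch with invented yearly means, not ACE-FTS data:

```python
# OLS slope of yearly mean CO2 mixing ratio vs. year, i.e. an annual
# growth rate in ppm/yr. The years and values below are invented.
def growth_rate(years, ppm):
    n = len(years)
    my, mp = sum(years) / n, sum(ppm) / n
    num = sum((y - my) * (p - mp) for y, p in zip(years, ppm))
    den = sum((y - my) ** 2 for y in years)
    return num / den  # ppm per year

rate = growth_rate([2009, 2010, 2011], [386.0, 388.6, 391.1])
# rate ~ 2.55 ppm/yr for these invented means
```

With only three yearly means the fitted slope carries a large uncertainty, which is consistent with the ± 0.7 ppm yr^-1 quoted for the short 2009-2011 window.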
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
McPeters, Richard D.; Bhartia, P. K.; Krueger, Arlin J.; Herman, Jay R.; Schlesinger, Barry M.; Wellemeyer, Charles G.; Seftor, Colin J.; Jaross, Glen; Taylor, Steven L.; Swissler, Tom;
1996-01-01
Two data products from the Total Ozone Mapping Spectrometer (TOMS) onboard Nimbus-7 have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the instrument sensitivity are monitored by a spectral discrimination technique using measurements of the intrinsically stable wavelength dependence of derived surface reflectivity. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and drift is less than 1.0 percent per decade. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone amount and reflectivity in a 1-degree latitude by 1.25-degree longitude grid. The Level-3 product also is available on CD-ROM. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
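The compare-against-calculated-radiances retrieval described above can be sketched as a nearest-match search over a precomputed table; the single-wavelength lookup table here is invented (real TOMS tables span wavelength triplets, solar zenith angle, and optical path):

```python
# Toy version of a table-lookup total-ozone retrieval: pick the candidate
# ozone amount whose precomputed radiance/irradiance ratio best matches
# the measurement. Table values are invented, not TOMS calibration data.
def retrieve_ozone(measured_ratio, table):
    """table maps total ozone (Dobson units) -> computed radiance ratio."""
    return min(table, key=lambda du: abs(table[du] - measured_ratio))

lut = {250: 0.082, 300: 0.071, 350: 0.063, 400: 0.057}
print(retrieve_ozone(0.069, lut))  # 300
```

Using the radiance-to-irradiance ratio rather than absolute radiance is what makes the retrieval robust to common-mode instrument drift, which the spectral discrimination technique then monitors for the residual wavelength-dependent part.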
Cheng, Ching-Yu; Schache, Maria; Ikram, M. Kamran; Young, Terri L.; Guggenheim, Jeremy A.; Vitart, Veronique; MacGregor, Stuart; Verhoeven, Virginie J.M.; Barathi, Veluchamy A.; Liao, Jiemin; Hysi, Pirro G.; Bailey-Wilson, Joan E.; St. Pourcain, Beate; Kemp, John P.; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Montgomery, Grant W.; Mishra, Aniket; Wang, Ya Xing; Wang, Jie Jin; Rochtchina, Elena; Polasek, Ozren; Wright, Alan F.; Amin, Najaf; van Leeuwen, Elisabeth M.; Wilson, James F.; Pennell, Craig E.; van Duijn, Cornelia M.; de Jong, Paulus T.V.M.; Vingerling, Johannes R.; Zhou, Xin; Chen, Peng; Li, Ruoying; Tay, Wan-Ting; Zheng, Yingfeng; Chew, Merwyn; Rahi, Jugnoo S.; Hysi, Pirro G.; Yoshimura, Nagahisa; Yamashiro, Kenji; Miyake, Masahiro; Delcourt, Cécile; Maubaret, Cecilia; Williams, Cathy; Guggenheim, Jeremy A.; Northstone, Kate; Ring, Susan M.; Davey-Smith, George; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Igo, Robert P.; Chew, Emily; Janmahasathian, Sarayut; Iyengar, Sudha K.; Igo, Robert P.; Chew, Emily; Janmahasathian, Sarayut; Stambolian, Dwight; Wilson, Joan E. Bailey; MacGregor, Stuart; Lu, Yi; Jonas, Jost B.; Xu, Liang; Saw, Seang-Mei; Baird, Paul N.; Rochtchina, Elena; Mitchell, Paul; Wang, Jie Jin; Jonas, Jost B.; Nangia, Vinay; Hayward, Caroline; Wright, Alan F.; Vitart, Veronique; Polasek, Ozren; Campbell, Harry; Vitart, Veronique; Rudan, Igor; Vatavuk, Zoran; Vitart, Veronique; Paterson, Andrew D.; Hosseini, S. Mohsen; Iyengar, Sudha K.; Igo, Robert P.; Fondran, Jeremy R.; Young, Terri L.; Feng, Sheng; Verhoeven, Virginie J.M.; Klaver, Caroline C.; van Duijn, Cornelia M.; Metspalu, Andres; Haller, Toomas; Mihailov, Evelin; Pärssinen, Olavi; Wedenoja, Juho; Wilson, Joan E. 
Bailey; Wojciechowski, Robert; Baird, Paul N.; Schache, Maria; Pfeiffer, Norbert; Höhn, René; Pang, Chi Pui; Chen, Peng; Meitinger, Thomas; Oexle, Konrad; Wegner, Aharon; Yoshimura, Nagahisa; Yamashiro, Kenji; Miyake, Masahiro; Pärssinen, Olavi; Yip, Shea Ping; Ho, Daniel W.H.; Pirastu, Mario; Murgia, Federico; Portas, Laura; Biino, Genevra; Wilson, James F.; Fleck, Brian; Vitart, Veronique; Stambolian, Dwight; Wilson, Joan E. Bailey; Hewitt, Alex W.; Ang, Wei; Verhoeven, Virginie J.M.; Klaver, Caroline C.; van Duijn, Cornelia M.; Saw, Seang-Mei; Wong, Tien-Yin; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Wong, Tien-Yin; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Wong, Tien-Yin; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Tai, E-Shyong; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Mackey, David A.; MacGregor, Stuart; Hammond, Christopher J.; Hysi, Pirro G.; Deangelis, Margaret M.; Morrison, Margaux; Zhou, Xiangtian; Chen, Wei; Paterson, Andrew D.; Hosseini, S. 
Mohsen; Mizuki, Nobuhisa; Meguro, Akira; Lehtimäki, Terho; Mäkelä, Kari-Matti; Raitakari, Olli; Kähönen, Mika; Burdon, Kathryn P.; Craig, Jamie E.; Iyengar, Sudha K.; Igo, Robert P.; Lass, Jonathan H.; Reinhart, William; Belin, Michael W.; Schultze, Robert L.; Morason, Todd; Sugar, Alan; Mian, Shahzad; Soong, Hunson Kaz; Colby, Kathryn; Jurkunas, Ula; Yee, Richard; Vital, Mark; Alfonso, Eduardo; Karp, Carol; Lee, Yunhee; Yoo, Sonia; Hammersmith, Kristin; Cohen, Elisabeth; Laibson, Peter; Rapuano, Christopher; Ayres, Brandon; Croasdale, Christopher; Caudill, James; Patel, Sanjay; Baratz, Keith; Bourne, William; Maguire, Leo; Sugar, Joel; Tu, Elmer; Djalilian, Ali; Mootha, Vinod; McCulley, James; Bowman, Wayne; Cavanaugh, H. Dwight; Verity, Steven; Verdier, David; Renucci, Ann; Oliva, Matt; Rotkis, Walter; Hardten, David R.; Fahmy, Ahmad; Brown, Marlene; Reeves, Sherman; Davis, Elizabeth A.; Lindstrom, Richard; Hauswirth, Scott; Hamilton, Stephen; Lee, W. Barry; Price, Francis; Price, Marianne; Kelly, Kathleen; Peters, Faye; Shaughnessy, Michael; Steinemann, Thomas; Dupps, B.J.; Meisler, David M.; Mifflin, Mark; Olson, Randal; Aldave, Anthony; Holland, Gary; Mondino, Bartly J.; Rosenwasser, George; Gorovoy, Mark; Dunn, Steven P.; Heidemann, David G.; Terry, Mark; Shamie, Neda; Rosenfeld, Steven I.; Suedekum, Brandon; Hwang, David; Stone, Donald; Chodosh, James; Galentine, Paul G.; Bardenstein, David; Goddard, Katrina; Chin, Hemin; Mannis, Mark; Varma, Rohit; Borecki, Ingrid; Chew, Emily Y.; Haller, Toomas; Mihailov, Evelin; Metspalu, Andres; Wedenoja, Juho; Simpson, Claire L.; Wojciechowski, Robert; Höhn, René; Mirshahi, Alireza; Zeller, Tanja; Pfeiffer, Norbert; Lackner, Karl J.; Donnelly, Peter; Barroso, Ines; Blackwell, Jenefer M.; Bramon, Elvira; Brown, Matthew A.; Casas, Juan P.; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S.; Mathew, Christopher G.; Palmer, Colin N.A.; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J.; 
Trembath, Richard C.; Viswanathan, Ananth C.; Wood, Nicholas W.; Spencer, Chris C.A.; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E.; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J.; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T.; Liddle, Jennifer; Potter, Simon C.; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew, Christopher G.; Blackwell, Jenefer M.; Brown, Matthew A.; Corvin, Aiden; Spencer, Chris C.A.; Bettecken, Thomas; Meitinger, Thomas; Oexle, Konrad; Pirastu, Mario; Portas, Laura; Nag, Abhishek; Williams, Katie M.; Yonova-Doing, Ekaterina; Klein, Ronald; Klein, Barbara E.; Hosseini, S. Mohsen; Paterson, Andrew D.; Genuth, S.; Nathan, D.M.; Zinman, B.; Crofford, O.; Crandall, J.; Reid, M.; Brown-Friday, J.; Engel, S.; Sheindlin, J.; Martinez, H.; Shamoon, H.; Engel, H.; Phillips, M.; Gubitosi-Klug, R.; Mayer, L.; Pendegast, S.; Zegarra, H.; Miller, D.; Singerman, L.; Smith-Brewer, S.; Novak, M.; Quin, J.; Dahms, W.; Genuth, Saul; Palmert, M.; Brillon, D.; Lackaye, M.E.; Kiss, S.; Chan, R.; Reppucci, V.; Lee, T.; Heinemann, M.; Whitehouse, F.; Kruger, D.; Jones, J.K.; McLellan, M.; Carey, J.D.; Angus, E.; Thomas, A.; Galprin, A.; Bergenstal, R.; Johnson, M.; Spencer, M.; Morgan, K.; Etzwiler, D.; Kendall, D.; Aiello, Lloyd Paul; Golden, E.; Jacobson, A.; Beaser, R.; Ganda, O.; Hamdy, O.; Wolpert, H.; Sharuk, G.; Arrigg, P.; Schlossman, D.; Rosenzwieg, J.; Rand, L.; Nathan, D.M.; Larkin, M.; Ong, M.; Godine, J.; Cagliero, E.; Lou, P.; Folino, K.; Fritz, S.; Crowell, S.; Hansen, K.; Gauthier-Kelly, C.; Service, J.; Ziegler, G.; Luttrell, L.; Caulder, S.; Lopes-Virella, M.; Colwell, J.; Soule, J.; Fernandes, J.; 
Hermayer, K.; Kwon, S.; Brabham, M.; Blevins, A.; Parker, J.; Lee, D.; Patel, N.; Pittman, C.; Lindsey, P.; Bracey, M.; Lee, K.; Nutaitis, M.; Farr, A.; Elsing, S.; Thompson, T.; Selby, J.; Lyons, T.; Yacoub-Wasef, S.; Szpiech, M.; Wood, D.; Mayfield, R.; Molitch, M.; Schaefer, B.; Jampol, L.; Lyon, A.; Gill, M.; Strugula, Z.; Kaminski, L.; Mirza, R.; Simjanoski, E.; Ryan, D.; Kolterman, O.; Lorenzi, G.; Goldbaum, M.; Sivitz, W.; Bayless, M.; Counts, D.; Johnsonbaugh, S.; Hebdon, M.; Salemi, P.; Liss, R.; Donner, T.; Gordon, J.; Hemady, R.; Kowarski, A.; Ostrowski, D.; Steidl, S.; Jones, B.; Herman, W.H.; Martin, C.L.; Pop-Busui, R.; Sarma, A.; Albers, J.; Feldman, E.; Kim, K.; Elner, S.; Comer, G.; Gardner, T.; Hackel, R.; Prusak, R.; Goings, L.; Smith, A.; Gothrup, J.; Titus, P.; Lee, J.; Brandle, M.; Prosser, L.; Greene, D.A.; Stevens, M.J.; Vine, A.K.; Bantle, J.; Wimmergren, N.; Cochrane, A.; Olsen, T.; Steuer, E.; Rath, P.; Rogness, B.; Hainsworth, D.; Goldstein, D.; Hitt, S.; Giangiacomo, J.; Schade, D.S.; Canady, J.L.; Chapin, J.E.; Ketai, L.H.; Braunstein, C.S.; Bourne, P.A.; Schwartz, S.; Brucker, A.; Maschak-Carey, B.J.; Baker, L.; Orchard, T.; Silvers, N.; Ryan, C.; Songer, T.; Doft, B.; Olson, S.; Bergren, R.L.; Lobes, L.; Rath, P. 
Paczan; Becker, D.; Rubinstein, D.; Conrad, P.W.; Yalamanchi, S.; Drash, A.; Morrison, A.; Bernal, M.L.; Vaccaro-Kish, J.; Malone, J.; Pavan, P.R.; Grove, N.; Iyer, M.N.; Burrows, A.F.; Tanaka, E.A.; Gstalder, R.; Dagogo-Jack, S.; Wigley, C.; Ricks, H.; Kitabchi, A.; Murphy, M.B.; Moser, S.; Meyer, D.; Iannacone, A.; Chaum, E.; Yoser, S.; Bryer-Ash, M.; Schussler, S.; Lambeth, H.; Raskin, P.; Strowig, S.; Zinman, B.; Barnie, A.; Devenyi, R.; Mandelcorn, M.; Brent, M.; Rogers, S.; Gordon, A.; Palmer, J.; Catton, S.; Brunzell, J.; Wessells, H.; de Boer, I.H.; Hokanson, J.; Purnell, J.; Ginsberg, J.; Kinyoun, J.; Deeb, S.; Weiss, M.; Meekins, G.; Distad, J.; Van Ottingham, L.; Dupre, J.; Harth, J.; Nicolle, D.; Driscoll, M.; Mahon, J.; Canny, C.; May, M.; Lipps, J.; Agarwal, A.; Adkins, T.; Survant, L.; Pate, R.L.; Munn, G.E.; Lorenz, R.; Feman, S.; White, N.; Levandoski, L.; Boniuk, I.; Grand, G.; Thomas, M.; Joseph, D.D.; Blinder, K.; Shah, G.; Boniuk; Burgess; Santiago, J.; Tamborlane, W.; Gatcomb, P.; Stoessel, K.; Taylor, K.; Goldstein, J.; Novella, S.; Mojibian, H.; Cornfeld, D.; Lima, J.; Bluemke, D.; Turkbey, E.; van der Geest, R.J.; Liu, C.; Malayeri, A.; Jain, A.; Miao, C.; Chahal, H.; Jarboe, R.; Maynard, J.; Gubitosi-Klug, R.; Quin, J.; Gaston, P.; Palmert, M.; Trail, R.; Dahms, W.; Lachin, J.; Cleary, P.; Backlund, J.; Sun, W.; Braffett, B.; Klumpp, K.; Chan, K.; Diminick, L.; Rosenberg, D.; Petty, B.; Determan, A.; Kenny, D.; Rutledge, B.; Younes, Naji; Dews, L.; Hawkins, M.; Cowie, C.; Fradkin, J.; Siebert, C.; Eastman, R.; Danis, R.; Gangaputra, S.; Neill, S.; Davis, M.; Hubbard, L.; Wabers, H.; Burger, M.; Dingledine, J.; Gama, V.; Sussman, R.; Steffes, M.; Bucksa, J.; Nowicki, M.; Chavers, B.; O’Leary, D.; Polak, J.; Harrington, A.; Funk, L.; Crow, R.; Gloeb, B.; Thomas, S.; O’Donnell, C.; Soliman, E.; Zhang, Z.M.; Prineas, R.; Campbell, C.; Ryan, C.; Sandstrom, D.; Williams, T.; Geckle, M.; Cupelli, E.; Thoma, F.; Burzuk, B.; Woodfill, T.; Low, P.; 
Sommer, C.; Nickander, K.; Budoff, M.; Detrano, R.; Wong, N.; Fox, M.; Kim, L.; Oudiz, R.; Weir, G.; Espeland, M.; Manolio, T.; Rand, L.; Singer, D.; Stern, M.; Boulton, A.E.; Clark, C.; D’Agostino, R.; Lopes-Virella, M.; Garvey, W.T.; Lyons, T.J.; Jenkins, A.; Virella, G.; Jaffa, A.; Carter, Rickey; Lackland, D.; Brabham, M.; McGee, D.; Zheng, D.; Mayfield, R.K.; Boright, A.; Bull, S.; Sun, L.; Scherer, S.; Zinman, B.; Natarajan, R.; Miao, F.; Zhang, L.; Chen;, Z.; Nathan, D.M.; Makela, Kari-Matti; Lehtimaki, Terho; Kahonen, Mika; Raitakari, Olli; Yoshimura, Nagahisa; Matsuda, Fumihiko; Chen, Li Jia; Pang, Chi Pui; Yip, Shea Ping; Yap, Maurice K.H.; Meguro, Akira; Mizuki, Nobuhisa; Inoko, Hidetoshi; Foster, Paul J.; Zhao, Jing Hua; Vithana, Eranga; Tai, E-Shyong; Fan, Qiao; Xu, Liang; Campbell, Harry; Fleck, Brian; Rudan, Igor; Aung, Tin; Hofman, Albert; Uitterlinden, André G.; Bencic, Goran; Khor, Chiea-Chuen; Forward, Hannah; Pärssinen, Olavi; Mitchell, Paul; Rivadeneira, Fernando; Hewitt, Alex W.; Williams, Cathy; Oostra, Ben A.; Teo, Yik-Ying; Hammond, Christopher J.; Stambolian, Dwight; Mackey, David A.; Klaver, Caroline C.W.; Wong, Tien-Yin; Saw, Seang-Mei; Baird, Paul N.
2013-01-01
Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We identified eight genome-wide significant loci for AL (RSPO1, C3orf26, LAMA2, GJD2, ZNRF3, CD55, MIP, and ALPPL2) and confirmed one previously reported AL locus (ZC3H11B). Of the nine loci, five (LAMA2, GJD2, CD55, ALPPL2, and ZC3H11B) were associated with refraction in 18 independent cohorts (n = 23,591). Differential gene expression was observed for these loci in minus-lens-induced myopia mouse experiments and human ocular tissues. Two of the AL genes, RSPO1 and ZNRF3, are involved in Wnt signaling, a pathway playing a major role in the regulation of eyeball size. This study provides evidence of shared genes between AL and refraction, but importantly also suggests that these traits may have unique pathways. PMID:24144296
Novel EUV mask black border suppressing EUV and DUV OoB light reflection
NASA Astrophysics Data System (ADS)
Ito, Shin; Kodera, Yutaka; Fukugami, Norihito; Komizo, Toru; Maruyama, Shingo; Watanabe, Genta; Yoshida, Itaru; Kotani, Jun; Konishi, Toshio; Haraguchi, Takashi
2016-05-01
EUV lithography is the most promising technology for semiconductor device manufacturing at the 10nm node and beyond. The image border is a pattern-free dark area around the die on the photomask, serving as a transition area between the die and the parts of the mask that are shielded from the exposure light by the Reticle Masking (REMA) blades. When printing a die at dense spacing on an EUV scanner, the reflection from the image border overlaps the edges of neighboring dies, affecting CD and contrast in this area, because the EUV absorber stack reflects 1-3% of actinic EUV light. To reduce this effect, several types of image border with reduced EUV reflectance (<0.05%) have been proposed; such an image border is referred to as a black border. In particular, an etched multilayer type black border was developed; it was demonstrated that the CD impact at the edge of a die is strongly reduced with this type of black border (BB). However, wafer printing results still showed some CD change in the die influenced by the black border reflection. It was proven that the CD shift was caused by DUV Out of Band (OOB) light from the EUV light source. New types of multilayer etched BB were evaluated and showed good potential for DUV light suppression. In this study, a novel BB called `Hybrid Black Border' (HBB) has been developed to eliminate EUV and DUV OOB light reflection by applying optical design and special micro-fabrication techniques. A new test mask with the HBB was fabricated without any degradation of mask quality, as confirmed by CD performance in the main pattern, defectivity, and cleaning durability. The imaging performance for N10 imaging structures is demonstrated on an NXE:3300B in collaboration with ASML. Compared to the imaging results obtained for a mask with the earlier developed BB, the HBB achieves a ~3x improvement; less than 0.2 nm CD change is observed in the corners of the die.
A CD uniformity budget including the impact of OOB light in the die edge area is evaluated, which shows that the OOB impact of the HBB becomes comparable to the other CDU contributors in this area. Finally, we conclude that HBB is a promising technology enabling CD control at die edges.
Manzello, Derek P; Enochs, Ian C; Kolodziej, Graham; Carlton, Renée; Valentino, Lauren
2018-01-01
The persistence of coral reef frameworks requires that calcium carbonate (CaCO3) production by corals and other calcifiers outpaces CaCO3 loss via physical, chemical, and biological erosion. Coral bleaching causes declines in CaCO3 production, but this varies with bleaching severity and the species impacted. We conducted census-based CaCO3 budget surveys using the established ReefBudget approach at Cheeca Rocks, an inshore patch reef in the Florida Keys, annually from 2012 to 2016. This site experienced warm-water bleaching in 2011, 2014, and 2015. In 2017, we obtained cores of the dominant calcifying coral at this site, Orbicella faveolata, to understand how calcification rates were impacted by bleaching and how they affected the reef-wide CaCO3 budget. Bleaching depressed O. faveolata growth, and the decline of this one species led to an overestimation of mean (± std. error) reef-wide CaCO3 production by +0.68 (± 0.167) to +1.11 (± 0.236) kg m-2 year-1 when using the static ReefBudget coral growth inputs. During non-bleaching years, the ReefBudget inputs slightly underestimated gross production by -0.10 (± 0.022) to -0.43 (± 0.100) kg m-2 year-1. Carbonate production declined after the first year of back-to-back bleaching in 2014, but then increased after 2015 to values greater than in the initial surveys in 2012. Cheeca Rocks is an outlier in the Caribbean and Florida Keys in terms of coral cover, carbonate production, and abundance of O. faveolata, which is threatened under the Endangered Species Act. Given the resilience of this site to repeated bleaching events, it may deserve special management attention.
Statistical analysis of the surface figure of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John
2012-09-01
The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in the metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), tertiary mirror (TM), and fine steering mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The resulting surface figure errors of these mirrors have been statistically analyzed. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into the system budgeting process for similar optical systems.
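The decomposition of total rms WFE into spatial-frequency bands can be sketched with a discrete Fourier decomposition of a surface map. Below is a minimal numpy illustration; the band edges, grid, and sampling are illustrative assumptions, not the JWST budget's actual definitions:

```python
import numpy as np

def band_rms(surface, dx, f_lo, f_hi):
    """RMS of a surface error map within a spatial-frequency annulus [f_lo, f_hi)."""
    spec = np.fft.fft2(surface - surface.mean()) / surface.size
    fx = np.fft.fftfreq(surface.shape[0], d=dx)
    fr = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    mask = (fr >= f_lo) & (fr < f_hi)
    # Parseval's theorem: the power in a band is that band's contribution to variance
    return float(np.sqrt(np.sum(np.abs(spec[mask]) ** 2)))

# Synthetic surface map; band rms values combine in quadrature to the total rms
rng = np.random.default_rng(0)
surface = rng.normal(size=(64, 64))
bands = [(0.0, 0.1), (0.1, 0.25), (0.25, np.inf)]   # low / mid / high frequency
rms_by_band = [band_rms(surface, 1.0, lo, hi) for lo, hi in bands]
```

Because the bands partition frequency space, the root-sum-square of the band rms values reproduces the total rms of the map, which is why budgets can be tracked per band.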
Huang, Shengli; Young, Claudia; Abdul-Aziz, Omar I.; Dahal, Devendra; Feng, Min; Liu, Shuguang
2013-01-01
Hydrological processes of the wetland complex in the Prairie Pothole Region (PPR) are difficult to model, partly due to a lack of wetland morphology data. We used Light Detection And Ranging (LiDAR) data sets to derive wetland features; we then modelled rainfall, snowfall, snowmelt, runoff, evaporation, the “fill-and-spill” mechanism, shallow groundwater loss, and the effect of wet and dry conditions. For large wetlands with a volume greater than thousands of cubic metres (e.g. about 3000 m3), the modelled water volume agreed fairly well with observations; however, it did not succeed for small wetlands (e.g. volume less than 450 m3). Despite the failure for small wetlands, the modelled water area of the wetland complex coincided well with interpretation of aerial photographs, showing a linear regression with R2 of around 0.80 and a mean average error of around 0.55 km2. The next step is to improve the water budget modelling for small wetlands.
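The fill-and-spill mechanism described above can be sketched as a simple bucket model. This is a minimal single-wetland, single-time-step illustration; the paper's model additionally handles snowfall/snowmelt, wet and dry antecedent conditions, and shallow groundwater loss as separate processes:

```python
def step_wetland(volume, capacity, rain, runoff_in, evap, seepage):
    """One time step of a fill-and-spill water budget for a single wetland.
    All quantities in m^3. Returns (new_volume, spill_to_downstream)."""
    v = volume + rain + runoff_in - evap - seepage
    v = max(v, 0.0)                  # the wetland cannot hold negative water
    spill = max(v - capacity, 0.0)   # excess above capacity spills downstream
    return min(v, capacity), spill
```

Chaining the `spill` output of one wetland into the `runoff_in` of the next is the essence of modelling a prairie pothole wetland complex.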
Bi, Meihua; Xiao, Shilin; He, Hao; Yi, Lilin; Li, Zhengxuan; Li, Jun; Yang, Xuelin; Hu, Weisheng
2013-07-15
We propose a symmetric 40-Gb/s aggregate rate time- and wavelength-division multiplexed passive optical network (TWDM-PON) system with the capability of simultaneous downstream differential phase shift keying (DPSK) signal demodulation and upstream signal chirp management based on a delay interferometer (DI). Using the bi-pass characteristic of the DI, we experimentally demonstrate bidirectional transmission of signals at 10 Gb/s per wavelength and achieve negligible power penalties after 50 km of single-mode fiber (SMF). For the uplink transmission with the DI, a ~11-dB optical power budget improvement at a bit error ratio of 1e-3 is obtained, and the extinction ratio (ER) of the signal is also improved from 3.4 dB to 13.75 dB. Owing to this high ER, upstream burst-mode transmission in time-division multiplexing mode is successfully demonstrated. Moreover, in our experiment, a ~38-dB power budget is obtained, supporting 256 users over 50-km SMF transmission.
Reducing uncertainties in decadal variability of the global carbon budget with multiple datasets
Li, Wei; Ciais, Philippe; Wang, Yilong; Peng, Shushi; Broquet, Grégoire; Ballantyne, Ashley P.; Canadell, Josep G.; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Peters, Glen P.; Piao, Shilong; Pongratz, Julia
2016-01-01
Conventional calculations of the global carbon budget infer the land sink as a residual between emissions, atmospheric accumulation, and the ocean sink. Thus, the land sink accumulates the errors from the other flux terms and bears the largest uncertainty. Here, we present a Bayesian fusion approach that combines multiple observations in different carbon reservoirs to optimize the land (B) and ocean (O) carbon sinks, land use change emissions (L), and indirectly fossil fuel emissions (F) from 1980 to 2014. Compared with the conventional approach, Bayesian optimization decreases the uncertainties in B by 41% and in O by 46%. The L uncertainty decreases by 47%, whereas F uncertainty is marginally improved through the knowledge of natural fluxes. Both ocean and net land uptake (B + L) rates have positive trends of 29 ± 8 and 37 ± 17 Tg C⋅y−2 since 1980, respectively. Our Bayesian fusion of multiple observations reduces uncertainties, thereby allowing us to isolate important variability in global carbon cycle processes. PMID:27799533
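The paper's Bayesian fusion jointly optimizes the correlated budget terms B, O, L, and F; the basic mechanism by which combining independent observations of the same flux shrinks its uncertainty is inverse-variance (Gaussian) fusion, sketched below as a minimal illustration (the numbers are made up, not the paper's fluxes):

```python
import numpy as np

def fuse(estimates, sigmas):
    """Inverse-variance weighted fusion of independent Gaussian estimates of
    the same quantity. Returns (posterior mean, posterior sigma)."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return float(mean), float(1.0 / np.sqrt(np.sum(w)))

# Two hypothetical, equally uncertain estimates of a sink (Pg C/yr)
mean, sigma = fuse([1.0, 2.0], [1.0, 1.0])
```

The posterior sigma is always smaller than the smallest input sigma, which is the sense in which adding observations from other carbon reservoirs reduces the uncertainty of the residual term.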
Fan, Mingyi; Li, Tongjun; Hu, Jiwei; Cao, Rensheng; Wei, Xionghui; Shi, Xuedan; Ruan, Wenqian
2017-01-01
Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites were synthesized in the present study by a chemical deposition method and were then characterized by various methods, such as Fourier-transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). The prepared nZVI/rGO composites were utilized for Cd(II) removal from aqueous solutions in batch mode at different initial Cd(II) concentrations, initial pH values, contact times, and operating temperatures. Response surface methodology (RSM) and an artificial neural network hybridized with a genetic algorithm (ANN-GA) were used for modeling the removal efficiency of Cd(II) and optimizing the four removal process variables. The average prediction errors for the RSM and ANN-GA models were 6.47% and 1.08%, respectively. Although both models proved reliable in predicting the removal efficiency of Cd(II), the ANN-GA model was found to be more accurate than the RSM model. In addition, experimental data were fitted to the Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherms; the Cd(II) adsorption was best fitted by the Langmuir isotherm. Examination of the thermodynamic parameters revealed that the removal process was spontaneous and exothermic in nature. Furthermore, the pseudo-second-order model describes the kinetics of Cd(II) removal with a better R2 value than the pseudo-first-order model. PMID:28772901
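Fitting batch adsorption data to the Langmuir isotherm, as done above, amounts to a two-parameter nonlinear least-squares fit. A minimal sketch with scipy follows; the qmax and KL values and the concentration grid are hypothetical placeholders, not the paper's fitted constants:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: equilibrium adsorbed amount q_e vs concentration C_e."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical noiseless equilibrium data (qmax = 40 mg/g, KL = 0.05 L/mg assumed)
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = langmuir(ce, 40.0, 0.05)

# Recover the isotherm constants from the data
popt, _ = curve_fit(langmuir, ce, qe, p0=[10.0, 0.01])
```

The same fitting pattern applies to the Freundlich and D-R isotherms by swapping the model function, and comparing residuals (or R2) across models is how the best-fitting isotherm is selected.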
CD and defect improvement challenges for immersion processes
NASA Astrophysics Data System (ADS)
Ehara, Keisuke; Ema, Tatsuhiko; Yamasaki, Toshinari; Nakagawa, Seiji; Ishitani, Seiji; Morita, Akihiko; Kim, Jeonghun; Kanaoka, Masashi; Yasuda, Shuichi; Asai, Masaya
2009-03-01
The intention of this study is to develop an immersion lithography process using advanced track solutions to achieve world-class critical dimension (CD) and defectivity performance in a state-of-the-art manufacturing facility. This study looks at three important topics for immersion lithography: defectivity, CD control, and wafer backside contamination. The topic of defectivity is addressed through optimization of coat, develop, and rinse processes, as well as implementation of soak steps and bevel cleaning as part of a comprehensive defect solution. Develop and rinse processing techniques are especially important in the effort to achieve a zero-defect solution. Improved CD control is achieved using a biased hot plate (BHP) equipped with an electrostatic chuck. This electrostatic chuck BHP (eBHP) is not only able to operate at a very uniform temperature, but also allows the user to bias the post-exposure bake (PEB) temperature profile to compensate for systematic within-wafer (WiW) CD non-uniformities. Optimized CD results, pre- and post-etch, are presented for production wafers. Wafer backside particles can cause focus spots on an individual wafer or migrate to the exposure tool's wafer stage and cause problems for a multitude of wafers. A basic evaluation of the cleaning efficiency of a backside scrubber unit located on the track was performed as a precursor to a future study examining the impact of wafer backside condition on scanner focus errors as well as defectivity in an immersion scanner.
Gold diffusion in mercury cadmium telluride grown by molecular beam epitaxy
NASA Astrophysics Data System (ADS)
Selamet, Yusuf; Singh, Rasdip; Zhao, Jun; Zhou, Yong D.; Sivananthan, Sivalingam; Dhar, Nibir K.
2003-12-01
The growth and characterization of Au-doped HgCdTe layers on (211)B CdTe/Si substrates grown by molecular beam epitaxy are reported. The electrical properties of these layers and their Au diffusion behavior are presented. For ex-situ experiments, thin Au layers were deposited by evaporation and annealed at various temperatures and times to investigate the p-type doping properties and diffusion of Au in HgCdTe. The atomic distribution of the diffused Au was determined by secondary ion mass spectroscopy (SIMS). We found clear evidence for p-type doping of HgCdTe:Au by both in-situ and ex-situ methods. For in-situ doped layers, we found that the Au cell temperature needs to be around 900°C to obtain p-type behavior. The diffusion coefficient of Au in HgCdTe was calculated by fitting SIMS profiles after annealing; both complementary error function and Gaussian fittings were used, and they were in full agreement. A diffusion coefficient as low as 8×10-14 cm2/s was observed for a sample annealed at 250°C, and a slow component of the diffusion coefficient as low as 2×10-15 cm2/s was observed for a sample annealed at 300°C. Our preliminary results indicate no appreciable diffusion of Au in HgCdTe under the conditions used in these studies. Further work is in progress to confirm these results and to quantify our SIMS profiles.
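Extracting a diffusion coefficient from a SIMS depth profile with a complementary-error-function fit, as described above, can be sketched as follows. The anneal time, depth grid, and surface concentration here are assumed illustrative values (only the 8×10-14 cm2/s target comes from the abstract); fitting the characteristic length L = 2*sqrt(D*t) rather than D directly keeps the fit well behaved:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def sims_profile(x, c0, length):
    """Constant-source diffusion profile: C(x) = C0 * erfc(x / L), L = 2*sqrt(D*t)."""
    return c0 * erfc(x / length)

t_anneal = 3600.0                         # assumed 1 h anneal (illustrative)
true_d = 8.0e-14                          # cm^2/s, value reported for the 250 C anneal
l_true = 2.0 * np.sqrt(true_d * t_anneal)
x = np.linspace(0.0, 1.0e-4, 60)          # depth grid in cm (illustrative)
conc = sims_profile(x, 1.0, l_true)       # noiseless synthetic "SIMS" profile

popt, _ = curve_fit(sims_profile, x, conc, p0=[0.5, 1.0e-5])
d_fit = popt[1] ** 2 / (4.0 * t_anneal)   # recover D from the fitted length
```

With real SIMS data the Gaussian (thin-film source) solution would be fitted the same way, and agreement between the two fits, as the abstract reports, supports the extracted D.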
SMOS: a satellite mission to measure ocean surface salinity
NASA Astrophysics Data System (ADS)
Font, Jordi; Kerr, Yann H.; Srokosz, Meric A.; Etcheto, Jacqueline; Lagerloef, Gary S.; Camps, Adriano; Waldteufel, Philippe
2001-01-01
The ESA SMOS (Soil Moisture and Ocean Salinity) Earth Explorer Opportunity Mission will be launched by 2005. Its baseline payload is a microwave L-band (21 cm, 1.4 GHz) 2D interferometric radiometer, Y-shaped, with three arms 4.5 m long. This frequency allows the measurement of brightness temperature (Tb) under the best conditions to retrieve soil moisture and sea surface salinity (SSS). Unlike other oceanographic variables, until now it has not been possible to measure salinity from space, and large ocean areas lack significant salinity measurements. The 2D interferometer will measure Tb at large and varied incidence angles, for two polarizations. It is possible to obtain SSS from L-band passive microwave measurements if the other factors influencing Tb (SST, surface roughness, foam, sun glint, rain, ionospheric effects, and galactic/cosmic background radiation) can be accounted for. Since the radiometric sensitivity is low, SSS cannot be recovered to the required accuracy from a single measurement, as the error is about 1-2 psu. If the errors contributing to the uncertainty in Tb are random, averaging the independent data and views along the track over a 200 km square allows the error to be reduced to 0.1-0.2 psu, assuming all ancillary errors are budgeted.
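The error reduction quoted above follows from averaging independent measurements: the standard error scales as 1/sqrt(N). A quick check, assuming (as an illustrative count, not the actual SMOS sampling geometry) on the order of 100 independent views in a 200 km square:

```python
import math

def averaged_error(sigma_single, n_views):
    """Standard error after averaging n independent, equally weighted views."""
    return sigma_single / math.sqrt(n_views)

# 1-2 psu single-measurement error, ~100 independent views (assumed)
low = averaged_error(1.0, 100)    # 0.1 psu
high = averaged_error(2.0, 100)   # 0.2 psu
```

This reproduces the 0.1-0.2 psu range in the abstract, and only holds if the Tb errors are genuinely random rather than systematic.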
Traceability of On-Machine Tool Measurement: A Review.
Mutilba, Unai; Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor; Yagüe-Fabra, Jose A
2017-07-11
Nowadays, errors during the manufacturing process of high value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding interruption of the manufacturing process to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not yet ensured, and measurement data are still not reliable enough for process control or product validation. The scientific objective is to determine the uncertainty of a machine tool measurement and, therefore, convert it into a machine-integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tool, the components under measurement, and the interactions between them. This paper reviews all those uncertainty sources, focusing mainly on those related to the machine tool, whether in the process of geometric error assessment of the machine or in the technology employed to probe the measurand.
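The error budget described above combines independent uncertainty contributions in quadrature (GUM-style root-sum-square, assuming uncorrelated sources). A minimal sketch; the contribution values are hypothetical placeholders:

```python
import math

def combined_standard_uncertainty(contributions):
    """Root-sum-square combination of independent standard uncertainty
    contributions (GUM-style, assuming uncorrelated error sources)."""
    return math.sqrt(sum(u * u for u in contributions))

# Hypothetical machine-tool measurement budget in micrometres:
# geometric errors and probing repeatability (illustrative values only)
u_c = combined_standard_uncertainty([3.0, 4.0])
```

Correlated sources (e.g. thermal drift affecting both machine geometry and workpiece) would require covariance terms rather than a plain quadrature sum.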
Zhou, Shenglu; Su, Quanlong; Yi, Haomin
2017-01-01
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City, and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSEs of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSEs of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution. PMID:29278363
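The cross-validation used above to compare interpolation methods can be sketched as leave-one-out validation: each sample point is withheld in turn, predicted from the rest, and the squared errors are averaged. A minimal numpy version with a trivial nearest-neighbour stand-in interpolator (the paper uses Kriging and a BP neural network, not this stand-in):

```python
import numpy as np

def loo_mse(coords, values, predict):
    """Leave-one-out cross-validation MSE for a spatial interpolator.
    predict(train_xy, train_v, query_xy) -> predicted value at query_xy."""
    errs = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i          # withhold sample i
        pred = predict(coords[mask], values[mask], coords[i])
        errs.append((pred - values[i]) ** 2)
    return float(np.mean(errs))

def nearest(train_xy, train_v, q):
    """Stand-in interpolator: value of the nearest remaining sample point."""
    d = np.linalg.norm(train_xy - q, axis=1)
    return train_v[np.argmin(d)]
```

Swapping `nearest` for a Kriging or neural-network predictor gives exactly the MSE comparison reported in the study.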
Extension of optical lithography by mask-litho integration with computational lithography
NASA Astrophysics Data System (ADS)
Takigawa, T.; Gronlund, K.; Wiley, J.
2010-05-01
Wafer lithography process windows can be enlarged by using source mask co-optimization (SMO). Recently, SMO including freeform wafer scanner illumination sources has been developed. Freeform sources are generated by a programmable illumination system using a micro-mirror array or by custom Diffractive Optical Elements (DOE). The combination of freeform sources and complex masks generated by SMO shows an increased wafer lithography process window and reduced MEEF. Full-chip mask optimization using a source optimized by SMO can generate complex masks with small, variable-feature-size sub-resolution assist features (SRAF). These complex masks create challenges for accurate mask pattern writing and low false-defect inspection. The accuracy of the small, variable-sized mask SRAF patterns is degraded by short-range mask process proximity effects. To address the accuracy needed for these complex masks, we developed a highly accurate mask process correction (MPC) capability. It is also difficult to achieve low false-defect inspection of complex masks with conventional mask defect inspection systems. A printability check system, Mask Lithography Manufacturability Check (M-LMC), was developed and integrated with the 199-nm high-NA inspection system NPI. M-LMC successfully identifies printable defects from the mass of raw defect images collected during the inspection of a complex mask. Long-range mask CD uniformity errors are compensated by scanner dose control: a mask CD uniformity error map obtained by a mask metrology system is used as input data to the scanner, and wafer CD uniformity is thereby improved. As reviewed above, mask-litho integration technology with computational lithography is becoming increasingly important.
Lithographic performance comparison with various RET for 45-nm node with hyper NA
NASA Astrophysics Data System (ADS)
Adachi, Takashi; Inazuki, Yuichi; Sutou, Takanori; Kitahata, Yasuhisa; Morikawa, Yasutaka; Toyama, Nobuhito; Mohri, Hiroshi; Hayashi, Naoya
2006-05-01
In order to realize 45 nm node lithography, strong resolution enhancement technology (RET) and water immersion will be needed. In this research, we compare the performance of various RET options for the 45 nm node using rigorous 3D simulation. As candidates, we chose a binary mask (BIN), several kinds of attenuated phase-shifting mask (att-PSM), and a chrome-less phase-shifting lithography mask (CPL). The printing performance was evaluated and compared for each RET option after optimizing the illumination conditions, mask structure, and optical proximity correction (OPC). The evaluated printing performance metrics were CD-DOF, contrast-DOF, conventional ED-window, MEEF, etc. Since the effect of mask 3D topography is expected to become important at the 45 nm node, we considered not only ideal structures but also the effects of mask topography errors. Several kinds of mask topography error were evaluated, and we confirmed how these errors affect printing performance.
The Cd isotope signature of the Southern Ocean
NASA Astrophysics Data System (ADS)
Abouchami, W.; Galer, S. J.; Middag, R.; de Baar, H.; Andreae, M. O.; Feldmann, H.; Raczek, I.
2009-12-01
The availability of micronutrients can limit and control plankton ecosystems, notably in the Southern Ocean, which plays a major role in regulating the CO2 biological pump. Cadmium has a nutrient-like distribution in seawater: it is directly incorporated into living plankton in the upper water column and re-mineralised at depth. The nutritional role of Cd (Price and Morel, 1990) makes it a potentially useful tracer of biological productivity. We report Cd concentration and Cd stable isotope data obtained using a double-spike TIMS method on seawater samples collected during the Zero and Drake Passage cruise (ANTXXIV-III, IPY-GEOTRACES 2008). Four vertical profiles were collected from 40 to 70°S across the Polar Front using the ultra-clean Titan frame (De Baar et al., 2008), providing a record of changes in biological productivity from the Subantarctic to the Antarctic region. Data from two profiles, from the SE Atlantic (47.66°S, 4.28°W) and Drake Passage (55.13°S, 65.53°W), obtained on 1-litre samples are presented. Both profiles show an increase in Cd concentration with depth, with noticeably higher concentrations in the SE Atlantic. Cd and PO4 are positively correlated, with distinct slopes for the two profiles. The Cd isotope data are expressed as ε112/110Cd relative to our JMC Mainz standard (± 8 ppm, 2SD, N=17). ε112/110Cd values show a continuous decrease with increasing depth and a significant shift towards heavier values in the upper 400 m at both stations, resolvable outside analytical error (2SE ≤ 20 ppm). The sense of Cd isotope fractionation confirms previous findings of uptake of "light" Cd by phytoplankton in the upper water column (Lacan et al., 2006; Ripperger et al., 2007; Schmidt et al., 2009). Most important is the evidence for a distinctly heavier Cd isotope signature in AASW relative to AAIW. This result demonstrates that different water masses carry distinct Cd isotopic compositions, reflecting changes in Cd uptake by phytoplankton.
Price and Morel (1990) Nature 344, 658-660. De Baar et al. (2008) Mar. Chem. 111, 4-21. Lacan et al. (2006) Geochim. Cosmochim. Acta 70, 5104-5118. Ripperger et al. (2007) Earth Planet. Sci. Lett. 261, 670-684. Schmidt et al. (2009) Earth Planet. Sci. Lett. 277, 262-272.
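The ε112/110Cd notation used in this entry is the parts-per-10^4 deviation of a sample's 112Cd/110Cd ratio from a laboratory standard. A minimal sketch of the conversion (the ratio values are illustrative, not measured data):

```python
def epsilon_cd(r_sample, r_standard):
    """epsilon 112/110 Cd: deviation of the sample's 112Cd/110Cd ratio from
    the standard (here the JMC Mainz standard), in parts per 10^4."""
    return (r_sample / r_standard - 1.0) * 1.0e4

# A ratio 0.01% above the standard corresponds to +1 epsilon unit
shift = epsilon_cd(1.0001, 1.0)
```

Preferential uptake of "light" Cd by phytoplankton lowers the 112/110 ratio of the organic phase, driving the residual surface seawater toward positive (heavier) epsilon values, as observed in the upper 400 m.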
Effect of bariatric surgery on peripheral blood lymphocyte subsets in women.
Merhi, Zaher O; Durkin, Helen G; Feldman, Joseph; Macura, Jerzy; Rodriguez, Carlos; Minkoff, Howard
2009-01-01
The use of bariatric surgery to treat refractory obesity is increasingly common. The great weight loss that can result from these procedures has been shown to ameliorate certain deleterious effects of obesity. However, the effect of surgery on immune status is unclear. We investigated the relationship between surgical weight loss and peripheral blood lymphocyte percentages in women. Women (n=20, age range 25-59 years, body mass index [BMI] range 36.4-68.2 kg/m2) who had undergone either gastric banding (n=14) or gastric bypass (n=6) were enrolled in a prospective study to determine the percentages of their peripheral blood T cells (CD3+, CD4+, and CD8+), CD19+ B cells, and CD3-/CD16+CD56+ natural killer precursor cells before and 85+/-7 days (3 months) postoperatively using flow cytometry. The data are expressed as the percentage of total lymphocytes+/-the standard error of the mean. A decrease in the BMI at 3 months postoperatively was 12% in the overall study population and 8% and 20% in the banding and bypass groups, respectively. No significant changes were found in the CD4+ or CD8+ T cells (P=.9 and P=.5, respectively), CD19+ B cells (P=.6), or natural killer precursor cells (P=.25) in the overall population or among the patients when stratified by surgical procedure (gastric banding or bypass). The change in CD3+ T cells approached significance (P=.06). A "same direction" (negative) correlation was found between the decrease in BMI and changes in the CD4+ T cell percentages between the pre- and postoperative levels in all the participants, and in the bypass and banding groups separately. However, it only reached statistical significance in the bypass group (r=-.96, P=.002). When studying the correlation between the decrease in BMI and the changes in CD3+ T cell percentages between the pre- and postoperative levels, a borderline significant negative correlation was found for all participants (r=-.44, P=.05) and in the bypass group (r=-.76, P=.08). 
The rate of change in the CD4+ and CD3+ T cells was greatest among those with the least weight loss and decreased with greater weight loss. An inverse relationship exists between the change in certain T cells (CD4+ and CD3+) and the amount of weight lost after bariatric surgery, mainly gastric bypass surgery. The greater the decrease in BMI, the lower the change in these T cells.
NASA Astrophysics Data System (ADS)
Mendillo, Christopher B.; Howe, Glenn A.; Hewawasam, Kuravi; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya
2017-09-01
The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. Four leakage sources arising from optical fabrication tolerances and optical coatings are considered: electric field conjugation (EFC) residuals, beam walk on the secondary and tertiary mirrors, optical surface scattering, and polarization aberration. Simulations and analysis of these four leakage sources for the PICTURE-C optical design are presented here.
How noise affects quantum detector tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Q., E-mail: wang@physics.leidenuniv.nl; Renema, J. J.; van Exter, M. P.
2015-10-07
We determine the full photon number response of a NbN superconducting nanowire single photon detector via quantum detector tomography, and the results show the separation of linear, effective absorption efficiency from the internal detection efficiencies. In addition, we demonstrate an error budget for the complete quantum characterization of the detector. We find that for short times, the dominant noise source is shot noise, while laser power fluctuations limit the accuracy for longer timescales. The combined standard uncertainty of the internal detection efficiency derived from our measurements is about 2%.
1982-12-01
[Table captions partially recovered from a garbled scan: relationship of PDOP and HDOP (rendered "POOP" and "HOOP" in the source) with a priori altitude uncertainty in 3-dimensional and 2-dimensional navigation, for satellite azimuth/elevation (AZEL) configurations such as (0°, 10°), (90°, 10°), (180°, 10°), and (270°, 40°). The remainder of the entry is unrecoverable.]
A new model for yaw attitude of Global Positioning System satellites
NASA Technical Reports Server (NTRS)
Bar-Sever, Y. E.
1995-01-01
Proper modeling of the Global Positioning System (GPS) satellite yaw attitude is important in high-precision applications. A new model for the GPS satellite yaw attitude is introduced that constitutes a significant improvement over the previously available model in terms of efficiency, flexibility, and portability. The model is described in detail, and implementation issues, including the proper estimation strategy, are addressed. The performance of the new model is analyzed, and an error budget is presented. This is the first self-contained description of the GPS yaw attitude model.
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by offshore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
1990-05-01
[Front matter partially recovered from a garbled scan: approved for public release; distribution unlimited. Readers of the Red Book should obtain a copy of the Engineering Design Handbook, Army Weapon System Analysis, Part One, DARCOM-P 706-101, November 1977; a companion volume, Army Weapon System Analysis, Part Two, DARCOM-P 706-102, October 1979, also makes worthwhile study. The remainder of the entry is unrecoverable.]
The Global Energy Balance of Titan
NASA Technical Reports Server (NTRS)
Li, Liming; Nixon, Conor A.; Achterberg, Richard K.; Smith, Mark A.; Gorius, Nicolas J. P.; Jiang, Xun; Conrath, Barney J.; Gierasch, Peter J.; Simon-Miller, Amy A.; Flasar, F. Michael;
2011-01-01
We report the first measurement of the global emitted power of Titan. Long-term (2004-2010) observations conducted by the Composite Infrared Spectrometer (CIRS) onboard Cassini reveal that the total power emitted by Titan is (2.84 ± 0.01) × 10^8 W. Together with previous measurements of the global absorbed solar power of Titan, the CIRS measurements indicate that the global energy budget of Titan is in equilibrium within measurement error. The uncertainty in the absorbed solar energy places an upper limit of 5.3% on the energy imbalance.
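The upper limit on the imbalance follows from propagating the two power uncertainties. A minimal sketch using a linear worst-case combination; the emitted power and its uncertainty are from the abstract, but the absorbed power and its uncertainty here are hypothetical values chosen only to illustrate how a ~5.3% limit can arise:

```python
def energy_imbalance_limit(emitted, u_emitted, absorbed, u_absorbed):
    """Upper limit on |emitted - absorbed| / absorbed consistent with the
    measurement uncertainties (linear worst-case combination)."""
    return (abs(emitted - absorbed) + u_emitted + u_absorbed) / absorbed

# Emitted: (2.84 +/- 0.01) x 10^8 W (abstract); absorbed values are assumed
limit = energy_imbalance_limit(2.84e8, 0.01e8, 2.84e8, 0.14e8)
```

With these numbers the limit evaluates to roughly 5.3%, dominated, as the abstract notes, by the uncertainty in the absorbed solar energy rather than in the CIRS emitted power.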
Assessment of meteorological uncertainties as they apply to the ASCENDS mission
NASA Astrophysics Data System (ADS)
Snell, H. E.; Zaccheo, S.; Chase, A.; Eluszkiewicz, J.; Ott, L. E.; Pawson, S.
2011-12-01
Many environment-oriented remote sensing and modeling applications require precise knowledge of the atmospheric state (temperature, pressure, water vapor, surface pressure, etc.) on a fine spatial grid, with a comprehensive understanding of the associated errors. Coincident atmospheric state measurements may be obtained via co-located remote sensing instruments or by extracting these data from ancillary models. The appropriate technique for a given application depends upon the required accuracy. State-of-the-art mesoscale/regional numerical weather prediction (NWP) models operate on spatial scales of a few kilometers resolution, and global-scale NWP models operate on scales of tens of kilometers. Remote sensing measurements may be made on spatial scales comparable to the measurement of interest, but they normally require a separate sensor, which increases the overall size, weight, power, and complexity of the satellite payload. Thus, a comprehensive understanding of the errors associated with each of these approaches is a critical part of the design and characterization of a remote-sensing system whose measurement accuracy depends on knowledge of the atmospheric state. One requirement of the overall ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission development is to develop a consistent set of atmospheric state variables (vertical temperature and water vapor profiles, and surface pressure) for use in helping to constrain the overall retrieval error budget. If the error budget requires tighter uncertainties on ancillary atmospheric parameters than can be provided by NWP models and analyses, additional sensors may be required to reduce the overall measurement error and meet mission requirements. To this end we have used NWP models and reanalysis information to generate a set of atmospheric profiles which contain reasonable variability. These data consist of a "truth" set and a companion "measured" set of profiles.
The truth set contains climatologically-relevant profiles of pressure, temperature, and humidity with an accompanying surface pressure. The measured set consists of some number of instances of the truth set which have been perturbed, using measurement error covariance matrices, to represent realistic measurement uncertainty for the truth profile. The primary focus has been to develop matrices derived using information about the profile retrieval accuracy as documented for on-orbit sensor systems including AIRS, AMSU, ATMS, and CrIS. Surface pressure variability and uncertainty were derived from globally-compiled station pressure information. We generated an additional measurement set of profiles which represents the overall error within NWP models. These profile sets will allow for comprehensive trade studies for sensor system design and provide a basis for setting measurement requirements for co-located temperature and humidity sounders, for determining the utility of NWP data to either replace or supplement collocated measurements, and for assessing the overall end-to-end system performance of the sensor system. In this presentation we discuss the process by which we created these data sets and show their utility in performing trade studies for sensor system concepts and designs.
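The perturbation step described above (drawing "measured" instances of a truth profile using a measurement error covariance matrix) can be sketched as follows; the profile values, the diagonal 1 K covariance, and the function name are illustrative assumptions, not taken from the actual data set:

```python
import numpy as np

def perturb_profiles(truth, cov, n_instances, seed=0):
    """Draw 'measured' instances of a truth profile by adding
    zero-mean Gaussian noise with the given error covariance."""
    rng = np.random.default_rng(seed)
    noise = rng.multivariate_normal(np.zeros(len(truth)), cov, size=n_instances)
    return truth + noise

# Illustrative 5-level temperature profile (K) with uncorrelated 1 K errors
truth = np.array([288.0, 280.0, 265.0, 240.0, 220.0])
cov = np.eye(5) * 1.0**2
measured = perturb_profiles(truth, cov, n_instances=1000)
print(measured.shape)  # (1000, 5)
```

The ensemble mean of the perturbed set converges back toward the truth profile, while each individual instance carries realistic, covariance-consistent measurement noise.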
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated.
We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
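The trade-off described in this abstract can be illustrated with a toy error model. The functional forms (discretization error ∝ h^p, Monte Carlo error ∝ 1/√N, cost ∝ N/h³) and all constants below are assumptions for illustration only, not the paper's actual error model:

```python
import numpy as np

# Hypothetical error model: discretization error ~ a*h**p,
# statistical (Monte Carlo) error ~ b/sqrt(N); cost ~ c*N/h**3.
a, p, b, c = 1.0, 2.0, 10.0, 1e-6
budget = 10.0  # total CPU budget (arbitrary units)

best = None
for h in np.logspace(-2, 0, 200):       # candidate grid spacings
    N = int(budget * h**3 / c)          # realizations affordable at this h
    if N < 1:
        continue
    err = a * h**p + b / np.sqrt(N)     # combined overall error
    if best is None or err < best[0]:
        best = (err, h, N)

err, h_opt, N_opt = best
print(f"optimal spacing h={h_opt:.3f}, realizations N={N_opt}, error={err:.4f}")
```

Refining the grid (small h) starves the Monte Carlo budget, while coarsening it wastes realizations on a biased solution; the optimum lies between the two extremes.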
Interactions between moist heating and dynamics in atmospheric predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straus, D.M.; Huntley, M.A.
1994-02-01
The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G[sub APE]) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G[sub APE] is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.
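The doubling times quoted above follow from assuming exponential error growth, e(t) = e0 · 2^(t/Td). A minimal check of that arithmetic (numbers below are illustrative, not from the experiments):

```python
import math

def doubling_time(e0, e1, dt):
    """Doubling time of exponentially growing error:
    e(t) = e0 * 2**(t/Td)  =>  Td = dt*ln(2) / ln(e1/e0)."""
    return dt * math.log(2) / math.log(e1 / e0)

# Error energy growing from 1.0 to 8.0 units over 9.6 days is 3 doublings,
# i.e. a 3.2-day doubling time, matching the fixed-heating case above.
print(doubling_time(1.0, 8.0, 9.6))  # 3.2
```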
Breed, Greg A.; Severns, Paul M.
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly precludes their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
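The signal-to-noise figures quoted above are simple ratios of step length to median positional error; for instance, taking a median error of 23 cm (a value inside the quoted 20-30 cm range):

```python
def step_snr(step_length_cm, median_error_cm):
    """Signal-to-noise ratio of a GPS-measured movement step:
    step length divided by the median positional error."""
    return step_length_cm / median_error_cm

# Smallest ground-truth step (70 cm) vs. a 23 cm median error: ~3:1
print(round(step_snr(70, 23), 1))
# A 3 m step at the same error: well above 10:1
print(round(step_snr(300, 23), 1))
```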
65nm OPC and design optimization by using simple electrical transistor simulation
NASA Astrophysics Data System (ADS)
Trouiller, Yorick; Devoivre, Thierry; Belledent, Jerome; Foussadier, Franck; Borjon, Amandine; Patterson, Kyle; Lucas, Kevin; Couderc, Christophe; Sundermann, Frank; Urbani, Jean-Christophe; Baron, Stanislas; Rody, Yves; Chapon, Jean-Damien; Arnaud, Franck; Entradas, Jorge
2005-05-01
In the context of 65nm logic technology where gate CD control budget requirements are below 5nm, it is mandatory to properly quantify the impact of the 2D effects on the electrical behavior of the transistor [1,2]. This study uses the following sequence to estimate the impact on transistor performance: 1) A lithographic simulation is performed after OPC (Optical Proximity Correction) of active and poly using a calibrated model at best conditions. Some extrapolation of this model can also be used to assess marginalities due to process window (focus, dose, mask errors, and overlay). In our case study, we mainly checked the poly to active misalignment effects. 2) Electrical behavior of the transistor (Ion, Ioff, Vt) is calculated based on a derivative SPICE model using the simulated image of the gate as an input. In most cases Ion analysis, rather than Vt or leakage, gives sufficient information for patterning optimization. We have demonstrated the benefit of this approach with two different examples: - Design-rule trade-off: we estimated the impact, with and without misalignment, of critical rules like poly corner to active distance, active corner to poly distance, or minimum space between a small transistor and a big transistor. - Library standard cell debugging: we applied this methodology to the most critical one hundred transistors of our standard cell libraries and calculated Ion behavior with and without misalignment between active and poly. We compared two scanner illumination modes and two OPC versions based on the behavior of the one hundred transistors. We were able to see the benefits of one illumination, and also the improvement in OPC maturity.
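Step 2 of the sequence (deriving Ion from the simulated gate image) can be sketched by slicing the gate contour into narrow sub-transistors and summing their currents. The Ion(L) model and all numbers below are hypothetical stand-ins, not the calibrated SPICE model used in the paper:

```python
def ion_per_um(gate_length_nm):
    """Hypothetical Ion(L) relation: on-current per micron of width
    falls as the printed gate length grows (illustrative numbers)."""
    return 600.0 * (65.0 / gate_length_nm)  # uA/um, toy model

def transistor_ion(contour_lengths_nm, slice_width_nm=5.0):
    """Approximate Ion of a transistor by slicing its simulated gate
    contour into narrow sub-transistors and summing their currents."""
    width_um = slice_width_nm * 1e-3
    return sum(ion_per_um(L) * width_um for L in contour_lengths_nm)

# Simulated gate-length samples along the width (nm), e.g. after OPC + litho sim
contour = [65.0, 64.0, 66.5, 67.0, 63.5, 65.0]
print(f"Ion ~ {transistor_ion(contour):.1f} uA")
```

Comparing this sum for aligned vs. misaligned contours gives the Ion sensitivity that the design-rule trade-off and library-debug examples exploit.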
1991-02-01
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.
2011-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as with the Global Precipitation Measurement (GPM) mission.
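The filtering-and-bias-correction step for building the reference data set might be sketched as below. The quality threshold, the single bulk multiplicative bias factor, and all numbers are simplified illustrative stand-ins for the paper's more elaborate procedure:

```python
import numpy as np

def build_reference(q2_rain, quality_index, target_mean, min_quality=0.8):
    """Filter Q2 rain rates by a radar quality index, then apply a bulk
    multiplicative bias correction so the retained values match a target
    mean (a simplified stand-in for the paper's reference procedure)."""
    trusted = quality_index >= min_quality
    filtered = q2_rain[trusted]
    bias = target_mean / filtered.mean()
    return filtered * bias, trusted

q2 = np.array([1.0, 5.0, 2.0, 0.5, 8.0])      # Q2 rain rates (mm/h)
qi = np.array([0.9, 0.95, 0.3, 0.85, 0.99])   # radar quality index [0, 1]
ref, mask = build_reference(q2, qi, target_mean=3.0)
print(ref.mean(), mask.sum())  # mean adjusted to 3.0; 4 gates pass the screen
```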
A new nondestructive instrument for bulk residual stress measurement using tungsten kα1 X-ray.
Ma, Ce; Dou, Zuo-Yong; Chen, Li; Li, Yun; Tan, Xiao; Dong, Ping; Zhang, Jin; Zheng, Lin; Zhang, Peng-Cheng
2016-11-01
We describe an experimental instrument for nondestructive measurement of residual stress using short-wavelength X-rays, tungsten Kα1. By introducing a photon energy screening technology, monochromatic X-ray diffraction of tungsten Kα1 was realized using a CdTe detector. A high-precision Huber goniometer is utilized in order to reduce the error in residual stress measurement. This paper summarizes the main performance characteristics of this instrument, such as measurement depth and stress error, and compares them against neutron diffraction measurements of residual stress. Here, we demonstrate an application to the determination of residual stress in an aluminum alloy welded by friction stir welding.
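For context, diffraction-based residual stress is commonly extracted with the classical sin²ψ method: lattice strain is regressed against sin²ψ and the slope is scaled by elastic constants. The sketch below uses assumed aluminium-like constants and synthetic d-spacings, not values from the paper:

```python
import numpy as np

def stress_sin2psi(psi_deg, d_spacing, d0, E=70e9, nu=0.33):
    """sin^2(psi) residual-stress estimate: the slope m of strain vs
    sin^2(psi) gives sigma = m * E / (1 + nu) for a biaxial stress state.
    E and nu here are illustrative aluminium-like constants."""
    x = np.sin(np.radians(psi_deg)) ** 2
    strain = (d_spacing - d0) / d0
    m = np.polyfit(x, strain, 1)[0]
    return m * E / (1.0 + nu)

psi = np.array([0.0, 15.0, 30.0, 45.0])
d0 = 2.3380  # unstressed spacing (angstrom), synthetic
d = d0 * (1 + 1e-3 * np.sin(np.radians(psi)) ** 2)  # synthetic data, slope 1e-3
sigma = stress_sin2psi(psi, d, d0)
print(f"estimated stress ~ {sigma/1e6:.1f} MPa")
```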
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Illinois
Mades, D.M.; Oberg, K.A.
1984-01-01
Data uses and funding sources were identified for 138 continuous-record discharge-gaging stations currently (1983) operated as part of the stream-gaging program in Illinois. Streamflow data from five of those stations are used only for regional hydrology studies. Most streamflow data are used for defining regional hydrology, defining rainfall-runoff relations, flood forecasting, regulating navigation systems, and water-quality sampling. Based on the evaluations of data use and of alternative methods for determining streamflow in place of stream gaging, no stations in the 1983 stream-gaging program should be deactivated. The current budget (in 1983 dollars) for operating the 138-station program is $768,000 per year. The average standard error of instantaneous discharge, under the current practice for visiting the gaging stations, is 36.5 percent. Missing stage record accounts for one-third of the 36.5-percent average standard error. (USGS)
Sensitivity analysis for high-contrast missions with segmented telescopes
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn
2017-09-01
Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes evolve in a friendlier environment than ground-based telescopes, remaining vibrations and resonant modes of the segments can still deteriorate the contrast. In this communication, we develop and validate the analytical model, and compare its outputs to images produced by end-to-end simulations.
Internal robustness: systematic search for systematic bias in SN Ia data
NASA Astrophysics Data System (ADS)
Amendola, Luca; Marra, Valerio; Quartin, Miguel
2013-04-01
A great deal of effort is currently being devoted to understanding, estimating and removing systematic errors in cosmological data. In the particular case of Type Ia supernovae, systematics are starting to dominate the error budget. Here we propose a Bayesian tool for carrying out a systematic search for systematic contamination. This serves as an extension to the standard goodness-of-fit tests and allows us not only to cross-check raw or processed data for the presence of systematics but also to pinpoint the data that are most likely contaminated. We successfully test our tool with mock catalogues and conclude that the Union2.1 data do not possess a significant amount of systematics. Finally, we show that if one includes in Union2.1 the supernovae that originally failed the quality cuts, our tool signals the presence of systematics at over 3.8σ confidence level.
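A crude frequentist cousin of this kind of contamination search is to flag points whose normalized residuals (pulls) exceed a threshold; the paper's Bayesian tool is considerably more powerful, but the basic idea can be illustrated on synthetic residuals with one injected contaminated point:

```python
import numpy as np

def pull_outliers(residuals, sigma, threshold=3.0):
    """Flag data whose pulls (residual / uncertainty) exceed a threshold --
    a simple stand-in for a systematic-contamination search."""
    pulls = residuals / sigma
    return np.abs(pulls) > threshold

rng = np.random.default_rng(1)
res = rng.normal(0.0, 0.1, size=100)  # clean residuals, sigma = 0.1
res[7] = 0.9                          # inject one contaminated point (9 sigma)
flags = pull_outliers(res, sigma=0.1)
print(np.flatnonzero(flags))          # the contaminated index is flagged
```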
Focus control enhancement and on-product focus response analysis methodology
NASA Astrophysics Data System (ADS)
Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye
2016-03-01
With decreasing CDOF (critical depth of focus) for 20/14nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high-volume manufacturing are limited: it is difficult to define measurable focus error and optimize the focus response on product with existing methods, due to the lack of credible focus measurement methodologies. In addition to developments in the imaging and focus control capability of scanners and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction and on-product focus budget analysis through diffraction-based focus (DBF) measurement methodology. Several examples will be presented showing better focus response and control on product wafers. Also, a method will be discussed for a focus interlock automation system on product in a high-volume manufacturing (HVM) environment.
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George
2010-01-01
The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.
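The multiplicative feature mentioned above is what makes PSSN convenient for error budgeting: the combined effect of independent error terms can be approximated by multiplying their individual PSSN values. The budget entries below are made-up illustrations, not actual TMT numbers:

```python
# Illustrative per-term PSSN values (each <= 1; closer to 1 is better).
terms = {
    "segment alignment": 0.995,
    "segment figure":    0.990,
    "thermal":           0.998,
    "gravity":           0.997,
}

# Multiplicative combination: PSSN_total ~ product of per-term PSSN values.
pssn_total = 1.0
for name, value in terms.items():
    pssn_total *= value

print(f"combined PSSN ~ {pssn_total:.4f}")
```

This property lets each subsystem be assigned its own PSSN allocation and tracked independently, since the product recovers the system-level estimate.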
Data vs. information: A system paradigm
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1982-01-01
The data system designer requires data parameters, and is dependent on the user to convert information needs to these data parameters. This conversion will be done with more or less accuracy, beginning a chain of inaccuracies which propagate through the system, and which, in the end, may prevent the user from converting the data received into the information required. The concept to be pursued is that errors occur in various parts of the system, and, having occurred, propagate to the end. Modeling of the system may allow an estimation of the effects at any point and the final accumulated effect, and may provide a method of allocating an error budget among the system components. The selection of the various technical parameters which a data system must meet must be done in relation to the ability of the user to turn the cold, impersonal data into a live, personal decision or piece of information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinyard, Natalia Sergeevna; Perry, Theodore Sonne; Usov, Igor Olegovich
2017-10-04
We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL), which can be rewritten in terms of fractional errors as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/|ln T|)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
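This propagation can be coded directly; assuming k = −ln T/(ρL), the backlighter terms enter weighted by 1/|ln T|. The example numbers (20% transmission, 2% per backlighter signal, 3% on areal density) are illustrative, not from the measurement:

```python
import math

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, dpL_over_pL):
    """Fractional opacity error for k = -ln(T)/(rho*L):
    dk/k = (dB/B + dB0/B0) / |ln T| + d(rho*L)/(rho*L)."""
    return (dB_over_B + dB0_over_B0) / abs(math.log(T)) + dpL_over_pL

# 20% transmission, 2% error on each backlighter signal, 3% on areal density
print(opacity_fractional_error(0.2, 0.02, 0.02, 0.03))
```

Note that as T approaches 1 (a thin sample), |ln T| shrinks and the backlighter noise is strongly amplified, which is why transmissions well below unity are preferred.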
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI cause irrational estimations now and then. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast by relying on link prediction. According to the experimental results, the proposed method shows satisfactory performance, since the error measures are acceptable. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. We are convinced that the proposed method is efficient at providing considerably accurate CCI predictions, which will contribute to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
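The first step of the proposed method, converting a time series into a natural visibility graph, uses the standard visibility criterion: two points are linked if the straight line between them passes strictly above every intermediate point. A minimal sketch on a toy series (not actual CCI data):

```python
def visibility_graph(series):
    """Natural visibility graph: nodes are time points; (a, b) are linked
    if every intermediate point lies strictly below the line of sight
    between (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[a] + (series[b] - series[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

# Toy monthly index values
e = visibility_graph([3.0, 1.0, 2.0, 1.5, 4.0])
print(sorted(e))  # [(0, 1), (0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]
```

Link prediction on the resulting graph (e.g. scoring candidate edges for a new node) then yields the forecast value, which is the part the paper develops.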
NASA Astrophysics Data System (ADS)
Kamikubo, Takashi; Ohnishi, Takayuki; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi; Bai, Shufeng; Wang, Jen-Shiang; Howell, Rafael; Chen, George; Li, Jiangwei; Tao, Jun; Wiley, Jim; Kurosawa, Terunobu; Saito, Yasuko; Takigawa, Tadahiro
2010-09-01
In electron beam writing on EUV masks, it has been reported that CD linearity does not show the simple signatures observed with conventional COG (Cr on glass) masks, because the errors are caused by electrons scattered from the EUV mask itself, which comprises stacked heavy metals and thick multi-layers. To resolve this issue, mask process correction (MPC) is ideally applicable. Every pattern is reshaped in MPC; therefore, the number of shots does not increase and the writing time is kept within a reasonable range. In this paper, MPC is extended to modeling for the correction of CD linearity errors on EUV masks, and its effectiveness is verified with simulations and experiments through actual writing tests.
Mazaheri, H; Ghaedi, M; Ahmadi Azqhandi, M H; Asfaram, A
2017-05-10
Analytical chemists apply statistical methods for both the validation and prediction of proposed models. Methods are required that are adequate for finding the typical features of a dataset, such as nonlinearities and interactions. Boosted regression trees (BRTs), as an ensemble technique, are fundamentally different from conventional techniques that aim to fit a single parsimonious model. In this work, BRT, artificial neural network (ANN) and response surface methodology (RSM) models have been used for the optimization and/or modeling of the stirring time (min), pH, adsorbent mass (mg) and concentrations of MB and Cd2+ ions (mg L-1) in order to develop respective predictive equations for simulation of the efficiency of MB and Cd2+ adsorption based on the experimental data set. Activated carbon, as an adsorbent, was synthesized from walnut wood waste, which is abundant, non-toxic, cheap and locally available. This adsorbent was characterized using different techniques such as FT-IR, BET, SEM, point of zero charge (pHpzc) and also the determination of oxygen-containing functional groups. The influence of various parameters (i.e. pH, stirring time, adsorbent mass and concentrations of MB and Cd2+ ions) on the percentage removal was calculated by investigation of the sensitivity function, variable importance rankings (BRT) and analysis of variance (RSM). Furthermore, a central composite design (CCD) combined with a desirability function approach (DFA) as a global optimization technique was used for the simultaneous optimization of the effective parameters. The applicability of the BRT, ANN and RSM models for the description of the experimental data was examined using four statistical criteria (absolute average deviation (AAD), mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R2)). All three models demonstrated good predictions in this study.
The BRT model was more precise compared to the other models, which showed that BRT could be a powerful tool for the modeling and optimization of the removal of MB and Cd(II). Sensitivity analysis (calculated from the weights of neurons in the ANN) confirmed that the adsorbent mass and pH were the essential factors affecting the removal of MB and Cd(II), with relative importances of 28.82% and 38.34%, respectively. A good agreement (R2 > 0.960) between the predicted and experimental values was obtained. Maximum removal (R% > 99) was achieved at an initial dye concentration of 15 mg L-1, a Cd2+ concentration of 20 mg L-1, a pH of 5.2, an adsorbent mass of 0.55 g and a time of 35 min.
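The four statistical criteria used to compare the BRT, ANN and RSM models are standard and can be computed as follows (the removal-percentage data below are made up for illustration, not the study's values):

```python
import numpy as np

def fit_metrics(y_true, y_pred):
    """MAE, RMSE, AAD (%) and R^2 for comparing model predictions."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err**2))
    aad = 100.0 * np.mean(np.abs(err / y_true))  # absolute average deviation, %
    ss_res = np.sum(err**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, aad, r2

y = np.array([90.0, 95.0, 99.0, 85.0])   # observed removal (%)
yp = np.array([91.0, 94.0, 98.5, 86.0])  # predicted removal (%)
mae, rmse, aad, r2 = fit_metrics(y, yp)
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  AAD={aad:.3f}%  R2={r2:.4f}")
```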
Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha
2015-04-01
The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized β-CD-based extraction of total flavonols using response surface methodology. A 2^3 central composite design with β-CD concentration (0-5 g 100 mL^-1), extraction temperature (20-72 °C), extraction time (6-48 h) and a second-order quadratic model for the total flavonol yield (mg 100 g^-1 DM) was selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL^-1; extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g^-1 DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g^-1 DM obtained under the optimal β-CD-based parameters, thereby giving a low absolute error and confirming the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD-based flavonol extraction, which presents an environmentally safe method for value-addition to under-utilized bioresources.
Photoluminescence of patterned CdSe quantum dot for anti-counterfeiting label on paper
NASA Astrophysics Data System (ADS)
Isnaeni; Yulianto, Nursidik; Suliyanti, Maria Margaretha
2016-03-01
We successfully developed a method utilizing colloidal CdSe nanocrystalline quantum dots for anti-counterfeiting labels on glossy paper. We deposited number and line patterns of toluene-soluble CdSe quantum dots on glossy paper using a rubber stamper. The width of the line pattern was about 1-2 mm, with 1-2 mm separation between lines. The deposited CdSe quantum dots required less than one minute to dry on the glossy paper and become invisible to the naked eye. However, the patterned quantum dots became visible through long-pass filter glasses upon excitation by a UV lamp or blue laser. We characterized the photoluminescence of the line patterns, and we found that the emission boundaries of the line patterns were clearly observed. Errors in line size and shape were mainly due to defects in the original stamper. The emission peak wavelength of the CdSe quantum dots was 629 nm, and the emission spectrum of the deposited quantum dots had a full width at half maximum (FWHM) of 30-40 nm. The spectral similarity between the deposited quantum dots and the original quantum dots in solution proved that our stamping method can be applied on glossy paper without changing the basic optical properties of the quantum dots. Further development of this technique has potential for anti-counterfeiting labels on very important documents or objects.
Ghammraoui, Bahaa; Badal, Andreu; Glick, Stephen J
2018-06-03
Mammographic density of glandular breast tissue has a masking effect that can reduce lesion detection accuracy and is also a strong risk factor for breast cancer; accurate quantitative estimation of breast density is therefore clinically important. In this study, we investigate experimentally the feasibility of quantifying volumetric breast density with spectral mammography using a CdTe-based photon-counting detector. To demonstrate proof of principle, this study was carried out using the single-pixel Amptek XR-100T-CdTe detector. The total number of x rays recorded by the detector from a single pencil-beam projection through 50%/50% adipose/glandular mass-fraction-equivalent phantoms was measured. Material decomposition assuming two, four, and eight energy bins was then applied to characterize the inspected phantom into adipose and glandular components using log-likelihood estimation, taking into account the polychromatic source, the detector response function, and the energy-dependent attenuation. Measurements were carried out for different doses, kVp settings, and breast sizes. For doses of 1 mGy and above, the percent relative root mean square (RMS) error of the estimated breast density was below 7% for all three phantom studies. Some decrease in RMS error was also achieved using eight energy bins. The 3 and 4 cm thick phantoms performed similarly at 40 and 45 kVp; however, 45 kVp performed better for the 6 cm thick phantom at low dose levels, owing to the increased statistical variation at the lower photon count levels obtained with 40 kVp. The results of the current study suggest that photon-counting spectral mammography systems using CdTe detectors have the potential to be used for accurate quantification of volumetric breast density on a pixel-to-pixel basis, with an RMS error of less than 7%. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
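The two-material log-likelihood decomposition described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the incident counts and attenuation coefficients below are hypothetical placeholders, the phantom is assumed to have a known total thickness, and the maximum-likelihood estimate is found by brute-force grid search.

```python
import math

# Hypothetical per-bin quantities (illustrative only; not from the paper):
# incident counts N0, and linear attenuation coefficients (1/cm) for
# adipose (mu_a) and glandular (mu_g) tissue in four energy bins.
N0   = [1e5, 1e5, 1e5, 1e5]
mu_a = [0.60, 0.45, 0.35, 0.28]
mu_g = [0.80, 0.58, 0.44, 0.34]

def expected_counts(t_a, t_g):
    """Beer-Lambert expected counts per bin for thicknesses t_a, t_g (cm)."""
    return [n0 * math.exp(-ma * t_a - mg * t_g)
            for n0, ma, mg in zip(N0, mu_a, mu_g)]

def log_likelihood(counts, t_a, t_g):
    """Poisson log-likelihood (dropping the count-only constant term)."""
    return sum(c * math.log(lam) - lam
               for c, lam in zip(counts, expected_counts(t_a, t_g)))

def decompose(counts, total=6.0, step=0.01):
    """Grid-search ML estimate of the adipose/glandular split for a
    phantom of known total thickness `total` (cm)."""
    t_a_best = max((round(k * step, 4) for k in range(int(total / step) + 1)),
                   key=lambda t_a: log_likelihood(counts, t_a, total - t_a))
    return t_a_best, total - t_a_best  # (adipose cm, glandular cm)

# Simulate a noiseless 50%/50% phantom and recover the split.
t_a_hat, t_g_hat = decompose(expected_counts(3.0, 3.0))
```

On noiseless synthetic counts the search recovers the 50%/50% split; with Poisson noise added to the counts, the estimate scatters around it, which is the kind of spread the reported RMS errors quantify.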
Impacts of snow cover fraction data assimilation on modeled energy and moisture budgets
NASA Astrophysics Data System (ADS)
Arsenault, Kristi R.; Houser, Paul R.; De Lannoy, Gabriëlle J. M.; Dirmeyer, Paul A.
2013-07-01
Two data assimilation (DA) methods, a simple rule-based direct insertion (DI) approach and a one-dimensional ensemble Kalman filter (EnKF) method, are evaluated by assimilating snow cover fraction observations into the Community Land Model. The ensemble perturbation needed for the EnKF resulted in negative snowpack biases; therefore, the ensemble bias is corrected using an approach that constrains the ensemble forecasts with a single unperturbed deterministic land surface model (LSM) run, which is shown to improve the final snow state analyses. The EnKF method produces slightly better results in higher-elevation locations, whereas the DI method has a performance advantage in lower-elevation regions. In addition, the two DA methods are evaluated in terms of their overall impacts on the other land surface state variables (e.g., soil moisture) and fluxes (e.g., latent heat flux). The EnKF method has less impact overall than the DI method and causes less distortion of the hydrological budget. However, the land surface model adjusts more slowly to the smaller EnKF increments, which leads to smaller but slightly more persistent moisture budget errors than found with the DI updates. The DI method can remove much of the modeled snowpack almost instantly, but this also allows the model system to revert quickly to hydrological balance under non-snowpack conditions.
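The EnKF analysis step at the core of the comparison above can be sketched for a one-dimensional state. This is a generic perturbed-observation EnKF (the Burgers et al. form), not the study's code; the ensemble values, observation, and error standard deviation are illustrative assumptions, and the observation operator is taken as the identity.

```python
import random

def enkf_update(ensemble, obs, obs_err_std, h=lambda x: x):
    """One-dimensional ensemble Kalman filter analysis step.
    `ensemble`: forecast snow states; `obs`: a snow cover fraction
    observation related to the state by the (here identity) operator h."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_mean = sum(ensemble) / n
    hx_mean = sum(hx) / n
    # Sample covariances from the ensemble
    p_xy = sum((x - x_mean) * (y - hx_mean)
               for x, y in zip(ensemble, hx)) / (n - 1)
    p_yy = sum((y - hx_mean) ** 2 for y in hx) / (n - 1)
    gain = p_xy / (p_yy + obs_err_std ** 2)
    # Perturbed-observation update: each member sees a noisy copy of obs
    return [x + gain * (obs + random.gauss(0, obs_err_std) - h(x))
            for x in ensemble]

random.seed(0)
prior = [random.gauss(0.5, 0.1) for _ in range(50)]  # forecast ensemble
posterior = enkf_update(prior, obs=0.7, obs_err_std=0.05)
post_mean = sum(posterior) / len(posterior)
```

The posterior mean is pulled from the forecast mean toward the observation by an amount set by the ratio of ensemble spread to observation error, which is why the EnKF increments are smaller and smoother than a direct insertion of the observed value.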
Kjelstrom, L.C.
1995-01-01
Many individual springs and groups of springs discharge water from volcanic rocks that form the north canyon wall of the Snake River between Milner Dam and King Hill. Previous estimates of annual mean discharge from these springs have been used to understand the hydrology of the eastern part of the Snake River Plain. Four methods that were used in previous studies or developed to estimate annual mean discharge since 1902 were (1) water-budget analysis of the Snake River; (2) correlation of water-budget estimates with discharge from 10 index springs; (3) determination of the combined discharge from individual springs or groups of springs by using annual discharge measurements of 8 springs, gaging-station records of 4 springs and 3 sites on the Malad River, and regression equations developed from 5 of the measured springs; and (4) a single regression equation that correlates gaging-station records of 2 springs with historical water-budget estimates. Comparisons made among the four methods of estimating annual mean spring discharges from 1951 to 1959 and 1963 to 1980 indicated that differences were about equivalent to a measurement error of 2 to 3 percent. The method that best demonstrates the response of annual mean spring discharge to changes in ground-water recharge and discharge is method 3, which combines the measurements and regression estimates of discharge from individual springs.
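Method 4 above, a single regression correlating index-spring gaging-station records with historical water-budget estimates, can be sketched as an ordinary least-squares fit. The discharge values below are hypothetical placeholders, not Kjelstrom's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical annual means (cfs): combined gaged discharge of two index
# springs (x) vs. water-budget estimates of total spring discharge (y).
index_q  = [310, 325, 340, 355, 370]
budget_q = [5100, 5300, 5560, 5750, 5980]

a, b = linear_fit(index_q, budget_q)
estimate = a * 345 + b  # total discharge inferred from a new index reading
```

The fitted line then converts each year's index-spring record into an estimate of total spring discharge, which is how a long gaging record can extend water-budget estimates to years without a full budget analysis.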
Moscoso-Mártir, Alvaro; Müller, Juliana; Islamova, Elmira; Merget, Florian; Witzens, Jeremy
2017-09-20
Based on the single-channel characterization of a Silicon Photonics (SiP) transceiver with a Semiconductor Optical Amplifier (SOA) and a semiconductor Mode-Locked Laser (MLL), we evaluate the optical power budget of a corresponding Wavelength Division Multiplexed (WDM) link, introducing the penalties associated with multi-channel operation and the management of polarization diversity. In particular, channel crosstalk as well as Cross-Gain Modulation (XGM) and Four-Wave Mixing (FWM) inside the SOA are taken into account. Based on these link budget models, the technology is expected to support up to 12 multiplexed channels without channel pre-emphasis or equalization. Forward Error Correction (FEC) does not appear to be required at 14 Gbps if the SOA is maintained at 25 °C and MLL-to-SiP as well as SiP-to-SOA interface losses can be kept below 3 dB. In semi-cooled operation with an SOA temperature below 55 °C, multi-channel operation is expected to be compatible with standard 802.3bj Reed-Solomon FEC at 14 Gbps provided interface losses are kept below 4.5 dB. With these interface losses and some improvements to the Transmitter (Tx) and Receiver (Rx) electronics, 25 Gbps multi-channel operation is expected to be compatible with 7% overhead hard-decision FEC.
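A dB-domain power budget of the kind evaluated above reduces to adding gains and subtracting losses and penalties, then comparing the result against the receiver sensitivity. Every number below is an illustrative assumption (only the 3 dB interface-loss figure echoes the text); the paper's actual budget contains many more terms.

```python
# Hypothetical dBm/dB values for one WDM channel (illustrative only).
launch_power_dbm   = 0.0    # MLL comb-line power per channel
mll_to_sip_loss_db = 3.0    # MLL-to-SiP interface loss (budgeted in text)
sip_insertion_db   = 6.0    # modulator + multiplexer insertion loss
sip_to_soa_loss_db = 3.0    # SiP-to-SOA interface loss
soa_gain_db        = 12.0   # SOA gain
penalties_db       = 2.5    # crosstalk + XGM + FWM multi-channel penalties
rx_sensitivity_dbm = -14.0  # receiver sensitivity at the target BER

# Power reaching the receiver, then the margin against its sensitivity.
received_dbm = (launch_power_dbm - mll_to_sip_loss_db - sip_insertion_db
                - sip_to_soa_loss_db + soa_gain_db - penalties_db)
margin_db = received_dbm - rx_sensitivity_dbm  # positive -> link closes
```

A positive margin means the link closes; FEC effectively relaxes `rx_sensitivity_dbm`, which is why the paper's higher interface-loss cases still work once 802.3bj Reed-Solomon coding is assumed.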
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Rowlands, David D.; Luthcke, Scott B.; Zelensky, Nikita P.; Chinn, Douglas S.; Pavlis, Despina E.; Marr, Gregory
2001-01-01
The US Navy's GEOSAT Follow-On Spacecraft was launched on February 10, 1998 with the primary objective of the mission to map the oceans using a radar altimeter. Following an extensive set of calibration campaigns in 1999 and 2000, the US Navy formally accepted delivery of the satellite on November 29, 2000. Satellite laser ranging (SLR) and Doppler (Tranet-style) beacons track the spacecraft. Although limited amounts of GPS data were obtained, the primary mode of tracking remains satellite laser ranging. The GFO altimeter measurements are highly precise, with orbit error the largest component in the error budget. We have tuned the non-conservative force model for GFO and the gravity model using SLR, Doppler and altimeter crossover data sampled over one year. Gravity covariance projections to 70x70 show the radial orbit error on GEOSAT was reduced from 2.6 cm in EGM96 to 1.3 cm with the addition of SLR, GFO/GFO and TOPEX/GFO crossover data. Evaluation of the gravity fields using SLR and crossover data support the covariance projections and also show a dramatic reduction in geographically-correlated error for the tuned fields. In this paper, we report on progress in orbit determination for GFO using GFO/GFO and TOPEX/GFO altimeter crossovers. We will discuss improvements in satellite force modeling and orbit determination strategy, which allows reduction in GFO radial orbit error from 10-15 cm to better than 5 cm.
Diffraction-based overlay metrology for double patterning technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Korlahalli, Rahul; Li, Jie; Smith, Nigel; Kritsun, Oleg; Volkman, Cathy
2009-03-01
The extension of optical lithography to 32nm and beyond is made possible by Double Patterning Techniques (DPT) at critical levels of the process flow. Implementation of DPT is hindered by the increased significance of critical dimension uniformity and overlay errors. Diffraction-based overlay (DBO) has been shown to be an effective metrology solution for accurate determination of the overlay errors associated with double patterning [1, 2] processes. In this paper we report its use in litho-freeze-litho-etch (LFLE) and spacer double patterning technology (SDPT), pitch-splitting solutions that reduce the significance of overlay errors. Since the control of overlay between various mask/level combinations is critical for fabrication, precise and accurate assessment of errors by advanced metrology techniques such as spectroscopic diffraction-based overlay (DBO) and traditional image-based overlay (IBO) using advanced target designs will be reported. A comparison between DBO, IBO, and CD-SEM measurements will be reported, and a discussion of TMU requirements for the 32nm technology node along with TMU performance data for LFLE and SDPT targets measured by the different overlay approaches will be presented.
Splendidly blended: a machine learning set up for CDU control
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2017-06-01
As machine learning and artificial intelligence continue to grow in importance for internet-related applications, their use in process control within the semiconductor industry is still in its infancy. The branch of mask manufacturing in particular challenges machine learning concepts, since the business process intrinsically induces pronounced product variability against a background of small plate numbers. In this paper we present the architecture of a machine learning algorithm that successfully deals with the demands and pitfalls of mask manufacturing. A detailed motivation of this basic set-up is given, followed by an analysis of its statistical properties. The machine learning set-up for mask manufacturing involves two learning steps: an initial step identifies and classifies the basic global CD patterns of a process, and its results form the basis for extracting an optimized training set via balanced sampling. A second learning step uses this training set to obtain the local as well as global CD relationships induced by the manufacturing process. Using two production-motivated examples, we show that this approach is flexible and powerful enough to deal with the exacting demands of mask manufacturing. In one example we show how dedicated covariates can be used in conjunction with increased spatial resolution of the CD map model to deal with pathological CD effects at the mask boundary. The other example shows how the model set-up enables strategies for dealing with tool-specific CD signature differences; in this case, balanced sampling enables a process control scheme that allows use of the full tool park within the specified tight tolerance budget. Overall, this paper shows that the current rapid development of machine learning algorithms can be successfully applied in the context of semiconductor manufacturing.
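The balanced-sampling step in the two-step set-up above can be sketched as stratified, equal-per-class draws from the identified CD-pattern classes. The class labels and plate counts below are hypothetical; the point is that rare pattern classes end up as strongly represented in the training set as the dominant one.

```python
import random
from collections import defaultdict

def balanced_sample(plates, labels, per_class, rng):
    """Draw an equal number of training plates from each CD-pattern class,
    so rare signatures are not swamped by the dominant process pattern."""
    by_class = defaultdict(list)
    for plate, label in zip(plates, labels):
        by_class[label].append(plate)
    sample = []
    for members in by_class.values():
        k = min(per_class, len(members))     # take all if class is small
        sample.extend(rng.sample(members, k))
    return sample

rng = random.Random(0)
plates = list(range(100))
# Imbalanced classes: 80 plates of pattern "A", 15 of "B", 5 of "C".
labels = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
train = balanced_sample(plates, labels, per_class=5, rng=rng)
```

A model trained on `train` sees the three pattern classes in equal proportion, whereas naive random sampling would train almost entirely on pattern "A".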
Introduction of pre-etch deposition techniques in EUV patterning
NASA Astrophysics Data System (ADS)
Xiang, Xun; Beique, Genevieve; Sun, Lei; Labonte, Andre; Labelle, Catherine; Nagabhirava, Bhaskar; Friddle, Phil; Schmitz, Stefan; Goss, Michael; Metzler, Dominik; Arnold, John
2018-04-01
The thin nature of EUV (Extreme Ultraviolet) resist has posed significant challenges for etch processes. In particular, EUV patterning combined with conventional etch approaches suffers from loss of pattern fidelity in the form of line breaks. A typical conventional etch approach prevents the etch process from having sufficient resist margin to control the trench CD (Critical Dimension), minimize the LWR (Line Width Roughness) and LER (Line Edge Roughness), and reduce the T2T (Tip-to-Tip). Pre-etch deposition increases the resist budget by adding additional material to the resist layer, thus enabling the etch process to explore a wider set of process parameters to achieve better pattern fidelity. Preliminary tests with pre-etch deposition resulted in blocked isolated trenches. In order to mitigate these effects, a cyclic deposition and etch technique is proposed. With optimization of deposition and etch cycle time as well as the total number of cycles, it is possible to open the underlying layers with a beneficial over-etch while simultaneously keeping the isolated trenches open. This study compares the impact of no pre-etch deposition, one-time deposition, and cyclic deposition/etch techniques on four aspects: resist budget, isolated trench opening, LWR/LER, and T2T.
Manufacturability: from design to SPC limits through "corner-lot" characterization
NASA Astrophysics Data System (ADS)
Hogan, Timothy J.; Baker, James C.; Wesneski, Lisa; Black, Robert S.; Rothenbury, Dave
2004-12-01
Texas Instruments' Digital Micromirror Device (DMD) is used in a wide variety of optical display applications, ranging from fixed and portable projectors to high-definition television (HDTV) and digital cinema projection systems. A new DMD pixel architecture, called "FTP", was designed and qualified by the Texas Instruments DLP™ Group in 2003 to meet increased performance objectives for brightness and contrast ratio. Coordination between design, test, and fabrication groups was required to balance pixel performance requirements against manufacturing capability. "Corner-lot" designed experiments (DOE) were used to verify the "fabrication space" available for the pixel design. The corner-lot technique allows confirmation of manufacturability projections early in the design/qualification cycle. Through careful design and analysis of the corner-lot DOE, a balance of critical dimension (CD) "budgets" is possible so that specification and process control limits can be established that meet both customer and factory requirements. The application of corner-lot DOE is illustrated in a case history of the DMD "FTP" pixel; the process for balancing test parameter requirements with multiple critical dimension budgets is shown. MEMS/MOEMS device design and fabrication can use similar techniques to achieve aggressive design-to-qualification goals.
Sealed aerospace metal-hydride batteries
NASA Technical Reports Server (NTRS)
Coates, Dwaine
1992-01-01
Nickel metal hydride and silver metal hydride batteries are being developed for aerospace applications. There is a growing market for smaller, lower-cost satellites, which require higher-energy-density power sources than aerospace nickel-cadmium cells at a lower cost than space nickel-hydrogen cells. These include small LEO satellites, tactical military satellites, and satellite constellation programs such as Iridium and Brilliant Pebbles. Small satellites typically do not have the spacecraft volume or the budget required for nickel-hydrogen batteries, while NiCds lack adequate energy density and suffer from other problems such as limited overcharge capability and memory effect. Metal hydride batteries provide an ideal solution for these applications, offering a number of advantages over other aerospace battery systems.
Lin, Wei-Chen; Chou, Jen-Wei; Yen, Hsu-Heng; Hsu, Wen-Hung; Lin, Hung-Hsin; Lin, Jen-Kou; Chuang, Chiao-Hsiung; Huang, Tien-Yu; Wang, Horng-Yuan; Wong, Jau-Min
2017-01-01
Background/Aims: In Taiwan, due to budget limitations, the National Health Insurance only allows a limited period of biologics use in treating moderate to severe Crohn's disease (CD). We aimed to assess the outcomes of CD patients following a limited period of biologics use, focusing specifically on the relapse rate and remission duration, as well as the response rate on second use when applicable. Methods: This was a multicenter, retrospective, observational study enrolling CD patients who had been treated with adalimumab (ADA) according to the insurance guidelines from 2009 to 2015. Results: A total of 54 CD patients with follow-up of more than 6 months after the withdrawal of ADA were enrolled. The average period of treatment with ADA was 16.7±9.7 months. After discontinuing ADA, 59.3% of patients suffered a clinical relapse. In the univariate analysis, the reason for withdrawal was a risk factor for relapse (P=0.042). In the multivariate analysis, current smoking (OR, 3.9; 95% CI, 1.2−14.8; P=0.044) and male sex (OR, 2.9; 95% CI, 1.1−8.6; P=0.049) were risk factors for relapse. Among the 48 patients who received a second round of biologics, a clinical response was seen in 60.4%, and one case of anaphylaxis occurred. Conclusions: Fifty-nine percent of patients experienced a relapse after discontinuing the limited period of ADA treatment, most within 1 year of cessation. Male sex and current smoking were risk factors for relapse, though 60.4% of relapsed patients responded to ADA again. PMID:29142516
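The reported odds ratios can be reproduced in form (not in value) from a 2x2 exposure-by-relapse table with a standard Wald confidence interval. The counts below are hypothetical, since the abstract reports only the fitted ORs and CIs, not the underlying table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & relapsed,   b = exposed & no relapse,
    c = unexposed & relapsed, d = unexposed & no relapse."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for, e.g., current smokers vs. non-smokers:
or_, lo, hi = odds_ratio_ci(a=12, b=4, c=20, d=18)
```

A CI whose lower bound stays above 1 (as for both reported factors) indicates a statistically significant association between the exposure and relapse.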
Banerjee, Sohini; Sar, Abhijit; Misra, Arijit; Pal, Srikanta; Chakraborty, Arindom; Dam, Bomba
2018-02-01
Antibiotics are widely used at sub-lethal concentrations as a feed supplement to enhance poultry productivity. To understand antibiotic-induced temporal changes in the structure and function of the gut microbiota of chicken, two flocks were maintained for six weeks on a carbohydrate- and protein-rich diet. The feed in the conventional diet (CD) group was supplemented with sub-lethal doses of chlortetracycline, virginiamycin and amoxicillin, while the organic diet (OD) had no such addition. Antibiotic-fed birds were more productive, with a lower feed conversion ratio (FCR). Their faecal samples also had higher total heterotrophic bacterial load and antibiotic resistance capability. Deep sequencing of 16S rDNA V1-V2 amplicons revealed Firmicutes as the most dominant phylum at all time points, with the predominant presence of Lactobacillales members in the OD group. The productivity indicator, i.e. a higher Firmicutes:Bacteroidetes ratio, particularly in the late growth phase, was more marked in CD amplicon sequences, which was supported by culture-based enumerations on selective media. CD datasets also showed the prevalence of known butyrate-producing genera such as Faecalibacterium, Ruminococcus, Blautia, Coprococcus and Bacteroides, which correlates closely with their higher PICRUSt-based in silico predicted 'glycan biosynthesis and metabolism'-related Kyoto Encyclopedia of Genes and Genomes (KEGG) orthologues. Semi-quantitative end-point PCR targeting of the butyryl-CoA:acetate CoA-transferase gene also confirmed butyrate producers as being late colonizers, particularly in antibiotic-fed birds in both the CD flocks and commercial rearing farms. Thus, antibiotics preferentially enrich bacterial populations, particularly short-chain fatty acid producers that can efficiently metabolize otherwise indigestible feed material such as glycans, thereby increasing the energy budget of the host and its productivity.