Trends in shuttle entry heating from the correction of flight test maneuvers
NASA Technical Reports Server (NTRS)
Hodge, J. K.
1983-01-01
A new technique was developed to systematically expand the aerothermodynamic envelope of the Space Shuttle Thermal Protection System (TPS). The technique required transient flight test maneuvers, which were performed on the second, fourth, and fifth Shuttle reentries. Kalman filtering and parameter estimation were used to reduce embedded thermocouple data and obtain best estimates of aerothermal parameters. Difficulties in reducing the data were overcome or minimized. Thermal parameters were estimated to minimize uncertainties, and heating rate parameters were estimated to correlate with changes in angle of attack, sideslip, deflection angle, and Reynolds number. Heating trends from the maneuvers allow for the rapid and safe envelope expansion needed for future missions, except in some local areas.
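The reduction approach described above, Kalman filtering applied to thermocouple histories to estimate aerothermal parameters, can be illustrated with a minimal linear example. The two-state formulation, heating model, and noise levels below are invented for illustration; the actual flight-data reduction is far more elaborate.

```python
import numpy as np

# Toy setup: estimate a constant heating-rate parameter q from noisy
# thermocouple temperatures T, with augmented state x = [T, q].
rng = np.random.default_rng(0)
dt, n = 0.1, 200
q_true = 5.0                            # assumed true heating rate (K/s)
T = 300.0 + np.cumsum(np.full(n, q_true * dt))
y = T + rng.normal(0.0, 0.5, n)         # noisy measurements

F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measure temperature only
Q = np.diag([1e-4, 1e-6])               # process noise
R = np.array([[0.25]])                  # measurement noise (0.5 K std)

x = np.array([y[0], 0.0])               # start with no assumed heating
P = np.diag([1.0, 10.0])
for k in range(1, n):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (y[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

q_hat, q_std = x[1], np.sqrt(P[1, 1])
print(f"estimated heating rate: {q_hat:.2f} +/- {2 * q_std:.2f} (true {q_true})")
```

With 200 noisy samples the filter recovers the slope to within a few percent, which is the essence of extracting heating-rate parameters from transient maneuvers.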
Safety envelope for load tolerance of structural element design based on multi-stage testing
Park, Chanyoung; Kim, Nam H.
2016-09-06
Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimating the safety envelope of structural elements for load tolerance would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lack of knowledge of the actual physics, so that conservativeness in a safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
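The core mechanism, a Gaussian process whose predictive uncertainty collapses near a single element test, can be sketched in a few lines. The kernel, length scale, and test value below are arbitrary stand-ins, not the paper's model.

```python
import numpy as np

# Sketch: a GP models calculation error of a failure-load prediction over a
# normalized design variable; conditioning on one element test shrinks the
# predictive std, so a safety envelope built from a lower percentile of the
# GP can move upward. All numbers are illustrative.
def rbf(a, b, ell=0.3, sig=1.0):
    """Squared-exponential covariance between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return sig ** 2 * np.exp(-0.5 * (d / ell) ** 2)

xs = np.linspace(0.0, 1.0, 101)              # normalized design variable
prior_std = np.sqrt(np.diag(rbf(xs, xs)))    # prior predictive std (= 1)

# condition on a single element test at x = 0.5
xt, yt, noise = np.array([0.5]), np.array([0.2]), 1e-4
K = rbf(xt, xt) + noise * np.eye(1)
k_star = rbf(xs, xt)
mean = k_star @ np.linalg.solve(K, yt)
cov = rbf(xs, xs) - k_star @ np.linalg.solve(K, k_star.T)
post_std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

print("prior std at test point:", prior_std[50].round(3))
print("posterior std at test point:", post_std[50].round(4))
```

At the test location the predictive standard deviation drops from 1.0 to roughly the measurement-noise level, which is the "single test significantly reduces calculation uncertainty" effect in miniature.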
DOE Office of Scientific and Technical Information (OSTI.GOV)
Interior thermal insulation systems for historical building envelopes
NASA Astrophysics Data System (ADS)
Jerman, Miloš; Solař, Miloš; Černý, Robert
2017-11-01
The design specifics of interior thermal insulation systems applied to historical building envelopes are described. Vapor-tight systems and systems based on capillary thermal insulation materials are considered as two basic options differing in building-physics considerations. The possibilities for hygrothermal analysis of renovated historical envelopes, including laboratory methods, computer simulation techniques, and in-situ tests, are discussed. It is concluded that the application of computational models for hygrothermal assessment of interior thermal insulation systems should always be performed with particular care. On one hand, they present a very effective tool for both service life assessment and planning of subsequent reconstructions. On the other hand, the hygrothermal analysis of any historical building can involve quite a few potential uncertainties which may negatively affect the accuracy of the obtained results.
The impact of heat blanketing envelopes on neutron stars cooling
NASA Astrophysics Data System (ADS)
Beznogov, M. V.; Yakovlev, D. G.; Fortin, M.; Haensel, P.; Zdunik, J. L.
2017-11-01
The goal of this work is to investigate the effects of the chemical composition of the heat blanketing envelopes of neutron stars on their thermal states and thermal evolution. To this purpose, we employ newly constructed models of envelopes composed of binary ion mixtures (H-He, He-C, C-Fe), varying the mass of the lighter ions (H, He or C) in the envelope. The results are compared with those calculated using the standard "onion-like" envelope. For illustration, we apply these results to estimate the internal temperature of the Vela pulsar and to study the cooling of neutron stars. We show that uncertainties in the chemical composition of the envelopes can lead to an uncertainty of up to ~2.5 times in the internal temperature of the star, which significantly complicates theoretical reconstruction of the internal structure of cooling neutron stars from observations of their thermal surface emission.
Stability and control flight test results of the space transportation system's orbiter
NASA Technical Reports Server (NTRS)
Culp, M. A.; Cooke, D. R.
1982-01-01
Flight testing of the Space Shuttle Orbiter is in progress, and current results of the post-flight aerodynamic analyses are discussed. The purpose of these analyses is to reduce the pre-flight aerodynamic uncertainties, thereby leading to operational certification of the Orbiter flight envelope relative to the integrated airframe and flight control system. Primary data reduction is accomplished with a well-documented maximum likelihood system identification technique.
Garcia, Raquel A; Burgess, Neil D; Cabeza, Mar; Rahbek, Carsten; Araújo, Miguel B
2012-01-01
Africa is predicted to be highly vulnerable to 21st century climatic changes. Assessing the impacts of these changes on Africa's biodiversity is, however, plagued by uncertainties, and markedly different results can be obtained from alternative bioclimatic envelope models or future climate projections. Using an ensemble forecasting framework, we examine projections of future shifts in climatic suitability, and their methodological uncertainties, for over 2500 species of mammals, birds, amphibians and snakes in sub-Saharan Africa. To summarize a priori the variability in the ensemble of 17 general circulation models, we introduce a consensus methodology that combines co-varying models. Thus, we quantify and map the relative contribution to uncertainty of seven bioclimatic envelope models, three multi-model climate projections and three emissions scenarios, and explore the resulting variability in species turnover estimates. We show that bioclimatic envelope models contribute most to variability, particularly in projected novel climatic conditions over Sahelian and southern Saharan Africa. To summarize agreements among projections from the bioclimatic envelope models we compare five consensus methodologies, which generally increase or retain projection accuracy and provide consistent estimates of species turnover. Variability from emissions scenarios increases towards late-century and affects southern regions of high species turnover centred in arid Namibia. Twofold differences in median species turnover across the study area emerge among alternative climate projections and emissions scenarios. Our ensemble of projections underscores the potential bias when using a single algorithm or climate projection for Africa, and provides a cautious first approximation of the potential exposure of sub-Saharan African vertebrates to climatic changes. 
The future use and further development of bioclimatic envelope modelling will hinge on the interpretation of results in the light of methodological as well as biological uncertainties. Here, we provide a framework to address methodological uncertainties and contextualize results.
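As a rough illustration of consensus methodology (not the specific consensus algorithms compared in the study), a skill-weighted average of suitability maps can be computed as follows, with synthetic projections and invented AUC scores standing in for the seven bioclimatic envelope models.

```python
import numpy as np

# Hypothetical consensus sketch: suitability projections from several
# bioclimatic envelope models are combined by weighted averaging, with
# weights derived from each model's evaluation skill (e.g., AUC).
rng = np.random.default_rng(1)
n_cells, n_models = 1000, 7
proj = rng.uniform(0, 1, (n_models, n_cells))   # synthetic suitability maps
auc = np.array([0.71, 0.80, 0.85, 0.66, 0.90, 0.75, 0.78])  # invented skill

w = np.clip(auc - 0.5, 0, None)     # rescale so a no-skill model (AUC=0.5) gets 0
w = w / w.sum()
consensus = w @ proj                # weighted mean suitability per grid cell

spread = proj.std(axis=0)           # between-model variability per cell
print("mean between-model spread:", spread.mean().round(3))
```

The per-cell spread is the quantity that, aggregated over factors (envelope model, GCM, scenario), drives the uncertainty maps discussed in the abstract.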
Determining the metallicity of the solar envelope using seismic inversion techniques
NASA Astrophysics Data System (ADS)
Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.
2017-11-01
The solar metallicity issue is a long-standing problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy element abundance, helioseismologists have attempted to provide a seismic determination of the metallicity in the solar convective envelope. However, the puzzle remains, since two independent groups provided two radically different values for this crucial astrophysical parameter. We aim to provide an independent seismic measurement of the solar metallicity in the convective envelope. Our main goal is to help provide new information to break the current stalemate among seismic determinations of the solar heavy element abundance. We start by presenting the kernels, the inversion technique, and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy. We then apply our technique to solar data using calibrated solar models and determine an interval of seismic measurements for the solar metallicity. We show that our inversion can indeed be used to estimate the solar metallicity thanks to our hare-and-hounds exercises. However, we also show that further dependencies on the physical ingredients of solar models lead to a low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of a transport aircraft's longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and real parameter uncertainty (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
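A greatly simplified stand-in for the analysis described above: instead of computing structured singular values on an LFT model, this sketch grids two uncertain parameters of a toy two-state longitudinal model and marks flight points whose closed-loop poles stay in the left half-plane. All numbers are illustrative, not from the paper.

```python
import numpy as np

K_fb = np.array([[2.0, 1.0]])             # assumed state-feedback gains

def stable(z_alpha, m_q):
    """Closed-loop stability of a toy short-period model at one grid point."""
    A = np.array([[z_alpha, 1.0],
                  [-4.0,    m_q]])        # toy open-loop dynamics
    B = np.array([[0.0], [1.0]])
    Acl = A - B @ K_fb
    return bool(np.all(np.linalg.eigvals(Acl).real < 0))

# grid the uncertain parameters (a crude proxy for sweeping the envelope)
grid = [(za, mq) for za in np.linspace(-2.0, 0.5, 26)
                 for mq in np.linspace(-3.0, 0.5, 36)]
ok = [p for p in grid if stable(*p)]
print(f"{len(ok)} of {len(grid)} grid points are closed-loop stable")
```

Gridding gives only sampled coverage with no guarantees between points, which is precisely the gap that mu-analysis on an LFT model closes.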
Bucklin, David N.; Watling, James I.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.
2013-01-01
High-resolution (downscaled) projections of future climate conditions are critical inputs to a wide variety of ecological and socioeconomic models and are created using numerous different approaches. Here, we conduct a sensitivity analysis of spatial predictions from climate envelope models for threatened and endangered vertebrates in the southeastern United States to determine whether two different downscaling approaches (with and without the use of a regional climate model) affect climate envelope model predictions when all other sources of variation are held constant. We found that prediction maps differed spatially between downscaling approaches and that the variation attributable to downscaling technique was comparable to the variation between maps generated using different general circulation models (GCMs). Precipitation variables tended to show greater discrepancies between downscaling techniques than temperature variables, and for one GCM, there was evidence that poorly resolved precipitation variables contributed relatively more to model uncertainty than better-resolved variables. Our work suggests that ecological modelers requiring high-resolution climate projections should carefully consider the type of downscaling applied to the climate projections prior to their use in predictive ecological modeling. The uncertainty associated with alternative downscaling methods may rival that of other, more widely appreciated sources of variation, such as the general circulation model or emissions scenario with which future climate projections are created.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Uncertainty in predictions of oil spill trajectories in a coastal zone
NASA Astrophysics Data System (ADS)
Sebastião, P.; Guedes Soares, C.
2006-12-01
A method is introduced to determine the uncertainties in predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, which are the input parameters, and calculates the standard deviation of the output results, which provides a measure of the uncertainty of the model as a result of the uncertainties of the input parameters. In addition to a single trajectory that is calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are done taking into account the uncertainties of the input parameters. This band of trajectories defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines on the southwestern coast of Portugal. The model represented well the distinction between a wind-driven part that remained offshore and a tide-driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields, and the other using wind and tidal currents. In both cases a reasonable approximation to the observed results was obtained. The envelope of likely trajectories that is obtained with the uncertainty modelling proved to give a better interpretation of the trajectories that were simulated by the oil spill model.
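The trajectory-envelope idea can be sketched with a toy drift model: sample the uncertain inputs, simulate many trajectories, and take percentiles per time step. The drift formula and parameter spreads below are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_steps, dt = 500, 48, 3600.0           # 48 hourly steps
wind = np.array([5.0, 2.0])                     # mean wind vector (m/s)

tracks = np.zeros((n_runs, n_steps, 2))
for i in range(n_runs):
    wf = rng.normal(0.03, 0.005)                # uncertain wind drift factor (~3%)
    ang = rng.normal(0.0, np.deg2rad(10.0))     # uncertain drift direction
    c, s = np.cos(ang), np.sin(ang)
    drift = wf * np.array([c * wind[0] - s * wind[1],
                           s * wind[0] + c * wind[1]])
    # accumulate hourly displacements into a trajectory
    tracks[i] = np.cumsum(np.tile(drift * dt, (n_steps, 1)), axis=0)

mean_track = tracks.mean(axis=0)
lo, hi = np.percentile(tracks, [5, 95], axis=0)  # per-step envelope bounds
print("final-position spread (m):", (hi[-1] - lo[-1]).round(0))
```

The `lo`/`hi` bands are the analogue of the trajectory envelopes in the abstract: a band, rather than a single line, that the spill is likely to stay within given the input uncertainties.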
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
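The simple score-weighted averaging mentioned above can be sketched as follows, with synthetic misfit scores and sea-level outputs standing in for the 625-member ensemble; the Bayesian emulation-and-calibration side is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs = 625
misfit = rng.gamma(2.0, 1.0, n_runs)          # synthetic model-data misfit scores
esl = rng.normal(3.0, 1.0, n_runs)            # synthetic equivalent sea-level rise (m)

w = np.exp(-misfit)                           # lower misfit -> higher weight
w /= w.sum()
mean_esl = np.sum(w * esl)                    # score-weighted ensemble mean

# weighted 5-95% envelope via the weighted empirical CDF
order = np.argsort(esl)
cdf = np.cumsum(w[order])
lo = esl[order][np.searchsorted(cdf, 0.05)]
hi = esl[order][np.searchsorted(cdf, 0.95)]
print(f"weighted mean {mean_esl:.2f} m, 5-95% envelope [{lo:.2f}, {hi:.2f}] m")
```

The exponential weighting function is an assumption for illustration; the point is that any monotone score-to-weight map yields both a central estimate and a parametric-uncertainty envelope from the same ensemble.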
Robust, Decoupled, Flight Control Design with Rate Saturating Actuators
NASA Technical Reports Server (NTRS)
Snell, S. A.; Hess, R. A.
1997-01-01
Techniques for the design of control systems for manually controlled, high-performance aircraft must provide the following: (1) multi-input, multi-output (MIMO) solutions, (2) acceptable handling qualities including no tendencies for pilot-induced oscillations, (3) a tractable approach for compensator design, (4) performance and stability robustness in the presence of significant plant uncertainty, and (5) performance and stability robustness in the presence of actuator saturation (particularly rate saturation). A design technique built upon Quantitative Feedback Theory is offered as a candidate methodology which can provide flight control systems meeting these requirements, and do so over a considerable part of the flight envelope. An example utilizing a simplified model of a supermaneuverable fighter aircraft demonstrates the proposed design methodology.
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences, especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A Gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter then provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the Gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-Gaussian mixture model, allows the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
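A minimal version of the double-Gaussian separation, a two-component EM fit followed by a percentile of the higher-mean component, might look like this on synthetic velocities. The data and iteration count are illustrative; the clinical algorithm operates per time bin of the Doppler spectrogram.

```python
import numpy as np

rng = np.random.default_rng(4)
noise = rng.normal(20.0, 8.0, 700)      # low-velocity noise floor (cm/s)
signal = rng.normal(80.0, 10.0, 300)    # higher-velocity signal distribution
v = np.concatenate([noise, signal])

def npdf(x, m, s):
    """Normal density, vectorized over x."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# EM for a two-component 1-D Gaussian mixture
mu = np.array([v.min(), v.max()])
sd = np.array([v.std(), v.std()])
pi = np.array([0.5, 0.5])
for _ in range(200):
    resp = pi * npdf(v[:, None], mu, sd)          # E-step: responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)                         # M-step: update parameters
    mu = (resp * v[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (v[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(v)

sig = int(np.argmax(mu))              # treat the higher-mean component as signal
v90 = mu[sig] + 1.2816 * sd[sig]      # z = 1.2816 gives the 90th percentile
print(f"signal mean {mu[sig]:.1f} cm/s, 90th-percentile envelope {v90:.1f} cm/s")
```

Repeating this fit per time step, and reading off several percentiles of the signal component, yields the uncertainty-carrying envelopes the abstract describes.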
Dong, Xin; Zhang, Xinyi; Zeng, Siyu
2017-04-01
In the context of sustainable development, there has been an increasing requirement for an eco-efficiency assessment of wastewater treatment plants (WWTPs). Data envelopment analysis (DEA), a technique that is widely applied for relative efficiency assessment, is used in combination with the tolerances approach to handle WWTPs' multiple inputs and outputs as well as their uncertainty. The economic cost, energy consumption, contaminant removal, and global warming effect during the treatment processes are integrated to interpret the eco-efficiency of WWTPs. A total of 736 sample plants from across China are assessed, and large sensitivities to variations in inputs and outputs are observed for most samples, with only three WWTPs identified as being stably efficient. Size of plant, overcapacity, climate type, and influent characteristics are proven to have a significant influence on both the mean efficiency and performance sensitivity of WWTPs, while no clear relationships were found between eco-efficiency and technology under the framework of uncertainty analysis. The incorporation of uncertainty quantification and environmental impact considerations has improved the reliability and applicability of the assessment.
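A basic input-oriented CCR DEA score, the building block such assessments start from (without the tolerances-based uncertainty treatment the paper adds), can be computed per plant as a small linear program. The four-plant data set below is made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR envelopment model: minimize theta subject to a
# nonnegative combination of peers using no more than theta times the
# assessed plant's inputs while producing at least its outputs.
X = np.array([[3.0, 5.0], [4.0, 3.0], [6.0, 6.0], [5.0, 4.0]])  # inputs
Y = np.array([[2.0], [2.0], [3.0], [2.0]])                      # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def efficiency(j0):
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambdas]
    A_in = np.c_[-X[j0][:, None], X.T]          # X^T lam <= theta * x0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # Y^T lam >= y0
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

scores = [efficiency(j) for j in range(n)]
print([round(t, 3) for t in scores])
```

Plants 1 and 2 come out efficient (score 1), the others inefficient; the tolerances approach in the paper then asks how stable those classifications are when inputs and outputs are perturbed.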
Validating an Air Traffic Management Concept of Operation Using Statistical Modeling
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2013-01-01
Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical value of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty quantification (UQ) is the process of quantitative characterization and ultimately reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and therefore is valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.
Constraining convective regions with asteroseismic linear structural inversions
NASA Astrophysics Data System (ADS)
Buldgen, G.; Reese, D. R.; Dupret, M. A.
2018-01-01
Context. Convective regions in stellar models are always associated with uncertainties, for example, due to extra-mixing or the possible inaccurate position of the transition from convective to radiative transport of energy. Such inaccuracies have a strong impact on stellar models and the fundamental parameters we derive from them. The most promising method to reduce these uncertainties is to use asteroseismology to derive appropriate diagnostics probing the structural characteristics of these regions. Aims: We wish to use custom-made integrated quantities to improve the capabilities of seismology to probe convective regions in stellar interiors. By doing so, we hope to increase the number of indicators obtained with structural seismic inversions to provide additional constraints on stellar models and the fundamental parameters we determine from theoretical modeling. Methods: First, we present new kernels associated with a proxy of the entropy in stellar interiors. We then show how these kernels can be used to build custom-made integrated quantities probing convective regions inside stellar models. We present two indicators suited to probe convective cores and envelopes, respectively, and test them on artificial data. Results: We show that it is possible to probe both convective cores and envelopes using appropriate indicators obtained with structural inversion techniques. These indicators provide direct constraints on a proxy of the entropy of the stellar plasma, sensitive to the characteristics of convective regions. These constraints can then be used to improve the modeling of solar-like stars by providing an additional degree of selection of models obtained from classical forward modeling approaches. We also show that in order to obtain very accurate indicators, we need ℓ = 3 modes for the envelope but that the core-conditions indicator is more flexible in terms of the seismic data required for its use.
NASA Astrophysics Data System (ADS)
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.
2016-05-01
Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical level of uncertainty ˜25-50% (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty ˜10-20%, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known, and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.
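The optimal estimation (OE) machinery referenced above reduces, in the linear-Gaussian case, to a closed-form update whose posterior covariance supplies the uncertainty envelope. The Jacobian, covariances, and prior below are invented for illustration; a real retrieval linearizes a radiative-transfer forward model.

```python
import numpy as np

# Linear OE sketch: retrieve a 2-element state (above-cloud AOD, COD) from
# simulated reflectances in three bands via a linearized forward model.
K = np.array([[0.8, 0.1],     # assumed Jacobian: d(reflectance)/d(state)
              [0.3, 0.6],
              [0.1, 0.9]])
Se = np.diag([1e-4, 1e-4, 1e-4])      # measurement-error covariance
Sa = np.diag([4.0, 100.0])            # loose prior covariance on [AOD, COD]
xa = np.array([0.5, 10.0])            # prior state

x_true = np.array([1.2, 8.0])
rng = np.random.default_rng(5)
y = K @ x_true + rng.normal(0, 1e-2, 3)

Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)   # posterior covariance
x_hat = xa + S_hat @ K.T @ Se_inv @ (y - K @ xa)
sigma = np.sqrt(np.diag(S_hat))                    # OE uncertainty envelope
print("retrieved:", x_hat.round(2), "1-sigma:", sigma.round(3))
```

The diagonal of `S_hat` is exactly the kind of OE-based uncertainty the abstract compares against actual retrieval errors; mis-specified forward-model error correlations inflate or deflate it, which is the difficulty noted for AOD.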
NASA Technical Reports Server (NTRS)
Boothroyd, Arnold I.; Sackmann, I.-Juliana
2001-01-01
Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10^4 and the density with an accuracy of a few parts in 10^3. This has become a serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in the standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10^3. The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10^3; the approximately 5% uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10^4; we found that this leads to uncertainties of a few parts in 10^3 in the sound speed in the solar convective envelope, but has a negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius, and was significantly affected only by Z/X, the pp rate, and the uncertainties in helium diffusion constants.
Our reference model also yielded an envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only extreme Z/X values yielded an envelope helium abundance outside this range. We found that other current uncertainties, namely, in the solar age and luminosity, in nuclear rates other than the pp reaction, in the low-temperature molecular opacities, and in the low-density equation of state, have no significant effect on the quantities that can be inferred from helioseismic observations. The predicted pre-main-sequence lithium depletion is uncertain by a factor of 2. The predicted neutrino capture rate is uncertain by approximately 30% for the Cl-37 experiment and by approximately 3% for the Ga-71 experiments, while the B-8 neutrino flux is uncertain by approximately 30%.
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well-defined parametric uncertainty bounds.
Smith, Ryan P.; Roos, Peter A.; Wahlstrand, Jared K.; Pipis, Jessica A.; Rivas, Maria Belmonte; Cundiff, Steven T.
2007-01-01
We perform optical frequency metrology of an iodine-stabilized He-Ne laser using a mode-locked Ti:sapphire laser frequency comb that is stabilized using quantum interference of photocurrents in a semiconductor. Using this technique, we demonstrate carrier-envelope offset frequency fluctuations of less than 5 mHz using a 1 s gate time. With the resulting stable frequency comb, we measure the optical frequency of the iodine transition [127I2 R(127) 11-5 i component] to be 473 612 214 712.96 ± 0.66 kHz, well within the uncertainty of the CIPM recommended value. The stability of the quantum interference technique is high enough such that it does not limit the measurements. PMID:27110472
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan
2006-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.
2016-01-01
Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical uncertainty of approximately 25-50 percent (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty of approximately 10-20 percent, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.
NASA Astrophysics Data System (ADS)
Chatterjee, Niranjan D.; Miller, Klaus; Olbricht, Walter
1994-05-01
Internally consistent thermodynamic data, including their uncertainties and correlations, are reported for 22 phases of the quaternary system CaO-Al2O3-SiO2-H2O. These data have been derived by simultaneous evaluation of the appropriate phase properties (PP) and reaction properties (RP) by the novel technique of Bayes estimation (BE). The thermodynamic model used and the theory of BE were expounded in Part I of this paper; Part II is the follow-up study illustrating an application of BE. The input for BE comprised, among others, the a priori values for the standard enthalpy of formation of the i-th phase, Δf H°(i), and its standard entropy, S°(i), in addition to the reaction reversal constraints for 33 equilibria involving the relevant phases. A total of 269 RP restrictions have been processed, of which 107 turned out to be non-redundant. The refined values for Δf H°(i) and S°(i) obtained by BE, including their 2σ-uncertainties, appear in Table 4; the Appendix reproduces the corresponding correlation matrix. These data permit generation of computed phase diagrams with 2σ-uncertainty envelopes based on conventional error propagation; Fig. 3 depicts such a phase diagram for the system CaO-Al2O3-SiO2. It shows that the refined dataset is capable of yielding phase diagrams with uncertainty envelopes narrow enough to be geologically useful. The results in Table 4 demonstrate that the uncertainties of the prior values for Δf H°(i), given in Table 1, have decreased by up to an order of magnitude, while those for S°(i) improved by a factor of up to two. For comparison, Table 4 also lists the refined Δf H°(i) and S°(i) data obtained by mathematical programming (MAP), minimizing a quadratic objective function used earlier by Berman (1988). Examples of calculated phase diagrams are given to demonstrate the advantages of BE for deriving internally consistent thermodynamic data.
Although P-T curves generated from both MAP and BE databases will pass through the reversal restrictions, BE datasets appear to be better suited for extrapolations beyond the P-T range explored experimentally and for predicting equilibria not constrained by reversals.
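Generating phase-diagram uncertainty envelopes by "conventional error propagation", as described above, amounts to pushing the refined covariance matrix through each computed reaction property. A minimal sketch with a made-up two-phase covariance (the real matrix is the full correlation matrix reproduced in the paper's Appendix):

```python
import numpy as np

def two_sigma(v, cov):
    """2-sigma uncertainty of a reaction property r = v @ x, where x collects
    correlated phase properties (e.g. standard enthalpies of formation) and
    cov is their covariance matrix. Linear propagation: var(r) = v @ cov @ v."""
    v = np.asarray(v, dtype=float)
    return 2.0 * np.sqrt(v @ cov @ v)

# Two strongly correlated enthalpies (hypothetical unit-variance values)
cov_corr = np.array([[1.0, 0.9],
                     [0.9, 1.0]])
v = [1.0, -1.0]                      # stoichiometry: product minus reactant

u_corr = two_sigma(v, cov_corr)      # correlations cancel most of the error
u_uncorr = two_sigma(v, np.eye(2))   # ignoring correlations overstates it
```

Because errors in the formation enthalpies of related phases are positively correlated, the reaction difference is far better constrained than the individual values, which is why envelopes built from the full covariance can come out narrow enough to be geologically useful.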
Acoustic environments for JPL shuttle payloads based on early flight data
NASA Technical Reports Server (NTRS)
Oconnell, M. R.; Kern, D. L.
1983-01-01
Shuttle payload acoustic environmental predictions for the Jet Propulsion Laboratory's Galileo and Wide Field/Planetary Camera projects have been developed from STS-2 and STS-3 flight data. This evaluation of actual STS flight data resulted in reduced predicted environments for the JPL shuttle payloads. Shuttle payload mean acoustic levels were enveloped. Uncertainty factors were added to the mean envelope to provide confidence in the predicted environment.
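The prediction procedure described, enveloping the mean flight levels and then adding an uncertainty factor, can be sketched as follows. The band levels and the 3 dB margin are illustrative assumptions, not the actual STS-2/STS-3 values.

```python
import numpy as np

def predicted_environment(flight_spectra_db, margin_db=3.0):
    """Predicted payload acoustic environment from measured flight spectra.

    flight_spectra_db : (n_flights, n_bands) one-third-octave SPLs in dB
    Returns the mean envelope across flights plus an uncertainty margin
    added for confidence in the prediction."""
    mean_envelope = np.mean(flight_spectra_db, axis=0)   # mean across flights
    return mean_envelope + margin_db                     # add confidence margin

sts_spectra = np.array([[128.0, 131.0, 127.0],    # hypothetical band levels
                        [126.0, 129.0, 125.0]])
prediction = predicted_environment(sts_spectra)
```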
NASA Technical Reports Server (NTRS)
Lind, Richard C. (Inventor); Brenner, Martin J.
2001-01-01
A structured singular value (mu) analysis method computes flutter margins from the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators to accurately account for errors in the computed model and for the observed range of dynamics of the aircraft under test caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. By introducing mu as a flutter margin parameter, this approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty. These margins can be used to determine when the aircraft is approaching a flutter condition and to define an expanded safe flight envelope that is accepted with more confidence than margins from traditional methods, which do not update the analysis with flight data and which track damping trends as a measure of the tendency toward instability.
System identification for modeling for control of flexible structures
NASA Technical Reports Server (NTRS)
Mettler, Edward; Milman, Mark
1986-01-01
The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy an initial distributed parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes are decreased. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, then corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.
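The maintenance logic described above, detuning for robustness as parameter-error covariance grows and triggering recalibration past a threshold, reduces to a simple decision rule. The function and threshold names below are hypothetical, chosen only to make the strategy concrete.

```python
def maintenance_action(param_cov_trace, robust_threshold, calibrate_threshold):
    """Decision rule sketched from the strategy above (thresholds hypothetical):
    growing parameter-error covariance first prompts an increase in controller
    robustness, and beyond a second threshold triggers an autonomous
    calibration sequence to restore the error envelopes."""
    if param_cov_trace > calibrate_threshold:
        return "calibrate"            # restore error envelopes via calibration
    if param_cov_trace > robust_threshold:
        return "increase_robustness"  # detune control laws for safety
    return "nominal"                  # retain high-performance control
```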
NASA Technical Reports Server (NTRS)
Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John
2013-01-01
This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied on an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.
Isokinetic TWC Evaporator Probe: Calculations and Systemic Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Davison, Craig R.; Strapp, J. Walter; Lilie, Lyle; Ratvasky, Thomas P.; Dumont, Christopher
2016-01-01
A new Isokinetic Total Water Content Evaporator (IKP2) was downsized from a prototype instrument, specifically to make airborne measurements of hydrometeor total water content (TWC) in deep tropical convective clouds to assess the new ice crystal Appendix D icing envelope. The probe underwent numerous laboratory and wind tunnel investigations to ensure reliable operation under the difficult high altitude/speed/TWC conditions under which other TWC instruments have been known to either fail or have unknown performance characteristics; those results are presented in a companion paper. This paper presents the equations used to determine the TWC of the sampled atmosphere from the values measured by the IKP2 or from necessary ancillary data from other instruments. The uncertainty in the final TWC is determined by propagating the uncertainty in the measured values through the calculations to the final result. Two techniques were used and the results compared. The first is a typical analytical method of propagating uncertainty and the second performs a Monte Carlo simulation. The results are very similar, with differences that are insignificant for practical purposes. The uncertainty is between 2 and 3 percent at most practical operating conditions. The capture efficiency of the IKP2 was also examined based on a computational fluid dynamic simulation of the original IKP, scaled down to the IKP2. Particles above 24 microns were found to have a capture efficiency greater than 99 percent at all operating conditions.
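The comparison of the two propagation techniques can be reproduced in miniature. The sketch below assumes a quotient-form TWC relation (water mass flow over airspeed times inlet area) with made-up values and relative uncertainties; both methods then agree to well within the 2-3 percent figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)

def analytic_rel_uncertainty(rel_u):
    """First-order propagation for a pure product/quotient relation:
    relative uncertainties add in quadrature."""
    return float(np.sqrt(np.sum(np.square(rel_u))))

def monte_carlo_rel_uncertainty(values, rel_u, n=200_000):
    """Monte Carlo propagation for TWC = m_w / (v * A): sample each input
    from a normal distribution, then take std/mean of the result."""
    m, v, a = (rng.normal(val, val * u, n) for val, u in zip(values, rel_u))
    twc = m / (v * a)
    return float(twc.std() / twc.mean())

vals = (2.0, 150.0, 5e-4)   # hypothetical water mass flow, airspeed, inlet area
rels = (0.02, 0.01, 0.01)   # assumed 2 %, 1 %, 1 % relative uncertainties

ua = analytic_rel_uncertainty(rels)           # analytical estimate
um = monte_carlo_rel_uncertainty(vals, rels)  # simulation estimate
```

For inputs with small relative errors, the relation is nearly linear over the sampled range, which is why the two estimates are practically indistinguishable, just as the paper reports.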
Safe Maneuvering Envelope Estimation Based on a Physical Approach
NASA Technical Reports Server (NTRS)
Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.
2013-01-01
This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others since it is physically inspired. This more transparent approach allows interpreting data in each step, and it is assumed that these physical models based upon flight dynamics theory will therefore facilitate certification for future real life applications.
An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John
2014-01-01
A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.
Günther, Philipp; Kuschmierz, Robert; Pfister, Thorsten; Czarske, Jürgen W
2013-05-01
The precise distance measurement of fast-moving rough surfaces is important in several applications, such as lathe monitoring. A nonincremental interferometer based on two mutually tilted interference fringe systems has been realized for this task. The distance is coded in the phase difference between the interference signals generated by the two fringe systems. Large tilting angles between the interference fringe systems are necessary for high sensitivity. However, due to the speckle effect at rough surfaces, different envelopes and phase jumps of the interference signals occur. At large tilting angles, these signals become dissimilar, resulting in a small correlation coefficient and a high measurement uncertainty. Based on a matching of illumination and receiving optics, the correlation coefficient and the phase difference estimation have been improved significantly. For axial displacement measurements of recurring rough surfaces, laterally moving with velocities of 5 m/s, an uncertainty of 110 nm has been attained. For nonrecurring surfaces, a distance measurement uncertainty of 830 nm has been achieved. Incorporating the additionally measured lateral velocity and rotational speed yields the two-dimensional shape of rotating objects. Since the measurement uncertainty of the displacement, distance, and shape is nearly independent of the lateral surface velocity, this technique is predestined for fast-rotating objects, such as crankshafts, camshafts, vacuum pump shafts, or turning parts of lathes.
Reducing structural uncertainty in conceptual hydrological modeling in the semi-arid Andes
NASA Astrophysics Data System (ADS)
Hublart, P.; Ruelland, D.; Dezetter, A.; Jourde, H.
2014-10-01
The use of lumped, conceptual models in hydrological impact studies requires placing more emphasis on the uncertainty arising from deficiencies and/or ambiguities in the model structure. This study provides an opportunity to combine a multiple-hypothesis framework with a multi-criteria assessment scheme to reduce structural uncertainty in the conceptual modeling of a meso-scale Andean catchment (1515 km2) over a 30 year period (1982-2011). The modeling process was decomposed into six model-building decisions related to the following aspects of the system behavior: snow accumulation and melt, runoff generation, redistribution and delay of water fluxes, and natural storage effects. Each of these decisions was provided with a set of alternative modeling options, resulting in a total of 72 competing model structures. These structures were calibrated using the concept of Pareto optimality with three criteria pertaining to streamflow simulations and one to the seasonal dynamics of snow processes. The results were analyzed in the four-dimensional space of performance measures using a fuzzy c-means clustering technique and a differential split sample test, leading to the identification of 14 equally acceptable model hypotheses. A filtering approach was then applied to these best-performing structures in order to minimize the overall uncertainty envelope while maximizing the number of enclosed observations. This led to the retention of 8 model hypotheses as a representation of the minimum structural uncertainty that could be obtained with this modeling framework. Future work to better consider model predictive uncertainty should include a proper assessment of parameter equifinality and data errors, as well as the testing of new or refined hypotheses to allow for the use of additional auxiliary observations.
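The filtering step, shrinking the overall envelope while keeping the enclosed observations, can be sketched as a greedy elimination over the acceptable structures. Everything here (the array names and the drop-one heuristic) is an illustrative reading of the approach, not the authors' exact algorithm.

```python
import numpy as np

def filter_hypotheses(sim_runs, obs):
    """Greedy sketch: starting from all acceptable model structures, drop the
    run whose removal most shrinks the simulation envelope without reducing
    the number of observations the envelope encloses.

    sim_runs : (n_models, n_steps) simulated streamflow per structure
    obs      : (n_steps,) observed streamflow"""
    keep = list(range(len(sim_runs)))

    def stats(idx):
        env_lo = sim_runs[idx].min(axis=0)
        env_hi = sim_runs[idx].max(axis=0)
        width = float(np.mean(env_hi - env_lo))          # mean envelope width
        inside = int(np.sum((obs >= env_lo) & (obs <= env_hi)))
        return width, inside

    width, inside = stats(keep)
    improved = True
    while improved and len(keep) > 2:
        improved = False
        for i in list(keep):
            trial = [j for j in keep if j != i]
            w, n_in = stats(trial)
            if n_in >= inside and w < width:             # narrower, no coverage loss
                keep, width, inside = trial, w, n_in
                improved = True
                break
    return keep, width, inside
```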
NASA Astrophysics Data System (ADS)
Smets, Quentin; Verreck, Devin; Verhulst, Anne S.; Rooyackers, Rita; Merckling, Clément; Van De Put, Maarten; Simoen, Eddy; Vandervorst, Wilfried; Collaert, Nadine; Thean, Voon Y.; Sorée, Bart; Groeseneken, Guido; Heyns, Marc M.
2014-05-01
Promising predictions have been made for III-V tunnel field-effect transistors (TFETs), but there is still uncertainty in the parameters used in the band-to-band tunneling models. Therefore, two simulators are calibrated in this paper: the first uses a semi-classical tunneling model based on Kane's formalism, and the second is a quantum mechanical simulator implemented with an envelope function formalism. The calibration is done for In0.53Ga0.47As using several p+/intrinsic/n+ diodes with different intrinsic region thicknesses. The dopant profile is determined by SIMS and capacitance-voltage measurements. Error bars are used based on statistical and systematic uncertainties in the measurement techniques. The obtained parameters are in close agreement with theoretically predicted values and validate the semi-classical and quantum mechanical models. Finally, the models are applied to predict the input characteristics of In0.53Ga0.47As n- and p-line TFETs, with the n-line TFET showing competitive performance compared to the MOSFET.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.
2010-01-01
The power system balancing process, which includes the scheduling, real time dispatch (load following), and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind and solar power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation), and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind/solar forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to real time, and what additional costs would be incurred by those needs. To improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, the regulation processes. It is also important to address the uncertainty problem comprehensively by including all sources of uncertainty (load, intermittent generation, generators' forced outages, etc.) into consideration. All aspects of uncertainty, such as the imbalance size (which is the same as the capacity needed to mitigate the imbalance) and the generation ramping requirement, must be taken into account.
The latter unique features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. Currently, uncertainties associated with wind and load forecasts, as well as uncertainties associated with random generator outages and unexpected disconnection of supply lines, are not taken into account in power grid operation. Thus, operators have little means to weigh the likelihood and magnitude of upcoming events of power imbalance. In this project, funded by the U.S. Department of Energy (DOE), a framework has been developed for incorporating uncertainties associated with wind and load forecast errors, unpredicted ramps, and forced generation disconnections into the energy management system (EMS) as well as generation dispatch and commitment applications. A new approach to evaluate the uncertainty ranges for the required generation performance envelope, including balancing capacity, ramping capability, and ramp duration, has been proposed. The approach includes three stages: forecast and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence levels. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis, incorporating all sources of uncertainties of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the “flying brick” technique has been developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation algorithm has been developed to validate the accuracy of the confidence intervals.
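A minimal percentile version of the histogram-based capacity assessment can be sketched as follows. This covers only the balancing-capacity part of the envelope; the full "flying brick" technique also handles ramping capability and ramp duration, which this sketch omits, and the sample data are invented.

```python
import numpy as np

def balancing_requirement(imbalance_samples, confidence=0.95):
    """Capacity requirement from retrospective net-imbalance samples (MW),
    combining continuous (forecast-error) and discrete (outage) deviations:
    return the down- and up-balancing capacity covering the confidence level."""
    lo = np.percentile(imbalance_samples, 100 * (1 - confidence) / 2)
    hi = np.percentile(imbalance_samples, 100 * (1 + confidence) / 2)
    return lo, hi     # MW of down- and up-balancing capacity needed
```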
NASA Technical Reports Server (NTRS)
Hopson, Charles B.
1987-01-01
The results of an analysis performed on seven successive Space Shuttle Main Engine (SSME) static test firings, utilizing envelope detection of external accelerometer data are discussed. The results clearly show the great potential for using envelope detection techniques in SSME incipient failure detection.
NASA Astrophysics Data System (ADS)
Abd-el-Malek, Mina; Abdelsalam, Ahmed K.; Hassan, Ola E.
2017-09-01
Robustness, low running cost, and reduced maintenance have made induction motors (IMs) the leading choice in industrial drive systems. Broken rotor bars (BRBs) are an important fault that needs to be assessed early to minimize maintenance cost and labor time. The majority of recent BRB fault diagnostic techniques focus on differentiating between a healthy and a faulty rotor cage. In this paper, a new technique is proposed for detecting the location of the broken bar in the rotor. The proposed technique relies on monitoring certain statistical parameters estimated from the analysis of the start-up stator current envelope. The envelope of the signal is obtained using the Hilbert Transform (HT). The proposed technique offers a non-invasive, computationally fast, and accurate location diagnostic process. Various simulation scenarios are presented that validate the effectiveness of the proposed technique.
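Obtaining a signal envelope with the Hilbert transform takes only a few lines. The sketch below builds the analytic signal directly with an FFT (equivalent to scipy.signal.hilbert followed by abs), using a synthetic amplitude-modulated 50 Hz "current" since the paper's motor data are not available.

```python
import numpy as np

def hilbert_envelope(x):
    """Envelope of a real signal via the analytic signal: zero the negative
    frequencies, double the positive ones, inverse-transform, take magnitude."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # analytic signal (negative freqs removed)
    return np.abs(analytic)

# A 50 Hz carrier with a slow 2 Hz amplitude modulation, standing in for a
# start-up stator current with a fault-related modulation
t = np.linspace(0, 1, 2000, endpoint=False)
signal = (1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 50 * t)
env = hilbert_envelope(signal)      # recovers the 1 + 0.3 sin(2*pi*2*t) envelope
```

The diagnostic statistics described in the abstract would then be computed on `env` rather than on the raw current, since the envelope isolates the slow modulation that carries the fault signature.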
Robust Flutter Margin Analysis that Incorporates Flight Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Martin J.
1998-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
Assessing potential scour using the South Carolina bridge-scour envelope curves
Benedict, Stephen T.; Feaster, Toby D.; Caldwell, Andral W.
2016-09-30
Bridge-scour equations presented in the Federal Highway Administration Hydraulic Engineering Circular No. 18 (HEC-18) reflect the current state of the practice for predicting scour at bridges. Although these laboratory-derived equations provide an important resource for assessing scour potential, there is a measure of uncertainty when applying them to field conditions. The uncertainty and limitations have been acknowledged by laboratory researchers and confirmed in field investigations. Because of the uncertainty associated with bridge-scour equations, HEC-18 recommends that engineers evaluate the computed scour depths obtained from the equations and modify the resulting data if they appear unreasonable. Perhaps the best way to evaluate the reasonableness of predicted scour is to compare it to field measurements of historic scour. Historic field data show scour depths resulting from high flows and provide a reference for evaluating predicted scour. It is rare, however, that such data are available at or near a site of interest, making the evaluation of predicted scour as compared to field data difficult if not impossible. Realizing the value of historic scour measurements, the U.S. Geological Survey (USGS), in cooperation with the South Carolina Department of Transportation (SCDOT), conducted a series of three field investigations to collect historic scour data with the goal of understanding regional trends of scour at riverine bridges in South Carolina. Historic scour measurements, including measurements of clear-water abutment, contraction, and pier scour, as well as live-bed contraction and pier scour, were made at more than 200 bridges.
These field investigations provided valuable insights into regional scour trends and yielded regional bridge-scour envelope curves that can be used as supplementary tools for assessing all components of scour at riverine bridges in South Carolina. The application and limitations of these envelope curves were documented in four reports. Because each report addresses different components of bridge scour, it was recognized that there was a need to develop an integrated procedure for applying the envelope curves to help assess scour potential at riverine bridges in South Carolina. The result of that effort is detailed in Benedict and others (2016) and summarized in this fact sheet.
Coronal Element Abundances of the Post-Common Envelope Binary V471 Tauri with ASCA
NASA Technical Reports Server (NTRS)
Still, Martin; Hussain, Gaitee; White, Nicholas E. (Technical Monitor)
2002-01-01
We report on ASCA observations of the coronally active companion star in the post-common envelope binary V471 Tau. While it would be prudent to check the following results with grating spectroscopy, we find that a single-temperature plasma model does not fit the data. Two-temperature models with variable abundances indicate that Fe is underabundant compared to the Hyades photospheric mean, whereas the high first-ionization-potential element Ne is overabundant. This is indicative of the inverse first ionization potential (FIP) effect, believed to result from the fractionation of ionized material by the magnetic field in the upper atmosphere of the star. Evolutionary calculations indicate that there should be no peculiar abundances on the companion star resulting from the common envelope epoch. Indeed, we find no evidence for peculiar abundances, although uncertainties are high.
NASA Technical Reports Server (NTRS)
Garg, Sanjay
1993-01-01
Results are presented from an application of H-infinity control design methodology to a centralized integrated flight/propulsion control (IFPC) system design for a supersonic STOVL fighter aircraft in transition flight. The emphasis is on formulating the H-infinity optimal control synthesis problem such that the critical requirements for the flight and propulsion systems are adequately reflected within the linear, centralized control problem formulation and the resulting controller provides robustness to modeling uncertainties and model parameter variations with flight condition. Detailed evaluation results are presented for a reduced order controller obtained from the improved H-infinity control design showing that the control design meets the specified nominal performance objective as well as provides stability robustness for variations in plant system dynamics with changes in aircraft trim speed within the transition flight envelope.
NASA Technical Reports Server (NTRS)
Simoes, Fernando; Pfaff, Robert; Hamelin, Michel; Klenzing, Jeffrey; Freudenreich, Henry; Beghin, Christian; Berthelier, Jean-Jacques; Bromund, Kenneth; Grard, Rejean; Lebreton, Jean-Pierre;
2012-01-01
The formation and evolution of the Solar System is closely related to the abundance of volatiles, namely water, ammonia, and methane, in the protoplanetary disk. Accurate measurement of volatiles in the Solar System is therefore important to understand not only the nebular hypothesis and origin of life but also planetary cosmogony as a whole. In this work, we propose a new remote sensing technique to infer the outer planets' water content by measuring Tremendously and Extremely Low Frequency (TLF-ELF) electromagnetic wave characteristics (Schumann resonances) excited by lightning in their gaseous envelopes. Schumann resonance detection can potentially be used to constrain the uncertainty of volatiles of the giant planets, mainly Uranus and Neptune, because such TLF-ELF wave signatures are closely related to the electric conductivity profile and water content.
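The mode frequencies such a measurement relies on follow from the ideal spherical-cavity formula f_n = (c/2πR)·√(n(n+1)). A minimal sketch, with the planetary radii as assumed inputs (the real resonant cavity is lossy and depth-dependent, which is precisely what the conductivity profile, and hence the water content, would shift):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def schumann_ideal(radius_m, n):
    """Ideal (lossless spherical cavity) Schumann resonance frequency for mode n."""
    return C / (2.0 * math.pi * radius_m) * math.sqrt(n * (n + 1))

# Earth: ideal n=1 mode ~10.6 Hz; the observed ~7.8 Hz is lower because the
# real cavity is lossy -- the offset encodes the conductivity profile
f1_earth = schumann_ideal(6.371e6, 1)

# Uranus, taking the 1-bar radius as the (assumed) cavity size
f1_uranus = schumann_ideal(2.5559e7, 1)
```

For the giant planets the observable would be how far the measured modes fall below these ideal values.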
NASA Astrophysics Data System (ADS)
Yoo, S. H.
2017-12-01
Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice seismologists typically analyze long-duration, S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts decay of the late coda. However, seismic monitoring requirements are shifting towards smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To remedy this problem, we have developed a new hybrid method known as full-waveform envelope template matching to improve predicted envelope fits over the entire waveform and account for direct-wave and early coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques. 
More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.
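The low-variance amplitude estimation that envelope templates enable can be illustrated with the classical single-scattering coda shape (not the authors' hybrid multiple-forward-scattering formulation; the shape parameters below are hypothetical):

```python
import math

def coda_template(t, gamma=1.0, b=0.02):
    """Single-scattering coda shape t^-gamma * exp(-b*t), unit amplitude."""
    return t ** (-gamma) * math.exp(-b * t)

def fit_amplitude(times, env, gamma=1.0, b=0.02):
    """Least-squares amplitude A0 for env ~ A0 * template.
    Closed form: A0 = sum(env*tmpl) / sum(tmpl**2)."""
    tmpl = [coda_template(t, gamma, b) for t in times]
    return sum(e * m for e, m in zip(env, tmpl)) / sum(m * m for m in tmpl)

# A noise-free synthetic envelope with known amplitude 3.5 is recovered exactly
times = [5.0 + 0.5 * k for k in range(100)]
env = [3.5 * coda_template(t) for t in times]
a0 = fit_amplitude(times, env)  # ~3.5
```

The full-waveform templates in the abstract extend this idea to the direct phases and early coda, where single scattering alone fails.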
Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Tolson, Bryan; Shawn Matott, L.
2015-04-01
GLUE is one of the most commonly used informal methodologies for uncertainty estimation in hydrological modelling. Despite the ease-of-use of GLUE, it involves a number of subjective decisions, such as the strategy for identifying the behavioural solutions. This study evaluates the impact of behavioural solution identification strategies in GLUE on the quality of model output uncertainty. Moreover, two new strategies are developed to objectively identify behavioural solutions. The first strategy considers Pareto-based ranking of parameter sets, while the second ranks the parameter sets based on an aggregated criterion. The proposed strategies, as well as the traditional strategies in the literature, are evaluated with respect to reliability (coverage of observations by the envelope of model outcomes) and sharpness (width of the envelope of model outcomes) in different numerical experiments. These experiments include multi-criteria calibration and uncertainty estimation of three rainfall-runoff models with different numbers of parameters. To further demonstrate the importance of the behavioural solution identification strategy, GLUE is also compared with two other informal multi-criteria calibration and uncertainty estimation methods (Pareto optimization and DDS-AU). The results show that the model output uncertainty varies with the behavioural solution identification strategy, and furthermore, a robust GLUE implementation would require considering multiple behavioural solution identification strategies and choosing the one that generates the desired balance between sharpness and reliability. The proposed objective strategies prove to be the best options in most of the case studies investigated in this research. Implementing such an approach for a high-dimensional calibration problem enables GLUE to generate robust results in comparison with Pareto optimization and DDS-AU.
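A minimal GLUE loop, sketched with a toy one-parameter model to show how the behavioural selection strategy feeds the reliability and sharpness metrics defined above (the model, the top-10% threshold, and the likelihood measure are all illustrative assumptions, one of many subjective choices the study examines):

```python
import random

random.seed(1)

# Toy model y = a*x; "observations" generated with a_true = 2.0 plus noise
xs = [0.5 * k for k in range(1, 21)]
obs = [2.0 * x + random.gauss(0.0, 0.3) for x in xs]

def neg_sse(a):
    """Informal likelihood measure: higher is better."""
    return -sum((a * x - y) ** 2 for x, y in zip(xs, obs))

# Step 1: Monte Carlo sampling of the parameter
samples = [random.uniform(1.0, 3.0) for _ in range(2000)]

# Step 2: behavioural selection -- here the top 10% by likelihood
ranked = sorted(samples, key=neg_sse, reverse=True)
behavioural = ranked[: len(ranked) // 10]

# Step 3: prediction envelope from the behavioural simulations
lo = [min(a * x for a in behavioural) for x in xs]
hi = [max(a * x for a in behavioural) for x in xs]

# Reliability: coverage of observations by the envelope;
# sharpness: mean envelope width (narrower is sharper)
reliability = sum(l <= y <= h for l, y, h in zip(lo, obs, hi)) / len(obs)
sharpness = sum(h - l for l, h in zip(lo, hi)) / len(xs)
```

Loosening the behavioural threshold widens the envelope: reliability rises while sharpness degrades, which is exactly the trade-off the study evaluates.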
Kim, Eok Bong; Lee, Jae-hwan; Trung, Luu Tran; Lee, Wong-Kyu; Yu, Dai-Hyuk; Ryu, Han Young; Nam, Chang Hee; Park, Chang Yong
2009-11-09
We developed an optical frequency synthesizer (OFS) with the carrier-envelope-offset frequency locked to 0 Hz, achieved using the "direct locking method." This method differs from a conventional phase-lock method in that the interference signal from a self-referencing f-2f interferometer is directly fed back to the carrier-envelope-phase control of a femtosecond laser in the time domain. A comparison of the optical frequency of the new OFS to that of a conventional OFS stabilized by a phase-lock method showed that the frequency comb of the new OFS did not differ from that of the conventional OFS within an uncertainty of 5.68 × 10⁻¹⁶. As a practical application of this OFS, we measured the absolute frequency of an acetylene-stabilized diode laser serving as an optical frequency standard in optical communications.
ERIC Educational Resources Information Center
Hoover, Eric C.; Souza, Pamela E.; Gallun, Frederick J.
2012-01-01
Purpose: The benefits of amplitude compression in hearing aids may be limited by distortion resulting from rapid gain adjustment. To evaluate this, it is convenient to quantify distortion by using a metric that is sensitive to the changes in the processed signal that decrease consonant recognition, such as the Envelope Difference Index (EDI;…
Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging
NASA Technical Reports Server (NTRS)
Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William;
1999-01-01
Time-distance analysis and acoustic imaging are two related techniques to probe the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/ℓ. The phase time of the cross correlation function provides information about the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of constructed signals can be studied by computing the cross correlation function between time series constructed with ingoing and outgoing waves. In this study, we use the data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analyses to construct the acoustic signals in acoustic imaging analyses. The phase time of the cross correlation function of constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analyses, as predicted. The envelope peak of the cross correlation function between constructed ingoing and outgoing time series is located at zero time, as predicted, for results of one-bounce at 3 mHz for all four data sets and two-bounce at 3 mHz for two TON data sets. But it is different from zero for other cases. The cause of the deviation of the envelope peak from zero is not known.
Radio Imaging of Envelopes of Evolved Stars
NASA Astrophysics Data System (ADS)
Cotton, Bill
2018-04-01
This talk will cover imaging of stellar envelopes using radio VLBI techniques; special attention will be paid to the technical differences between radio and optical/IR interferometry. Radio heterodyne receivers allow a straightforward way to derive spectral cubes and full polarization observations. Milliarcsecond resolution of very bright, i.e. non-thermal, emission of molecular masers in the envelopes of evolved stars can be achieved using VLBI techniques with baselines of thousands of km. Emission from SiO, H2O and OH masers is commonly seen at increasing distance from the photosphere. The very narrow maser lines allow accurate measurements of the velocity field within the emitting region.
Palmer, Antony L; Nash, David; Kearton, John R; Jafari, Shakardokht M; Muscat, Sarah
2017-12-01
External dosimetry audit is valuable for the assurance of radiotherapy quality. However, motion management has not been rigorously audited, despite its complexity and importance for accuracy. We describe the first end-to-end dosimetry audit for non-SABR (stereotactic ablative body radiotherapy) lung treatments, measuring dose accumulation in a moving target, and assessing adequacy of target dose coverage. A respiratory motion lung phantom with a custom-designed insert was used. Dose was measured with radiochromic film, employing triple-channel dosimetry and uncertainty reduction. The host's 4DCT scan, outlining, and planning techniques were used. Measurements with the phantom static and then moving at treatment delivery separated inherent treatment uncertainties from motion effects. Calculated and measured dose distributions were compared by isodose overlay and gamma analysis, and we introduce the concept of 'dose plane histograms' for clinically relevant interpretation of film dosimetry. 12 radiotherapy centres and 19 plans were audited: conformal, IMRT (intensity modulated radiotherapy) and VMAT (volumetric modulated radiotherapy). Excellent agreement between planned and static-phantom results was seen (mean gamma pass rate 98.7% at 3%/2 mm). Dose blurring was evident in the moving-phantom measurements (mean gamma pass rate 88.2% at 3%/2 mm). Planning techniques for motion management were adequate to deliver the intended moving-target dose coverage. A novel, clinically relevant, end-to-end dosimetry audit of motion management strategies in radiotherapy is reported. Copyright © 2017 Elsevier B.V. All rights reserved.
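The gamma criterion quoted above (3%/2 mm) can be sketched in one dimension. This is a simplified global gamma index on matched grids, not the audit's triple-channel film analysis; the test profiles are invented:

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_mm=2.0):
    """1-D global gamma analysis (3%/2 mm by default); profiles share a grid.
    A measured point passes if some reference point is simultaneously close
    in dose (vs. dose_tol * max dose) and in space (vs. dist_mm)."""
    dmax = max(ref)
    passed = 0
    for i, dm in enumerate(meas):
        best = min(
            math.hypot((dm - dr) / (dose_tol * dmax),
                       (i - j) * spacing_mm / dist_mm)
            for j, dr in enumerate(ref)
        )
        passed += best <= 1.0
    return passed / len(meas)

# A profile compared with itself passes everywhere ...
profile = [math.exp(-((x - 25) / 8.0) ** 2) for x in range(51)]
rate_same = gamma_pass_rate(profile, profile, spacing_mm=1.0)   # 1.0
# ... while a 5 mm shift fails in the gradient regions (tolerance is 2 mm)
shifted = profile[5:] + [profile[-1]] * 5
rate_shift = gamma_pass_rate(profile, shifted, spacing_mm=1.0)  # < 1.0
```

Motion blurring acts much like the shifted case: doses move relative to the plan, and the pass rate drops, as in the 98.7% vs 88.2% results above.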
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap, and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Differences in spatial overlap were even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques.
Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
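The performance and overlap metrics above have simple closed forms; a small sketch with hypothetical confusion-matrix counts and binary prediction maps (the numbers are invented, not the study's):

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1, in [-1, 1]."""
    return tp / (tp + fn) + tn / (tn + fp) - 1.0

def spatial_overlap(map_a, map_b):
    """Intersection over union of two binary presence maps (flattened)."""
    inter = sum(1 for a, b in zip(map_a, map_b) if a and b)
    union = sum(1 for a, b in zip(map_a, map_b) if a or b)
    return inter / union

# Hypothetical counts: sensitivity 0.8, specificity 0.9
tss = true_skill_statistic(tp=80, fp=10, fn=20, tn=90)   # ~0.7
# Two maps that agree on 2 of the 3 predicted-presence cells
iou = spatial_overlap([1, 1, 0, 1], [1, 0, 0, 1])        # 2/3
```

Note that two maps can both score high TSS against observations yet have only moderate overlap with each other, which is the divergence the study highlights.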
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Constraining the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of sources used in the simulation, which is based on catalogue information, seismogenic zones, and seismogenic nodes.
Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is a factor of 2, intrinsic in the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
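The envelope step of NDSHA described above, taking the sitewise maximum of a ground-motion parameter over all scenario earthquakes, reduces to a simple elementwise maximum (the scenario values below are illustrative, not from any real simulation):

```python
def hazard_envelope(scenario_maps):
    """NDSHA-style envelope: at each site, keep the maximum ground-motion
    value over all scenario earthquakes."""
    return [max(site_values) for site_values in zip(*scenario_maps)]

# Three scenario maps over five sites (e.g. PGA in g; illustrative values)
s1 = [0.10, 0.20, 0.15, 0.05, 0.08]
s2 = [0.12, 0.18, 0.30, 0.07, 0.06]
s3 = [0.09, 0.25, 0.10, 0.06, 0.40]
envelope_map = hazard_envelope([s1, s2, s3])  # sitewise maxima
```

A sensitivity analysis of the kind the abstract describes would perturb the scenario magnitudes and locations and examine how the envelope changes.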
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Ouzts, Peter J.
1991-01-01
Results are presented from an application of H-infinity control design methodology to a centralized integrated flight propulsion control (IFPC) system design for a supersonic Short Takeoff and Vertical Landing (STOVL) fighter aircraft in transition flight. The emphasis is on formulating the H-infinity control design problem such that the resulting controller provides robustness to modeling uncertainties and model parameter variations with flight condition. Experience gained from a preliminary H-infinity based IFPC design study performed earlier is used as the basis to formulate the robust H-infinity control design problem and improve upon the previous design. Detailed evaluation results are presented for a reduced order controller obtained from the improved H-infinity control design showing that the control design meets the specified nominal performance objectives as well as provides stability robustness for variations in plant system dynamics with changes in aircraft trim speed within the transition flight envelope. A controller scheduling technique which accounts for changes in plant control effectiveness with variation in trim conditions is developed and off design model performance results are presented.
2015-08-18
The technique enables the Department of Defense (DoD) to achieve cost-effective energy efficiency at much greater scale than other commercially available techniques for measuring energy loss due to envelope inefficiencies. The report recommends specific energy conservation measures (ECMs) and quantifies significant potential return on investment (ERDC/CERL TR-15-18).
Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar
2017-01-01
In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of the entire networks of electricity power, involving generation, transmission and distribution stages is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, there is no study to evaluate the efficiency of the electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models were more reliable than the traditional Network DEA model.
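In the special single-input, single-output case, the CCR efficiency score underlying DEA reduces to a normalized output/input ratio. The full multi-stage NDEA models in the paper require linear programming; this is only the degenerate case, with hypothetical numbers:

```python
def dea_ccr_single(inputs, outputs):
    """CCR efficiency in the single-input / single-output case: each unit's
    output/input ratio, normalized by the best ratio in the sample."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Four hypothetical regional networks: input = total cost,
# output = delivered energy (illustrative numbers only)
eff = dea_ccr_single([100, 80, 120, 90], [50, 48, 54, 36])
# the second unit is the efficient peer (score 1.0); the rest score below 1
```

Robust variants of the kind the paper develops would re-solve these scores under perturbed input/output data and report efficiencies that hold across the uncertainty set.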
PMID:28953900
Greenwood, Nigel J C; Gunton, Jenny E
2014-07-01
This study demonstrated the novel application of a "machine-intelligent" mathematical structure, combining differential game theory and Lyapunov-based control theory, to the artificial pancreas to handle dynamic uncertainties. Realistic type 1 diabetes (T1D) models from the literature were combined into a composite system. Using a mixture of "black box" simulations and actual data from diabetic medical histories, realistic sets of diabetic time series were constructed for blood glucose (BG), interstitial fluid glucose, infused insulin, meal estimates, and sometimes plasma insulin assays. The problem of underdetermined parameters was sidestepped by applying a variant of a genetic algorithm to partial information, whereby multiple candidate-personalized models were constructed and then rigorously tested using further data. These formed a "dynamic envelope" of trajectories in state space, where each trajectory was generated by a hypothesis on the hidden T1D system dynamics. This dynamic envelope was then culled to a reduced form to cover observed dynamic behavior. A machine-intelligent autonomous algorithm then implemented game theory to construct real-time insulin infusion strategies, based on the flow of these trajectories through state space and their interactions with hypoglycemic or near-hyperglycemic states. This technique was tested on 2 simulated participants over a total of fifty-five 24-hour days, with no hypoglycemic or hyperglycemic events, despite significant uncertainties from using actual diabetic meal histories with 10-minute warnings. In the main case studies, BG was steered within the desired target set for 99.8% of a 16-hour daily assessment period. Tests confirmed algorithm robustness for ±25% carbohydrate error. For over 99% of the overall 55-day simulation period, either formal controller stability was achieved to the desired target or else the trajectory was within the desired target.
These results suggest that this is a stable, high-confidence way to generate closed-loop insulin infusion strategies. © 2014 Diabetes Technology Society.
Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary
NASA Technical Reports Server (NTRS)
Stanford, Bret K.; Massey, Steven J.
2017-01-01
A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.
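A nonintrusive UQ workflow of this kind can be caricatured with plain Monte Carlo on a cheap surrogate in place of the FUN3D solver (the surrogate model, parameter distributions, and envelope limit below are invented for illustration; the paper's point collocation method is more sample-efficient than this):

```python
import random

random.seed(0)

def flutter_speed(stiffness, damping):
    """Hypothetical cheap surrogate standing in for the aeroelastic CFD
    solve: flutter speed (m/s) as a function of two uncertain parameters."""
    return 250.0 + 40.0 * (stiffness - 1.0) + 15.0 * damping

V_LIMIT = 245.0  # assumed edge of the flight envelope, m/s

# Propagate the input distributions and estimate the failure probability:
# P(flutter speed falls inside the flight envelope)
samples = [
    flutter_speed(random.gauss(1.0, 0.1), random.gauss(0.0, 0.2))
    for _ in range(100_000)
]
p_fail = sum(v < V_LIMIT for v in samples) / len(samples)
```

With the assumed linear surrogate, the output is Gaussian (mean 250, σ = 5 m/s), so p_fail ≈ Φ(−1) ≈ 0.16; the collocation approach in the paper obtains such statistics from far fewer solver runs.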
Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines
NASA Astrophysics Data System (ADS)
Singh, Dheeraj Sharan; Zhao, Qing
2016-12-01
This paper presents a novel data-driven technique for the detection and isolation of faults which generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF) is identified using EMD of the raw signal, which contains all the necessary information about the faults. The envelope of this IMF is often modulated with multiple vibration sources and noise. A second-level decomposition is performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using an external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode mixing problem inherent in EMD, (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation, and has been validated on simulated fault and experimental data corresponding to a bearing and a gear-box set-up, respectively.
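The envelope-analysis step the method builds on can be sketched without EMD: rectify an impact-modulated vibration signal and locate the fault line in the low-frequency part of the envelope spectrum (all frequencies below are hypothetical, and rectification is a crude stand-in for a proper analytic-signal envelope):

```python
import math, cmath

FS = 1000.0         # sampling rate, Hz (synthetic example)
FAULT_HZ = 37.0     # hypothetical bearing fault characteristic frequency
CARRIER_HZ = 180.0  # structural resonance excited by the fault impacts

N = 1000
t = [n / FS for n in range(N)]
# Amplitude-modulated vibration: resonance carrier modulated at the fault rate
x = [(1.0 + 0.8 * math.cos(2 * math.pi * FAULT_HZ * ti))
     * math.cos(2 * math.pi * CARRIER_HZ * ti) for ti in t]

# Crude envelope by rectification; demodulation moves the fault frequency
# into the low-frequency part of the spectrum of |x|
env = [abs(v) for v in x]
mean = sum(env) / N
env = [v - mean for v in env]  # remove the DC component

def dft_mag(sig, k):
    """Magnitude of the k-th DFT bin (brute force; fine for small N)."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(sig))
                   for n, s in enumerate(sig)))

# The strongest envelope-spectrum line below the carrier identifies the fault
peak_bin = max(range(1, 100), key=lambda k: dft_mag(env, k))
peak_hz = peak_bin * FS / N
```

The PFS-assisted second-level decomposition in the paper goes further, separating multiple such modulation frequencies when they coexist in one envelope.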
Roos, P A; Li, Xiaoqin; Smith, R P; Pipis, Jessica A; Fortier, T M; Cundiff, S T
2005-04-01
We demonstrate carrier-envelope phase stabilization of a mode-locked Ti:sapphire laser by use of quantum interference control of injected photocurrents in a semiconductor. No harmonic generation is required for this stabilization technique. Instead, interference between coexisting single- and two-photon absorption pathways in the semiconductor provides a phase comparison between different spectral components. The phase comparison, and the detection of the photocurrent that it produces, both occur within a single low-temperature-grown gallium arsenide sample. The carrier-envelope offset beat note fidelity is 30 dB in a 10-kHz resolution bandwidth. The out-of-loop phase-noise level is essentially identical to the best previous measurements with the standard self-referencing technique.
NASA Astrophysics Data System (ADS)
Yan, Ping; Kalscheuer, Thomas; Hedin, Peter; Garcia Juanatey, Maria A.
2017-04-01
We present a novel 2-D magnetotelluric (MT) inversion scheme, in which the local weights of the regularizing smoothness constraints are based on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. Successful application of the inversion to MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using the envelope attribute of the COSC reflection seismic profile helped to reduce the uncertainty of the interpretation of the main décollement by demonstrating that the associated alum shales may be much thinner than suggested by a previous inversion model. Thus, the new model supports the proposed location of a future borehole COSC-2 which is hoped to penetrate the main décollement and the underlying Precambrian basement.
Non-destructive inspections of illicit drugs in envelope using terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Li, Ning; Shen, Jingling; Lu, Meihong; Jia, Yan; Sun, Jinhai; Liang, Laishun; Shi, Yanning; Xu, Xiaoyu; Zhang, Cunlin
2006-09-01
The absorption spectra of two illicit drugs, methylenedioxyamphetamine (MDA) and methamphetamine (MA), inside and outside two conventional envelopes are studied using the terahertz time-domain spectroscopy technique. The characteristic absorption spectra of MDA and MA are obtained in the range of 0.2 THz to 2.5 THz. MDA has an obvious absorption peak at 1.41 THz, while MA has obvious absorption peaks at 1.23 THz, 1.67 THz, 1.84 THz and 2.43 THz. We find that the absorption peaks of MDA and MA inside the envelopes are almost the same as those measured outside, although the two envelopes themselves show some absorption in the THz waveband. This result indicates that the type of illicit drug in an envelope can be determined by identifying its characteristic absorption peaks, and THz time-domain spectroscopy is one of the most powerful candidates for illicit drug inspection.
Properties of an eclipsing double white dwarf binary NLTT 11748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, David L.; Walker, Arielle N.; Marsh, Thomas R.
2014-01-10
We present high-quality ULTRACAM photometry of the eclipsing detached double white dwarf binary NLTT 11748. This system consists of a carbon/oxygen white dwarf and an extremely low mass (<0.2 M⊙) helium-core white dwarf in a 5.6 hr orbit. To date, such extremely low-mass white dwarfs, which can have thin, stably burning outer layers, have been modeled via poorly constrained atmosphere and cooling calculations where uncertainties in the detailed structure can strongly influence the eventual fates of these systems when mass transfer begins. With precise (individual precision ≈1%), high-cadence (≈2 s), multicolor photometry of multiple primary and secondary eclipses spanning >1.5 yr, we constrain the masses and radii of both objects in the NLTT 11748 system to a statistical uncertainty of a few percent. However, we find that overall uncertainty in the thickness of the envelope of the secondary carbon/oxygen white dwarf leads to a larger (≈13%) systematic uncertainty in the primary He WD's mass. Over the full range of possible envelope thicknesses, we find that our primary mass (0.136-0.162 M⊙) and surface gravity (log(g) = 6.32-6.38; radii are 0.0423-0.0433 R⊙) constraints do not agree with previous spectroscopic determinations. We use precise eclipse timing to detect the Rømer delay at 7σ significance, providing an additional weak constraint on the masses and limiting the eccentricity to e cos ω = (−4 ± 5) × 10⁻⁵. Finally, we use multicolor data to constrain the secondary's effective temperature (7600 ± 120 K) and cooling age (1.6-1.7 Gyr).
Oelze, Michael L.; Mamou, Jonathan
2017-01-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient, estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter and the effective acoustic concentration of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue.
Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and pre-clinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy. PMID:26761606
What is the Uncertainty in MODIS Aerosol Optical Depth in the Vicinity of Clouds?
NASA Technical Reports Server (NTRS)
Patadia, Falguni; Levy, Rob; Mattoo, Shana
2017-01-01
The MODIS dark-target (DT) algorithm retrieves aerosol optical depth (AOD) using a look-up table (LUT) approach. Global comparison of AOD (Collection 6) with ground-based sun photometers gives an Estimated Error (EE) of +/-(0.04 + 10%) over ocean. However, EE does not represent per-retrieval uncertainty. For retrievals that are biased high compared to AERONET, here we aim to closely examine the contribution of biases due to the presence of clouds and the per-pixel retrieval uncertainty. We have characterized AOD uncertainty at 550 nm due to the standard deviation of reflectance in the 10 km retrieval region, uncertainty related to gas (H2O, O3) absorption, surface albedo, and aerosol models. The uncertainty in retrieved AOD appears to lie within the estimated over-ocean error envelope of +/-(0.03 + 10%). Regions between broken clouds tend to have higher uncertainty. Compared to C6 AOD, a retrieval omitting observations in the vicinity of clouds (< or = 1 km) is biased by about +/- 0.05. For homogeneous aerosol distributions, clear-sky retrievals show near-zero bias. A close look at per-pixel reflectance histograms suggests the possibility of retrieval using median reflectance values.
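The ±(absolute + relative) error envelope quoted above is straightforward to apply per retrieval. A minimal sketch, assuming a truth value from a collocated sun photometer; the function name and the default terms of 0.03 + 10% are illustrative, not part of the MODIS code:

```python
def within_error_envelope(aod_retrieved, aod_truth, abs_term=0.03, rel_term=0.10):
    """True if the retrieval lies inside the +/-(abs_term + rel_term*truth) envelope."""
    half_width = abs_term + rel_term * aod_truth
    return abs(aod_retrieved - aod_truth) <= half_width
```

For a true AOD of 0.20 the envelope half-width is 0.05, so a retrieval of 0.25 passes while 0.30 does not.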
Flight test techniques for the X-29A aircraft
NASA Technical Reports Server (NTRS)
Hicks, John W.; Cooper, James M., Jr.; Sefic, Walter J.
1987-01-01
The X-29A advanced technology demonstrator is a single-seat, single-engine aircraft with a forward-swept wing. The aircraft incorporates many advanced technologies being considered for this country's next generation of aircraft. This unusual aircraft configuration, which had never been flown before, required a precise approach to flight envelope expansion. This paper describes the real-time analysis methods and flight test techniques used during the envelope expansion of the X-29A aircraft, including new and innovative approaches.
NASA Astrophysics Data System (ADS)
Pritykin, F. N.; Nebritov, V. I.
2017-06-01
A graphic database structure is proposed that specifies the shape and projected position of the work envelope of an android arm mechanism for various positions of forbidden zones that are known in advance. A technique for the analytical assignment of the work envelope, based on methods of analytical geometry and set theory, is presented. The studies can be applied to the creation of knowledge bases for intelligent android control systems functioning independently in sophisticated environments.
NASA Technical Reports Server (NTRS)
Chen, W. T.
1972-01-01
Technology developed for signal and data processing was applied to diagnostic techniques in the area of phonocardiography (PCG), the graphic recording of the sounds of the heart generated by the functioning of the aortic and ventricular valves. The relatively broad bandwidth of the PCG signal (20 to 2000 Hz) was reduced to less than 100 Hz by the use of a heart sound envelope. The process involves full-wave rectification of the PCG signal, envelope detection of the rectified wave, and low-pass filtering of the resultant envelope.
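The three-step chain described above (full-wave rectification, envelope detection, low-pass filtering) can be sketched in a few lines of numpy. In this sketch a single moving-average window stands in for both the envelope detector and the low-pass filter; the 20 ms window length is an illustrative assumption, not a value from the report:

```python
import numpy as np

def heart_sound_envelope(pcg, fs, window_ms=20.0):
    """Crude PCG envelope: full-wave rectify, then smooth with a
    moving average acting as the low-pass filter."""
    rectified = np.abs(pcg)                   # full-wave rectification
    n = max(1, int(fs * window_ms / 1000.0))  # smoothing window in samples
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

# toy input: a 100 Hz tone sampled at 2 kHz
fs = 2000
t = np.arange(0, 0.2, 1.0 / fs)
env = heart_sound_envelope(np.sin(2 * np.pi * 100.0 * t), fs)
```

The envelope of the rectified tone settles near its mean level of about 2/π ≈ 0.64, far below the carrier bandwidth, which is the bandwidth-reduction effect the abstract describes.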
Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem
2015-06-01
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data in the case study. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies; some electricity distribution companies may not report clear and accurate statistics to the government, so a sound approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when input/output data are uncertain.
Challenges in modeling the X-29 flight test performance
NASA Technical Reports Server (NTRS)
Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen
1987-01-01
Presented are methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. However, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.
Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.
2014-01-01
Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
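Since the comparison above keys on Cohen's kappa scores, a quick reminder of the statistic may help: kappa measures agreement corrected for chance, (observed agreement − chance agreement) / (1 − chance agreement). A minimal sketch for a binary presence/absence confusion matrix; the example matrix is illustrative, not data from the study:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_obs = np.trace(c) / n                                  # observed agreement
    p_chance = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_obs - p_chance) / (1.0 - p_chance)

kappa = cohens_kappa([[20, 5], [10, 15]])  # moderate agreement, kappa = 0.4
```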
Attosecond control of electronic processes by intense light fields.
Baltuska, A; Udem, Th; Uiberacker, M; Hentschel, M; Goulielmakis, E; Gohle, Ch; Holzwarth, R; Yakovlev, V S; Scrinzi, A; Hänsch, T W; Krausz, F
2003-02-06
The amplitude and frequency of laser light can be routinely measured and controlled on a femtosecond (10⁻¹⁵ s) timescale. However, in pulses comprising just a few wave cycles, the amplitude envelope and carrier frequency are not sufficient to characterize and control laser radiation, because evolution of the light field is also influenced by a shift of the carrier wave with respect to the pulse peak. This so-called carrier-envelope phase has been predicted and observed to affect strong-field phenomena, but random shot-to-shot shifts have prevented the reproducible guiding of atomic processes using the electric field of light. Here we report the generation of intense, few-cycle laser pulses with a stable carrier-envelope phase that permit the triggering and steering of microscopic motion with an ultimate precision limited only by quantum mechanical uncertainty. Using these reproducible light waveforms, we create light-induced atomic currents in ionized matter; the motion of the electronic wave packets can be controlled on timescales shorter than 250 attoseconds (250 × 10⁻¹⁸ s). This enables us to control the attosecond temporal structure of coherent soft X-ray emission produced by the atomic currents; these X-ray photons provide a sensitive and intuitive tool for determining the carrier-envelope phase.
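A back-of-the-envelope illustration of why the carrier-envelope phase matters only for few-cycle pulses: writing the field as E(t) = A(t) cos(2πft + φ) with a Gaussian envelope A(t), the field at the envelope peak equals the full amplitude for φ = 0 (a "cosine" pulse) but vanishes for φ = π/2 (a "sine" pulse). A sketch with illustrative numbers (5 fs pulse, 750 nm carrier), not the parameters of the experiment:

```python
import numpy as np

def few_cycle_field(t, fwhm, freq, cep):
    """E(t) = A(t) * cos(2*pi*freq*t + cep) with a Gaussian envelope A(t)."""
    envelope = np.exp(-2.0 * np.log(2.0) * (t / fwhm) ** 2)
    return envelope * np.cos(2.0 * np.pi * freq * t + cep)

fwhm = 5e-15                # 5 fs pulse duration
freq = 3e8 / 750e-9         # 750 nm carrier, ~4e14 Hz (about 2 cycles under the envelope)
e_cos = few_cycle_field(0.0, fwhm, freq, 0.0)        # field at envelope peak, cosine pulse
e_sin = few_cycle_field(0.0, fwhm, freq, np.pi / 2)  # field at envelope peak, sine pulse
```

For many-cycle pulses the extremum of |E(t)| is nearly independent of φ, which is why the effect only emerges in the few-cycle regime.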
Nano-ranged low-energy ion-beam-induced DNA transfer in biological cells
NASA Astrophysics Data System (ADS)
Yu, L. D.; Wongkham, W.; Prakrajang, K.; Sangwijit, K.; Inthanon, K.; Thongkumkoon, P.; Wanichapichart, P.; Anuntalabhochai, S.
2013-06-01
Low-energy ion beams at a few tens of keV were demonstrated to be able to induce exogenous macromolecules to transfer into plant and bacterial cells. In the process, an ion beam with well-controlled energy and fluence bombarded living cells to cause a certain degree of nanoscale damage in the cell envelope, facilitating the passage of macromolecules such as DNA through the envelope and into the cell. Consequently, the technique was applied to induce positive improvements in biological species. This physical DNA transfer method was highly efficient and carried less risk of side effects than chemical and biological methods. For a better understanding of the mechanisms involved in the process, a systematic study of the mechanisms was carried out. Applications of the technique were also expanded from DNA transfer in plant and bacterial cells to DNA transfection in human cancer cells, potentially for stem cell therapy purposes. The low-energy nitrogen and argon ion beams applied in our experiments had ranges of 100 nm or less in the cell envelope membrane, which is composed mainly of polymeric cellulose. The ion beam bombardment caused chain-scission-dominant damage in the polymer and changes in electrical properties, such as an increase in the impedance of the envelope membrane. These nano-modifications of the cell envelope ultimately enhanced the permeability of the envelope membrane to favor DNA transfer. This paper reports the details of our research in this direction.
A note on drillhole depths required for reliable heat flow determinations
Chapman, D.S.; Howell, J.; Sass, J.H.
1984-01-01
In general, there is a limiting depth in a drillhole above which the reliability of a single determination of heat flow decreases rapidly with decreasing depth and below which the statistical uncertainty of a heat flow determination does not change perceptibly with increasing depth. This feature has been established empirically for a test case comprising a group of twelve heat flow sites in the Republic of Zambia. The technique consists of constructing heat flow versus depth curves for individual sites by progressively discarding data from the lower part of the hole and recomputing heat flow from the remaining data. For the Zambian test case, the curves converge towards a uniform value of 67 ± 3 mW m⁻² when all available data are used, but values of heat flow calculated for shallow (<100 m) parts of the same holes range from 45 to 95 mW m⁻². The heat flow versus depth curves are enclosed by a perturbation envelope which has an amplitude of 40 mW m⁻² at the surface and decreases linearly to the noise level at 190 m. For the test region of Zambia a depth of 170 m is needed to guarantee a heat flow measurement within ±10% of the background regional value. It is reasonable to expect that this depth will be shallower in some regions and deeper in others. Features of heat flow perturbation envelopes can be used as quantitative reliability indices for heat flow studies.
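The truncation test described above is easy to reproduce on synthetic data: recompute heat flow (thermal conductivity times the least-squares temperature gradient) as deeper data are progressively discarded. A sketch with made-up hole values; the conductivity, gradient, and depth grid are illustrative, chosen so the series converges near the Zambian-style value of ~67 mW m⁻²:

```python
import numpy as np

def heat_flow_vs_cutoff(depths, temps, conductivity):
    """Heat flow recomputed as data below each cutoff depth are discarded.
    Heat flow = conductivity * least-squares temperature gradient."""
    q = []
    for cutoff in depths[2:]:                # keep at least three points
        mask = depths <= cutoff
        gradient = np.polyfit(depths[mask], temps[mask], 1)[0]
        q.append(conductivity * gradient)
    return np.array(q)

# synthetic hole: uniform 25 mK/m gradient, k = 2.7 W/(m K) -> 67.5 mW/m^2
depths = np.linspace(10.0, 300.0, 30)
temps = 12.0 + 0.025 * depths
q_mw = heat_flow_vs_cutoff(depths, temps, 2.7) * 1000.0   # mW/m^2
```

With a real temperature log, shallow perturbations would make the early entries of `q_mw` scatter widely before converging, which is exactly the envelope behavior the paper exploits.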
An Economic Wellbeing Index for the Spanish Provinces: A Data Envelopment Analysis Approach
ERIC Educational Resources Information Center
Murias, Pilar; Martinez, Fidel; De Miguel, Carlos
2006-01-01
This article presents the estimation of a synthetic economic wellbeing index using Data Envelopment Analysis (DEA). The DEA is a multidimensional technique that has its origins in efficiency analysis, but its usage within the social indicators context is particularly appropriate. It allows the researcher to take advantage of the inherent…
Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.
Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M
2010-11-01
Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.
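The smoothing idea above, penalizing envelope roughness in a least-squares pulse design, can be illustrated with a generic Tikhonov-style regularization on a first-difference operator. This is a sketch of the general technique, not the authors' exact formulation; the matrix sizes, random data, and penalty weight are arbitrary:

```python
import numpy as np

def smooth_design(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||D x||^2, where D is the first-difference
    operator penalizing sample-to-sample roughness of the designed waveform."""
    n = A.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]    # rows compute x_i - x_{i+1}
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)

def roughness(x):
    return float(np.sum(np.diff(x) ** 2))

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))                # stand-in for the excitation system matrix
b = rng.normal(size=40)                      # stand-in for the target excitation pattern
x_rough = smooth_design(A, b, 0.0)           # unregularized design
x_smooth = smooth_design(A, b, 10.0)         # roughness-penalized design
```

Increasing `lam` trades a small rise in the excitation residual for a smoother envelope, which is the property that reduces amplifier distortion.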
Sammarco, Angela; Konecny, Lynda M
2010-01-01
To examine the differences between Latina and Caucasian breast cancer survivors in perceived social support, uncertainty, and quality of life (QOL), and the differences between the cohorts in selected demographic variables. Descriptive, comparative study. Selected private hospitals and American Cancer Society units in a metropolitan area of the northeastern United States. 182 Caucasian and 98 Latina breast cancer survivors. Participants completed a personal data sheet, the Social Support Questionnaire, the Mishel Uncertainty in Illness Scale-Community Form, and the Ferrans and Powers QOL Index-Cancer Version III at home and returned the questionnaires to the investigators via postage-paid envelope. Perceived social support, uncertainty, and QOL. Caucasians reported significantly higher levels of total perceived social support and QOL than Latinas. Psychiatric illness comorbidity and lower level of education in Latinas were factors in the disparity of QOL. Nurses should be mindful of the essential association of perceived social support, uncertainty, and QOL in Latina breast cancer survivors and how Latinas differ from Caucasian breast cancer survivors. Factors such as cultural values, comorbidities, and education level likely influence perceived social support, uncertainty, and QOL.
Uncertainty Modeling for Robustness Analysis of Control Upset Prevention and Recovery Systems
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Khong, Thuan H.; Shin, Jong-Yeob; Kwatny, Harry; Chang, Bor-Chin; Balas, Gary J.
2005-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems (developed for failure detection, identification, and reconfiguration, as well as upset recovery) need to be evaluated over broad regions of the flight envelope and under extreme flight conditions, and should include various sources of uncertainty. However, formulation of linear fractional transformation (LFT) models for representing system uncertainty can be very difficult for complex parameter-dependent systems. This paper describes a preliminary LFT modeling software tool which uses a matrix-based computational approach that can be directly applied to parametric uncertainty problems involving multivariate matrix polynomial dependencies. Several examples are presented (including an F-16 at an extreme flight condition, a missile model, and a generic example with numerous cross-product terms), and comparisons are given with other LFT modeling tools that are currently available. The LFT modeling method and preliminary software tool presented in this paper are shown to compare favorably with these methods.
Kyriakou, Adamos; Neufeld, Esra; Werner, Beat; Székely, Gábor; Kuster, Niels
2015-01-01
Transcranial focused ultrasound (tcFUS) is an attractive noninvasive modality for neurosurgical interventions. The presence of the skull, however, compromises the efficiency of tcFUS therapy, as its heterogeneous nature and acoustic characteristics induce significant distortion of the acoustic energy deposition, focal shifts, and thermal gain decrease. Phased-array transducers allow for partial compensation of skull-induced aberrations by application of precalculated phase and amplitude corrections. An integrated numerical framework allowing for 3D full-wave, nonlinear acoustic and thermal simulations has been developed and applied to tcFUS. Simulations were performed to investigate the impact of skull aberrations, the possibility of extending the treatment envelope, and adverse secondary effects. The simulated setup comprised an idealized model of the ExAblate Neuro and a detailed MR-based anatomical head model. Four different approaches were employed to calculate aberration corrections (analytical calculation of the aberration corrections disregarding tissue heterogeneities; a semi-analytical ray-tracing approach compensating for the presence of the skull; and two simulation-based time-reversal approaches, with and without pressure amplitude corrections, which account for the entire anatomy). The impact of these approaches on the pressure and temperature distributions was evaluated for 22 brain targets. While the (semi-)analytical approaches failed to induce high pressures or ablative temperatures in any but the targets in the close vicinity of the geometric focus, the simulation-based approaches indicate the possibility of considerably extending the treatment envelope (including targets below the transducer level and locations several centimeters off the geometric focus), generation of sharper foci, and increased targeting accuracy.
While the prediction of achievable aberration correction appears to be unaffected by the detailed bone structure, proper consideration of inhomogeneity is required to predict the pressure distribution for given steering parameters. Simulation-based approaches to calculating aberration corrections may aid in extending the tcFUS treatment envelope as well as in predicting and avoiding secondary effects (standing waves, skull heating). Due to their superior performance, simulation-based techniques may prove invaluable in the amelioration of skull-induced aberration effects in tcFUS therapy. The next steps are to investigate shear-wave-induced effects in order to reliably exclude secondary hot-spots, and to develop comprehensive uncertainty assessment and validation procedures.
Van Vlack, C; Hughes, S
2007-04-20
Ultrashort pulse light-matter interactions in a semiconductor are investigated within the regime of resonant optical rectification. Using pulse envelope areas of around 1.5π-3.5π, a single-shot dependence on the carrier-envelope-offset phase (CEP) is demonstrated for 5 fs pulse durations. A characteristic phase map is predicted for several different frequency regimes using parameters for thin-film GaAs. We subsequently suggest a possible technique to extract the CEP, in both sign and amplitude, using a solid-state detector.
Oelze, Michael L; Mamou, Jonathan
2016-02-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. 
Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.
Anticipatory control: A software retrofit for current plant controllers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parthasarathy, S.; Parlos, A.G.; Atiya, A.F.
1993-01-01
The design and simulated testing of an artificial neural network (ANN)-based self-adapting controller for complex process systems are presented in this paper. The proposed controller employs concepts based on anticipatory systems, which have been widely used in the petroleum and chemical industries, and they are slowly finding their way into the power industry. In particular, model predictive control (MPC) is used for the systematic adaptation of the controller parameters to achieve desirable plant performance over the entire operating envelope. The versatile anticipatory control algorithm developed in this study is projected to enhance plant performance and lend robustness to drifts in plant parameters and to modeling uncertainties. This novel technique of integrating recurrent ANNs with a conventional controller structure appears capable of controlling complex, nonlinear, and nonminimum phase process systems. The direct, on-line adaptive control algorithm presented in this paper considers the plant response over a finite time horizon, diminishing the need for manual control or process interruption for controller gain tuning.
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique.
Kuperstein, Arthur S
2012-09-01
Fifty-two disinfected photostimulable phosphor (PSP) plates in plastic barrier envelopes were evaluated for contamination following placement in 30 study participants. Forty-four plates were acceptable for use in the study. The risk factor was the abundant oropharyngeal microbial flora and its ability to breach infection-control barrier sheaths. The presence of bacterial colonies on an agar plate was used to determine bacterial contamination, and the presence of any growth indicated failure of the barrier envelope. Before clinical placement of the plates, quality review of the PSP plates revealed defects in the integrity of 4 barrier envelopes, most likely caused by forceps-related damage or failure to achieve a uniform seal during manufacturing. These defects allowed substantial contamination. Contamination also occurred as a result of failure to extract the PSP plate from the barrier envelope cleanly. Of the 44 barriers with no obvious signs of a defect, 3 produced bacterial growth following culture. The authors concluded that digital sensors sheathed in barrier envelopes remain a potential source of contamination. PSP plates must be disinfected between removal from a contaminated barrier envelope (used in a patient) and placement in a new barrier envelope. In addition, placement into the barrier envelope should ideally be carried out under aseptic conditions. Finally, the integrity of each sealed barrier envelope must be verified visually.
Frequency analysis of uncertain structures using imprecise probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modares, Mehdi; Bergerson, Joshua
2015-01-01
Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.
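The flavor of interval Monte-Carlo frequency analysis can be conveyed with a one-degree-of-freedom sketch: for ω = sqrt(k/m), letting the stiffness mean range over an interval while its scatter follows a distribution yields interval bounds on ω at each probability level. This is not the paper's finite element formulation; the numbers, the normal distribution, and the shared-noise construction are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def frequency_bounds(m, k_mean_lo, k_mean_hi, k_cv=0.05, n=50_000, p=50.0):
    """Bounds on omega = sqrt(k/m) at probability level p when the stiffness
    mean lies in [k_mean_lo, k_mean_hi] (crude interval Monte Carlo)."""
    unit = rng.normal(1.0, k_cv, size=n)       # shared unit-mean stiffness scatter
    w_lo = np.sqrt(np.percentile(k_mean_lo * unit, p) / m)
    w_hi = np.sqrt(np.percentile(k_mean_hi * unit, p) / m)
    return w_lo, w_hi

# stiffness mean known only to [900, 1100] N/m, unit mass
w_lo, w_hi = frequency_bounds(m=1.0, k_mean_lo=900.0, k_mean_hi=1100.0)
```

Sweeping `p` from 0 to 100 traces out the two bounding CDFs of ω, i.e., a p-box on the natural frequency.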
Langenderfer, Joseph E; Rullkoetter, Paul J; Mell, Amy G; Laz, Peter J
2009-04-01
An accurate assessment of shoulder kinematics is useful for understanding healthy normal and pathological mechanics. Small variability in identifying and locating anatomical landmarks (ALs) has the potential to affect reported shoulder kinematics. The objectives of this study were to quantify the effect of landmark location variability on scapular and humeral kinematic descriptions for multiple subjects using probabilistic analysis methods, and to evaluate the consistency in results across multiple subjects. Data from 11 healthy subjects performing humeral elevation in the scapular plane were used to calculate Euler angles describing humeral and scapular kinematics. Probabilistic analyses were performed for each subject to simulate uncertainty in the locations of 13 upper-extremity ALs. For standard deviations of 4 mm in landmark location, the analysis predicted Euler angle envelopes between the 1 and 99 percentile bounds of up to 16.6 degrees. While absolute kinematics varied with the subject, the average 1-99% kinematic ranges for the motion were consistent across subjects, and sensitivity factors showed no statistically significant differences between subjects. The description of humeral kinematics was most sensitive to the location of landmarks on the thorax, while landmarks on the scapula had the greatest effect on the description of scapular elevation. The findings of this study can provide a better understanding of kinematic variability, which can aid in making accurate clinical diagnoses and refining kinematic measurement techniques.
A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis
NASA Astrophysics Data System (ADS)
Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui
2015-07-01
Auscultation of heart sound (HS) signals serves as an important primary approach to diagnose cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction technique has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper innovatively proposes a novel murmur-based HS feature extraction method since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with extracted features, the proposed method provides an attractive candidate in automatic HS auscultation.
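The Shannon envelope named above has a standard closed form, E = -x^2 * log(x^2), computed on a normalised signal and then smoothed. The sketch below applies it to a synthetic murmur-like burst; the window length, sampling rate, and test signal are illustrative assumptions, and the DWT stage of the paper's pipeline is omitted.

```python
import numpy as np

def shannon_envelope(x, win=64):
    """Shannon-energy envelope: E = -x^2 * log(x^2) on a normalised
    signal, smoothed with a moving-average window (sketch)."""
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)      # normalise to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)          # Shannon energy per sample
    kernel = np.ones(win) / win
    return np.convolve(e, kernel, mode="same")

# Synthetic "murmur": a 120 Hz burst riding on a quiet noise baseline.
fs = 2000
t = np.arange(fs) / fs
sig = 0.05 * np.random.default_rng(1).standard_normal(fs)
sig[800:1000] += np.sin(2 * np.pi * 120.0 * t[800:1000])

env = shannon_envelope(sig)
print(env[100], env[900])   # burst envelope far exceeds the baseline
```

Because the Shannon energy emphasises mid-intensity samples over both very small and near-maximal ones, the smoothed envelope tracks murmur morphology better than a plain squared-amplitude envelope.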
Unconventional nozzle tradeoff study. [space tug propulsion
NASA Technical Reports Server (NTRS)
Obrien, C. J.
1979-01-01
Plug cluster engine design, performance, weight, envelope, operational characteristics, development cost, and payload capability were evaluated, and comparisons were made with other space tug engine candidates using oxygen/hydrogen propellants. Parametric performance data were generated for existing developed or high technology thrust chambers clustered around a plug nozzle of very large diameter. The uncertainties in the performance prediction of plug cluster engines with large gaps between the modules (thrust chambers) were evaluated. The major uncertainty involves the aerodynamics of the flow from discrete nozzles and the failure of this flow to achieve the pressure ratio corresponding to the defined area ratio for a plug cluster. This uncertainty was reduced through a cluster design in which the plug contour is formed from the cluster of high area ratio bell nozzles that have been scarfed. Light-weight, high area ratio bell nozzles were achieved through the use of AGCarb (carbon-carbon cloth) nozzle extensions.
ERIC Educational Resources Information Center
Zheng, Henry Y.; Stewart, Alice A.
This study explores data envelopment analysis (DEA) as a tool for assessing and benchmarking the performance of public research universities. Using national databases, such as those maintained by the National Science Foundation and the National Center for Education Statistics, DEA analysis was conducted of the research and instructional outcomes…
Methodologies for Adaptive Flight Envelope Estimation and Protection
NASA Technical Reports Server (NTRS)
Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine
2009-01-01
This paper reports the latest development of several techniques for an adaptive flight envelope estimation and protection system for aircraft under damage upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damaged/faulted aircraft, and flight envelope estimation, real-time decision support can be executed autonomously to improve damage tolerance and flight recoverability. In particular, a bank of adaptive nonlinear fault detection and isolation estimators was developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitation of the impaired aircraft; and online learning neural networks were used to approximate selected aircraft dynamics, which were then inverted to estimate command margins. As off-line training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated in NASA's Generic Transport Model (GTM) simulation environment with simulated actuator faults. Simulation results and remarks on future work are presented.
Constraints for the Progenitor Masses of Historic Core-collapse Supernovae
NASA Astrophysics Data System (ADS)
Williams, Benjamin F.; Hillis, Tristan J.; Murphy, Jeremiah W.; Gilbert, Karoline; Dalcanton, Julianne J.; Dolphin, Andrew E.
2018-06-01
We age-date the stellar populations associated with 12 historic nearby core-collapse supernovae (CCSNe) and two supernova impostors; from these ages, we infer their initial masses and associated uncertainties. To do this, we have obtained new Hubble Space Telescope imaging covering these CCSNe. Using these images, we measure resolved stellar photometry for the stars surrounding the locations of the SNe. We then fit the color–magnitude distributions of this photometry with stellar evolution models to determine the ages of any young populations present. From these age distributions, we infer the most likely progenitor masses for all of the SNe in our sample. We find ages between 4 and 50 Myr, corresponding to masses from 7.5 to 59 solar masses. No SNe lacked a local young population. Our sample contains four SNe Ib/c; their masses have a wide range of values, suggesting that the progenitors of stripped-envelope SNe are binary systems. Both impostors have masses constrained to be ≲7.5 solar masses. In cases with precursor imaging measurements, we find that age-dating and precursor imaging give consistent progenitor masses. This consistency implies that, although the uncertainties for each technique are significantly different, the results of both are reliable to the measured uncertainties. We combine these new measurements with those from our previous work and find that the distribution of 25 core-collapse SNe progenitor masses is consistent with a standard Salpeter power-law mass function, no upper mass cutoff, and an assumed minimum mass for core-collapse of 7.5 M⊙. The distribution is consistent with a minimum mass <9.5 M⊙.
NASA Astrophysics Data System (ADS)
Li, Yupeng; Ding, Ding
2017-09-01
Benefiting from high spectral efficiency and a low peak-to-average power ratio, constant envelope orthogonal frequency division multiplexing (OFDM) is a promising technique in coherent optical communication. Polarization-division multiplexing (PDM) has been employed as an effective way to double the transmission capacity in the commercial 100 Gb/s PDM-QPSK system. We investigated constant envelope OFDM together with PDM. Simulation results show that the acceptable maximum launch power into the fiber improves by 10 and 6 dB for 80- and 320-km transmission, respectively (compared with the conventional PDM OFDM system). The maximum reachable distance of the constant envelope OFDM system is able to reach 800 km, and even 1200 km is reachable if an ideal erbium-doped fiber amplifier is employed.
Habitat classification modeling with incomplete data: Pushing the habitat envelope
Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.
2007-01-01
Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species', or species feature's (e.g., nest), observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models, which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D²adj), and cross-validation, and were also assessed for ecological relevance.
For all cases, habitat envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.
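Envelope-constrained pseudo-absence generation can be sketched in a few lines. This is one simple reading of the idea, under labelled assumptions: the attributes (elevation, canopy cover), the axis-aligned envelope, and the rule of drawing pseudo-absences only from cells outside the envelope are all illustrative stand-ins, not the paper's actual Landfire/FIA workflow.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical presence points in (elevation m, canopy cover %) space.
presence = np.column_stack([rng.normal(2400.0, 150.0, 60),
                            rng.normal(70.0, 8.0, 60)])

# Habitat envelope: axis-aligned bounds of the observed presences
# (a single-attribute-intersection stand-in for the paper's envelopes).
lo, hi = presence.min(axis=0), presence.max(axis=0)

# Candidate background cells spanning the whole study region.
candidates = np.column_stack([rng.uniform(1500.0, 3500.0, 5000),
                              rng.uniform(0.0, 100.0, 5000)])

# Ecologically based pseudo-absences: drawn only from cells falling
# OUTSIDE the habitat envelope, instead of at random over the full range.
outside = np.any((candidates < lo) | (candidates > hi), axis=1)
pseudo_absence = candidates[outside][:200]
print(len(pseudo_absence))
```

The presences plus these constrained pseudo-absences would then feed a logistic regression, as in the abstract; the null-envelope comparison corresponds to skipping the `outside` filter.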
Carrier-envelope phase-controlled quantum interference of injected photocurrents in semiconductors.
Fortier, T M; Roos, P A; Jones, D J; Cundiff, S T; Bhat, R D R; Sipe, J E
2004-04-09
We demonstrate quantum interference control of injected photocurrents in a semiconductor using the phase stabilized pulse train from a mode-locked Ti:sapphire laser. Measurement of the comb offset frequency via this technique results in a signal-to-noise ratio of 40 dB (10 Hz resolution bandwidth), enabling solid-state detection of carrier-envelope phase shifts of a Ti:sapphire oscillator.
Autonomous Object Manipulation Using a Soft Planar Grasping Manipulator
Katzschmann, Robert K.; Marchese, Andrew D.
2015-01-01
This article presents the development of an autonomous motion planning algorithm for a soft planar grasping manipulator capable of grasp-and-place operations by encapsulation with uncertainty in the position and shape of the object. The end effector of the soft manipulator is fabricated in one piece without weakening seams using lost-wax casting instead of the commonly used multilayer lamination process. The soft manipulation system can grasp randomly positioned objects within its reachable envelope and move them to a desired location without human intervention. The autonomous planning system leverages the compliance and continuum bending of the soft grasping manipulator to achieve repeatable grasps in the presence of uncertainty. A suite of experiments is presented that demonstrates the system's capabilities. PMID:27625916
Structural Polypeptides of the Granulosis Virus of Plodia interpunctella†
Tweeten, Kathleen A.; Bulla, Lee A.; Consigli, Richard A.
1980-01-01
Techniques were developed for the isolation and purification of three structural components of Plodia interpunctella granulosis virus: granulin, enveloped nucleocapsids, and nucleocapsids. The polypeptide composition and distribution of protein in each viral component were determined by sodium dodecyl sulfate discontinuous and gradient polyacrylamide slab gel electrophoresis. Enveloped nucleocapsids consisted of 15 structural proteins ranging in molecular weight from 12,600 to 97,300. Five of these proteins, having approximate molecular weights of 17,800, 39,700, 42,400, 48,200, and 97,300, were identified as envelope proteins by surface radioiodination of the enveloped nucleocapsids. Present in purified nucleocapsids were eight polypeptides. The predominant proteins in this structural component had molecular weights of 12,500 and 31,000. Whereas no evidence of polypeptide glycosylation was obtained, six of the viral proteins were observed to be phosphorylated. PMID:16789191
An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
1989-01-01
A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a sequential unconstrained minimization technique (SUMT). Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
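The Kreisselmeir-Steinhauser function referenced above is the smooth envelope KS(g; rho) = (1/rho) ln Σ exp(rho g_i), which overestimates max(g) by at most ln(n)/rho and tightens as rho grows. A minimal sketch follows; the function values and rho are illustrative, and a log-sum-exp shift is added for numerical stability:

```python
import numpy as np

def ks_envelope(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of function values g_i:
    a smooth, conservative upper bound on max(g), computed in a
    numerically stable shifted log-sum-exp form."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

# Illustrative set: e.g. reduced objectives and constraint values.
g = np.array([-1.0, 0.2, 0.15])
ks = ks_envelope(g, rho=50.0)
print(ks)   # slightly above max(g) = 0.2
```

Minimizing this single smooth envelope with an unconstrained method drives down the worst violator, which is the mechanism the abstract describes for locating a constrained minimum.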
Yu, Tae Jun; Hong, Kyung-Han; Choi, Hyun-Gyug; Sung, Jae Hee; Choi, Il Woo; Ko, Do-Kyeong; Lee, Jongmin; Kim, Junwon; Kim, Dong Eon; Nam, Chang Hee
2007-06-25
We demonstrate a long-term operation with reduced phase noise in the carrier-envelope-phase (CEP) stabilization process by employing a double feedback loop and an improved signal detection in the direct locking technique [Opt. Express 13, 2969 (2005)]. A homodyne balanced detection method is employed for efficiently suppressing the dc noise in the f-2f beat signal, which is converted into the CEP noise in the direct locking loop working at around zero carrier-envelope offset frequency (f(ceo)). In order to enhance the long-term stability, we have used the double feedback scheme that modulates both the oscillator pump power for a fast control and the intracavity-prism insertion depth for a slow and high-dynamic-range control. As a result, the in-loop phase jitter is reduced from 50 mrad of the previous result to 29 mrad, corresponding to 13 as in time scale, and the long-term stable operation is achieved for more than 12 hours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zenihiro, J.; Sakaguchi, H.; Murakami, T.
Cross sections and analyzing powers for polarized proton elastic scattering from {sup 58}Ni and {sup 204,206,208}Pb were measured at intermediate energy E{sub p}=295 MeV. An effective relativistic Love-Franey interaction is tuned to reproduce the {sup 58}Ni scattering data within the framework of the relativistic impulse approximation. The neutron densities of the lead isotopes are deduced using model-independent sum-of-Gaussians distributions. Their error envelopes are estimated by a new {chi}{sup 2} criterion including uncertainties associated with the reaction model. The systematic behaviors of extracted error envelopes of the neutron density distributions in {sup 204,206,208}Pb are presented. The extracted neutron and proton density distributions of {sup 208}Pb give a neutron skin thickness of {Delta}r{sub np}=0.211{sub -0.063}{sup +0.054} fm.
Effect of core cooling on the radius of sub-Neptune planets
NASA Astrophysics Data System (ADS)
Vazan, A.; Ormel, C. W.; Dominik, C.
2018-02-01
Sub-Neptune planets are very common in our Galaxy and show a large diversity in their mass-radius relation. In sub-Neptunes most of the planet mass is in the rocky part (hereafter, the core), which is surrounded by a modest hydrogen-helium envelope. As a result, the total initial heat content of such a planet is dominated by that of the core. Nonetheless, most studies contend that core cooling has only a minor effect on the radius evolution of the gaseous envelope because the cooling of the core is in sync with the envelope; that is, most of the initial heat is released early, on timescales of 10-100 Myr. In this Letter we examine the importance of the core cooling rate for the thermal evolution of the envelope. We relax the early core cooling assumption and present a model in which the core is characterized by two parameters: the initial temperature and the cooling time. We find that core cooling can significantly enhance the radius of the planet when it operates on a timescale similar to the observed age, i.e., gigayears. Consequently, the interpretation of the mass-radius observations of sub-Neptunes depends on the assumed core thermal properties and the uncertainty therein. The degeneracy between composition and core thermal properties can be reduced by obtaining better estimates of the planet ages (in addition to their radii and masses), as envisioned by future observations.
Parametric robust control and system identification: Unified approach
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1994-01-01
Despite significant advancement in the area of robust parametric control, the problem of synthesizing such a controller remains wide open. We attempt to give a solution to this important problem. Our approach captures the parametric uncertainty as an H∞ unstructured uncertainty so that H∞ synthesis techniques are applicable. Although these techniques cannot cope with the exact parametric uncertainty, they give a reasonable guideline for modeling the unstructured uncertainty that contains the parametric uncertainty. An additional loop-shaping technique is also introduced to relax its conservatism.
Carrier-envelope phase control over pathway interference in strong-field dissociation of H2+.
Kling, Nora G; Betsch, K J; Zohrabi, M; Zeng, S; Anis, F; Ablikim, U; Jochim, Bethany; Wang, Z; Kübel, M; Kling, M F; Carnes, K D; Esry, B D; Ben-Itzhak, I
2013-10-18
The dissociation of an H2+ molecular-ion beam by linearly polarized, carrier-envelope-phase-tagged 5 fs pulses at 4×10^14 W/cm² with a central wavelength of 730 nm was studied using a coincidence 3D momentum imaging technique. Carrier-envelope-phase-dependent asymmetries in the emission direction of H+ fragments relative to the laser polarization were observed. These asymmetries are caused by interference of odd and even photon number pathways, where net zero-photon and one-photon interference predominantly contributes at H+ + H kinetic energy releases of 0.2-0.45 eV, and net two-photon and one-photon interference contributes at 1.65-1.9 eV. These measurements of the benchmark H2+ molecule offer the distinct advantage that they can be quantitatively compared with ab initio theory to confirm our understanding of strong-field coherent control via the carrier-envelope phase.
NASA Astrophysics Data System (ADS)
O'Neill, George C.; Barratt, Eleanor L.; Hunt, Benjamin A. E.; Tewarie, Prejaas K.; Brookes, Matthew J.
2015-11-01
The human brain can be divided into multiple areas, each responsible for different aspects of behaviour. Healthy brain function relies upon efficient connectivity between these areas and, in recent years, neuroimaging has been revolutionised by an ability to estimate this connectivity. In this paper we discuss measurement of network connectivity using magnetoencephalography (MEG), a technique capable of imaging electrophysiological brain activity with good (~5 mm) spatial resolution and excellent (~1 ms) temporal resolution. The rich information content of MEG facilitates many disparate measures of connectivity between spatially separate regions and in this paper we discuss a single metric known as power envelope correlation. We review in detail the methodology required to measure power envelope correlation including (i) projection of MEG data into source space, (ii) removing confounds introduced by the MEG inverse problem and (iii) estimation of connectivity itself. In this way, we aim to provide researchers with a description of the key steps required to assess envelope based functional networks, which are thought to represent an intrinsic mode of coupling in the human brain. We highlight the principal findings of the techniques discussed, and furthermore, we show evidence that this method can probe how the brain forms and dissolves multiple transient networks on a rapid timescale in order to support current processing demand. Overall, power envelope correlation offers a unique and verifiable means to gain novel insights into network coordination and is proving to be of significant value in elucidating the neural dynamics of the human connectome in health and disease.
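Power envelope correlation can be sketched in a few lines: extract each signal's amplitude envelope from its analytic signal, then correlate the envelopes. The example below uses an FFT-based Hilbert transform and synthetic carriers that share one slow amplitude envelope; the source-space projection and leakage-correction steps the abstract stresses are omitted, and the signals are illustrative, not MEG data.

```python
import numpy as np

def analytic_envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert
    transform); assumes an even-length real input."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def envelope_correlation(x, y):
    """Pearson correlation between the two amplitude envelopes."""
    ex, ey = analytic_envelope(x), analytic_envelope(y)
    return np.corrcoef(ex, ey)[0, 1]

# Two carriers at different frequencies sharing one slow envelope:
# the raw signals are nearly uncorrelated, but their envelopes co-vary.
t = np.arange(4096) / 1000.0
amp = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
x = amp * np.sin(2 * np.pi * 40.0 * t)
y = amp * np.sin(2 * np.pi * 47.0 * t)
print(envelope_correlation(x, y))
```

This separation, low raw correlation but high envelope correlation, is exactly why envelope coupling can reveal slow network coordination that phase-sensitive measures miss.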
Method to manage integration error in the Green-Kubo method.
Oliveira, Laura de Sousa; Greaney, P Alex
2017-02-01
The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
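The random-walk behaviour of the integrated noise described above can be seen in a toy model: add white noise to an exponentially decaying autocorrelation, take the running integral, and watch the spread across independent trials grow roughly as sigma·sqrt(t). All parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autocorrelation: exponential decay (the physics) plus white noise
# (the sampling error that survives finite-time averaging).
n, tau, sigma = 2000, 50.0, 0.02
t = np.arange(n)
acf_true = np.exp(-t / tau)
trials = acf_true + sigma * rng.standard_normal((200, n))

# Running integral: the Green-Kubo estimate as a function of the cutoff.
# Past the correlation time, the noise integrates as a random walk.
integrals = np.cumsum(trials, axis=1)
spread = integrals.std(axis=0)          # empirical uncertainty envelope

# The envelope grows roughly like sigma * sqrt(t): quadrupling the
# cutoff roughly doubles the spread.
print(spread[400] / spread[100])
```

Characterizing this sqrt(t) envelope is what lets one pick a truncation point that trades systematic truncation error against unbiased integration noise, as the abstract proposes.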
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Ronaldo C.; D'Auria, Francesco; Alvim, Antonio Carlos M.
2002-07-01
The Code with the capability of Internal Assessment of Uncertainty (CIAU) is a tool proposed by the 'Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione (DIMNP)' of the University of Pisa. Other institutions, including the nuclear regulatory body of Brazil, 'Comissao Nacional de Energia Nuclear', contributed to the development of the tool. The CIAU aims at providing the currently available Relap5/Mod3.2 system code with the integrated capability of performing not only relevant transient calculations but also the related estimates of uncertainty bands. The Uncertainty Methodology based on Accuracy Extrapolation (UMAE) is used to characterize the uncertainty in the prediction of system code calculations for light water reactors and is internally coupled with the above system code. Following an overview of the CIAU development, the present paper deals with the independent qualification of the tool. The qualification test is performed by estimating the uncertainty bands that should envelope the prediction of the Angra 1 NPP transient RES-11.99, originated by an inadvertent complete load rejection that caused the reactor scram when the unit was operating at 99% of nominal power. The current limitation of the 'error' database implemented into the CIAU prevented a final demonstration of the qualification. However, all the steps of the qualification process are demonstrated. (authors)
Study of synthesis techniques for insensitive aircraft control systems
NASA Technical Reports Server (NTRS)
Harvey, C. A.; Pope, R. E.
1977-01-01
Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M_w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.
X-31 high angle of attack control system performance
NASA Technical Reports Server (NTRS)
Huber, Peter; Seamount, Patricia
1994-01-01
The design goals for the X-31 flight control system were: (1) level 1 handling qualities during post-stall maneuvering (30 to 70 degrees angle-of-attack); (2) thrust vectoring to enhance performance across the flight envelope; and (3) adequate pitch-down authority at high angle-of-attack. Additional performance goals are discussed. A description of the flight control system is presented, highlighting flight control system features in the pitch and roll axes and X-31 thrust vectoring characteristics. The high angle-of-attack envelope clearance approach is described, including a brief explanation of analysis techniques and tools, and problems encountered during envelope expansion are discussed. This presentation emphasizes control system solutions to problems encountered in envelope expansion. An essentially 'carefree' envelope was cleared for the close-in-combat demonstrator phase. High angle-of-attack flying qualities maneuvers are currently being flown and evaluated. These results are compared with pilot opinions expressed during the close-in-combat program and with results obtained from the F-18 HARV for identical maneuvers. The status and preliminary results of these tests are discussed.
García-Alonso, Carlos; Pérez-Naranjo, Leonor
2009-01-01
Introduction: Knowledge management, based on information transfer between experts and analysts, is crucial for the validity and usability of data envelopment analysis (DEA). Aim: To design and develop a methodology (i) to assess the technical efficiency of small health areas (SHA) in an uncertainty environment, and (ii) to transfer information between experts and operational models, in both directions, for improving expert knowledge. Method: A procedure derived from knowledge discovery from data (KDD) is used to select, interpret and weigh DEA inputs and outputs. Based on KDD results, an expert-driven Monte-Carlo DEA model has been designed to assess the technical efficiency of SHA in Andalusia. Results: In terms of probability, SHA 29 is the most efficient, whereas SHA 22 is very inefficient. 73% of the analysed SHA have a probability of being efficient (Pe) >0.9 and 18% <0.5. Conclusions: Expert knowledge is necessary to design and validate any operational model. KDD techniques facilitate the transfer of information from experts to any operational model, and results obtained from the latter improve expert knowledge.
Imran, Tayyab; Lee, Yong S; Nam, Chang H; Hong, Kyung-Han; Yu, Tae J; Sung, Jae H
2007-01-08
We have stabilized and electronically controlled the carrier-envelope phase (CEP) of high-power femtosecond laser pulses, generated in a grating-based chirped-pulse amplification kHz Ti:sapphire laser, using the direct locking technique [Opt. Express 13, 2969 (2005)] combined with a slow feedback loop. An f-2f spectral interferometer has shown the CEP stabilities of 1.2 rad with the direct locking loop applied to the oscillator and of 180 mrad with an additional slow feedback loop, respectively. The electronic CEP modulations that can be easily realized in the direct locking loop are also demonstrated with the amplified pulses.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of landslide susceptibility model is decomposed and attributed to model's criteria weights.
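The AHP weighting step mentioned above is commonly computed as the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio as a sanity check. The 3-criterion matrix below is a hypothetical example on the Saaty scale, not data from the study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty scale): criterion A is
# judged 3x as important as B and 5x as important as C.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

# AHP criteria weights: principal eigenvector of the comparison matrix,
# normalised to sum to one.
vals, vecs = np.linalg.eig(A)
w = np.abs(vecs[:, np.argmax(vals.real)].real)
w /= w.sum()
print(w)   # roughly [0.65, 0.23, 0.12]

# Consistency ratio (random index RI = 0.58 for n = 3, Saaty's table):
# CR below ~0.1 indicates acceptably consistent judgments.
lam = vals.real.max()
ci = (lam - 3.0) / (3.0 - 1.0)
cr = ci / 0.58
print(cr)
```

Perturbing such judgment matrices and recomputing the weights is one way to drive the Monte Carlo weight-sensitivity analysis the abstract describes.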
Hospitals Productivity Measurement Using Data Envelopment Analysis Technique.
Torabipour, Amin; Najarzadeh, Maryam; Arab, Mohammad; Farzianpour, Freshteh; Ghasemzadeh, Roya
2014-11-01
This study aimed to measure hospital productivity using the data envelopment analysis (DEA) technique and Malmquist indices. This is a cross-sectional study using panel data over the 4-year period from 2007 to 2010. The research was implemented in 12 teaching and non-teaching hospitals of Ahvaz County. The DEA technique and the Malmquist indices with an input-orientation approach were used to analyze the data and estimate productivity. Data were analyzed using SPSS.18 and DEAP.2 software. Six hospitals (50%) had a value lower than 1, which represents an increase in total productivity; the other hospitals were non-productive. The average total productivity factor (TPF) was 1.024 for all hospitals, which represents a decrease in efficiency of 2.4% from 2007 to 2010. The average technical, technologic, scale, and managerial efficiency changes were 0.989, 1.008, 1.028, and 0.996, respectively. There was no significant difference in mean productivity changes between teaching and non-teaching hospitals (P>0.05), except in 2009. The productivity rate of the hospitals generally showed an increasing trend. However, the overall average productivity of the hospitals decreased. Among the components of total productivity, variation in technological efficiency had the greatest impact on the reduction of average total productivity.
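The input-oriented DEA efficiency underlying such studies can be illustrated with the CCR envelopment formulation solved as a linear program. The hospital inputs and outputs below are invented for illustration, and this sketch uses SciPy rather than the study's actual software (SPSS/DEAP):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hospital data: rows are inputs/outputs, columns are hospitals (DMUs).
X = np.array([[100.0,  80.0,  60.0],   # input 1: beds
              [200.0, 150.0, 140.0]])  # input 2: staff
Y = np.array([[ 90.0,  70.0,  60.0],   # output 1: admissions
              [300.0, 250.0, 180.0]])  # output 2: outpatient visits

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o: min theta s.t. a convex cone of
    peers produces at least y_o using at most theta * x_o. 1.0 = efficient."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambda_1..n]
    A_in = np.hstack([-X[:, [o]], X])           # X @ lam <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y @ lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

effs = [ccr_efficiency(o) for o in range(X.shape[1])]
print([round(e, 3) for e in effs])
```

A Malmquist index would then compare such efficiencies (and frontier shifts) between two periods; the single-period efficiency above is the building block.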
Cameron, Sharon; Chong-White, Nicky; Mealings, Kiri; Beechey, Tim; Dillon, Harvey; Young, Taegan
2018-02-01
Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units. The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, participants identified whether one, two, or three pseudo-syllables were heard. This article describes the development of the ParSE stimuli and test protocols, and the collection of normative and test-retest reliability data. Participants were 11 adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo); there were 73 males and 72 females. Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was related to the modulation depth at which syllables can be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflected a greater level of uncertainty. Age effects were determined based on raw scores. z Scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance. Across participants, the performance criterion (UBUR) was met with a median modulation depth of 42%.
The effect of age on the UBUR was significant (p < 0.00001). The UBUR ranged from 50% modulation depth for 6-yr-olds to 25% for adults. Children aged 6-10 had significantly higher uncertainty region boundaries than adults. A skewed distribution toward negative performance occurred (p = 0.00007). There was no significant difference in performance on the ParSE between males and females (p = 0.60). Test-retest z scores were strongly correlated (r = 0.68, p < 0.0000001). The ParSE normative data show that the ability to identify syllable boundaries based on changes in amplitude modulation improves with age, and that some children in the general population have performance much worse than their age peers. The test is suitable for use in planned studies in a clinical population. American Academy of Audiology
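The psychometric-function fit and the 88%-accuracy criterion can be sketched with a logistic PF rising from the 3AFC chance level (1/3). The proportions correct below are invented, and the real ParSE software's fitting details may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

depths = np.linspace(0, 100, 11)        # modulation depth (%), 11-step continuum
# Hypothetical proportions correct from a 3AFC run (chance level = 1/3).
pc = np.array([0.34, 0.35, 0.40, 0.55, 0.72, 0.85, 0.93, 0.97, 0.99, 1.00, 1.00])

def pf(x, mu, s):
    # Logistic psychometric function rising from chance (1/3) to 1.
    return 1/3 + (2/3) / (1 + np.exp(-(x - mu) / s))

(mu, s), _ = curve_fit(pf, depths, pc, p0=[40.0, 10.0])

# Depth at which accuracy reaches 88%, analogous to the UBUR criterion:
# invert pf(x) = target for x.
target = 0.88
ubur = mu - s * np.log((2/3) / (target - 1/3) - 1)
print(round(ubur, 1))
```

A shallower slope (larger `s`) pushes the 88% point further from the midpoint `mu`, which is the sense in which a shallow PF reflects greater uncertainty.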
Analysis of the readout of a high rate MWPC
NASA Astrophysics Data System (ADS)
Camerini, P.; Grion, N.; Rui, R.; Sheffer, G.; Openshaw, R.
1990-06-01
An analytical method to reduce the raw data supplied by a high-speed multiwire proportional chamber (MWPC) is presented. The results obtained with the MWPC and the associated readout system, LeCroy PCOS III, when monitoring a high-intensity flux of positive pions delivered by the M11 channel at TRIUMF are discussed. The method allows the flux intensity, the beam envelope, and the detector efficiency to be determined with low uncertainty (a few %) for intense particle beams (>10^7 particles/s).
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois, resulted in a plume in a confined aquifer that is more than 4 km long and has impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that do not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for uncertainty analysis of this case study because of its inability to find parameter sets that meet the predefined physical criteria.
Successful results are achieved using a Dynamically Dimensioned Search (DDS) sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
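The role of robust M-estimators in down-weighting erroneous observations can be illustrated on a toy linear problem; the actual arrival-time model and data are far more complex, and the synthetic data, outliers, and `f_scale` value here are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy calibration problem: linear model y = a*t + b with two gross outliers,
# standing in for biased well observations.
t = np.linspace(0, 10, 20)
y = 2.0 * t + 1.0 + rng.normal(0, 0.2, t.size)
y[3] += 8.0    # gross positive outlier
y[15] -= 6.0   # gross negative outlier

def residuals(p):
    a, b = p
    return a * t + b - y

ls = least_squares(residuals, x0=[1.0, 0.0])                       # plain L2
rob = least_squares(residuals, x0=[1.0, 0.0],                      # Huber M-estimator
                    loss='huber', f_scale=0.5)

print(ls.x.round(2), rob.x.round(2))
```

The Huber loss treats residuals beyond `f_scale` linearly rather than quadratically, so the two outliers barely pull the robust estimate away from the true slope of 2, whereas the L2 fit is visibly biased by them.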
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
NASA Astrophysics Data System (ADS)
Kumar Singh, Vinay; Dalal, U. D.
2017-06-01
To inhibit the effect of the non-linearity of LEDs, which leads to a significant increase in the peak-to-average power ratio (PAPR) of OFDM signals in visible light communication (VLC), we propose a frequency-modulated constant-envelope OFDM (FM CE-OFDM) technique. The abrupt amplitude variations of the OFDM signal are frequency modulated before being applied to the LED for electro-optical conversion, resulting in a constant-envelope signal. At a sufficient DC bias, this constant-envelope signal keeps the LED in its linear region of operation. The proposed technique reduces the PAPR to the lowest possible value, ≈0 dB. We theoretically analyze the proposed system and perform numerical simulations to assess its enhancement. The optimal modulation index is found to be 0.3. The metric for evaluating phase discontinuity is derived and is found to be smaller for FM CE-OFDM than for phase-modulated (PM) CE-OFDM. The receiver sensitivity is improved by 1.6 dB at a transmission distance of 2 m for FM CE-OFDM compared with PM CE-OFDM at the FEC threshold. We compare the BER performance of ideal OFDM (without the LED non-linearity), power back-off OFDM, PM CE-OFDM, and FM CE-OFDM in an optical wireless channel (OWC) scenario. FM CE-OFDM shows an improvement of 2.1 dB in SNR at the FEC threshold compared with PM CE-OFDM. It also shows an improvement of 11 dB when compared with the power back-off technique used in VLC systems for 10 dB power back-off.
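The PAPR reduction can be sketched numerically: frequency modulation maps the high-PAPR OFDM waveform into the phase of a unit-magnitude complex envelope. The parameters here (64 QPSK subcarriers, the normalization) are assumptions, and the real system would add a DC bias and intensity modulation for the LED:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical baseband OFDM symbol: random QPSK on 64 subcarriers.
N = 64
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk) * np.sqrt(N)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# FM CE-OFDM: the (real, normalized) OFDM waveform drives the instantaneous
# frequency; the transmitted complex envelope then has constant magnitude.
h = 0.3                                     # modulation index, as in the paper
m = ofdm.real / np.max(np.abs(ofdm.real))
phase = 2 * np.pi * h * np.cumsum(m)        # phase = integral of the message
ce = np.exp(1j * phase)

print(round(papr_db(ofdm), 1), round(papr_db(ce), 1))
```

Since |ce| = 1 at every sample, its PAPR is 0 dB by construction, whereas the raw OFDM waveform typically shows several dB of PAPR.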
Flight testing a V/STOL aircraft to identify a full-envelope aerodynamic model
NASA Technical Reports Server (NTRS)
Mcnally, B. David; Bach, Ralph E., Jr.
1988-01-01
Flight-test techniques are being used to generate a database for identification of a full-envelope aerodynamic model of a V/STOL fighter aircraft, the YAV-8B Harrier. The flight envelope to be modeled includes hover, transition to conventional flight and back to hover, STOL operation, and normal cruise. Standard V/STOL procedures, such as vertical takeoffs and landings and short takeoffs and landings, are used to gather data in the powered-lift flight regime. Long (3 to 5 min) maneuvers that include a variety of input types are used to obtain large-amplitude control and response excitations. The aircraft is under continuous radar tracking; a laser tracker is used for V/STOL operations near the ground. Tracking data are used with state-estimation techniques to check data consistency and to derive unmeasured variables, for example, angular accelerations. A propulsion model of the YAV-8B's engine and reaction control system is used to isolate the aerodynamic forces and moments for model identification. Representative V/STOL flight data are presented, and the processing of a typical short takeoff and slow landing maneuver is illustrated.
NASA Technical Reports Server (NTRS)
Brown, John C.; Fox, Geoffrey K.
1989-01-01
The depolarizing and occultation effects of a finite spherical light source on the polarization of light Thomson-scattered from a flat circumstellar envelope seen edge-on are analyzed. The analysis shows that neglect of the finite size of the light source leads to a gross overestimate of the polarization for a given disk geometry. By including occultation and depolarization, it is found that B-star envelopes are necessarily highly flattened disk-type structures. For a disk viewed edge-on, the effect of occultation reduces the polarization more than the inclusion of the depolarization factor alone. Analysis of a one-dimensional plume leads to a powerful technique that permits the electron density distribution to be explicitly obtained from the polarimetric data.
Nerot, A; Skalli, W; Wang, X
2016-10-03
Recent progress in 3D scanning technologies allows easy access to the 3D human body envelope. To create personalized human models with an articulated linkage for realistic re-posturing and motion analyses, an accurate estimation of internal skeleton points, including joint centers, from the external envelope is required. For this research project, 3D reconstructions of both the internal skeleton and the external envelope from low-dose biplanar X-rays of 40 male adults were obtained. Using principal component analysis (PCA), a low-dimensional dataset was used to predict internal points of the upper body from the trunk envelope. A least squares method was used to find PC scores that fit the PCA-based model to the envelope of a new subject. To validate the proposed approach, estimated internal points were evaluated using a leave-one-out (LOO) procedure, i.e. successively considering each individual in our dataset as an extra subject. In addition, different methods were proposed to reduce the variability in the data and improve the performance of the PCA-based prediction. The best method was considered to be the one providing the smallest errors between estimated and reference internal points, with an average error of 8.3 mm anterior-posteriorly, 6.7 mm laterally, and 6.5 mm vertically. As the proposed approach relies on few or no bony landmarks, it could be easily applicable and generalizable to surface scans from any device. Combined with automatic body scanning techniques, this study could potentially constitute a new step towards the automatic generation of external/internal subject-specific manikins. Copyright © 2016 Elsevier Ltd. All rights reserved.
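The PCA-based prediction and its leave-one-out validation can be sketched on synthetic stand-in data; the real study used 3D envelope and skeleton coordinates from biplanar X-rays, whereas the vectors, dimensions, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: each "subject" is a vector of envelope coordinates
# followed by internal (skeleton) coordinates, generated from 2 latent factors.
n_subj, n_env, n_int = 40, 30, 10
latent = rng.normal(size=(n_subj, 2))
A = rng.normal(size=(2, n_env + n_int))
data = latent @ A + 0.01 * rng.normal(size=(n_subj, n_env + n_int))

def predict_internal(train, env_new, k=2):
    mean = train.mean(axis=0)
    U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
    comps = Vt[:k]                                   # principal axes
    # Least-squares PC scores using only the envelope block of the new subject.
    scores, *_ = np.linalg.lstsq(comps[:, :n_env].T,
                                 env_new - mean[:n_env], rcond=None)
    return mean[n_env:] + scores @ comps[:, n_env:]  # reconstructed internal block

# Leave-one-out evaluation, as in the paper's validation.
errs = []
for i in range(n_subj):
    train = np.delete(data, i, axis=0)
    pred = predict_internal(train, data[i, :n_env])
    errs.append(np.abs(pred - data[i, n_env:]).mean())
print(round(float(np.mean(errs)), 4))
```

The key idea carried over from the abstract is that PC scores are fitted against the observable (envelope) block only, and the same scores then reconstruct the hidden (internal) block.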
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose, falling into two broad categories: Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation (GLUE; Beven and Binley 1992), and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging, or MLBMA, approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA; Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques mentioned above (GLUE, MLBMA with KIC, MLBMA with BIC, and AICMA) for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. The pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
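The criterion-based side of this comparison can be sketched with Akaike weights; the AIC values and model predictions below are invented for illustration:

```python
import numpy as np

# Hypothetical AIC values for four alternative groundwater model conceptualizations.
aic = np.array([210.4, 212.1, 215.8, 209.7])
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)   # Akaike weights: exp(-delta/2), normalized
weights /= weights.sum()

# Model-averaged prediction and between-model variance for some quantity of
# interest (e.g. head at an observation well); the predictions are illustrative.
pred = np.array([12.3, 12.9, 11.8, 12.5])
avg = weights @ pred
var_between = weights @ (pred - avg) ** 2
print(weights.round(3), round(avg, 2), round(var_between, 3))
```

The same machinery applies with BIC or KIC in place of AIC; as the abstract notes, swapping the criterion can change the weights and ranks substantially.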
Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane
NASA Technical Reports Server (NTRS)
Gera, Joseph; Bosworth, John T.
1987-01-01
Initial envelope clearance and subsequent flight testing of a new, fully augmented airplane with an extremely high degree of static instability can place unusual demands on the flight test approach. Previous flight test experience with such airplanes is very limited or nonexistent. Safe and efficient flight testing may be further complicated by the multiplicity of control effectors present on this class of airplanes. This paper describes some novel flight test and analysis techniques in the flight dynamics and handling qualities area. These techniques were utilized during the initial flight envelope clearance of the X-29A aircraft and were largely responsible for the completion of the flight controls clearance program without any incidents or significant delays.
Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting
NASA Astrophysics Data System (ADS)
Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.
2009-04-01
In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty were introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests in which rainfall observations were used instead of forecasts. The 'model error' can be superimposed on the spread of a hydrometeorological ensemble forecast, especially in upstream catchments where forecast uncertainty depends strongly on the current predictability of the atmosphere. In Bavaria, two meteorological ensemble prediction systems are currently being tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics: 1. Upstream catchments with high influence of the weather forecast: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on the hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e. over all 'model error' distributions of the ensemble members) is calculated. e) From this distribution, the uncertainty range at a desired level (here: the 10% and 90% percentiles) is extracted and drawn as a forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed the 'lead forecast' is chosen and shown in addition to the uncertainty bounds.
This can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologist's advice. 2. Downstream catchments with low influence of the weather forecast: In downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and a large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and the ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. Here, additionally, the corresponding inflow hydrographs from all upstream catchments must be used. b) As for an upstream catchment, the uncertainty range is determined by combining the 'model error' and the ensemble member forecasts. c) In addition, the 'overall error' is superimposed on the 'lead forecast'. For reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments. d) From the resulting two uncertainty ranges (one from the ensemble forecast and the 'model error', one from the 'lead forecast' and the 'overall error'), the envelope is taken as the most prudent uncertainty range. In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of 10% and 90% percentiles. As in part I of this study, the methodology as well as the usefulness or uselessness of the resulting uncertainty ranges will be presented and discussed using typical examples.
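Steps a) to e) for an upstream catchment can be sketched as follows. The member count matches COSMO-LEPS (16), but the hydrograph shape, the 'model error' distribution, and the growth of its spread with lead time are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 16-member discharge forecast (m^3/s) over 24 hourly lead times.
lead_hours = np.arange(1, 25)
members = 100 + 30 * np.sin(lead_hours / 6.0)[None, :] \
          + rng.normal(0, 8, size=(16, lead_hours.size))

# Step c: sample an additive 'model error' whose spread grows with lead time,
# and superimpose it on every member at every timestep.
n_err = 200
sigma = 2.0 + 0.5 * lead_hours
err = rng.normal(0, sigma, size=(n_err, 1, lead_hours.size))
overall = (members[None, :, :] + err).reshape(-1, lead_hours.size)

# Steps d-e: pooled distribution per timestep, then the 10%/90% envelope.
lo = np.percentile(overall, 10, axis=0)
hi = np.percentile(overall, 90, axis=0)
print(round(float(hi[0] - lo[0]), 1), round(float(hi[-1] - lo[-1]), 1))
```

Because the assumed model-error spread grows with lead time, the 10%-90% envelope widens toward later timesteps, which is the qualitative behavior an operational forecast envelope should show.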
Rapid Automated Aircraft Simulation Model Updating from Flight Data
NASA Technical Reports Server (NTRS)
Brian, Geoff; Morelli, Eugene A.
2011-01-01
Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.
Circumstellar radio molecular lines
NASA Technical Reports Server (NTRS)
NGUYEN-QUANG-RIEU
1987-01-01
Radio molecular lines appear to be useful probes of the stellar environment. Silicon oxide masers provide information on the physical conditions in the immediate vicinity of the stellar photosphere. Valuable information on the physics operating in the envelope of IRC +10216 was recently obtained by high-sensitivity observations and detailed theoretical analyses. Infrared speckle interferometry in the molecular lines and in the continuum is helpful in the investigation of the inner region of the envelope. These techniques are discussed in terms of mass loss from late-type stars.
Mashiko, Hiroki; Gilbertson, Steve; Li, Chengquan; Khan, Sabih D; Shakya, Mahendra M; Moon, Eric; Chang, Zenghu
2008-03-14
We demonstrated a novel optical switch to control the high-order harmonic generation process so that single attosecond pulses can be generated with multiple-cycle pulses. The technique combines two powerful optical gating methods: polarization gating and two-color gating. An extreme ultraviolet supercontinuum supporting 130-attosecond pulses was generated with neon gas using 9 fs laser pulses. We discovered a unique dependence of the harmonic spectra on the carrier-envelope phase of the laser fields, which repeats every 2π radians.
A mission planning concept and mission planning system for future manned space missions
NASA Technical Reports Server (NTRS)
Wickler, Martin
1994-01-01
The international character of future manned space missions will compel the involvement of several international space agencies in mission planning tasks. Additionally, the community of users requires a higher degree of freedom for experiment planning. Both of these problems can be solved by a decentralized mission planning concept using the so-called 'envelope method,' by which resources are allocated to users by distributing resource profiles ('envelopes') which define resource availabilities at specified times. The users are essentially free to plan their activities independently of each other, provided that they stay within their envelopes. The new developments were aimed at refining the existing vague envelope concept into a practical method for decentralized planning. Selected critical functions were exercised by planning an example, founded on experience acquired by the MSCC during the Spacelab missions D-1 and D-2. The main activity regarding future mission planning tasks was to improve the existing MSCC mission planning system using new techniques. An electronic interface was developed to collect all formalized user inputs more effectively, along with an 'envelope generator' for the generation and manipulation of the resource envelopes. The existing scheduler and its database were successfully replaced by an artificial intelligence scheduler. This scheduler is not only capable of handling resource envelopes, but also uses a new technology based on neural networks. It is therefore well suited to solving future scheduling problems more efficiently. This prototype mission planning system was used to gain new practical experience with decentralized mission planning using the envelope method. In future steps, software tools will be optimized, and all data management planning activities will be embedded into the scheduler.
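The core of the envelope method, namely that users plan freely as long as their combined activities stay within the allocated resource profile, reduces to a simple feasibility check. The resource values and activities below are invented for illustration:

```python
def within_envelope(envelope, activities):
    """envelope: available resource (e.g. watts) per timestep.
    activities: (start, duration, demand) tuples planned by a user.
    Returns True if the summed demand never exceeds the envelope."""
    used = [0.0] * len(envelope)
    for start, dur, demand in activities:
        for t in range(start, start + dur):
            used[t] += demand
    return all(u <= avail for u, avail in zip(used, envelope))

envelope = [100, 100, 60, 60, 100]          # W available per hour
plan = [(0, 2, 70), (1, 3, 25), (3, 2, 30)]  # three experiment activities
print(within_envelope(envelope, plan))
```

A decentralized planner would run such a check locally against its own envelope, so no coordination with other users is needed as long as the check passes.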
Full-envelope aerodynamic modeling of the Harrier aircraft
NASA Technical Reports Server (NTRS)
Mcnally, B. David
1986-01-01
A project to identify a full-envelope model of the YAV-8B Harrier using flight-test and parameter identification techniques is described. As part of the research in advanced control and display concepts for V/STOL aircraft, a full-envelope aerodynamic model of the Harrier is identified using mathematical model structures and parameter identification methods. A global-polynomial model structure is used as a basis for the identification of the YAV-8B aerodynamic model. State estimation methods are used to ensure flight data consistency prior to parameter identification. Equation-error methods are used to identify model parameters. A fixed-base simulator is used extensively to develop flight test procedures and to validate parameter identification software. Using simple flight maneuvers, a simulated data set was created covering the YAV-8B flight envelope from about Mach 0.3 to 0.7 and about -5 to 15 deg angle of attack. A singular value decomposition implementation of the equation-error approach produced good parameter estimates from this simulated data set.
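The equation-error step with an SVD can be sketched for a single aerodynamic coefficient. The polynomial structure, parameter values, and noise level are illustrative, not the YAV-8B model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated "flight data": pitching-moment coefficient as a polynomial in
# angle of attack (alpha, rad) and elevator deflection (delta, rad).
alpha = rng.uniform(-0.09, 0.26, 500)       # roughly -5 to 15 deg
delta = rng.uniform(-0.17, 0.17, 500)
true_p = np.array([0.02, -0.5, 0.8, -1.1])  # hypothetical Cm0, Cm_a, Cm_a2, Cm_d
A = np.column_stack([np.ones_like(alpha), alpha, alpha**2, delta])
cm = A @ true_p + rng.normal(0, 0.002, alpha.size)

# Equation-error estimation via singular value decomposition: the least-squares
# solution is V diag(1/S) U^T cm, which is numerically robust for poorly
# conditioned regressor matrices.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
p_hat = Vt.T @ ((U.T @ cm) / S)
print(p_hat.round(3))
```

In the SVD form, small singular values expose near-collinear regressors (a real concern with global polynomial model structures) and can be truncated if necessary.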
NASA Astrophysics Data System (ADS)
Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar
2018-04-01
Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach based upon the Hilbert envelope and a zero frequency resonator for the detection of weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal, which also helps to localize the impulses better on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for the exact localization of the periodic impulses. The spectrum of the resonator output shows a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm. The proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish its effectiveness.
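The first step, and the appearance of the fault frequency in an envelope spectrum, can be sketched as follows. This sketch stops at the envelope spectrum and does not implement the paper's zero frequency resonator; the fault frequency, resonance, and noise level are invented:

```python
import numpy as np
from scipy.signal import hilbert

fs = 12000
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 100.0                         # hypothetical fault repetition rate (Hz)

# Simulated bearing signal: periodic impulses exciting a 3 kHz resonance,
# buried in broadband machinery noise.
rng = np.random.default_rng(6)
sig = rng.normal(0, 0.5, t.size)
for t0 in np.arange(0, 1.0, 1 / fault_freq):
    idx = t >= t0
    sig[idx] += np.exp(-800 * (t[idx] - t0)) * np.sin(2 * np.pi * 3000 * (t[idx] - t0))

# Step 1 of the paper: the normalised Hilbert envelope strengthens the impulses.
env = np.abs(hilbert(sig))
env = (env - env.mean()) / env.std()

# Envelope spectrum: the fault repetition rate appears as a clear peak.
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
band = (freqs > 20) & (freqs < 500)
peak = freqs[band][np.argmax(spec[band])]
print(peak)
```

The envelope demodulates the high-frequency resonance, so the low-frequency repetition rate of the impacts (and its harmonics) becomes visible even though it is absent from the raw spectrum.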
Circumstellar envelopes of Cepheids: a possible bias affecting the distance scale?
NASA Astrophysics Data System (ADS)
Kervella, Pierre; Gallenne, Alexandre; Mérand, Antoine
2013-02-01
Circumstellar envelopes (CSEs) have been detected around many Cepheids, first based on long-baseline interferometry, and now also using other observing techniques. These envelopes are particularly interesting for two reasons: their presence could impact the Cepheid distance scale, and they may be valuable tracers of stellar mass loss. Here we focus on their potential impact on the calibration of the Cepheid distance scale. We consider the photometric contribution of the envelopes in the visible, near-, and thermal-infrared domains. We conclude that the impact of CSEs on the apparent luminosities of Cepheids is negligible at visible wavelengths and generally weak (<5%) in the near-infrared (λ ~ 2 μm). In the thermal-infrared domain (λ ~ 8 μm), the flux contribution of the CSEs differs depending on the pulsation period: it is relatively weak (<15%) for stars with periods shorter than P ~ 10 days, but can reach ~ 30% for long-period Cepheids. We specifically discuss the long-period Galactic Cepheid RS Puppis, which exhibits a very large circumstellar, dusty envelope, and we conclude that this is not a representative case. Overall, the contribution of CSEs to the usual period-luminosity relations (from the visible to the K band) is mostly negligible. They could affect calibrations at longer wavelengths, although the presence of envelopes may have been partially taken into account in the existing empirical calibrations.
Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien
2015-04-01
Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded, all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To the best of our knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurement conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment.
A reference, or a sensitivity analysis to the fixed parameters of the streamgauging technique, remains very useful for estimating the uncertainty related to the (non-quantified) bias correction. In the absence of a reference, the uncertainty estimate is referenced to the average of all discharge measurements in the interlaboratory experiment, ignoring the technique bias. Simple equations can be used to assess the uncertainty of the uncertainty results, as a function of the number of participants and of repeated measurements. The interlaboratory method was applied to several interlaboratory experiments on ADCPs and current-meters mounted on wading rods, in streams of different sizes and aspects, with typically 10 to 30 instruments. The uncertainty results were consistent with the usual expert judgment and depended highly on the measurement environment. Approximately, the expanded uncertainties (within the 95% probability interval) were ±5% to ±10% for ADCPs in good or poor conditions, and ±10% to ±15% for current-meters in shallow creeks. Due to the specific limitations related to a slow measurement process and to small, natural streams, uncertainty results for current-meters were more uncertain than for ADCPs, for which the site-specific errors were significantly evidenced. The proposed method can be applied in a standardized way to a wide range of interlaboratory experiments conducted in contrasted environments for different streamgauging techniques. Ideally, an international open database would enhance the investigation of hydrological data uncertainties, according to the characteristics of the measurement conditions and procedures. Such a dataset could be used for implementing and validating uncertainty propagation methods in hydrometry.
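The core interlaboratory computation can be sketched as follows. This is a simplification of the ISO procedures the authors follow: the discharge values are hypothetical, and the correction of the between-participant spread for within-participant noise is omitted for brevity.

```python
import numpy as np

# Hypothetical interlaboratory data: rows = participants, columns = repeated
# simultaneous gaugings of the same (constant) discharge, in m3/s
q = np.array([
    [10.2, 10.4, 10.1],
    [ 9.8,  9.9, 10.0],
    [10.6, 10.5, 10.7],
    [10.0, 10.1,  9.9],
])

lab_means = q.mean(axis=1)
grand_mean = lab_means.mean()                # reference value (technique bias ignored)

s_r = np.sqrt(q.var(axis=1, ddof=1).mean())  # repeatability (within participant)
s_L = lab_means.std(ddof=1)                  # between-participant spread

u = np.sqrt(s_L**2 + s_r**2)                 # combined standard uncertainty, one gauging
U_rel = 2 * u / grand_mean * 100             # expanded (k=2, ~95%) uncertainty, percent
```

For these made-up numbers the expanded relative uncertainty comes out in the ±5-10% band the abstract reports for ADCPs.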
NASA Astrophysics Data System (ADS)
Saltas, Ippocratis D.; Sawicki, Ignacy; Lopes, Ilidio
2018-05-01
We use the most recent, complete and independent measurements of masses and radii of white dwarfs in binaries to bound the class of non-trivial modified gravity theories, viable after GW170817/GRB170817, using its effect on the mass-radius relation of the stars. We show that the uncertainty in the latest data is sufficiently small that residual evolutionary effects, most notably the effect of core composition, finite temperature and envelope structure, must now be accounted for if correct conclusions about the nature of gravity are to be made. We model corrections resulting from finite temperature and envelopes to a base Hamada-Salpeter cold equation of state and derive consistent bounds on the possible modifications of gravity in the stars' interiors, finding that the parameter quantifying the strength of the modification satisfies Y < 0.14 at 95% confidence, an improvement of a factor of three with respect to previous bounds. Finally, our analysis reveals some fundamental degeneracies between the theory of gravity and the precise chemical makeup of white dwarfs.
NASA Astrophysics Data System (ADS)
McCall, Keisha C.
Identification and monitoring of sub-tumor targets will be a critical step for optimal design and evaluation of cancer therapies in general and biologically targeted radiotherapy (dose-painting) in particular. Quantitative PET imaging may be an important tool for these applications. Currently, radiotherapy planning accounts for tumor motion by applying geometric margins. These margins create a motion envelope to encompass the most probable positions of the tumor, while also maintaining the appropriate tumor control and normal tissue complication probabilities. This motion envelope is effective for uniform dose prescriptions where the therapeutic dose is conformed to the external margins of the tumor. However, much research is needed to establish the equivalent margins for non-uniform fields, where multiple biological targets are present and each target is prescribed its own dose level. Additionally, the size of the biological targets and their close proximity make it impractical to apply planning margins on the sub-tumor level. Also, the extent of high dose regions must be limited to avoid excessive dose to the surrounding tissue. As such, this research project is an investigation of the uncertainty within quantitative PET images of moving and displaced dose-painting targets, and an investigation of the residual errors that remain after motion management. This included characterization of the changes in PET voxel-values as objects are moved relative to the discrete sampling interval of PET imaging systems (SPECIFIC AIM 1). Additionally, the repeatability of PET distributions and of delineated dose-painting targets was measured (SPECIFIC AIM 2). The effect of imaging uncertainty on the dose distributions designed using these images (SPECIFIC AIM 3) has also been investigated. This project also included analysis of methods to minimize motion during PET imaging and reduce the dosimetric impact of motion/position-induced imaging uncertainty (SPECIFIC AIM 4).
Impact of climate change on runoff pollution in urban environments
NASA Astrophysics Data System (ADS)
Coutu, S.; Kramer, S.; Barry, D. A.; Roudier, P.
2012-12-01
Runoff from urban environments is generally contaminated. These contaminants mostly originate from road traffic and building envelopes. Facade envelopes generate lead, zinc and even biocides, which are used for facade protection. Road traffic produces particles from tires and brakes. The transport of these pollutants to the environment is controlled by rainfall. The interval, duration and intensity of rainfall events are important as the dynamics of the pollutants are often modeled with non-linear buildup/washoff functions. Buildup occurs during dry weather when pollution accumulates, and is subsequently washed off at the time of the following rainfall, contaminating surface runoff. Climate predictions include modified rainfall distributions, with changes in both the number and intensity of events, even if the expected annual rainfall varies little. Consequently, pollutant concentrations in urban runoff driven by buildup/washoff processes will be affected by these changes in rainfall distributions. We investigated to what extent modifications in future rainfall distributions will impact the concentrations of pollutants present in urban surface runoff. The study used the example of Lausanne, Switzerland (temperate climate zone). Three emission scenarios (time horizon 2090), multiple combinations of RCM/GCM and modifications in rain event frequency were used to simulate future rainfall distributions with various characteristics. Simulated rainfall events were used as inputs for four pairs of buildup/washoff models, in order to compare future pollution concentrations in surface runoff. In this way, uncertainty in model structure was also investigated. Future concentrations were estimated to lie within ±40% of today's concentrations depending on the season and, importantly, on the choice of the RCM/GCM model. Overall, however, the dominant factor was the uncertainty inherent in buildup/washoff models, which dominated over the uncertainty in future rainfall distributions.
Consequently, the choice of a proper buildup/washoff model, with calibrated site-specific coefficients, is a major factor in modeling future runoff concentrations from contaminated urban surfaces.
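The buildup/washoff mechanism described above can be sketched with one common exponential model pair; the coefficients below are hypothetical illustrations, not the calibrated site-specific values the study calls for.

```python
import numpy as np

def buildup(b0, dry_days, b_max=50.0, k=0.4):
    # Exponential buildup toward b_max (kg/ha) during dry weather;
    # b_max and k are hypothetical site-specific coefficients
    return b_max - (b_max - b0) * np.exp(-k * dry_days)

def washoff(b, rain_mm, c=0.18):
    # Exponential washoff: the storm removes a rain-dependent fraction of b
    removed = b * (1.0 - np.exp(-c * rain_mm))
    return removed, b - removed

# One 7-day dry spell followed by a 12 mm storm
b = buildup(b0=5.0, dry_days=7)
load, b = washoff(b, rain_mm=12.0)   # pollutant load washed into surface runoff
```

Because both stages are nonlinear in the interval and intensity of rainfall, shifting the rainfall distribution changes the simulated runoff loads even at constant annual rainfall, which is the sensitivity the study exploits.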
NASA Astrophysics Data System (ADS)
Borghesani, P.; Antoni, J.
2017-06-01
Second-order cyclostationary (CS2) analysis has become popular in the field of machine diagnostics and a series of digital signal processing techniques have been developed to extract CS2 components from the background noise. Among those techniques, the squared envelope spectrum (SES) and the cyclic modulation spectrum (CMS) have gained popularity thanks to their high computational efficiency and simple implementation. The effectiveness of CMS and SES has been previously quantified based on the hypothesis of Gaussian background noise and has led to statistical tests for the presence of CS2 peaks in squared envelope spectra and cyclic modulation spectra. However, a recently established link of CMS with SES and of SES with kurtosis has exposed a potential weakness of those indicators in the case of highly leptokurtic background noise. This case is often present in practice when the machine is subjected to highly impulsive phenomena, either due to harsh operating conditions or to electric noise generated by power electronics and captured by the sensor. This study investigates and quantifies for the first time the effect of leptokurtic noise on the capabilities of SES and CMS, by analysing three progressively harsh situations: high kurtosis, infinite kurtosis and alpha-stable background noise (for which even first and second-order moments are not defined). Then the resilience of a recently proposed family of CS2 indicators, based on the log-envelope, is verified analytically, numerically and experimentally in the case of highly leptokurtic noise.
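For readers unfamiliar with the SES, a minimal sketch follows; the test signal and its modulation frequency are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    # SES: spectrum of the squared magnitude of the analytic signal
    se = np.abs(hilbert(x)) ** 2
    se = se - se.mean()                       # drop the DC component
    ses = np.abs(np.fft.rfft(se)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs, ses

# Amplitude-modulated carrier: a CS2 component at the 40 Hz modulation frequency
fs = 4000
t = np.arange(0, 2.0, 1 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 40 * t)) * np.cos(2 * np.pi * 800 * t)
freqs, ses = squared_envelope_spectrum(x, fs)
f_peak = freqs[np.argmax(ses)]                # expected at the modulation frequency
```

Because the squared envelope is a fourth-order quantity of the raw signal, heavy-tailed (leptokurtic) noise inflates its variance, which is exactly the weakness the log-envelope indicators are designed to avoid.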
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
Spectrum transformation for divergent iterations
NASA Technical Reports Server (NTRS)
Gupta, Murli M.
1991-01-01
Certain spectrum transformation techniques are described that can be used to transform a diverging iteration into a converging one. Two techniques, called spectrum scaling and spectrum enveloping, are considered, and it is discussed how to obtain the optimum values of the transformation parameters. Numerical examples are given to show how these techniques can transform diverging iterations into converging ones; they can also be used to accelerate the convergence of otherwise convergent iterations.
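The spectrum-scaling idea can be illustrated on a linear fixed-point iteration. The matrix and the scaling parameter below are hypothetical choices, and the report's procedure for obtaining optimum parameter values is not reproduced.

```python
import numpy as np

# Fixed-point iteration x <- G x + b whose matrix has eigenvalues 1.5 and 1.2:
# the plain iteration diverges (spectral radius > 1)
G = np.array([[1.5, 0.1],
              [0.0, 1.2]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(np.eye(2) - G, b)    # exact fixed point of x = G x + b

# Spectrum scaling (illustrative form): iterate x <- (1 - w) x + w (G x + b),
# which preserves the fixed point and maps each eigenvalue l of G to 1 - w + w*l.
# With w = -0.9 the transformed eigenvalues are 0.55 and 0.82, inside the unit disk.
w = -0.9
x = np.zeros(2)
for _ in range(200):
    x = (1 - w) * x + w * (G @ x + b)
```

The transformation moves the iteration spectrum without moving the solution, which is the essential property any such scaling must have.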
Comparison of closed and open methods of pneumoperitoneum in laparoscopic cholecystectomy.
Akbar, Mohammad; Khan, Ishtiaq Ali; Naveed, Danish; Khattak, Irfanuddin; Zafar, Arshad; Wazir, Muhammad Salim; Khan, Asif Nawaz; Zia-ur-Rehman
2008-01-01
Pneumoperitoneum is the first step in laparoscopic surgery, including cholecystectomy. The two commonly used methods to create pneumoperitoneum are the closed and open techniques. Both have advantages and disadvantages. The current study was designed to compare these two techniques in terms of safety and the time required to complete the procedure. This was a randomized controlled prospective study conducted at the Department of Surgery, Ayub Hospital Complex Abbottabad, from 1st June 2007 to 31st May 2008. Randomization into two groups was done using sealed envelopes containing the questionnaire. Seventy envelopes were kept in the cupboard, containing 35 proformas for group A and 35 for group B. An envelope was randomly fetched and opened upon selection of the patient after taking informed consent. Pneumoperitoneum was created by the closed technique in group A, and by the open technique in group B. The time required for successful pneumoperitoneum was calculated in each group. Failure to induce pneumoperitoneum was determined for each technique. The time required to close the wounds at completion, total operating time and injuries sustained during induction of pneumoperitoneum were compared between the two techniques. Of the 70 patients included in the study, 35 were in group A and 35 in group B. The mean time required for successful pneumoperitoneum was 9.17 minutes in group A and 8.11 minutes in group B. Total operating time ranged from 55 minutes to 130 minutes in group A and from 45 minutes to 110 minutes in group B. The mean total operating time was 78.34 and 67 minutes in groups A and B, respectively. The mean time needed to close the wound was 9.88 minutes in group A and 4.97 minutes in group B. Failure of the technique was noted in three patients in group A, while no failure was experienced in group B. In two cases in group A, minor complications during creation of pneumoperitoneum were observed, while in group B no complication occurred. No patient died in the study.
We concluded from this study that the open technique of pneumoperitoneum was less time-consuming and safer than the closed technique.
Range of earth structure nonuniqueness implied by body wave observations.
NASA Technical Reports Server (NTRS)
Wiggins, R. A.; Mcmechan, G. A.; Toksoz, M. N.
1973-01-01
The Herglotz-Wiechert integral for the direct inversion of ray parameter versus distance curves can be manipulated to find the envelope of all possible models consistent with geometrical body wave observations (travel time and ray parameter versus distance). Such an extremal inversion approach has been used to find the uncertainty bounds for the velocity structure in the mantle and core. It is found, for example, that there is an uncertainty of plus or minus 40 km in the radius of the inner core boundary, plus or minus 18 km at the core-mantle boundary, and plus or minus 35 km at the 435-km transition zone. The velocity uncertainty is about plus or minus 0.08 km/sec for P and S waves in the lower mantle and about plus or minus 0.20 km/sec in the core. Experiments with various combinations of ray types in the core indicate that rather crude observations of SKKS-SKS travel times confine the range of possible models far more dramatically than do the most precise estimates of PmKP travel times. Comparisons of results from extremal inversion and linearized perturbation inversions indicate that body wave behavior is too strongly nonlinear for linearized schemes to be effective for predicting uncertainty.
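For reference, the Herglotz-Wiechert inversion manipulated by the authors can be written, in standard notation (not copied from the paper), as

```latex
\ln\frac{r_0}{r_1}
  = \frac{1}{\pi}\int_0^{\Delta_1}
    \cosh^{-1}\!\left(\frac{p(\Delta)}{p_1}\right)\,\mathrm{d}\Delta ,
\qquad
p(\Delta) = \frac{r\,\sin i}{v(r)} = \frac{\mathrm{d}T}{\mathrm{d}\Delta},
```

where $r_0$ is the surface radius, $r_1$ the turning-point radius of the ray with parameter $p_1$, $i$ the incidence angle and $v(r)$ the wave speed. The extremal inversion then bounds the family of $v(r)$ profiles consistent with the observed travel time $T(\Delta)$ and ray parameter $p(\Delta)$ data.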
Cieslak, John A; Focia, Pamela J; Gross, Adrian
2010-02-23
Electron spin-echo envelope modulation (ESEEM) spectroscopy is a well-established technique for the study of naturally occurring paramagnetic metal centers. The technique has been used to study copper complexes, hemes, enzyme mechanisms, micellar water content, and water permeation profiles in membranes, among other applications. In the present study, we combine ESEEM spectroscopy with site-directed spin labeling (SDSL) and X-ray crystallography in order to evaluate the technique's potential as a structural tool to describe the native environment of membrane proteins. Using the KcsA potassium channel as a model system, we demonstrate that deuterium ESEEM can detect water permeation along the lipid-exposed surface of the KcsA outer helix. We further demonstrate that (31)P ESEEM is able to identify channel residues that interact with the phosphate headgroup of the lipid bilayer. In combination with X-ray crystallography, the (31)P data may be used to define the phosphate interaction surface of the protein. The results presented here establish ESEEM as a highly informative technique for SDSL studies of membrane proteins.
Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review
NASA Technical Reports Server (NTRS)
Tripp, John S.
1999-01-01
This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.
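A toy version of the calibration-and-prediction step can be sketched as follows. The 2-term model and data are hypothetical; the actual balances carry 6 sensitivities plus 156 interaction coefficients, and the paper's interval construction uses the proper t-quantiles rather than the factor 2 used here.

```python
import numpy as np

rng = np.random.default_rng(7)
F = np.linspace(0.0, 100.0, 30)            # applied calibration loads (N)
X = np.column_stack([F, F**2])             # sensitivity + one interaction-like term
y = 0.05 * F + 2e-5 * F**2 + rng.normal(0.0, 0.01, F.size)   # bridge output (mV)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # multivariate regression fit
dof = F.size - X.shape[1]
s2 = np.sum((y - X @ beta) ** 2) / dof         # residual variance
cov = s2 * np.linalg.inv(X.T @ X)              # coefficient covariance

x0 = np.array([50.0, 2500.0])                  # new reading at F = 50 N
pred = x0 @ beta                               # predicted output
half_width = 2.0 * np.sqrt(s2 + x0 @ cov @ x0) # ~95% prediction interval half-width
```

The coefficient covariance captures the calibration (bias) part of the uncertainty, while the residual variance captures the precision part, mirroring the separation discussed in the paper.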
An aircraft model for the AIAA controls design challenge
NASA Technical Reports Server (NTRS)
Brumbaugh, Randal W.
1991-01-01
A generic, state-of-the-art, high-performance aircraft model, including detailed, full-envelope, nonlinear aerodynamics, and full-envelope thrust and first-order engine response data, is described. While this model was primarily developed for the AIAA Controls Design Challenge, the availability of such a model provides a common focus for research in aeronautical control theory and methodology. An implementation of this model using the FORTRAN computer language, associated routines furnished with the aircraft model, and techniques for interfacing these routines to external procedures are also described. Figures showing vehicle geometry, surfaces, and sign conventions are included.
NASA Astrophysics Data System (ADS)
Sarkar, Debdeep; Srivastava, Kumar Vaibhav
2017-02-01
In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique for calculation of envelope correlation coefficient (ECC) of any arbitrary MIMO antenna system over wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits significant reduction in ECC computation time as compared to the FDTD-CGF frequency domain (FDTD-CGF-FD) scheme, for high frequency-resolution requirements. The proposed FDTD-CGF based schemes can be applied for accurate and fast prediction of wideband ECC response, instead of the conventional scattering parameter based techniques which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from commercial electromagnetic solver Ansys HFSS, to verify the validity of proposed approach.
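For context, the conventional scattering-parameter estimate of ECC that the FDTD-CGF schemes are meant to replace can be sketched as below. The S-parameter values are hypothetical, and the formula assumes a lossless two-port, which is one of the limitations the paper notes.

```python
import numpy as np

def ecc_from_s_params(S11, S21, S12, S22):
    # Conventional S-parameter estimate of the envelope correlation coefficient
    # for a lossless two-port MIMO antenna (Blanch-style formula)
    num = np.abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
    den = ((1 - abs(S11) ** 2 - abs(S21) ** 2)
           * (1 - abs(S12) ** 2 - abs(S22) ** 2))
    return num / den

# Hypothetical well-matched, well-isolated dipole pair at one frequency
rho_e = ecc_from_s_params(0.10 + 0.05j, 0.05 - 0.02j, 0.05 - 0.02j, 0.10 + 0.05j)
```

For lossy or poorly matched antennas this estimate breaks down, which motivates computing ECC from the radiated fields (here via FDTD and cross-correlation Green's functions) instead.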
Hsu, Yung-Heng; Chen, Dave Wei-Chih; Tai, Chun-Der; Chou, Ying-Chao; Liu, Shih-Jung; Ueng, Steve Wen-Neng; Chan, Err-Cheng
2014-01-01
We developed biodegradable drug-eluting nanofiber-enveloped implants that provided sustained release of vancomycin and ceftazidime. To prepare the biodegradable nanofibrous membranes, poly(D,L)-lactide-co-glycolide and the antibiotics were first dissolved in 1,1,1,3,3,3-hexafluoro-2-propanol. They were electrospun into biodegradable drug-eluting membranes, which were then enveloped on the surface of stainless plates. An elution method and a high-performance liquid chromatography assay were employed to characterize the in vivo and in vitro release rates of the antibiotics from the nanofiber-enveloped plates. The results showed that the biodegradable nanofiber-enveloped plates released high concentrations of vancomycin and ceftazidime (well above the minimum inhibitory concentration) for more than 3 and 8 weeks in vitro and in vivo, respectively. A bacterial inhibition test was carried out to determine the relative activity of the released antibiotics. The bioactivity ranged from 25% to 100%. In addition, the serum creatinine level remained within the normal range, suggesting that the high vancomycin concentration did not affect renal function. By adopting the electrospinning technique, we will be able to manufacture biodegradable drug-eluting implants for the long-term drug delivery of different antibiotics. PMID:25246790
NASA Astrophysics Data System (ADS)
Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.
2015-02-01
In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ˜0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
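The energy-independent scaling-factor fit can be sketched as a small least-squares problem. The data below are synthetic, and the log-space fit is an illustrative assumption, not the authors' exact procedure.

```python
import numpy as np

# Synthetic transmission data generated with a "true" cross-section scale of 1.005
rng = np.random.default_rng(1)
mu_t = np.linspace(0.5, 4.0, 20)                     # nominal optical depths (mu * t)
T_meas = np.exp(-1.005 * mu_t) * (1 + 0.003 * rng.standard_normal(20))

# Energy-independent scaling factor s in T = exp(-s * mu * t):
# linear least squares in log space, since ln T = -s * (mu * t)
s = -np.sum(mu_t * np.log(T_meas)) / np.sum(mu_t ** 2)
```

The deviation of the optimum s from unity, relative to the experimental noise, is what translates the measurement-versus-calculation discrepancies into a cross-section uncertainty estimate.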
NASA Astrophysics Data System (ADS)
Chahal, Balwinder Singh; Singh, Manpreet; Shalini; Saini, N. S.
2018-02-01
We present an investigation of the nonlinear dust ion acoustic wave modulation in a plasma composed of charged dust grains, two-temperature (cold and hot) nonextensive electrons, and ions. For this purpose, the multiscale reductive perturbation technique is used to obtain a nonlinear Schrödinger equation. The critical wave number, which indicates where the modulational instability sets in, has been determined precisely for various regimes. The influence of the plasma background nonextensivity on the growth rate of the modulational instability is discussed. Modulated wavepackets in the form of either bright or dark envelope solitons may exist. The formation of rogue waves from bright envelope solitons is also discussed. The investigation indicates that the structural characteristics of these envelope excitations (width, amplitude) are significantly affected by nonextensivity, dust concentration, cold electron-ion density ratio and temperature ratio.
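In generic notation (the paper's dispersion and nonlinearity coefficients, which depend on nonextensivity and dust concentration, are not reproduced here), the governing equation and the standard modulational-instability growth rate read

```latex
i\,\frac{\partial \psi}{\partial \tau}
 + P\,\frac{\partial^{2} \psi}{\partial \xi^{2}}
 + Q\,|\psi|^{2}\psi = 0 ,
\qquad
\Gamma(K) = |P K|\sqrt{\frac{2Q}{P}\,|\psi_{0}|^{2} - K^{2}} ,
```

so a carrier of amplitude $\psi_0$ is modulationally unstable for $PQ > 0$ and perturbation wavenumbers $K < K_c = |\psi_0|\sqrt{2Q/P}$, the regime supporting bright envelope solitons and rogue waves, while $PQ < 0$ supports dark envelope solitons.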
NASA Astrophysics Data System (ADS)
Shariati, A.; Aghamohammadi, A.
1995-12-01
We propose a simple and concise method to construct the inhomogeneous quantum group IGLq(n) and its universal enveloping algebra Uq(igl(n)). Our technique is based on embedding an n-dimensional quantum space in an (n+1)-dimensional one as the set x_{n+1} = 1. This is possible only if one considers the multiparametric quantum space whose parameters are fixed in a specific way. The quantum group IGLq(n) is then the subset of GLq(n+1) which leaves the x_{n+1} = 1 subset invariant. For the deformed universal enveloping algebra Uq(igl(n)), we will show that it can also be embedded in Uq(gl(n+1)), provided one uses the multiparametric deformation of U(gl(n+1)) with a specific choice of its parameters.
Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S
2014-12-02
Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
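The uncertainty-tracking idea can be sketched with plain Monte Carlo sampling. CALADIS itself is a web tool; the Fermi problem and the distributions below are hypothetical stand-ins for BioNumbers values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical Fermi problem: how many ribosomes could fit in a bacterial cell?
cell_volume = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n)   # um^3
ribosome_diameter = rng.normal(loc=0.021, scale=0.002, size=n)     # um
ribosome_volume = (np.pi / 6.0) * ribosome_diameter ** 3           # sphere volume

max_ribosomes = cell_volume / ribosome_volume
lo, med, hi = np.percentile(max_ribosomes, [2.5, 50, 97.5])
# The answer is a distribution, not a point estimate: report med with (lo, hi)
```

Propagating the full distributions in this way is what lets a calculation report an interpretable interval, and comparing the widths contributed by each input identifies which quantity most needs better measurement.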
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
NASA Technical Reports Server (NTRS)
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O iii and Fe ii and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe ii]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
NASA Astrophysics Data System (ADS)
Ghazi, Georges
This report presents several methodologies for the design of tools intended for the analysis of the stability and control of a business aircraft. First, a generic flight dynamics model was developed to predict the behavior of the aircraft following a control-surface movement or any disturbance. For that purpose, different categories of wind were included in the simulation module to generate various scenarios and assess the efficiency of the autopilot. Besides being realistic, the flight model takes into account the variation of the mass parameters according to fuel consumption. A comparison with a certified Level D simulator from the company CAE Inc. validated this first stage with an acceptable success rate. Once the dynamics was validated, the next stage dealt with stability around a flight condition. A static analysis was first established to find the trim conditions inside the flight envelope. Then, two linearization algorithms generated the state-space models that approximate the decoupled (longitudinal and lateral) dynamics of the aircraft. To test the validity of the linear models, 1,500 comparisons with the nonlinear dynamics were performed, with a 100% success rate. The stability study highlighted the need for control systems, first to improve the performance of the aircraft and then to control its different axes. A methodology based on coupling a modern control technique (LQR) with a genetic algorithm is presented. This methodology yielded optimal controllers satisfying a large number of specifications. Besides being successful, the controllers must be robust to uncertainties due to the variation of mass. Thus, a robustness analysis using the theory of guardian maps was applied to the uncertain dynamics. However, because one region of the flight envelope is too sensitive, some analyses are biased. 
Nevertheless, a validation against the nonlinear dynamics proved the robustness of the controllers over the entire flight envelope. Finally, the last stage of this project concerned the control laws for the autopilot. Once again, the proposed methodology is based on the combination of flight mechanics equations, control theory and a metaheuristic optimization method. Four detailed test scenarios are then presented to illustrate the efficiency and the robustness of the entire autopilot.
SU-F-T-185: Study of the Robustness of a Proton Arc Technique Based On PBS Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Zheng, Y
Purpose: One potential technique to realize proton arc is to use PBS beams from many directions to form overlaid Bragg peak (OBP) spots and to place these OBP spots throughout the target volume to achieve the desired dose distribution. In this study, we analyzed the robustness of this proton arc technique. Methods: We used a cylindrical water phantom of 20 cm in radius in our robustness analysis. To study the range uncertainty effect, we changed the density of the phantom by ±3%. To study the setup uncertainty effect, we shifted the phantom by 3 and 5 mm. We also combined the range and setup uncertainties (3 mm/±3%). For each test plan, we performed dose calculation for the nominal and 6 disturbed scenarios. Two test plans were used, one with a single OBP spot and the other consisting of 121 OBP spots covering a 10×10 cm² area. We compared the dose profiles between the nominal and disturbed scenarios to estimate the impact of the uncertainties. Dose calculation was performed with GATE/GEANT-based Monte Carlo software in a cloud computing environment. Results: For each of the 7 scenarios, we simulated 100k and 10M events for the plans consisting of a single OBP spot and 121 OBP spots, respectively. For the single OBP spot, the setup uncertainty had minimal impact on the spot's dose profile while the range uncertainty had a significant impact. For the plan consisting of 121 OBP spots, a similar effect was observed, but the extent of the disturbance was much smaller than for the single OBP spot. Conclusion: For the PBS arc technique, range uncertainty has significantly more impact than setup uncertainty. Although a single OBP spot can be severely disturbed by range uncertainty, the overall effect is much smaller when a large number of OBP spots are used. Robustness optimization for the PBS arc technique should prioritize range uncertainty.
Isolating The Building Thermal Envelope
NASA Astrophysics Data System (ADS)
Harrje, D. T.; Dutt, G. S.; Gadsby, K. J.
1981-01-01
The evaluation of the thermal integrity of building envelopes by infrared scanning techniques is often hampered in mild weather because temperature differentials across the envelope are small. Combining infrared scanning with positive or negative building pressures, induced by a "blower door" or the building ventilation system, considerably extends the periods during which meaningful diagnostics can be conducted. Although missing or poorly installed insulation may lead to a substantial energy penalty, it is the search for air leakage sites that often has the largest potential for energy savings. Infrared inspection of the attic floor with air forced from the occupied space through ceiling bypasses, and inspection of the building interior while outside air is drawn through the envelope, reveal unexpected leakage sites. Portability of the diagnostic equipment is essential in these surveys, which may include access into some tight spaces. A catalog of bypass heat losses detected in residential housing using the combined infrared/pressure-differential technique is included to point out the wide variety of leakage sites that may compromise the benefits of thermal insulation and allow excessive air infiltration. Detection and suppression of such leaks should be key items in any building energy audit program. Where a calibrated blower door is used to pressurize or evacuate the house, the leakage rate can be quantified and an excessively tight house recognized. Houses that are too tight may be improved with a minimal energy penalty by forced ventilation, preferably with a heat recuperator, and/or by providing combustion air directly to the furnace.
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on a number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charge-coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
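The two uncertainty behaviors quoted above can be sketched numerically: the single-image precision of +/-5 m/s shrinking to +/-0.5 m/s for a 100-image average follows the 1/sqrt(N) law for independent random errors, and the orthogonal-component uncertainties follow from linear covariance propagation through the algebraic transformation. The 2×2 transform and covariance values below are hypothetical, not the paper's geometry:

```python
import numpy as np

# 1/sqrt(N) averaging law for independent, identically distributed
# random errors: sigma_avg = sigma_single / sqrt(N).
def averaged_precision(sigma_single, n_images):
    return sigma_single / np.sqrt(n_images)

# Linear propagation of measurement covariance through the algebraic
# transform u = A @ m that converts measured vectors m to orthogonal
# components u:  cov_u = A @ cov_m @ A.T
def propagate_covariance(A, cov_m):
    return A @ cov_m @ A.T

# Values from the abstract: +/-5 m/s single image, 100-image average.
print(averaged_precision(5.0, 1))     # 5.0 m/s
print(averaged_precision(5.0, 100))   # 0.5 m/s

A = np.array([[1.0, 0.5],             # hypothetical 2x2 transform
              [0.0, 2.0]])
cov_m = np.diag([25.0, 25.0])         # +/-5 m/s on each measured vector
cov_u = propagate_covariance(A, cov_m)
print(np.sqrt(np.diag(cov_u)))        # component precision uncertainties
```

With an identity transform the covariance passes through unchanged, which is a quick sanity check on the propagation step.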
Early focus development effort, ultrasonic inspection of fixed housing metal-to-adhesive bondline
NASA Technical Reports Server (NTRS)
Hartmann, John K.; Hoskins, Brad R.; Karner, Paul
1991-01-01
An ultrasonic technique was developed for the fixed housing metal-to-adhesive bondline that will support the Flight 15 time frame and subsequent motors. The technique has the capability to detect a 1.0 inch diameter unbond with a 90 percent probability of detection (POD) at a 95 percent confidence level. The technique and support equipment will perform within the working envelope dictated by a stacked motor configuration.
NUCLEAR ENVELOPE-ASSOCIATED RESUMPTION OF RNA SYNTHESIS IN LATE MITOSIS OF HELA CELLS
Simmons, T.; Heywood, P.; Hodge, L.
1973-01-01
The restitution of RNA synthesis in cultures progressing from metaphase into interphase (G1) has been investigated in synchronized HeLa S3 cells by using inhibitors of macromolecular synthesis and the technique of electron microscope autoradiography. The rate of incorporation of radioactive uridine into RNA approached interphase levels in the absence of renewed protein synthesis. In contrast, maintenance of this rate in G1 was dependent upon renewed protein synthesis. Restoration of synthesis of heterogeneous nuclear RNA occurred under conditions that inhibited production of ribosomal precursor RNA. In autoradiographs of individual cells exposed to radioactive uridine, silver grains were first detected after nuclear envelope reformation at the periphery of the chromosome mass but before chromosomal decondensation. These data are consistent with the following interpretation. Multiple RNA polymerase activities persist through mitosis and are involved in the initiation of RNA synthesis in early telophase at sites on the nuclear envelope. PMID:4752403
Pathogen Reduction in Human Plasma Using an Ultrashort Pulsed Laser
Tsen, Shaw-Wei D.; Kingsley, David H.; Kibler, Karen; Jacobs, Bert; Sizemore, Sara; Vaiana, Sara M.; Anderson, Jeanne; Tsen, Kong-Thon; Achilefu, Samuel
2014-01-01
Pathogen reduction is a viable approach to ensure the continued safety of the blood supply against emerging pathogens. However, the currently licensed pathogen reduction techniques are ineffective against non-enveloped viruses such as hepatitis A virus, and they introduce chemicals whose potential side effects have prevented their widespread use. In this report, we demonstrate the inactivation of both enveloped and non-enveloped viruses in human plasma using a novel chemical-free method, a visible ultrashort pulsed laser. We found that laser treatment resulted in 2-log, 1-log, and 3-log reductions in human immunodeficiency virus, hepatitis A virus, and murine cytomegalovirus in human plasma, respectively. Laser-treated plasma showed ≥70% retention for most coagulation factors tested. Furthermore, laser treatment did not alter the structure of a model coagulation factor, fibrinogen. Ultrashort pulsed lasers are a promising new method for chemical-free, broad-spectrum pathogen reduction in human plasma. PMID:25372037
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
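The propagation of individual measurement uncertainties through a defining functional expression can be sketched with standard first-order (Taylor series) propagation, including the correlated-error case via a full covariance matrix. The dynamic-pressure example and its input values below are hypothetical, chosen only because the result is easy to check by hand:

```python
import numpy as np

def propagate(f, x, u, cov=None, h=1e-6):
    """First-order propagation of measurement uncertainties u (standard
    deviations) through f(x). If a covariance matrix is supplied,
    correlated precision errors are included: u_f^2 = g^T C g, with g
    the gradient of f estimated by central differences."""
    x = np.asarray(x, float)
    g = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(len(x))])
    C = np.diag(np.asarray(u, float)**2) if cov is None else np.asarray(cov, float)
    return float(np.sqrt(g @ C @ g))

# Hypothetical example: dynamic pressure q = 0.5 * rho * V**2
q = lambda p: 0.5 * p[0] * p[1]**2
uq = propagate(q, x=[1.2, 50.0], u=[0.01, 0.5])   # u(rho), u(V)
print(uq)   # sqrt((1250*0.01)**2 + (60*0.5)**2) = 32.5
```

Because q is quadratic, the central-difference gradient is exact to machine precision here, so the hand calculation matches the output.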
Non-destructive terahertz imaging of illicit drugs using spectral fingerprints
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuuki; Inoue, Hiroyuki
2003-10-01
The absence of non-destructive inspection techniques for illicit drugs hidden in mail envelopes has resulted in such drugs being smuggled across international borders freely. We have developed a novel basic technology for terahertz imaging, which allows detection and identification of drugs concealed in envelopes, by introducing the component spatial pattern analysis. The spatial distributions of the targets are obtained from terahertz multispectral transillumination images, using absorption spectra measured with a tunable terahertz-wave source. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
NASA Technical Reports Server (NTRS)
Zoladz, T.; Earhart, E.; Fiorucci, T.
1995-01-01
Utilizing high-frequency data from a highly instrumented rotor assembly, seeded bearing defect signatures are characterized using both conventional linear approaches, such as power spectral density analysis, and recently developed nonlinear techniques such as bicoherence analysis. Traditional low-frequency (less than 20 kHz) analysis and high-frequency envelope analysis of both accelerometer and acoustic emission data are used to recover characteristic bearing distress information buried deeply in acquired data. The successful coupling of newly developed nonlinear signal analysis with recovered wideband envelope data from accelerometers and acoustic emission sensors is the innovative focus of this research.
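High-frequency envelope analysis of the kind described can be sketched as amplitude demodulation: take the magnitude of the analytic signal (an FFT-based Hilbert transform) and inspect the envelope's spectrum for the defect repetition rate. The signal below is a synthetic stand-in (a 60 Hz modulation riding on a 5 kHz resonance, both hypothetical values), not flight or rotor data:

```python
import numpy as np

def analytic_envelope(x):
    """Envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin for even-length signals
    return np.abs(np.fft.ifft(X * h))

# Synthetic bearing-like signal: a 60 Hz defect repetition rate
# amplitude-modulating a 5 kHz structural resonance.
fs = 50_000
t = np.arange(0, 1.0, 1 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 60 * t)) * np.cos(2 * np.pi * 5000 * t)

env = analytic_envelope(x)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
print(freqs[np.argmax(spec)])   # → 60.0 (defect rate recovered from envelope)
```

The defect rate is invisible in the raw low-frequency spectrum but dominates the envelope spectrum, which is the point of the wideband envelope recovery described above.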
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou Pu; Zeng Zhinan; Zheng Yinghui
2010-11-15
We propose a scheme for generating isolated attosecond pulse (IAP) via high-order harmonic generation in gases using a chirped two-color laser field of multicycle duration. In contrast to previous techniques where the stable carrier-envelope phase (CEP) of the driving laser pulses is a prerequisite for IAP generation, the proposed scheme is robust against the large variation of CEP. We show the generation of IAP with an intensity fluctuation less than 50% and an intensity contrast ratio higher than 5:1 when the CEP shift is as large as 1.35π.
Measuring the absolute carrier-envelope phase of many-cycle laser fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzallas, P.; Skantzakis, E.; Charalambidis, D.
2010-12-15
The carrier-envelope phase (CEP) of high-peak-power, many-cycle laser fields becomes a crucial parameter when such fields are used, in conjunction with polarization gating techniques, in isolated attosecond (asec) pulse generation. However, its measurement has not been achieved so far. We demonstrate a physical process sensitive to the CEP value of such fields and describe a method for its online shot-to-shot monitoring. This work paves the way for the exploitation of energetic isolated asec pulses in studies of nonlinear extreme ultraviolet (XUV) processes and XUV-pump-XUV-probe experiments with asec resolutions.
Envelopment filter and K-means for the detection of QRS waveforms in electrocardiogram.
Merino, Manuel; Gómez, Isabel María; Molina, Alberto J
2015-06-01
The electrocardiogram (ECG) is a well-established technique for determining the electrical activity of the heart and studying its diseases. One of the most common pieces of information that can be read from the ECG is the heart rate (HR) through the detection of its most prominent feature: the QRS complex. This paper describes an offline version and a real-time implementation of a new algorithm to determine QRS localization in the ECG signal based on its envelopment and the K-means clustering algorithm. The envelopment is used to obtain a signal containing only QRS complexes, deleting P, T, and U waves and baseline wander. Two moving average filters are applied to smooth the data. The K-means algorithm classifies data into QRS and non-QRS. The technique is validated using 22 h of ECG data from five Physionet databases. These databases were arbitrarily selected to analyze different morphologies of QRS complexes: three contained data with cardiac pathologies, and two contained data with normal heartbeats. The algorithm has a low computational load, with no decision thresholds. Furthermore, it does not require any additional parameters. Sensitivity, positive predictivity, and accuracy are all over 99.7%. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
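A simplified sketch of the pipeline just described: rectification stands in for the paper's envelopment filter, two moving averages smooth the result, and a two-cluster 1-D K-means separates QRS from non-QRS samples with no decision threshold. The synthetic spikes are a stand-in for real QRS complexes, not Physionet data:

```python
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def kmeans_high_mask(x, iters=50):
    """Two-cluster 1-D K-means; returns a boolean mask of the high cluster."""
    c = np.array([x.min(), x.max()], float)          # centroid initialisation
    for _ in range(iters):
        lab = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = x[lab == k].mean()
    return lab == np.argmax(c)

# Synthetic "ECG": slow background plus three narrow QRS-like spikes.
fs = 360
t = np.arange(0, 2.0, 1 / fs)
ecg = 0.05 * np.sin(2 * np.pi * 1 * t)               # baseline activity
for beat in (0.3, 1.1, 1.8):
    ecg += np.exp(-((t - beat) / 0.01) ** 2)         # QRS-like spike

env = moving_average(moving_average(np.abs(ecg), 9), 9)   # envelope + 2 MAs
qrs_mask = kmeans_high_mask(env)                          # QRS / non-QRS
onsets = int(np.sum(np.diff(qrs_mask.astype(int)) == 1))  # rising edges
print(onsets)   # → 3 detected QRS regions
```

Counting rising edges of the cluster mask recovers the three synthetic beats without any tuned threshold, which mirrors the parameter-free claim in the abstract.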
Data acquisition and analysis of the UNCOSS underwater explosive neutron sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carasco, C.; Eleon, C.; Perot, B.
2011-07-01
The purpose of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor, http://www.uncoss-project.org) is to develop a neutron-based underwater explosive sensor to detect unexploded ordnance lying on the sea bottom. The Associated Particle Technique is used to focus the inspection on a suspicious object located by optical and electromagnetic sensors and to determine whether there is an explosive charge inside. This paper presents the data acquisition electronics and data analysis software that have been developed for this project. The electronics digitize and process the signal in real time, based on a field-programmable gate array structure, to perform precise time-of-flight and gamma-ray energy measurements. The UNCOSS software offers the basic tools to analyze the time-of-flight and energy spectra of the interrogated object. It can unfold the gamma-ray spectrum into pure elemental count proportions, mainly C, N, O, Fe, Al, Si, and Ca. The C, N, and O count fractions are converted into chemical proportions by taking into account the gamma-ray production cross sections, as well as neutron and photon attenuation in the different shields between the ROV (Remotely Operated Vehicle) and the explosive, such as the explosive's iron shell, seawater, and the ROV envelope. These chemical ratios are plotted in a two-dimensional (2D) barycentric representation to position the measured point with respect to common explosives. The systematic uncertainty due to the above attenuation effects and the counting statistical fluctuations are combined with a Monte Carlo method to provide a 3D uncertainty area in a barycentric plot, which makes it possible to determine the most probable detected materials in order to decide on the presence of explosive. (authors)
Trade-off between linewidth and slip rate in a mode-locked laser model.
Moore, Richard O
2014-05-15
We demonstrate a trade-off between linewidth and loss-of-lock rate in a mode-locked laser employing active feedback to control the carrier-envelope offset phase difference. In frequency metrology applications, the linewidth translates directly to uncertainty in the measured frequency, whereas the impact of lock loss and recovery on the measured frequency is less well understood. We reduce the dynamics to stochastic differential equations, specifically diffusion processes, and compare the linearized linewidth to the rate of lock loss determined by the mean time to exit, as calculated from large deviation theory.
NASA Technical Reports Server (NTRS)
Lih, Shyh-Shiuh; Bar-Cohen, Yoseph; Lee, Hyeong Jae; Takano, Nobuyuki; Bao, Xiaoqi
2013-01-01
An advanced signal processing methodology is being developed to monitor the height of condensed water through the wall of a steel pipe operating at temperatures as high as 250°C. Using existing techniques, a previous study indicated that when the water height is low or there is disturbance in the environment, the predicted water height may not be accurate. In recent years, the use of autocorrelation and envelope techniques in signal processing has been demonstrated to be a very useful tool for practical applications. In this paper, various signal processing techniques, including autocorrelation, the Hilbert transform, and the Shannon energy envelope method, were studied and implemented to determine the water height in the steam pipe. The results have shown that the developed method provides a good capability for monitoring the height under regular conditions. For shallow-water or no-water conditions, an alternative solution is suggested: a hybrid method based on the Hilbert transform (HT) with a high-pass filter and an optimized windowing technique. Further development of the reported methods would provide a powerful tool for identifying disturbances of the water height inside the pipe.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
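The contrast between the two treatments can be sketched on a toy fault tree. The tree structure and probabilities below are hypothetical, and the interval arithmetic is only a crude stand-in for full fuzzy membership functions, but it shows how randomness (a point probability) and vagueness (bounds on a poorly known probability) enter the same gate logic:

```python
# Hypothetical three-component fault tree: the system fails if basic
# event A occurs OR both B and C occur (independent basic events).

def g_or(p, q):
    return p + q - p * q      # OR gate for independent events

def g_and(p, q):
    return p * q              # AND gate for independent events

def top_event(pa, pb, pc):
    return g_or(pa, g_and(pb, pc))

# Probabilistic analysis: point probabilities for each basic event.
point = top_event(0.01, 0.05, 0.05)
print(point)                  # → 0.012475

# "Vague" analysis: [low, high] bounds on each basic-event probability.
# Evaluating at the bounds is valid here because the tree is monotone
# increasing in every basic-event probability.
lo = top_event(0.005, 0.03, 0.03)
hi = top_event(0.020, 0.08, 0.08)
print(lo, hi)                 # interval for the top-event probability
```

The point estimate captures randomness only; the interval conveys how imprecise knowledge of the inputs widens the answer, which is the distinction the abstract draws between the probabilistic and fuzzy treatments.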
BUILDING ENVELOPE OPTIMIZATION USING EMERGY ANALYSIS
Energy analysis is an integral component of sustainable building practices. Energy analysis coupled with optimization techniques may offer solutions for greater energy efficiency over the lifetime of the building. However, all such computations employ the energy used for operation...
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
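The population-of-estimates idea can be sketched with ordinary Gaussian band-pass filters standing in for Slepian wavelets (an admitted simplification): each filter yields one group-arrival pick from the envelope maximum, and the spread of the resulting velocities gives the uncertainty. The synthetic wave packet, distance, and filter bandwidths below are all hypothetical:

```python
import numpy as np

def analytic_envelope(x):
    """Envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n); h[0] = 1.0; h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

fs, dist = 100.0, 30.0                       # Hz sampling, 30 km path (assumed)
t = np.arange(0, 60, 1 / fs)
# Synthetic 1 Hz wave packet arriving at t = 20 s, plus weak noise.
sig = np.exp(-((t - 20) / 1.5) ** 2) * np.cos(2 * np.pi * 1.0 * (t - 20))
sig += 0.02 * np.random.default_rng(0).standard_normal(len(t))

f = np.fft.rfftfreq(len(t), 1 / fs)
picks = []
for bw in (0.2, 0.3, 0.4, 0.5):              # population of filter bandwidths
    H = np.exp(-((f - 1.0) / bw) ** 2)       # Gaussian band-pass at 1 Hz
    y = np.fft.irfft(np.fft.rfft(sig) * H, n=len(t))
    picks.append(t[np.argmax(analytic_envelope(y))])

v = dist / np.array(picks)                   # group-velocity population
print(v.mean(), v.std())                     # estimate and its uncertainty
```

The standard deviation across the population plays the role of the uncertainty estimate that the single filter-bank pick cannot provide.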
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviations and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
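The Monte Carlo route to a numerical representation of the output probability density can be sketched for a single-channel algorithm: sample the measured quantity from its assumed error distribution, push every sample through the calibration function, and read standard uncertainty and bias off the resulting population. The calibration function, its coefficients, and the noise level below are assumed for illustration, not the paper's fit:

```python
import numpy as np

# Hypothetical single-channel calibration, netOD x -> dose D (Gy):
a, b, n = 10.0, 35.0, 2.5
dose_of = lambda x: a * x + b * x ** n

rng = np.random.default_rng(1)
x_true, sigma_x = 0.30, 0.005        # measured netOD and its std (assumed)
x_mc = rng.normal(x_true, sigma_x, 100_000)   # Monte Carlo input samples
d_mc = dose_of(x_mc)                          # numerical output pdf

# Traditional uncertainty parameters read off the sampled distribution:
print(d_mc.std())                    # standard uncertainty of the dose (Gy)
print(d_mc.mean() - dose_of(x_true)) # bias introduced by the nonlinearity
```

Because the calibration is smooth and the input noise small, the sampled dose distribution is close to Gaussian with negligible bias, mirroring the behavior reported for the single-channel estimates.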
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning.
Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perko, Z.; Gilli, L.; Lathouwers, D.
2013-07-01
Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative, providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is shown to be advantageous from both an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss-of-flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large-scale problems. (authors)
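The cost advantage of polynomial chaos is that, once the expansion coefficients are known, the output mean and variance follow directly from them. A minimal pure-Python sketch (not the paper's adaptive sparse-grid method): the textbook one-dimensional probabilists' Hermite expansion of f(x) = exp(x) for a standard normal input, whose coefficients c_k = e^(1/2)/k! are known analytically, cross-checked against plain Monte Carlo sampling.

```python
import math, random

# Probabilists' Hermite PCE of f(x) = exp(x) with x ~ N(0, 1).
# Known expansion: exp(x) = e**0.5 * sum_k He_k(x) / k!, so c_k = e**0.5 / k!.
# By orthogonality, mean = c_0 and var = sum_{k>=1} c_k**2 * k!.
ORDER = 12
c = [math.exp(0.5) / math.factorial(k) for k in range(ORDER + 1)]
pce_mean = c[0]
pce_var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, ORDER + 1))

# Cross-check against plain Monte Carlo sampling of the same quantity.
random.seed(0)
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1)

print(pce_mean, mc_mean)   # both close to e**0.5 ~ 1.6487
print(pce_var, mc_var)     # both close to e*(e-1) ~ 4.6708
```

The truncated expansion reproduces the exact statistics to ten decimal places from thirteen coefficients, where the Monte Carlo estimate still wanders in the third digit after 200,000 samples; this is the convergence advantage the abstract reports, reduced to its simplest case.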
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardenas, Ibsen C., E-mail: c.cardenas@utwente.nl; Halman, Johannes I.M., E-mail: J.I.M.Halman@utwente.nl
Uncertainty is virtually unavoidable in environmental impact assessments (EIAs). From the literature related to treating and managing uncertainty, we have identified specific techniques for coping with uncertainty in EIAs. Here, we have focused on basic steps in the decision-making process that take place within an EIA setting. More specifically, we have identified uncertainties involved in each decision-making step and discussed the extent to which these can be treated and managed in the context of an activity or project that may have environmental impacts. To further demonstrate the relevance of the techniques identified, we have examined the extent to which the EIA guidelines currently used in Colombia consider and provide guidance on managing the uncertainty involved in these assessments. Some points that should be considered in order to provide greater robustness in impact assessments in Colombia have been identified. These include the management of stakeholder values, the systematic generation of project options and their associated impacts, the associated management actions, and the evaluation of uncertainties and assumptions. We believe that the relevant and specific techniques reported here can be a reference for future evaluations of other EIA guidelines in different countries. - Highlights: • Uncertainty is unavoidable in environmental impact assessments (EIAs). • We have identified specific techniques for treating and managing uncertainty in these assessments. • Points for improvement that would provide greater robustness in EIAs in Colombia have been identified. • The paper provides a substantiated reference for future examinations of EIA guidelines in other countries.
Complement and the control of HIV infection: an evolving story.
Frank, Michael M; Hester, Christopher; Jiang, Haixiang
2014-05-01
Thirty years ago, investigators isolated and later determined the structure of HIV-1 and its envelope proteins. Using techniques that were effective with other viruses, they prepared vaccines designed to generate antibody or T-cell responses, but these proved ineffective in clinical trials. In this article, we consider the role of complement in host defense against enveloped viruses, the role it might play in the antibody response, and why complement has not controlled HIV-1 infection. Complement consists of a large group of cell-bound and plasma proteins that are an integral part of the innate immune system. They provide a first line of defense against microbes and also play a role in the immune response. Here we review the studies of complement-mediated HIV destruction and the role of complement in the HIV antibody response. HIV-1 has evolved a complex defense to prevent complement-mediated killing, which we review here. As part of these studies, we have discovered that HIV-1 envelope, on administration into animals, is rapidly broken down into small peptides that may prove to be very inefficient at providing the type of antigenic stimulation that leads to an effective immune response. Improving complement binding and stabilizing envelope may improve the vaccine response.
High efficiency RF amplifier development over wide dynamic range for accelerator application
NASA Astrophysics Data System (ADS)
Mishra, Jitendra Kumar; Ramarao, B. V.; Pande, Manjiri M.; Joshi, Gopal; Sharma, Archana; Singh, Pitamber
2017-10-01
Superconducting (SC) cavities in an accelerating section are designed to have the same geometrical velocity factor (βg). For these cavities, the radio frequency (RF) power needed to accelerate charged particles varies with the particle velocity factor (β). The RF power requirement can vary from one cavity to another by 2-5 dB within the accelerating section, depending on the energy gain in the cavity and the beam current. In this paper, we present an idea to improve the operating efficiency of SC RF accelerators using the envelope tracking technique. A study of envelope tracking without feedback is carried out on a 1 kW, 325 MHz, class-B (conduction angle of 180 degrees) tuned-load power amplifier (PA). We derive expressions for the efficiency and output power of a tuned-load amplifier operating with envelope tracking. From the derived expressions, it is observed that, under constant load resistance to the device (MOSFET), the optimum amplifier efficiency is invariant whereas the output power varies with the square of the drain bias voltage. Experimental results on the 1 kW PA module show that its optimum efficiency is always greater than 62%, with variation of less than 5% from the mean value over a 7 dB dynamic range. Low-power amplifier modules are the basic building blocks of high-power amplifiers; therefore, the results for the 1 kW PA module remain valid for high-power solid-state amplifiers built from these modules. SC RF accelerators using such constant-efficiency power amplifiers can improve overall accelerator efficiency.
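The claimed invariance can be checked numerically for an idealized class-B stage: with a fixed load resistance and the drain bias tracking the envelope so that the relative voltage swing stays constant, the drain efficiency η = (π/4)·(Vout/Vdd) is constant while output power scales with Vdd². A hedged sketch with assumed values (50 Ω load, 0.9 swing ratio), not the authors' 1 kW amplifier model:

```python
import math

R_LOAD = 50.0   # assumed constant load resistance (ohms)
SWING = 0.9     # ratio Vout/Vdd held constant by the envelope tracker (assumed)

def class_b(vdd):
    """Ideal class-B stage: output power and drain efficiency at bias vdd."""
    vout = SWING * vdd
    p_out = vout ** 2 / (2.0 * R_LOAD)              # RF output power
    p_dc = 2.0 * vdd * vout / (math.pi * R_LOAD)    # DC input power, ideal class B
    return p_out, p_out / p_dc

for vdd in (10.0, 20.0, 40.0):
    p_out, eff = class_b(vdd)
    print(f"Vdd = {vdd:4.0f} V   Pout = {p_out:7.2f} W   eff = {eff:.3f}")
# efficiency is identical at every bias; Pout grows with Vdd**2
```

Doubling the bias quadruples the output power while the printed efficiency stays fixed at (π/4)·SWING, which is the idealized form of the constant-efficiency behavior measured on the 1 kW module.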
Goldberg, Martin W
2016-01-01
Scanning electron microscopy (SEM) is a technique used to image surfaces. Field emission SEMs (feSEMs) can resolve structures that are ~0.5-1.5 nm apart. FeSEM, therefore, is a useful technique for imaging molecular structures that exist at surfaces, such as membranes. The nuclear envelope consists of four membrane surfaces, all of which may be accessible for imaging. Imaging of the cytoplasmic face of the outer membrane gives information about ribosomes and cytoskeletal attachments, as well as details of the cytoplasmic peripheral components of the nuclear pore complex, and is the most easily accessed surface. The nucleoplasmic face of the inner membrane is easily accessible in some cells, such as amphibian oocytes, giving valuable details about the organization of the nuclear lamina and how it interacts with the nuclear pore complexes. The luminal faces of both membranes are difficult to access, but may be exposed by various fracturing techniques. Protocols are presented here for the preparation, labeling, and feSEM imaging of Xenopus laevis oocyte nuclear envelopes.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residuals-rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameters' uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
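To make the wild-bootstrap idea concrete, here is a plain-Python sketch on a hypothetical mono-exponential diffusion model S(b) = S0·exp(-b·ADC), linearized so ordinary least squares applies. It uses simple Rademacher sign-flips on the residuals and omits the paper's unscented-transform rescaling step; all b-values and noise levels are assumed, illustrative numbers.

```python
import math, random

# Hypothetical mono-exponential DW model S(b) = S0 * exp(-b * ADC),
# linearised as log S = log S0 - b * ADC for ordinary least squares.
B_VALS = [0.0, 200.0, 400.0, 600.0, 800.0]   # b-values, s/mm^2 (assumed)
random.seed(1)
TRUE_S0, TRUE_ADC = 100.0, 1.5e-3
signal = [TRUE_S0 * math.exp(-b * TRUE_ADC) * (1.0 + random.gauss(0.0, 0.02))
          for b in B_VALS]

def fit_adc(log_s):
    """Least-squares line through (b, log S): returns (ADC, intercept, fitted)."""
    n = len(B_VALS)
    bm = sum(B_VALS) / n
    ym = sum(log_s) / n
    sxx = sum((b - bm) ** 2 for b in B_VALS)
    slope = sum((b - bm) * (y - ym) for b, y in zip(B_VALS, log_s)) / sxx
    icept = ym - slope * bm
    return -slope, icept, [icept + slope * b for b in B_VALS]

log_s = [math.log(s) for s in signal]
adc, icept, fitted = fit_adc(log_s)
resid = [y - f for y, f in zip(log_s, fitted)]

# Wild bootstrap: flip residual signs with Rademacher weights and refit.
boot = []
for _ in range(2000):
    star = [f + random.choice((-1.0, 1.0)) * r for f, r in zip(fitted, resid)]
    boot.append(fit_adc(star)[0])
m = sum(boot) / len(boot)
adc_sd = math.sqrt(sum((a - m) ** 2 for a in boot) / (len(boot) - 1))
print(adc, adc_sd)   # point estimate and its bootstrap uncertainty
```

The key property is that the uncertainty comes from a single acquisition: resampling perturbed versions of the one measured decay curve stands in for the repeated scans that traditional uncertainty estimation would require.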
Operationalising uncertainty in data and models for integrated water resources management.
Blind, M W; Refsgaard, J C
2007-01-01
Key sources of uncertainty of importance for water resources management are (1) uncertainty in data; (2) uncertainty related to hydrological models (parameter values, model technique, model structure); and (3) uncertainty related to the context and the framing of the decision-making process. The European funded project 'Harmonised techniques and representative river basin data for assessment and use of uncertainty information in integrated water management (HarmoniRiB)' has resulted in a range of tools and methods to assess such uncertainties, focusing on items (1) and (2). The project also engaged in a number of discussions surrounding uncertainty and risk assessment in support of decision-making in water management. Based on the project's results and experiences, and on the subsequent discussions a number of conclusions can be drawn on the future needs for successful adoption of uncertainty analysis in decision support. These conclusions range from additional scientific research on specific uncertainties, dedicated guidelines for operational use to capacity building at all levels. The purpose of this paper is to elaborate on these conclusions and anchoring them in the broad objective of making uncertainty and risk assessment an essential and natural part in future decision-making processes.
UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.
Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois
2018-03-01
Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
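The Monte Carlo characterisation of uncertainty promoted above can be sketched in the style of GUM Supplement 1 for the simplest retrospective-dosimetry relation, a dose reconstructed as measurement over calibration, D = M/c. The numbers below are assumed, purely illustrative values, not laboratory calibration data.

```python
import random, statistics

# Monte Carlo propagation of distributions for D = M / c:
# sample both inputs, form the output distribution, read off a 95% interval.
random.seed(7)
N = 100_000
doses = []
for _ in range(N):
    m = random.gauss(0.50, 0.05)   # measured signal (arbitrary units, assumed)
    c = random.gauss(0.25, 0.02)   # calibration coefficient (units/Gy, assumed)
    doses.append(m / c)
doses.sort()
dose = statistics.fmean(doses)
lo, hi = doses[int(0.025 * N)], doses[int(0.975 * N)]
print(f"dose ~ {dose:.2f} Gy, 95% interval [{lo:.2f}, {hi:.2f}]")
```

Because D is a ratio of two uncertain quantities, its distribution is slightly skewed and its mean sits a little above M/c evaluated at the central values; the Monte Carlo interval captures this without any linearization, which is exactly why the standards recommend it over first-order error propagation for non-linear dose models.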
Methods and Tools for Evaluating Uncertainty in Ecological Models: A Survey
Poster presented at the Ecological Society of America Meeting. Ecologists are familiar with a variety of uncertainty techniques, particularly in the intersection of maximum likelihood parameter estimation and Monte Carlo analysis techniques, as well as a recent increase in Baye...
1985-05-30
consisting of quarter-wave layers by detecting the extrema of transmission or reflectance at a particular wavelength. This method is extremely stable for the... technique, which is based on an envelope method, and gives some experimental results. I. Introduction. The refractive index and the... constants determination technique by computer simulation; we have applied the method to various layers of titanium dioxide. This technique can then
NASA Astrophysics Data System (ADS)
Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua
2018-01-01
Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology by using multiple GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under the RCP4.5 and RCP8.5 emission scenarios were adopted to represent this uncertainty envelope, and fifty-one 30-year future periods moving from 2021 to 2100 at a 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including mean and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while the trend was analyzed using the Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and the hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change of mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value of the multi-model ensemble, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under the high emission scenario (RCP8.5) was much larger than that under the relatively low emission scenario (RCP4.5).
Almost none of the climatic and hydrological regimes or their uncertainty showed significant periodicity at the P = .05 significance level, but their temporal variation could be well modeled using a fourth-order polynomial. Overall, this study further emphasized the importance of using multiple GCMs for studying climate change impacts on hydrology. Furthermore, the temporal variation of the uncertainty sourced from GCMs should be given more attention.
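The Mann-Kendall test used above for trend detection is short enough to state in full. A minimal implementation (no tie correction), applied to a hypothetical series of widening uncertainty values:

```python
import math

def mann_kendall_z(series):
    """Mann-Kendall trend test statistic z (no correction for ties)."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)   # continuity correction
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# Hypothetical uncertainty-range series, e.g. spread across GCMs per period:
rising = [1.4, 2.1, 2.0, 3.2, 3.9, 4.5, 5.1, 6.5]
z = mann_kendall_z(rising)
print(z)   # |z| > 1.96 flags a significant monotonic trend at P = .05
```

Because the test is rank-based, one local dip (2.1 followed by 2.0) barely affects z; this robustness to non-normal, noisy series is why Mann-Kendall is the standard choice for hydro-climatic trend analysis.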
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion easily fall into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in the real seismic wavefield, making the inversion harder still. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequency, is applied. However, the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover the low-wavenumber model with a demodulation operator (the envelope operator), even though this low-frequency content does not literally exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI using the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number.
In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves the performance speed.
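The demodulation (envelope) operator at the heart of envelope inversion is typically realized as the magnitude of the analytic signal. A self-contained sketch using a brute-force DFT (O(n²), acceptable at this size) rather than a production FFT: the envelope of an amplitude-modulated trace recovers the slow modulation even though no low-frequency component is present in the trace itself, which is exactly the property the abstract exploits.

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def envelope(x):
    """Envelope as |analytic signal|: zero negative frequencies, double positive."""
    n = len(x)
    spec = dft(x)
    h = [0.0] * n
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    for k in range(1, (n + 1) // 2):
        h[k] = 2.0
    return [abs(v) for v in idft([s * w for s, w in zip(spec, h)])]

# Amplitude-modulated trace: 40-cycle "carrier", 2-cycle modulation envelope.
n = 256
t = [i / n for i in range(n)]
x = [(1 + 0.5 * math.cos(2 * math.pi * 2 * ti)) * math.cos(2 * math.pi * 40 * ti)
     for ti in t]
env = envelope(x)
# env reproduces 1 + 0.5*cos(2*pi*2*t): the slow modulation, carrier removed
```

The input spectrum lives entirely at 38-42 cycles, yet the envelope is a clean 2-cycle signal: the demodulation has manufactured usable ultra-low-frequency content, which is what lets envelope inversion build a low-wavenumber starting model from band-limited field data.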
Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark
2011-01-01
Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations, such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that most influence results are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
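The core mechanic of parameter uncertainty analysis in decision-analytic models is probabilistic sensitivity analysis: sample the uncertain inputs, recompute the decision metric, and report the probability the intervention is cost-effective. A minimal sketch with assumed cost and QALY distributions (all numbers hypothetical, chosen only for illustration):

```python
import random

# Probabilistic sensitivity analysis for an incremental net-benefit decision:
# INB = WTP * delta_QALY - delta_cost; count how often INB > 0.
random.seed(3)
WTP = 20_000           # willingness-to-pay per QALY (assumed threshold)
N = 50_000
ce_count = 0
for _ in range(N):
    d_cost = random.gauss(5_000, 1_500)   # incremental cost (assumed)
    d_qaly = random.gauss(0.30, 0.10)     # incremental QALYs (assumed)
    if WTP * d_qaly - d_cost > 0:
        ce_count += 1
p_ce = ce_count / N
print(f"P(cost-effective at {WTP}/QALY) ~ {p_ce:.2f}")
```

Sweeping WTP and re-running this loop traces out a cost-effectiveness acceptability curve, one of the standard presentation formats the guide discusses for communicating parameter uncertainty to decision makers.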
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
NASA Astrophysics Data System (ADS)
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis for the multi-criteria decision-making problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis test. The first is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method, and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, trade-off ranking calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
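A compact TOPSIS implementation makes the distance-to-ideal idea, and the weight-perturbation style of sensitivity test, concrete. The alternatives and weights below are hypothetical; an alternative that is best on every criterion scores exactly 1 (zero distance to the ideal) and one that is worst on every criterion scores 0.

```python
import math

def topsis(matrix, weights, benefit):
    """Relative closeness of each alternative to the ideal vs. anti-ideal solution."""
    n_crit = len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical alternatives scored on cost (minimize) and quality (maximize).
alts = [[200.0, 9.0], [250.0, 7.0], [300.0, 5.0]]
scores = topsis(alts, weights=[0.5, 0.5], benefit=[False, True])
# Perturbing the weights is the simplest sensitivity test on DM preferences:
scores_w = topsis(alts, weights=[0.9, 0.1], benefit=[False, True])
```

Here the first alternative dominates, so its rank survives any weight perturbation; in non-dominated problems the interesting question, and the subject of the paper's test cases, is how small a change in data or weights suffices to flip the ranking.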
A synopsis of climate change effects on groundwater recharge
NASA Astrophysics Data System (ADS)
Smerdon, Brian D.
2017-12-01
Six review articles published between 2011 and 2016 on groundwater and climate change are briefly summarized. This synopsis focuses on aspects related to predicting changes to groundwater recharge conditions, with several common conclusions between the review articles being noted. The uncertainty of distribution and trend in future precipitation from General Circulation Models (GCMs) results in varying predictions of recharge, so much so that modelling studies are often not able to predict the magnitude and direction (increase or decrease) of future recharge conditions. Evolution of modelling approaches has led to the use of multiple GCMs and hydrologic models to create an envelope of future conditions that reflects the probability distribution. The choice of hydrologic model structure and complexity, and the choice of emissions scenario, has been investigated and somewhat resolved; however, recharge results remain sensitive to downscaling methods. To overcome uncertainty and provide practical use in water management, the research community indicates that modelling at a mesoscale, somewhere between watersheds and continents, is likely ideal. Improvements are also suggested for incorporating groundwater processes within GCMs.
GRACE Mission Design: Impact of Uncertainties in Disturbance Environment and Satellite Force Models
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.; Kumar, Renjith R.; Seywald, Hans; Qu, Min
2000-01-01
The Gravity Recovery and Climate Experiment (GRACE) primary mission will be performed by making measurements of the inter-satellite range change between two co-planar, low altitude, near-polar orbiting satellites. Understanding the uncertainties in the disturbance environment, particularly the aerodynamic drag and torques, is critical in several mission areas. These include an accurate estimate of the spacecraft orbital lifetime, evaluation of spacecraft attitude control requirements, and estimation of the orbital maintenance maneuver frequency necessitated by differences in the drag forces acting on both satellites. The FREEMOL simulation software has been developed and utilized to analyze and suggest design modifications to the GRACE spacecraft. Aerodynamic accommodation bounding analyses were performed and worst-case envelopes were obtained for the aerodynamic torques and the differential ballistic coefficients between the leading and trailing GRACE spacecraft. These analyses demonstrate how spacecraft aerodynamic design and analysis can benefit from a better understanding of spacecraft surface accommodation properties, and the implications for mission design constraints such as formation spacing control.
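The differential-drag problem driving the maneuver-frequency analysis can be reduced to a back-of-envelope calculation: the standard drag law evaluated for two satellites whose drag coefficients differ by a worst-case accommodation spread. All numbers below are assumed illustrative values, not GRACE design data.

```python
# Worst-case differential drag between two co-planar satellites.
RHO = 5e-13      # atmospheric density (kg/m^3), assumed for ~480 km altitude
V = 7620.0       # orbital speed (m/s), assumed

def drag_accel(cd, area_m2, mass_kg):
    """Drag acceleration a = 0.5 * rho * Cd * A * v**2 / m."""
    return 0.5 * RHO * cd * area_m2 * V ** 2 / mass_kg

a_lead = drag_accel(2.2, 0.95, 487.0)    # leading satellite (assumed Cd, A, m)
a_trail = drag_accel(2.4, 0.95, 487.0)   # trailing, worst-case accommodation
diff = a_trail - a_lead                  # this residual drives spacing drift
print(f"drag ~ {a_lead:.2e} m/s^2, worst-case differential ~ {diff:.2e} m/s^2")
```

Even a differential acceleration of order 10^-8 m/s² integrates into kilometers of along-track drift over weeks, which is why bounding the accommodation-driven spread in ballistic coefficient feeds directly into the orbit-maintenance maneuver budget.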
NASA Astrophysics Data System (ADS)
Nielsen, M. B.; Schunker, H.; Gizon, L.; Schou, J.; Ball, W. H.
2017-06-01
Context. Rotational shear in Sun-like stars is thought to be an important ingredient in models of stellar dynamos. Thanks to helioseismology, rotation in the Sun is characterized well, but the interior rotation profiles of other Sun-like stars are not so well constrained. Until recently, measurements of rotation in Sun-like stars have focused on the mean rotation, but little progress has been made on measuring or even placing limits on differential rotation. Aims: Using asteroseismic measurements of rotation we aim to constrain the radial shear in five Sun-like stars observed by the NASA Kepler mission: KIC 004914923, KIC 005184732, KIC 006116048, KIC 006933899, and KIC 010963065. Methods: We used stellar structure models for these five stars from previous works. These models provide the mass density, mode eigenfunctions, and the convection zone depth, which we used to compute the sensitivity kernels for the rotational frequency splitting of the modes. We used these kernels as weights in a parametric model of the stellar rotation profile of each star, where we allowed different rotation rates for the radiative interior and the convective envelope. This parametric model was incorporated into a fit to the oscillation power spectrum of each of the five Kepler stars. This fit included a prior on the rotation of the envelope, estimated from the rotation of surface magnetic activity measured from the photometric variability. Results: The asteroseismic measurements without the application of priors are unable to place meaningful limits on the radial shear. Using a prior on the envelope rotation enables us to constrain the interior rotation rate and thus the radial shear. In the five cases that we studied, the interior rotation rate does not differ from the envelope by more than approximately ± 30%. Uncertainties in the rotational splittings are too large to unambiguously determine the sign of the radial shear.
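The role of the envelope prior can be seen in a deliberately oversimplified two-zone model in which the rotation kernels are collapsed to a single averaged envelope weight: the observed splitting is a weighted average of the envelope and interior rates, so fixing the envelope rate from surface activity pins down the interior rate. All numbers are hypothetical, not the Kepler fits.

```python
# Two-zone model: observed rotational splitting as a kernel-weighted average,
#   split_obs = K_ENV * omega_env + (1 - K_ENV) * omega_core.
K_ENV = 0.7        # assumed averaged kernel weight of the convective envelope
split_obs = 0.40   # observed mean splitting (muHz), hypothetical
omega_env = 0.42   # prior on envelope rotation from starspot modulation (muHz)

omega_core = (split_obs - K_ENV * omega_env) / (1 - K_ENV)
shear = omega_core / omega_env - 1.0   # fractional radial differential rotation
print(f"interior rate = {omega_core:.3f} muHz, shear = {shear:+.0%}")
```

With these assumed numbers the interior turns out to rotate about 16% more slowly than the envelope, comfortably inside the ±30% bound the abstract reports; the same algebra also shows why the splitting alone cannot separate the two rates without the prior.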
Field Trial of an Aerosol-Based Enclosure Sealing Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, Curtis; Springer, David
2015-09-01
This report presents the results from several demonstrations of a new method for sealing building envelope air leaks using an aerosol sealing process developed by the Western Cooling Efficiency Center at UC Davis. The process involves pressurizing a building while applying an aerosol sealant to the interior. As air escapes through leaks in the building envelope, the aerosol particles are transported to the leaks, where they collect and form a seal that blocks the leak. Standard blower door technology is used to facilitate the building pressurization, which allows the installer to track the sealing progress during the installation and automatically verify the final building tightness. Each aerosol envelope sealing installation was performed after drywall was installed and taped, and the process did not appear to interrupt the construction schedule or interfere with other trades working in the homes. The labor needed to physically seal bulk air leaks in typical construction will not be replaced by this technology. However, this technology is capable of bringing the air leakage of a building that was built with standard construction techniques and HERS-verified sealing down to levels that would meet DOE Zero Energy Ready Homes program requirements. When a developer is striving to meet a tighter envelope leakage specification, this technology could greatly reduce the cost to achieve that goal by providing a simple and relatively low-cost method for reducing the air leakage of a building envelope with little to no change in common building practices.
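The blower-door verification step reduces to fitting the standard power-law leakage model Q = C·ΔP^n and reporting the flow at a 50 Pa reference pressure. A sketch with hypothetical readings and an assumed house volume (the flow exponent n typically lands between 0.5 and 0.75 for building envelopes):

```python
import math

# Hypothetical blower-door readings: flow Q (cfm) at test pressure dP (Pa).
DP = [15.0, 25.0, 35.0, 50.0]
Q = [410.0, 560.0, 680.0, 840.0]

# Fit the envelope leakage power law Q = C * dP**n by log-log least squares.
lx = [math.log(p) for p in DP]
ly = [math.log(q) for q in Q]
n_pts = len(lx)
xm, ym = sum(lx) / n_pts, sum(ly) / n_pts
n_exp = (sum((x - xm) * (y - ym) for x, y in zip(lx, ly))
         / sum((x - xm) ** 2 for x in lx))
c_coef = math.exp(ym - n_exp * xm)

q50 = c_coef * 50.0 ** n_exp     # leakage flow at the 50 Pa reference
volume = 18_000.0                # ft^3, assumed conditioned volume
ach50 = q50 * 60.0 / volume      # air changes per hour at 50 Pa
print(f"n = {n_exp:.2f}, Q50 = {q50:.0f} cfm, ACH50 = {ach50:.1f}")
```

Tracking Q50 (or ACH50) continuously during the aerosol injection is what lets the installer watch the envelope tighten in real time and stop once the target leakage specification is reached.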
TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J.
TU-AB-BRB-00: New Methods to Ensure Target Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Exploring Uncertainty with Projectile Launchers
ERIC Educational Resources Information Center
Orzel, Chad; Reich, Gary; Marr, Jonathan
2012-01-01
The proper choice of a measurement technique that minimizes systematic and random uncertainty is an essential part of experimental physics. These issues are difficult to teach in the introductory laboratory, though. Because most experiments involve only a single measurement technique, students are often unable to make a clear distinction between…
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
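The steady-state estimation error that the tuner-selection work above minimizes can be illustrated, in a heavily simplified form, with a scalar Kalman filter whose steady-state error covariance is obtained by iterating the discrete Riccati recursion. The system and noise values below are hypothetical, not the engine model from the paper:

```python
# Scalar Kalman filter sketch: steady-state estimation error covariance
# from the discrete Riccati recursion. Hypothetical numbers only.

def steady_state_covariance(a, c, q, r, tol=1e-12, max_iter=100000):
    """Steady-state prediction error covariance for the scalar system
    x[k+1] = a*x[k] + w, y[k] = c*x[k] + v, Var(w) = q, Var(v) = r."""
    p = q
    for _ in range(max_iter):
        k = p * c / (c * c * p + r)      # Kalman gain
        p_upd = (1.0 - k * c) * p        # measurement update
        p_new = a * a * p_upd + q        # time update
        if abs(p_new - p) < tol:
            break
        p = p_new
    return p_new

# A noisier sensor (larger r) leaves a larger steady-state error covariance,
# which is the kind of quantity a tuner-selection approach seeks to minimize.
p_accurate = steady_state_covariance(a=0.95, c=1.0, q=0.01, r=0.1)
p_noisy = steady_state_covariance(a=0.95, c=1.0, q=0.01, r=1.0)
print(p_accurate < p_noisy)  # True
```

The same recursion generalizes to the matrix case used in the paper, where the choice of tuning parameters changes the effective process-noise term.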
NASA Astrophysics Data System (ADS)
Stankunas, Gediminas; Batistoni, Paola; Sjöstrand, Henrik; Conroy, Sean; JET Contributors
2015-07-01
The neutron activation technique is routinely used in fusion experiments to measure the neutron yields. This paper investigates the uncertainty in these measurements due to the uncertainties in dosimetry and activation reactions. For this purpose, activation cross-sections were taken from the International Reactor Dosimetry and Fusion File (IRDFF-v1.05) in 640-group ENDF-6 format for several reactions of interest for both 2.5 and 14 MeV neutrons. Activation coefficients (reaction rates) have been calculated using the neutron flux spectra at the JET vacuum vessel, both for DD and DT plasmas, calculated by MCNP in the required 640-energy-group format. The related uncertainties for the JET neutron spectra are evaluated as well using the covariance data available in the library. These uncertainties are in general small, but not negligible when high accuracy is required in the determination of the fusion neutron yields.
Improved pressure measurement system for calibration of the NASA LeRC 10x10 supersonic wind tunnel
NASA Technical Reports Server (NTRS)
Blumenthal, Philip Z.; Helland, Stephen M.
1994-01-01
This paper discusses a method used to provide a significant improvement in the accuracy of the Electronically Scanned Pressure (ESP) Measurement System by means of a fully automatic floating pressure generating system for the ESP calibration and reference pressures. This system was used to obtain test section Mach number and flow angularity measurements over the full envelope of test conditions for the 10 x 10 Supersonic Wind Tunnel. The uncertainty analysis and actual test data demonstrated that, for most test conditions, this method could reduce errors to about one-third to one-half that obtained with the standard system.
Do bioclimate variables improve performance of climate envelope models?
Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2012-01-01
Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
An Envelope Based Feedback Control System for Earthquake Early Warning: Reality Check Algorithm
NASA Astrophysics Data System (ADS)
Heaton, T. H.; Karakus, G.; Beck, J. L.
2016-12-01
Earthquake early warning systems are, in general, designed as open-loop control systems in the sense that the output, i.e., the warning messages, depends only on the input, i.e., recorded ground motions, up to the moment when the message is issued in real time. We propose the Reality Check Algorithm (RCA), which assesses the accuracy of issued warning messages and feeds the outcome of the assessment back into the system; the system then modifies its messages if necessary. That is, we propose to convert earthquake early warning systems into feedback control systems by integrating them with RCA. RCA works by continuously monitoring and comparing the observed ground motions' envelopes to the predicted envelopes of Virtual Seismologist (Cua 2005). The accuracy of the system's magnitude and location (both spatial and temporal) estimates is assessed separately by probabilistic classification models, which are trained by a Sparse Bayesian Learning technique called the Automatic Relevance Determination prior.
Retrovirus purification: method that conserves envelope glycoprotein and maximizes infectivity.
McGrath, M; Witte, O; Pincus, T; Weissman, I L
1978-01-01
A Sepharose 4B chromatographic method for the purification of retroviruses is described which, in comparison with conventional sucrose gradient ultracentrifugation techniques, was less time-consuming, increased purified virus yields, conserved viral glycoprotein, and increased recovery of biological infectivity. PMID:205680
Envelope filter sequence to delete blinks and overshoots.
Merino, Manuel; Gómez, Isabel María; Molina, Alberto J
2015-05-30
Eye movements have been used in control interfaces and as indicators of somnolence, workload and concentration. Different techniques can be used to detect them: we focus on the electrooculogram (EOG), in which two kinds of interference occur: blinks and overshoots. While they both draw bell-shaped waveforms, blinks are caused by the eyelid, whereas overshoots occur due to target localization error and are placed on the saccade. They need to be extracted from the EOG to increase processing effectiveness. This paper describes off- and online processing implementations based on the lower envelope for removing bell-shaped noise; they are compared with a 300-ms median filter. Techniques were analyzed using two kinds of EOG data: those modeled from our own design, and real signals. Using a model signal allowed us to compare filtered outputs with ideal data, so that it was possible to quantify processing precision in removing noise caused by blinks, overshoots, and general interferences. We analyzed the ability to delete blinks and overshoots, and waveform preservation. Our technique had a high capacity for reducing interference amplitudes (>97%), even exceeding median filter (MF) results. However, the MF obtained better waveform preservation, with a smaller dependence on fixation width. The proposed technique is better at deleting blinks and overshoots than the MF in model and real EOG signals.
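The two filter families compared above can be sketched with a running median versus a lower-envelope filter built from a running minimum, applied to a synthetic signal containing a step-like fixation change and a superimposed bell-shaped "blink". Window lengths and amplitudes are illustrative, not the paper's 300-ms setting:

```python
# Removing bell-shaped artifacts from a 1-D signal: running median vs.
# lower envelope (running minimum, lightly smoothed). Illustrative only.
import numpy as np

def median_filter(x, win):
    pad = win // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + win]) for i in range(len(x))])

def lower_envelope(x, win):
    pad = win // 2
    xp = np.pad(x, pad, mode="edge")
    mins = np.array([xp[i:i + win].min() for i in range(len(x))])
    return np.convolve(mins, np.ones(3) / 3.0, mode="same")  # light smoothing

t = np.arange(200.0)
fixation_step = np.where(t < 100, 0.0, 1.0)         # gaze change at t=100
blink = 5.0 * np.exp(-0.5 * ((t - 50) / 3.0) ** 2)  # bell-shaped artifact
signal = fixation_step + blink

med = median_filter(signal, 21)
low = lower_envelope(signal, 21)
# Both suppress the blink peak (~5) to near the baseline level; the median
# filter tracks the step edge more faithfully, as the paper reports.
```

On this toy signal the lower envelope flattens the blink completely but delays the fixation step by roughly half a window, matching the paper's observation that the median filter preserves waveforms better.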
Practical uncertainty reduction and quantification in shock physics measurements
Akin, M. C.; Nguyen, J. H.
2015-04-20
We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.
For many water quality-impaired stream segments, streamflow and water quality monitoring sites are not available. Lack of available streamflow data at impaired ungauged sites leads to uncertainties in total maximum daily load (TMDL) estimation. We developed a technique to minimiz...
Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.
2010-01-01
Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
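The resampling-plus-projection idea above can be sketched by perturbing vital rates and recomputing the population growth rate (dominant eigenvalue of a Leslie matrix) on each resample. The vital rates and standard errors below are invented for illustration, not the red fox data:

```python
# Bootstrap-style confidence interval for a population growth rate
# (dominant eigenvalue of a 2x2 Leslie matrix). Invented vital rates.
import math
import random

def growth_rate(survival, fecundities):
    """Dominant eigenvalue of L = [[f1, f2], [s, 0]] by power iteration."""
    f1, f2 = fecundities
    s = survival
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(200):
        w = [f1 * v[0] + f2 * v[1], s * v[0]]
        lam = math.hypot(w[0], w[1])
        v = [w[0] / lam, w[1] / lam]
    return lam

random.seed(1)
n_boot = 1000
lams = sorted(
    growth_rate(
        min(max(random.gauss(0.6, 0.05), 0.0), 1.0),   # juvenile survival
        (max(random.gauss(0.8, 0.10), 0.0),            # age-1 fecundity
         max(random.gauss(1.5, 0.15), 0.0)),           # age-2+ fecundity
    )
    for _ in range(n_boot)
)
lo, hi = lams[int(0.025 * n_boot)], lams[int(0.975 * n_boot)]
print(round(lo, 2), round(hi, 2))  # approximate 95% interval
```

Repeating the exercise with standard errors halved shows directly how much extra sampling effort is needed to narrow the interval.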
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
The planetary nebula IC 4776 and its post-common-envelope binary central star
NASA Astrophysics Data System (ADS)
Sowicka, Paulina; Jones, David; Corradi, Romano L. M.; Wesson, Roger; García-Rojas, Jorge; Santander-García, Miguel; Boffin, Henri M. J.; Rodríguez-Gil, Pablo
2017-11-01
We present a detailed analysis of IC 4776, a planetary nebula displaying a morphology believed to be typical of central star binarity. The nebula is shown to comprise a compact hourglass-shaped central region and a pair of precessing jet-like structures. Time-resolved spectroscopy of its central star reveals a periodic radial velocity variability consistent with a binary system. Whilst the data are insufficient to accurately determine the parameters of the binary, the most likely solutions indicate that the secondary is probably a low-mass main-sequence star. An empirical analysis of the chemical abundances in IC 4776 indicates that the common-envelope phase may have cut short the asymptotic giant branch evolution of the progenitor. Abundances calculated from recombination lines are found to be discrepant by a factor of approximately 2 relative to those calculated using collisionally excited lines, suggesting a possible correlation between low-abundance discrepancy factors and intermediate-period post-common-envelope central stars and/or Wolf-Rayet central stars. The detection of a radial velocity variability associated with the binarity of the central star of IC 4776 may be indicative of a significant population of (intermediate-period) post-common-envelope binary central stars that would be undetected by classic photometric monitoring techniques.
NASA Technical Reports Server (NTRS)
Coen, Peter G.
1991-01-01
A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
Pathogen reduction in human plasma using an ultrashort pulsed laser
USDA-ARS?s Scientific Manuscript database
Pathogen reduction is an ideal approach to ensure the continued safety of the blood supply against emerging pathogens. However, the currently licensed pathogen reduction techniques are ineffective against non-enveloped viruses, and they introduce chemicals with concerns of side effects which prevent...
Helioseismic measurements in the solar envelope using group velocities of surface waves
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Baturin, V. A.; Ayukov, S. V.; Gryaznov, V. K.
2014-07-01
At intermediate- and high-degree l, solar p and f modes can be considered as surface waves. Using the variational principle, we derive an integral expression for the group velocities of the surface waves in terms of adiabatic eigenfunctions of normal modes, and address the benefits of using group-velocity measurements as a supplementary diagnostic tool in solar seismology. The principal advantage of using group velocities, when compared with direct analysis of the oscillation frequencies, comes from their smaller sensitivity to the uncertainties in the near-photospheric layers. We address some numerical examples where group velocities are used to reveal inconsistencies between the solar models and the seismic data. Further, we apply the group-velocity measurements to the calibration of the specific entropy, helium abundance Y, and heavy-element abundance Z in the adiabatically stratified part of the solar convective envelope, using different recent versions of the equation of state. The results are in close agreement with our earlier measurements based on more sophisticated analysis of the solar oscillation frequencies. These results bring further support to the downward revision of the solar heavy-element abundances in recent spectroscopic measurements.
Extraction of stability and control derivatives from orbiter flight data
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Shafer, Mary F.
1993-01-01
The Space Shuttle Orbiter has provided unique and important information on aircraft flight dynamics. This information has provided the opportunity to assess the flight-derived stability and control derivatives for maneuvering flight in the hypersonic regime. In the case of the Space Shuttle Orbiter, these derivatives are required to determine if certain configuration placards (limitations on the flight envelope) can be modified. These placards were determined on the basis of preflight predictions and the associated uncertainties. As flight-determined derivatives are obtained, the placards are reassessed, and some of them are removed or modified. Extraction of the stability and control derivatives was justified by operational considerations and not by research considerations. Using flight results to update the predicted database of the orbiter is one of the most completely documented processes for a flight vehicle. This process followed from the requirement for analysis of flight data for control system updates and for expansion of the operational flight envelope. These results show significant changes in many important stability and control derivatives from the preflight database. This paper presents some of the stability and control derivative results obtained from Space Shuttle flights. Some of the limitations of this information are also examined.
Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-08-07
The sensitivity and uncertainty analysis course will introduce students to k_eff sensitivity data and cross-section uncertainty data, how k_eff sensitivity data and k_eff uncertainty data are generated, and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in the development of upper subcritical limits.
Control of relative carrier-envelope phase slip in femtosecond Ti:sapphire and Cr:forsterite lasers.
Kobayashi, Yohei; Torizuka, Kenji; Wei, Zhiyi
2003-05-01
We were able to control relative carrier-envelope phase slip among mode-locked Ti:sapphire and Cr:forsterite lasers by employing electronic feedback. The pulse timings of these lasers were passively synchronized with our crossing-beam technique. Since the optical-frequency ratio of Ti:sapphire and Cr:forsterite is approximately 3:2, we can observe the phase relation by superimposing the third harmonic of Cr:forsterite and the second harmonic of Ti:sapphire lasers in time and in space. The spectrum width of the locked beat note was less than 3 kHz, which corresponds to the controlled fluctuation of a cavity-length difference of less than 10 pm.
Nishigami, Misako; Mori, Takaaki; Tomita, Masahiro; Takiguchi, Kingo; Tsumoto, Kanta
2017-07-01
Giant proteoliposomes are generally useful as artificial cell membranes in biochemical and biophysical studies, and various procedures for their preparation have been reported. We present here a novel preparation technique that involves the combination of i) cell-sized lipid vesicles (giant unilamellar vesicles, GUVs) that are generated using the droplet-transfer method, where lipid monolayer-coated water-in-oil microemulsion droplets interact with oil/water interfaces to form enclosed bilayer vesicles, and ii) budded viruses (BVs) of baculovirus (Autographa californica nucleopolyhedrovirus) that express recombinant transmembrane proteins on their envelopes. GP64, a fusogenic glycoprotein on viral envelopes, is activated by weak acids and is thought to cause membrane fusion with liposomes. Using confocal laser scanning microscopy (CLSM), we observed that the single giant liposomes fused with octadecyl rhodamine B chloride (R18)-labeled wild-type BV envelopes with moderate leakage of entrapped soluble compounds (calcein), and the fusion profile depended on the pH of the exterior solution: membrane fusion occurred at pH ∼4-5. We further demonstrated that recombinant transmembrane proteins, a red fluorescent protein (RFP)-tagged GPCR (corticotropin-releasing hormone receptor 1, CRHR1) and envelope protein GP64 could be partly incorporated into membranes of the individual giant liposomes with a reduction of the pH value, though there were also some immobile fluorescent spots observed on their circumferences. This combination may be useful for preparing giant proteoliposomes containing the desired membranes and inner phases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
This report presents the results from several demonstrations of a new method for sealing building envelope air leaks using an aerosol sealing process developed by the Western Cooling Efficiency Center at UC Davis. The process involves pressurizing a building while applying an aerosol sealant to the interior. As air escapes through leaks in the building envelope, the aerosol particles are transported to the leaks where they collect and form a seal that blocks the leak. Standard blower door technology is used to facilitate the building pressurization, which allows the installer to track the sealing progress during the installation and automatically verify the final building tightness. Each aerosol envelope sealing installation was performed after drywall was installed and taped, and the process did not appear to interrupt the construction schedule or interfere with other trades working in the homes. The labor needed to physically seal bulk air leaks in typical construction will not be replaced by this technology. However, this technology is capable of bringing the air leakage of a building that was built with standard construction techniques and HERS-verified sealing down to levels that would meet DOE Zero Energy Ready Homes program requirements. When a developer is striving to meet a tighter envelope leakage specification, this technology could greatly reduce the cost to achieve that goal by providing a simple and relatively low cost method for reducing the air leakage of a building envelope with little to no change in their common building practices.
Parvoviruses Cause Nuclear Envelope Breakdown by Activating Key Enzymes of Mitosis
Porwal, Manvi; Cohen, Sarah; Snoussi, Kenza; Popa-Wagner, Ruth; Anderson, Fenja; Dugot-Senant, Nathalie; Wodrich, Harald; Dinsart, Christiane; Kleinschmidt, Jürgen A.; Panté, Nelly; Kann, Michael
2013-01-01
Disassembly of the nuclear lamina is essential in mitosis and apoptosis, requiring multiple coordinated enzymatic activities in the nucleus and cytoplasm. Activation and coordination of the different activities is poorly understood and is moreover complicated as some factors translocate between cytoplasm and nucleus in preparatory phases. Here we used the ability of parvoviruses to induce nuclear membrane breakdown to understand the triggers of key mitotic enzymes. Nuclear envelope disintegration was shown upon infection and microinjection, but also upon application of the viruses to permeabilized cells. The latter technique also showed that nuclear envelope disintegration was independent of soluble cytoplasmic factors. Using time-lapse microscopy, we observed that nuclear disassembly exhibited mitosis-like kinetics and occurred suddenly, implying a catastrophic event irrespective of the cell type or the parvovirus used. Analyzing the order of the processes allowed us to propose a model starting with direct binding of parvoviruses to distinct proteins of the nuclear pore, causing structural rearrangement of the parvoviruses. The resulting exposure of domains comprising amphipathic helices was required for nuclear envelope disintegration, which comprised disruption of the inner and outer nuclear membranes as shown by electron microscopy. Consistent with Ca2+ efflux from the lumen between the inner and outer nuclear membranes, we found that Ca2+ was essential for nuclear disassembly by activating PKC. PKC activation then triggered activation of cdk-2, which became further activated by caspase-3. Collectively, our study shows a unique interaction of a virus with the nuclear envelope, provides evidence that a nuclear pool of executing enzymes is sufficient for nuclear disassembly in quiescent cells, and demonstrates that nuclear disassembly can be uncoupled from the initial phases of mitosis. PMID:24204256
A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.
ERIC Educational Resources Information Center
Paul, James E., Jr.
Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…
Benchmarking in Universities: League Tables Revisited
ERIC Educational Resources Information Center
Turner, David
2005-01-01
This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…
HBCU Efficiency and Endowments: An Exploratory Analysis
ERIC Educational Resources Information Center
Coupet, Jason; Barnum, Darold
2010-01-01
Discussions of efficiency among Historically Black Colleges and Universities (HBCUs) are often missing in academic conversations. This article seeks to assess efficiency of individual HBCUs using Data Envelopment Analysis (DEA), a non-parametric technique that can synthesize multiple inputs and outputs to determine a single efficiency score for…
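Both of the entries above rely on data envelopment analysis (DEA). Full DEA solves a linear program per decision-making unit; in the degenerate single-input, single-output case it reduces to scaling each unit's output/input ratio by the best ratio in the sample, which is enough to sketch the idea. All numbers below are hypothetical:

```python
# Degenerate single-input, single-output DEA sketch: efficiency is the
# output/input ratio scaled by the best ratio observed. Multi-factor DEA
# instead solves a linear program per unit. Hypothetical numbers.

def dea_single(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# spending (input) vs. graduates (output) for four hypothetical institutions
eff = dea_single([100, 80, 120, 90], [50, 48, 50, 36])
print([round(e, 3) for e in eff])  # → [0.833, 1.0, 0.694, 0.667]
```

The unit with the best output-per-input ratio defines the efficiency frontier and scores 1.0; every other unit is scored relative to it.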
How to find what you don't know: Visualising variability in 3D geological models
NASA Astrophysics Data System (ADS)
Lindsay, Mark; Wellmann, Florian; Jessell, Mark; Ailleres, Laurent
2014-05-01
Uncertainties in input data can have compounding effects on the predictive reliability of three-dimensional (3D) geological models. Resource exploration, tectonic studies and environmental modelling can be compromised by using 3D models that misrepresent the target geology, and drilling campaigns that attempt to intersect particular geological units guided by 3D models are at risk of failure if the exploration geologist is unaware of inherent uncertainties. In addition, the visual inspection of 3D models is often the first contact decision makers have with the geology, thus visually communicating the presence and magnitude of uncertainties contained within geological 3D models is critical. Unless uncertainties are presented early in the relationship between decision maker and model, the model will be considered more truthful than the uncertainties allow with each subsequent viewing. We present a selection of visualisation techniques that provide the viewer with an insight to the location and amount of uncertainty contained within a model, and the geological characteristics which are most affected. A model of the Gippsland Basin, southeastern Australia is used as a case study to demonstrate the concepts of information entropy, stratigraphic variability and geodiversity. Central to the techniques shown here is the creation of a model suite, performed by creating similar (but not the same) version of the original model through perturbation of the input data. Specifically, structural data in the form of strike and dip measurements is perturbed in the creation of the model suite. The visualisation techniques presented are: (i) information entropy; (ii) stratigraphic variability and (iii) geodiversity. Information entropy is used to analyse uncertainty in a spatial context, combining the empirical probability distributions of multiple outcomes with a single quantitative measure. 
Stratigraphic variability displays the number of possible lithologies that may exist at a given point within the model volume. Geodiversity analyses various model characteristics (or 'geodiversity metrics'), including the depth, volume of unit, the curvature of an interface, the geological complexity of a contact and the contact relationships units have with each other. Principal component analysis, a multivariate statistical technique, is used to simultaneously examine each of the geodiversity metrics to determine the boundaries of model space, and identify which metrics contribute most to model uncertainty. The combination of information entropy, stratigraphic variability and geodiversity analysis provides a descriptive and thorough representation of uncertainty with effective visualisation techniques that clearly communicate the geological uncertainty contained within the geological model.
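The per-cell measures described above, information entropy and stratigraphic variability over a suite of perturbed realizations, can be sketched with toy one-dimensional "models" in which each letter denotes a lithology:

```python
# Per-cell uncertainty measures over a suite of perturbed realizations.
# Toy 1-D "models": each string is one realization, each letter a lithology.
import math
from collections import Counter

def cell_entropy(outcomes):
    """Information entropy (bits) of lithology outcomes at one cell."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

def stratigraphic_variability(outcomes):
    """Number of distinct lithologies observed at one cell."""
    return len(set(outcomes))

suite = ["AABBC", "AABCC", "AABBC"]  # three realizations, five cells
for cell in range(5):
    outcomes = [model[cell] for model in suite]
    print(cell, stratigraphic_variability(outcomes),
          round(cell_entropy(outcomes), 3))
# Cells where every realization agrees have entropy 0; cells where the
# realizations disagree have positive entropy and variability > 1.
```

In the 3-D case the same computation runs per voxel, and the entropy volume is what gets visualised.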
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Overall uncertainty study of the hydrological impacts of climate change for a Canadian watershed
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, FrançOis P.; Poulin, Annie; Leconte, Robert
2011-12-01
General circulation models (GCMs) and greenhouse gas emissions scenarios (GGES) are generally considered to be the two major sources of uncertainty in quantifying the climate change impacts on hydrology. Other sources of uncertainty have been given less attention. This study considers overall uncertainty by combining results from an ensemble of two GGES, six GCMs, five GCM initial conditions, four downscaling techniques, three hydrological model structures, and 10 sets of hydrological model parameters. Each climate projection is equally weighted to predict the hydrology on a Canadian watershed for the 2081-2100 horizon. The results show that the choice of GCM is consistently a major contributor to uncertainty. However, other sources of uncertainty, such as the choice of a downscaling method and the GCM initial conditions, also have a comparable or even larger uncertainty for some hydrological variables. Uncertainties linked to GGES and the hydrological model structure are somewhat less than those related to GCMs and downscaling techniques. Uncertainty due to the hydrological model parameter selection has the least important contribution among all the variables considered. Overall, this research underlines the importance of adequately covering all sources of uncertainty. A failure to do so may result in moderately to severely biased climate change impact studies. Results further indicate that the major contributors to uncertainty vary depending on the hydrological variables selected, and that the methodology presented in this paper is successful at identifying the key sources of uncertainty to consider for a climate change impact study.
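A one-way variance decomposition of an ensemble, of the kind used to apportion spread among uncertainty sources, can be sketched as follows. The runoff values and two-factor design are invented for illustration and are far smaller than the paper's ensemble:

```python
# Apportioning ensemble spread among uncertainty sources with a simple
# one-way variance decomposition. Hypothetical runoff values.
from statistics import mean, pvariance

# toy ensemble: mean annual runoff (mm) indexed by (GCM, downscaling method)
runs = {("gcm1", "dsA"): 410.0, ("gcm1", "dsB"): 430.0,
        ("gcm2", "dsA"): 520.0, ("gcm2", "dsB"): 540.0}

def variance_share(runs, factor_index):
    """Between-group variance of factor-level means over total variance."""
    total = pvariance(list(runs.values()))
    levels = {k[factor_index] for k in runs}
    level_means = [mean(v for k, v in runs.items() if k[factor_index] == lev)
                   for lev in levels]
    return pvariance(level_means) / total

print(round(variance_share(runs, 0), 3))  # GCM share → 0.968
print(round(variance_share(runs, 1), 3))  # downscaling share → 0.032
```

In this balanced toy design with no interaction the two shares sum to one; real ensembles additionally carry interaction terms, which is why studies like the one above use all sources jointly.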
Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska
Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.
2016-01-01
Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground‐coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope‐based techniques in our analyses. For distant sources and planar waves, we use f‐k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal‐to‐noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions from Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short‐term average/long‐term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013–2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggests that high‐resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult‐to‐monitor regions. We have implemented these GCA detection algorithms into our operational volcano‐monitoring algorithms at the Alaska Volcano Observatory.
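GCA detection as described above combines a waveform envelope with a standard detection statistic. The following is a minimal sketch of an envelope-based short-term-average/long-term-average (STA/LTA) detector; the signal, arrival time, and amplitudes are all synthetic, and this is not the modified envelope function or operational AVO code:

```python
import numpy as np
from scipy.signal import hilbert

def sta_lta(envelope, sta_len, lta_len):
    """Classic centred STA/LTA detector applied to a seismic envelope."""
    sta = np.convolve(envelope, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(envelope, np.ones(lta_len) / lta_len, mode="same")
    return sta / np.maximum(lta, 1e-12)

fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
signal = 0.05 * np.random.default_rng(1).standard_normal(t.size)
# Hypothetical explosion airwave: Gaussian-windowed 5 Hz arrival at t = 30 s.
signal += np.exp(-((t - 30.0) ** 2) / 2.0) * np.sin(2 * np.pi * 5 * t)

envelope = np.abs(hilbert(signal))          # analytic-signal envelope
ratio = sta_lta(envelope, sta_len=int(1 * fs), lta_len=int(10 * fs))
trigger_time = t[np.argmax(ratio)]
```

Because GCA waveforms are incoherent between stations, the detector operates on the envelope rather than the raw waveform; the paper's modified envelope function further reshapes this envelope to boost the STA/LTA contrast.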
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification means quantifying and reducing uncertainties; the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate them through the model so that predictive estimates can be made with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the within-host infection dynamics of HIV. They are also used to make predictive estimates of viral loads and T-cell counts and to construct optimal controls for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, meaning that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process that determines which parameters have minimal impact on the model response. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities, and correlations obtained using DRAM, DREAM, and the direct evaluation of Bayes' formula, and we perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64], which tests the equality of distributions, to compare the densities obtained by the different methods for the HIV model. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
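The verification idea described in this dissertation, checking a sampling-based Metropolis result against a case where the posterior is available in closed form, can be illustrated minimally. The sketch below runs a plain random-walk Metropolis sampler (the core that DRAM and DREAM elaborate on with delayed rejection, adaptation, and differential-evolution proposals) on a one-parameter Gaussian model; all data and noise levels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 50 noisy observations of a scalar parameter theta.
true_theta, sigma = 1.5, 0.3
data = true_theta + sigma * rng.standard_normal(50)

def log_post(theta):
    # Flat prior; Gaussian likelihood with known noise sigma, so the
    # analytic posterior mean is simply the sample mean of the data.
    return -0.5 * np.sum((data - theta) ** 2) / sigma**2

# Plain random-walk Metropolis.
theta, chain = 0.0, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior_mean = np.mean(chain[5000:])        # discard burn-in
```

Verification here means comparing `posterior_mean` (and, in a fuller check, the whole chain density) against the analytic posterior; the dissertation performs the analogous comparison against direct numerical evaluation of Bayes' formula for models without closed-form posteriors.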
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
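The propagation approach described above can be sketched in a few lines. The example below is a hypothetical illustration, not the NTF data reduction equations: it propagates assumed standard uncertainties in total and static pressure through the isentropic Mach relation by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(4)
gamma = 1.4

def mach(p0, p):
    """Isentropic Mach number from total (p0) and static (p) pressure."""
    return np.sqrt(2.0 / (gamma - 1.0)
                   * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))

# Nominal condition with invented standard uncertainties (Pa); the static-
# pressure uncertainty is deliberately made the larger contributor.
p0_nom, p_nom = 150_000.0, 101_325.0
u_p0, u_p = 75.0, 150.0

n = 100_000
mach_draws = mach(rng.normal(p0_nom, u_p0, n), rng.normal(p_nom, u_p, n))

mach_nom = mach(p0_nom, p_nom)
u_mach = np.std(mach_draws)       # combined standard uncertainty in Mach
```

Repeating the sampling with one input held fixed at its nominal value isolates that input's contribution, which is how a dominance statement like "Mach number random uncertainty is dominated by static pressure variation" can be checked.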
Bayesian analysis of stage-fall-discharge rating curves and their uncertainties
NASA Astrophysics Data System (ADS)
Mansanarez, Valentin; Le Coz, Jérôme; Renard, Benjamin; Lang, Michel; Pierrefeu, Gilles; Le Boursicaud, Raphaël; Pobanz, Karine
2016-04-01
Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. Building on existing Bayesian approaches, we introduce an original hydraulics-based method for developing SFD rating curves used at twin gauge stations and estimating their uncertainties. Conventional power functions for channel and section controls are used, and the transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The difference between the reference levels at the two stations is estimated as another uncertain parameter of the SFD model. The proposed method incorporates information from both hydraulic knowledge (equations of channel or section controls) and the stage-fall-discharge observations (gauging data). The resulting total uncertainty combines the parametric uncertainty and the remnant uncertainty related to the rating-curve model. The method provides a direct estimation of the physical inputs of the rating curve (roughness, width, bed slope, distance between twin gauges, etc.). Its performance is tested on a case affected by the variable backwater of a run-of-the-river dam: the Rhône River at Valence, France. In particular, a sensitivity analysis to the prior information and to the gauging dataset is performed. At that site, the stage-fall-discharge domain is well documented, with gaugings conducted over a range of backwater-affected and unaffected conditions. The performance of the new model was deemed satisfactory. Notably, the transition to uniform flow is correctly simulated when the overall range of the auxiliary stage is gauged. The resulting curves are in good agreement with the observations (gaugings), and their uncertainty envelopes are acceptable for computing streamflow records.
Similar conclusions were drawn from applications to other comparable sites.
Applying Metrological Techniques to Satellite Fundamental Climate Data Records
NASA Astrophysics Data System (ADS)
Woolliams, Emma R.; Mittaz, Jonathan P. D.; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.
2018-02-01
Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record and on understanding the sources of uncertainty in historic, current, and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms such correlation can take, and we consider how uncertainty is propagated under partial correlation. We give a form of the Law of Propagation of Uncertainties that accounts for the propagation of uncertainties associated with common errors, yielding the covariance associated with Earth observations in different spectral channels.
Plasma protein biomarkers associated with exposure of rainbow trout (Oncorhynchus mykiss) to 17β-estradiol were isolated and identified using novel sample preparation techniques and state-of-the-art mass spectrometry and bioinformatics approaches. Juvenile male and female trout ...
Preservation Concerns in Construction and Remodeling of Libraries: Planning for Preservation.
ERIC Educational Resources Information Center
Trinkley, Michael
To help libraries and other holdings institutions better incorporate preservation concerns in construction, renovation, and routine maintenance, various techniques are presented that allow preservation concerns to be integrated. The following topics are considered: (1) site selection; (2) design of the building envelope; (3) the library interior;…
Estimating School Efficiency: A Comparison of Methods Using Simulated Data.
ERIC Educational Resources Information Center
Bifulco, Robert; Bretschneider, Stuart
2001-01-01
Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
NASA Astrophysics Data System (ADS)
Kuschmierz, R.; Czarske, J.; Fischer, A.
2014-08-01
Optical measurement techniques offer great opportunities in diverse applications, such as lathe monitoring and microfluidics. Doppler-based interferometric techniques enable simultaneous measurement of the lateral velocity and axial distance of a moving object. However, there is a complementarity between the unambiguous axial measurement range and the uncertainty of the distance. Therefore, we present an extended sensor setup, which provides an unambiguous axial measurement range of 1 mm while achieving uncertainties below 100 nm. Measurements at a calibration system are performed. When using a pinhole for emulating a single scattering particle, the tumbling motion of the rotating object is resolved with a distance uncertainty of 50 nm. For measurements at the rough surface, the distance uncertainty amounts to 280 nm due to a lower signal-to-noise ratio. Both experimental results are close to the respective Cramér-Rao bound, which is derived analytically for both surface and single particle measurements.
Real-time open-loop frequency response analysis of flight test data
NASA Technical Reports Server (NTRS)
Bosworth, J. T.; West, J. C.
1986-01-01
A technique has been developed to compare the open-loop frequency response of a flight test aircraft in real time with linear analysis predictions. The result is direct feedback to the flight control systems engineer on the validity of the predictions, adding confidence for proceeding with envelope expansion. Further, gain and phase margins can be tracked for trends, much as structural dynamics engineers track structural modal damping.
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
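The filtering idea can be illustrated on a toy version of the problem. The sketch below uses a plain linear scalar Kalman filter (rather than the extended or Schmidt-Kalman filters used in the paper) to track a decaying effective temperature from noisy measurements; the cooling curve, time constants, and noise covariances are all invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic effective-temperature decay (K), loosely mimicking a TiRe-LII
# cooling curve sampled every 10 ns with a 500 ns time constant.
dt, tau, n = 1e-8, 5e-7, 200
T_true = 3000.0 * np.exp(-np.arange(n) * dt / tau) + 300.0
meas = T_true + 30.0 * rng.standard_normal(n)

# Scalar Kalman filter on the state x = T - 300 with decay dynamics
# x_{k+1} = a * x_k.  Q (process) and R (measurement) are assumed.
a = np.exp(-dt / tau)
Q, R = 5.0**2, 30.0**2
x, P = meas[0] - 300.0, 100.0**2
estimates = []
for z in meas:
    x, P = a * x, a * a * P + Q            # predict
    K = P / (P + R)                        # Kalman gain
    x = x + K * (z - 300.0 - x)            # update with innovation
    P = (1 - K) * P
    estimates.append(x + 300.0)
estimates = np.array(estimates)

rmse_filter = np.sqrt(np.mean((estimates - T_true) ** 2))
rmse_raw = np.sqrt(np.mean((meas - T_true) ** 2))
```

The filter's posterior covariance `P` is the time-resolved uncertainty estimate the paper exploits; the Schmidt-Kalman variant additionally carries the covariance of uncertain-but-unestimated model parameters, which is why its uncertainty estimates are more realistic.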
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
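A convergence criterion of the kind described can be derived from the central limit theorem: the confidence-interval half-width on a mean velocity shrinks as the per-sample scatter divided by the square root of the number of independent snapshots. The sketch below uses invented numbers and is an illustration of the idea, not the authors' framework:

```python
import numpy as np

def samples_for_converged_mean(u_rms, u_inst, target_ci, z=1.96):
    """Number of independent PIV snapshots needed so the half-width of the
    confidence interval on the mean velocity falls below target_ci.
    The per-sample scatter combines turbulent fluctuations (u_rms) with
    the instantaneous measurement uncertainty (u_inst) in quadrature."""
    sigma = np.hypot(u_rms, u_inst)
    return int(np.ceil((z * sigma / target_ci) ** 2))

# Hypothetical flow: 0.5 m/s turbulent rms, 0.1 m/s per-vector uncertainty,
# mean velocity wanted to within 0.05 m/s at 95% confidence.
n_required = samples_for_converged_mean(0.5, 0.1, 0.05)
```

Per-vector uncertainty estimates from PIV Uncertainty Quantification feed directly into `u_inst`, which is how instantaneous uncertainty estimates translate into a sampling plan for converged statistics.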
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Prior to 1978, the Wilsonville Advanced Coal Liquefaction facility material balance surrounded only the thermal liquefaction unit and involved analyses of only the slurry stream and individual gas streams. The distillate solvent yield was determined by difference. Subsequently, several modifications and additional process units were introduced to this single-unit system. With the inclusion of the deashing unit in 1978 and the catalytic hydrogenation unit in 1981, the process has evolved into a sophisticated two-stage coal liquefaction process and has the potential for various modes of integration. This report presents an elemental balancing procedure and a simplified presentation format that is sufficiently flexible to meet current and future needs. The development of the elemental balancing technique and the relevant computer programs to handle the calculations have been addressed. This will be useful in modelling individual unit performance as well as determining the impact of each unit on the overall liquefaction system, provided the units are on a steady-state basis. Five different material balance envelopes are defined. Three of these envelopes pertain to the individual units (the thermal liquefaction or TL unit, the Critical Solvent Deashing or CSD unit, and the H-Oil Ebullated Bed Hydrotreating or HTR unit). The fourth, single-stage material balance envelope combines the TL and CSD units. The fifth envelope is the two-stage configuration combining all three units. 3 references.
Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator
NASA Astrophysics Data System (ADS)
Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong
2011-04-01
In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient properties of the electrical signals and thereby evaluate bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals, such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are each separated into three phases, namely envelope rising, stable, and damping phases, to capture subtle waveform changes. Different waveform features are extracted from each phase of these subband envelopes. Principal components analysis (PCA) is used for feature selection in order to remove redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify complex bond fault patterns. Analysis of experimental data with the proposed feature extraction method and neural network demonstrates their effectiveness in detecting and identifying bond quality.
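The PCA dimension-reduction step described above can be sketched via the singular value decomposition. The feature matrix below is synthetic (randomly mixed latent factors standing in for the real subband-envelope features), so this illustrates the technique rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical feature matrix: 100 bonds x 12 envelope features, built so
# the features are highly correlated (as subband envelope measures are):
# 3 latent factors mixed into 12 features plus a little noise.
latent = rng.standard_normal((100, 3))
mixing = rng.standard_normal((3, 12))
features = latent @ mixing + 0.05 * rng.standard_normal((100, 12))

# PCA via SVD of the centred data; keep enough components to explain
# 95% of the variance.
X = features - features.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = (s**2) / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
reduced = X @ Vt[:k].T        # low-dimensional inputs for the classifier
```

The reduced matrix is what would feed the ANN classifier; with strongly correlated features, `k` comes out far below the original feature count.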
Yield Determination of Underground and Near Surface Explosions
NASA Astrophysics Data System (ADS)
Pasyanos, M.
2015-12-01
As seismic coverage of the Earth's surface continues to improve, we are faced with signals from a wide variety of explosions, ranging from oil train and ordnance explosions to military and terrorist attacks, as well as underground nuclear tests. We present a method for determining the yield of underground and near-surface explosions, which should be applicable to many of these. We first review the regional envelope method developed for underground explosions (Pasyanos et al., 2012) and more recently modified for near-surface explosions (Pasyanos and Ford, 2015). The technique models the waveform envelope templates as a product of source, propagation (geometrical spreading and attenuation), and site terms, with an additional surface-effect term for near-surface explosions. Yields and depths are determined by comparing the observed envelopes to the templates and minimizing the misfit. We then apply the method to nuclear and chemical explosions spanning a range of yields, depths, and distances. We review results from previous work and show new examples from ordnance explosions in Scandinavia, nuclear explosions in Eurasia, and chemical explosions in Nevada associated with the Source Physics Experiments (SPE).
Executive-Attentional Uncertainty Responses by Rhesus Macaques ("Macaca mulatta")
ERIC Educational Resources Information Center
Smith, J. David; Coutinho, Mariana V. C.; Church, Barbara A.; Beran, Michael J.
2013-01-01
The uncertainty response has been influential in studies of human perception, and it is crucial in the growing research literature that explores animal metacognition. However, the uncertainty response's interpretation is still sharply debated. The authors sought to clarify this interpretation using the dissociative technique of cognitive loads…
NASA Astrophysics Data System (ADS)
Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo
2018-05-01
An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
Using multiple sensors for printed circuit board insertion
NASA Technical Reports Server (NTRS)
Sood, Deepak; Repko, Michael C.; Kelley, Robert B.
1989-01-01
As more and more activities are performed in space, there will be a greater demand placed on the information handling capacity of people who are to direct and accomplish these tasks. A promising alternative to full-time human involvement is the use of semi-autonomous, intelligent robot systems. To automate tasks such as assembly, disassembly, repair and maintenance, the issues presented by environmental uncertainties need to be addressed. These uncertainties are introduced by variations in the computed position of the robot at different locations in its work envelope, variations in part positioning, and tolerances of part dimensions. As a result, the robot system may not be able to accomplish the desired task without the help of sensor feedback. Measurements on the environment allow real time corrections to be made to the process. A design and implementation of an intelligent robot system which inserts printed circuit boards into a card cage are presented. Intelligent behavior is accomplished by coupling the task execution sequence with information derived from three different sensors: an overhead three-dimensional vision system, a fingertip infrared sensor, and a six degree of freedom wrist-mounted force/torque sensor.
NASA Astrophysics Data System (ADS)
Pascoe, D. J.; Anfinogentov, S.; Nisticò, G.; Goddard, C. R.; Nakariakov, V. M.
2017-04-01
Context. The strong damping of kink oscillations of coronal loops can be explained by mode coupling. The damping envelope depends on the transverse density profile of the loop. Observational measurements of the damping envelope have been used to determine the transverse loop structure which is important for understanding other physical processes such as heating. Aims: The general damping envelope describing the mode coupling of kink waves consists of a Gaussian damping regime followed by an exponential damping regime. Recent observational detection of these damping regimes has been employed as a seismological tool. We extend the description of the damping behaviour to account for additional physical effects, namely a time-dependent period of oscillation, the presence of additional longitudinal harmonics, and the decayless regime of standing kink oscillations. Methods: We examine four examples of standing kink oscillations observed by the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). We use forward modelling of the loop position and investigate the dependence on the model parameters using Bayesian inference and Markov chain Monte Carlo (MCMC) sampling. Results: Our improvements to the physical model combined with the use of Bayesian inference and MCMC produce improved estimates of model parameters and their uncertainties. Calculation of the Bayes factor also allows us to compare the suitability of different physical models. We also use a new method based on spline interpolation of the zeroes of the oscillation to accurately describe the background trend of the oscillating loop. Conclusions: This powerful and robust method allows for accurate seismology of coronal loops, in particular the transverse density profile, and potentially reveals additional physical effects.
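The damping envelope discussed here, Gaussian at early times followed by exponential decay, can be written down directly. The sketch below uses invented switch time and damping times and matches the two regimes only by continuity; it illustrates the shape of the general damping profile rather than reproducing the paper's fits to AIA data:

```python
import numpy as np

def damping_envelope(t, t_switch, tau_g, tau_e):
    """Mode-coupling damping envelope for kink oscillations: Gaussian
    damping for t < t_switch, exponential afterwards, matched so the
    profile is continuous at the switch time."""
    t = np.asarray(t, dtype=float)
    gauss = np.exp(-t**2 / (2 * tau_g**2))
    amp_switch = np.exp(-t_switch**2 / (2 * tau_g**2))
    expo = amp_switch * np.exp(-(t - t_switch) / tau_e)
    return np.where(t < t_switch, gauss, expo)

# Time in units of the oscillation period; parameter values are invented.
t = np.linspace(0, 30, 301)
env = damping_envelope(t, t_switch=5.0, tau_g=6.0, tau_e=8.0)
```

In seismological use, the switch time and the two damping times are the quantities inferred from observations (here via Bayesian inference and MCMC), because they encode the transverse density profile of the loop.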
Factoring uncertainty into restoration modeling of in-situ leach uranium mines
Johnson, Raymond H.; Friedel, Michael J.
2009-01-01
Postmining restoration is one of the greatest concerns for uranium in-situ leach (ISL) mining operations. The ISL-affected aquifer needs to be returned to conditions specified in the mining permit (either premining or other specified conditions). When uranium ISL operations are completed, postmining restoration is usually achieved by injecting reducing agents into the mined zone. The objective of this process is to restore the aquifer to premining conditions by reducing the solubility of uranium and other metals in the ground water. Reactive transport modeling is a potentially useful method for simulating the effectiveness of proposed restoration techniques. While reactive transport models can be useful, they are a simplification of reality that introduces uncertainty through the model conceptualization, parameterization, and calibration processes. For this reason, quantifying the uncertainty in simulated temporal and spatial hydrogeochemistry is important for postremedial risk evaluation of metal concentrations and mobility. Quantifying the range of uncertainty in key predictions (such as uranium concentrations at a specific location) can be achieved using forward Monte Carlo or other inverse modeling techniques (trial-and-error parameter sensitivity, calibration constrained Monte Carlo). These techniques provide simulated values of metal concentrations at specified locations that can be presented as nonlinear uncertainty limits or probability density functions. Decisionmakers can use these results to better evaluate environmental risk as future metal concentrations with a limited range of possibilities, based on a scientific evaluation of uncertainty.
Scaling Up Decision Theoretic Planning to Planetary Rover Problems
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Dearden, Richard; Washington, Rich
2004-01-01
Because of communication limits, planetary rovers must operate autonomously for extended periods. The ability to plan under uncertainty is one of the main components of autonomy. Previous approaches to planning under uncertainty in NASA applications cannot address the challenges of future missions because of several apparent limits. Decision theory, on the other hand, provides a solid, principled framework for reasoning about uncertainty and rewards. Unfortunately, there are several obstacles to a direct application of decision-theoretic techniques to the rover domain. This paper focuses on the issues of structure, concurrency, and continuous state variables. We describe two techniques currently under development that specifically address these issues and allow decision-theoretic solution techniques to scale up to planetary rover planning problems involving a small number of goals.
Issues and recent advances in optimal experimental design for site investigation (Invited)
NASA Astrophysics Data System (ADS)
Nowak, W.
2013-12-01
This presentation provides an overview of issues and recent advances in model-based experimental design for site exploration. The issues and advances addressed are (1) how to provide an adequate envelope for prior uncertainty, (2) how to define information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how best to optimize the data collection plan. Among other shortcomings of the state of the art, there is a lack of demonstrator studies in which exploration schemes based on expert judgment are compared to schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations, and parameter values, and to the related surprises that data often bring about in field studies, but never in synthetic-data studies. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. To meet surprises adequately and adapt to them, there must be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty; the goal setting of the optimization should therefore include the objective to reduce conceptual uncertainty.
A possible way out is to upgrade experimental design theory towards real-time interaction with the ongoing site investigation, such that surprises in the data are immediately accounted for to restrict the conceptual uncertainty and update the optimization of the plan.
NASA Astrophysics Data System (ADS)
Lindley, S. J.; Walsh, T.
There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with their distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area.
In view of the uncertainties associated with classical techniques, research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.
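As a toy illustration of why different interpolators disagree, the sketch below compares inverse-distance weighting with nearest-neighbour estimation at the same unsampled point. The monitoring coordinates and NO2 values are invented for illustration, not the Greater Manchester data:

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at a query location."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):
        return float(values[np.argmin(d)])  # query coincides with a station
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

def nearest(points, values, query):
    """Nearest-neighbour estimate at a query location."""
    d = np.linalg.norm(points - query, axis=1)
    return float(values[np.argmin(d)])

# Hypothetical monitoring sites (km) and NO2 concentrations (ug/m3)
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 4.0]])
no2 = np.array([38.0, 52.0, 41.0, 60.0])
q = np.array([2.0, 2.0])

# The two techniques yield different surfaces from identical inputs
print(idw(pts, no2, q), nearest(pts, no2, q))
```

The gap between the two estimates at a single point is a direct, local view of the interpolation uncertainty the abstract discusses.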
Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C
2017-06-01
The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009-2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.
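The input-oriented variable-returns-to-scale DEA model used here as the comparison baseline can be sketched as a small linear program per hospital. The data below are hypothetical (one input, one output, four units), and scipy's general-purpose `linprog` stands in for a dedicated DEA solver:

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_input(X, Y, k):
    """Input-oriented VRS DEA efficiency of unit k.

    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Decision variables: [theta, lambda_1, ..., lambda_n].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    # inputs: sum_j lam_j x_ij <= theta * x_ik
    A_ub = np.hstack([-X[[k]].T, X.T])
    b_ub = np.zeros(m)
    # outputs: sum_j lam_j y_rj >= y_rk
    A_ub = np.vstack([A_ub, np.hstack([np.zeros((s, 1)), -Y.T])])
    b_ub = np.concatenate([b_ub, -Y[k]])
    # VRS convexity constraint: sum_j lam_j = 1
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return float(res.x[0])

# Hypothetical hospitals: one input (staff) and one output (cases)
X = np.array([[2.0], [4.0], [6.0], [3.0]])
Y = np.array([[2.0], [5.0], [6.0], [2.0]])
print(dea_vrs_input(X, Y, 3))  # unit 3 matches unit 0's output with more input
```

Unit 3 produces the same output as unit 0 but uses 3 units of input instead of 2, so its efficiency score is 2/3; frontier units score 1.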
Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane
NASA Technical Reports Server (NTRS)
Gera, Joseph; Bosworth, John T.
1987-01-01
This paper describes some novel flight tests and analysis techniques in the flight dynamics and handling qualities area. These techniques were utilized during the initial flight envelope clearance of the X-29A aircraft and were largely responsible for the completion of the flight controls clearance program without any incidents or significant delays. The resulting open-loop and closed-loop frequency responses and the time history comparison using flight and linear simulation data are discussed.
Faroug, Radwane; Stirling, Paul; Ali, Farhan
2013-01-01
Paediatric calcaneal fractures are rare injuries usually managed conservatively or with open reduction and internal fixation (ORIF). Closed reduction was previously thought to be impossible, and very few cases are reported in the literature. We report a new technique for closed reduction using Ilizarov half-rings. We report successful closed reduction and screwless fixation of an extra-articular calcaneal fracture dislocation in a 7-year-old boy. Reduction was achieved using two Ilizarov half-ring frames arranged perpendicular to each other, enabling simultaneous application of longitudinal and rotational traction. Anatomical reduction was achieved with restored angles of Bohler and Gissane. Two K-wires were the definitive fixation. Bony union with good functional outcome and minimal pain was achieved at eight weeks' follow-up. ORIF of calcaneal fractures provides good functional outcome but is associated with high rates of malunion and postoperative pain. Preservation of the unique soft tissue envelope surrounding the calcaneus reduces the risk of infection. Closed reduction prevents distortion of these tissues and may lead to faster healing and mobilisation. Closed reduction and screwless fixation of paediatric calcaneal fractures is an achievable management option. Our technique has preserved the soft tissue envelope surrounding the calcaneus, has avoided retained metalwork related complications, and has resulted in a good functional outcome. PMID:23819090
Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Conners, Timothy R.; Sims, Robert L.
1998-01-01
Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.
Data acquisition and analysis of the UNCOSS underwater explosive neutron sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carasco, Cedric; Eleon, Cyrille; Perot, Bertrand
2012-08-15
The purpose of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor, http://www.uncoss-project.org) is to develop a neutron-based underwater explosive sensor to detect unexploded ordnance lying on the sea bottom. The Associated Particle Technique is used to focus the inspection on a suspicious object located by optical and electromagnetic sensors and to determine if there is an explosive charge inside. This paper presents the data acquisition electronics and data analysis software which have been developed for this project. A field programmable gate array that digitizes and processes the signal makes it possible to perform precise time-of-flight and gamma-ray energy measurements. The gamma-ray spectra are unfolded into pure elemental count proportions, mainly C, N, O, Fe, Al, Si, and Ca. The C, N, and O count fractions are converted into chemical proportions, taking into account the gamma-ray production cross sections, as well as neutron and photon attenuation in the different shields between the ROV (Remotely Operated Vehicle) and the explosive, such as the explosive iron shell, seawater, and ROV envelope. A two-dimensional (2D) barycentric representation of the C, N, and O proportions is built from their chemical ratios, and a 2D likelihood map is built from the associated statistical and systematic uncertainties. The threat level is evaluated from the best matching materials of a database including explosives. (authors)
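Unfolding a measured spectrum into pure elemental count proportions can be posed as a non-negative least-squares problem: the measured spectrum is modelled as a non-negative mix of single-element response spectra. The response matrix below is invented for illustration, not the UNCOSS element library:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical single-element response spectra (columns) over 5 energy bins,
# e.g. pure C, N, O signatures measured or simulated beforehand.
R = np.array([[0.6, 0.1, 0.0],
              [0.3, 0.2, 0.1],
              [0.1, 0.5, 0.2],
              [0.0, 0.2, 0.4],
              [0.0, 0.0, 0.3]])

true_counts = np.array([100.0, 50.0, 80.0])
measured = R @ true_counts              # noiseless measured spectrum

# Non-negative least squares recovers the elemental count proportions
counts, residual = nnls(R, measured)
print(counts)
```

With noisy data the same call returns the best non-negative fit, and the residual norm indicates how well the element library explains the spectrum.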
NASA Astrophysics Data System (ADS)
Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann; Bonin, Timothy A.; Hardesty, R. Michael; Lundquist, Julie K.; Delgado, Ruben; Valerio Iungo, G.; Ashton, Ryan; Debnath, Mithu; Bianco, Laura; Wilczak, James M.; Oncley, Steven; Wolfe, Daniel
2017-01-01
Accurate three-dimensional information of wind flow fields can be an important tool in not only visualizing complex flow but also understanding the underlying physical processes and improving flow modeling. However, a thorough analysis of the measurement uncertainties is required to properly interpret results. The XPIA (eXperimental Planetary boundary layer Instrumentation Assessment) field campaign conducted at the Boulder Atmospheric Observatory (BAO) in Erie, CO, from 2 March to 31 May 2015 brought together a large suite of in situ and remote sensing measurement platforms to evaluate complex flow measurement strategies. In this paper, measurement uncertainties for different single and multi-Doppler strategies using simple scan geometries (conical, vertical plane and staring) are investigated. The tradeoffs (such as time-space resolution vs. spatial coverage) among the different measurement techniques are evaluated using co-located measurements made near the BAO tower. Sensitivity of the single-/multi-Doppler measurement uncertainties to averaging period are investigated using the sonic anemometers installed on the BAO tower as the standard reference. Finally, the radiometer measurements are used to partition the measurement periods as a function of atmospheric stability to determine their effect on measurement uncertainty. It was found that with an increase in spatial coverage and measurement complexity, the uncertainty in the wind measurement also increased. For multi-Doppler techniques, the increase in uncertainty for temporally uncoordinated measurements is possibly due to requiring additional assumptions of stationarity along with horizontal homogeneity and less representative line-of-sight velocity statistics. It was also found that wind speed measurement uncertainty was lower during stable conditions compared to unstable conditions.
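At the core of a multi-Doppler retrieval is solving for the wind vector from two or more line-of-sight projections measured along different beam directions. A minimal two-beam horizontal sketch (synthetic geometry and wind, not XPIA data):

```python
import numpy as np

# Azimuths of two Doppler lidar beams intersecting at the same point
az1, az2 = np.deg2rad(30.0), np.deg2rad(120.0)

# Each row projects the horizontal wind (u, v) onto one line of sight
A = np.array([[np.sin(az1), np.cos(az1)],
              [np.sin(az2), np.cos(az2)]])

u_true, v_true = 5.0, -2.0
vr = A @ np.array([u_true, v_true])     # simulated radial velocities

u, v = np.linalg.solve(A, vr)           # dual-Doppler wind retrieval
print(u, v)
```

The condition number `np.linalg.cond(A)` grows as the beams become parallel, which is one geometric reason measurement uncertainty increases with more complex, spatially extended scan strategies.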
NASA Astrophysics Data System (ADS)
Walz, Michael; Leckebusch, Gregor C.
2016-04-01
Extratropical wind storms pose one of the most dangerous and loss intensive natural hazards for Europe. However, with only 50 years of high quality observational data, it is difficult to assess the statistical uncertainty of these sparse events based on observations alone. Over the last decade seasonal ensemble forecasts have become indispensable in quantifying the uncertainty of weather prediction on seasonal timescales. In this study seasonal forecasts are used in a climatological context: by making use of the up to 51 ensemble members, a broad and physically consistent statistical base can be created. This base can then be used to assess the statistical uncertainty of extreme wind storm occurrence more accurately. In order to determine the statistical uncertainty of storms with different paths of progression, a probabilistic clustering approach using regression mixture models is used to objectively assign storm tracks (either based on core pressure or on extreme wind speeds) to different clusters. The advantage of this technique is that the entire lifetime of a storm is considered by the clustering algorithm. Quadratic curves are found to describe the storm tracks most accurately. Three main clusters (diagonal, horizontal or vertical progression of the storm track) can be identified, each of which has its own particular features. Basic storm features like average velocity and duration are calculated and compared for each cluster. The main benefit of this clustering technique, however, is to evaluate whether the clusters show different degrees of uncertainty, e.g. more (less) spread for tracks approaching Europe horizontally (diagonally). This statistical uncertainty is compared for different seasonal forecast products.
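The clustering idea can be approximated, for illustration only, by fitting a quadratic curve to each track and clustering the fitted coefficients; here k-means stands in for the paper's probabilistic regression mixture model, and the tracks are synthetic:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

def track_features(lon, lat):
    """Quadratic fit lat = a*lon^2 + b*lon + c; coefficients summarise the track shape."""
    return np.polyfit(lon, lat, 2)

# Synthetic storm tracks: two 'diagonal' and two 'horizontal' progressions
lon = np.linspace(-40.0, 20.0, 30)
tracks = [0.002 * lon**2 + 0.50 * lon + 55.0,   # diagonal
          0.002 * lon**2 + 0.45 * lon + 57.0,   # diagonal
          0.0 * lon + 50.0,                     # horizontal
          0.0 * lon + 52.0]                     # horizontal

# One feature vector (a, b, c) per track, with small observational noise
feats = np.array([track_features(lon, t + rng.normal(0.0, 0.1, lon.size))
                  for t in tracks])

centroids, labels = kmeans2(feats, 2, seed=1, minit='points')
print(labels)
```

Each cluster's within-cluster spread of coefficients can then be compared across forecast products as a proxy for the track uncertainty discussed in the abstract.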
Uncertainty Estimate for the Outdoor Calibration of Solar Pyranometers: A Metrologist Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, I.; Myers, D.; Stoffel, T.
2008-12-01
Pyranometers are used outdoors to measure solar irradiance. By design, this type of radiometer can measure the total hemispheric (global) or diffuse (sky) irradiance when the detector is unshaded or shaded from the sun disk, respectively. These measurements are used in a variety of applications including solar energy conversion, atmospheric studies, agriculture, and materials science. Proper calibration of pyranometers is essential to ensure measurement quality. This paper describes a step-by-step method for calculating and reporting the uncertainty of the calibration, using the guidelines of the ISO 'Guide to the Expression of Uncertainty in Measurement' (GUM), that is applied to the pyranometer calibration procedures used at the National Renewable Energy Laboratory (NREL). The NREL technique characterizes the responsivity of a pyranometer as a function of the zenith angle, as well as reporting a single calibration responsivity value for a zenith angle of 45 deg. The uncertainty analysis shows that a lower uncertainty can be achieved by using the responsivity function of a pyranometer determined as a function of zenith angle, in lieu of just using the average value at 45 deg. By presenting the contribution of each uncertainty source to the total uncertainty, users will be able to troubleshoot and improve their calibration process. The uncertainty analysis method can also be used to determine the uncertainty of different calibration techniques and applications, such as deriving the uncertainty of field measurements.
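A minimal GUM-style propagation for a single responsivity value looks like the sketch below; the voltage and reference-irradiance numbers and their standard uncertainties are hypothetical, not NREL's:

```python
import numpy as np

# Responsivity R = V / G, with V the thermopile voltage (uV)
# and G the reference irradiance (W/m2). All values hypothetical.
V, u_V = 8000.0, 20.0       # uV, standard uncertainty
G, u_G = 1000.0, 5.0        # W/m2, standard uncertainty

R = V / G                   # responsivity, uV/(W/m2)

# First-order GUM propagation for a quotient:
# (u_R / R)^2 = (u_V / V)^2 + (u_G / G)^2
u_R = R * np.hypot(u_V / V, u_G / G)

# Expanded uncertainty with coverage factor k = 2 (~95% level)
U_R = 2.0 * u_R
print(R, U_R)
```

Listing each relative term before combining them mirrors the paper's point: showing the contribution of each source lets users see which one to improve.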
In the present study, protein markers of estrogenic exposure in rainbow trout (Oncorhynchus mykiss) were isolated and identified using innovative sample preparation techniques followed by advanced MS and bioinformatics approaches. Juvenile trout were administered 17β-estradiol t...
USDA-ARS?s Scientific Manuscript database
Potato leafroll virus (PLRV) is an aphid-borne, positive sense, single stranded RNA virus in the Luteoviridae that causes significant loss to potato production worldwide. The capsid structure for this family consists of a non-enveloped, icosahedral shaped virion composed of two structural proteins, ...
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-1999. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak
2011-01-01
This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined, and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes the C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that the experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest.
Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations. Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis.
Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
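The quoted totals are consistent with direct (linear) addition of the structural and parametric interval bounds, the conservative combination when the components are not assumed independent:

```python
# Interval uncertainty components on radiative heating, from the abstract
structural = (+34.0, -24.0)    # percent
parametric = (+47.3, -28.3)    # percent

# Direct addition of upper and lower bounds (no independence assumed)
total = (structural[0] + parametric[0], structural[1] + parametric[1])
print(total)  # (+81.3, -52.3), matching the quoted Mars-return totals
```

A root-sum-square combination would give smaller totals; the linear sum reproduces the +81.3%/-52.3% stated in the abstract.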
Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges
NASA Technical Reports Server (NTRS)
Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam
2014-01-01
As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. This approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprised of multiple sources, as well as an uncertainty bounds database for each data source such that a full vehicle uncertainty analysis is possible even when approaching or beyond Loss of Control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss of control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. 
The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.
Lo Storto, Corrado
2013-11-01
This paper presents an integrative framework to evaluate ecommerce website efficiency from the user viewpoint using Data Envelopment Analysis (DEA). This framework is inspired by concepts driven from theories of information processing and cognition and considers the website efficiency as a measure of its quality and performance. When the users interact with the website interfaces to perform a task, they are involved in a cognitive effort, sustaining a cognitive cost to search, interpret and process information, and experiencing either a sense of satisfaction or dissatisfaction for that. The amount of ambiguity and uncertainty, and the search (over-)time during navigation that they perceive determine the effort size - and, as a consequence, the cognitive cost amount - they have to bear to perform their task. On the contrary, task performing and result achievement provide the users with cognitive benefits, making interaction with the website potentially attractive, satisfying, and useful. In total, 9 variables are measured, classified in a set of 3 website macro-dimensions (user experience, site navigability and structure). The framework is implemented to compare 52 ecommerce websites that sell products in the information technology and media market. A stepwise regression is performed to assess the influence of cognitive costs and benefits that mostly affect website efficiency. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Crida, Aurélien; Ligi, Roxanne; Dorn, Caroline; Lebreton, Yveline
2018-06-01
The characterization of exoplanets relies on that of their host star. However, stellar evolution models cannot always be used to derive the mass and radius of individual stars, because many stellar internal parameters are poorly constrained. Here, we use the probability density functions (PDFs) of directly measured parameters to derive the joint PDF of the stellar and planetary mass and radius. Because combining the density and radius of the star is our most reliable way of determining its mass, we find that the stellar (respectively planetary) mass and radius are strongly (respectively moderately) correlated. We then use a generalized Bayesian inference analysis to characterize the possible interiors of 55 Cnc e. We quantify how our ability to constrain the interior improves by accounting for correlation. The information content of the mass-radius correlation is also compared with refractory element abundance constraints. We provide posterior distributions for all interior parameters of interest. Given all available data, we find that the radius of the gaseous envelope is 0.08 ± 0.05 R_p. A stronger correlation between the planetary mass and radius (potentially provided by a better estimate of the transit depth) would significantly improve interior characterization and reduce drastically the uncertainty on the gas envelope properties.
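The correlation described here (mass derived from density and radius, hence correlated with radius) can be reproduced with a small Monte Carlo sketch. The input distributions below are invented round numbers, not the measured 55 Cnc values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical directly measured quantities, in solar units:
# radius from interferometry, mean density from transit modelling.
R = rng.normal(1.0, 0.02, n)        # stellar radius (R_sun), 2% uncertainty
rho = rng.normal(1.08, 0.05, n)     # mean density (solar mean densities)

# M/M_sun = (rho/rho_sun) * (R/R_sun)^3: mass is derived, not independent
M = rho * R**3

# The derived mass inherits a strong correlation with the radius samples
corr = np.corrcoef(M, R)[0, 1]
print(M.mean(), M.std(), corr)
```

Propagating the joint samples (rather than treating M and R as independent) is what shrinks the posterior in the downstream interior inference.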
Kodama, Wataru; Nakasako, Masayoshi
2011-08-01
Coherent x-ray diffraction microscopy is a novel technique for the structural analysis of particles that are difficult to crystallize, such as the biological particles composing living cells. As water is indispensable for maintaining particles in functional structures, sufficient hydration of targeted particles is required during sample preparation for diffraction microscopy experiments. However, the water enveloping particles also contributes significantly to the diffraction patterns and reduces the electron-density contrast of the sample particles. In this study, we propose a protocol for the structural analyses of particles in water by applying a three-dimensional reconstruction method in real space to the projection images phase-retrieved from diffraction patterns, together with a newly developed density modification technique. We examined the feasibility of the protocol through three simulations involving a protein molecule in a vacuum, and enveloped in either a droplet or a cube of water. The simulations were carried out for the diffraction patterns in the reciprocal planes normal to the incident x-ray beam. This assumption and the simulation conditions correspond to experiments using x-ray wavelengths shorter than 0.03 Å. The analyses demonstrated that our protocol provided an interpretable electron-density map. Based on the results, we discuss the advantages and limitations of the proposed protocol and its practical application to experimental data. In particular, we examined the influence of Poisson noise in diffraction patterns on the reconstructed three-dimensional electron density in the proposed protocol.
Tebbetts, John B
2013-07-01
This article defines a comprehensive process using quantified parameters for objective decision making, operative planning, technique selection, and outcomes analysis in mastopexy and breast reduction, and defines quantified parameters for nipple position and vertical and horizontal skin excess. Future submissions will detail application of the processes for skin envelope design and address composite, three-dimensional parenchyma modification options. Breast base width was used to define a proportional, desired nipple-to-inframammary fold distance for optimal aesthetics. Vertical and horizontal skin excess were measured, documented, and used for technique selection and skin envelope design in mastopexy and breast reduction. This method was applied in 124 consecutive mastopexy and 122 consecutive breast reduction cases. Average follow-up was 4.6 years (range, 6 to 14 years). No changes were made to the basic algorithm of the defined process during the study period. No patient required nipple repositioning. Complications included excessive lower pole restretch (4 percent), periareolar scar hypertrophy (0.8 percent), hematoma (1.2 percent), and areola shape irregularities (1.6 percent). Delayed healing at the junction of vertical and horizontal scars occurred in two of 124 reduction patients (1.6 percent), neither of whom required revision. The overall reoperation rate was 6.5 percent (16 of 246). This study defines the first steps of a comprehensive process for using objectively defined parameters that surgeons can apply to skin envelope design for mastopexy and breast reduction. The method can be used in conjunction with, or in lieu of, other described methods to determine nipple position.
Uncertainties in cylindrical anode current inferences on pulsed power drivers
NASA Astrophysics Data System (ADS)
Porwitzky, Andrew; Brown, Justin
2018-06-01
For over a decade, velocimetry based techniques have been used to infer the electrical current delivered to dynamic materials properties experiments on pulsed power drivers such as the Z Machine. Though originally developed for planar load geometries, in recent years, inferring the current delivered to cylindrical coaxial loads has become a valuable diagnostic tool for numerous platforms. Presented is a summary of uncertainties that can propagate through the current inference technique when applied to expanding cylindrical anodes. An equation representing quantitative uncertainty is developed which shows the unfold method to be accurate to a few percent above 10 MA of load current.
Valdivieso-Torres, Leonardo; Sarangi, Anindita; Whidby, Jillian; Marcotrigiano, Joseph; Roth, Monica J
2015-12-30
Retargeting of gammaretroviral envelope proteins has shown promising results in the isolation of novel isolates with therapeutic potential. However, the optimal conditions required to obtain high-affinity retargeted envelope proteins with narrow tropism are not understood. This study highlights the advantage of constrained peptides within receptor binding domains and validates the random library screening technique of obtaining novel retargeted Env proteins. Using a modified vector backbone to screen the envelope libraries on 143B osteosarcoma cells, three novel and unique retargeted envelopes were isolated. The use of complex disulfide bonds within variable regions required for receptor binding is found within natural gammaretroviral envelope isolates. Interestingly, two of the isolates, named AII and BV2, have a pair of cysteines located within the randomized region of 11 amino acids similar to that identified within the CP Env, an isolate identified in a previous Env library screen on the human renal carcinoma Caki-1 cell line. The amino acids within the randomized region of AII and BV2 envelopes that are essential for viral infection have been identified in this study and include these cysteine residues. Through mutagenesis studies, the putative disulfide bond pairs including and beyond the randomized region were examined. In parallel, the disulfide bonds of CP Env were identified using mass spectrometry. The results indicate that this pair of cysteines creates the structural context to position key hydrophobic (F and W) and basic (K and H) residues critical for viral titer and suggest that AII, BV2, and CP internal cysteines bond together in distinct ways. Retargeted gammaretroviral particles have broad applications for therapeutic use. Although great advances have been achieved in identifying new Env-host cell receptor pairs, the rules for designing optimal Env libraries are still unclear. 
We have found that isolates with an additional pair of cysteines within the randomized region have the highest transduction efficiencies. This emphasizes the importance of considering cysteine pairs in the design of new libraries. Furthermore, our data clearly indicate that these cysteines are essential for viral infectivity by presenting essential residues to the host cell receptor. These studies facilitate the screening of Env libraries for functional entry into target cells, allowing the identification of novel gammaretroviral Envs targeting alternative host cell receptors for gene and protein delivery. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
NASA Technical Reports Server (NTRS)
Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2004-01-01
This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g. whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.
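The kind of forward uncertainty propagation described above can be sketched with a simple Monte Carlo simulation. Everything below is illustrative: the model (power = flux x efficiency x area, with a sensor-bias term) and all distributions and parameter values are assumptions for the sketch, not the ISS models or values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical random variables (illustrative distributions, not ISS values):
flux = rng.normal(1367.0, 20.0, N)        # solar flux [W/m^2], environmental uncertainty
eff = rng.normal(0.14, 0.005, N)          # array efficiency, component uncertainty
bias = rng.uniform(-0.01, 0.01, N)        # relative sensor tolerance

area = 2500.0                              # fixed panel area [m^2], assumed
power = flux * eff * area * (1.0 + bias)   # response variable [W]

mean_kw = power.mean() / 1e3
std_kw = power.std() / 1e3
print(f"mean power ~ {mean_kw:.0f} kW, 1-sigma ~ {std_kw:.1f} kW")
```

The spread of the sampled `power` values quantifies how the input uncertainties combine into variation of the response variable.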
2016-07-01
characteristics and to examine the sensitivity of using such techniques for evaluating microstructure. In addition to the GUI tool, a manual describing its use has... Evaluating Local Primary Dendrite Arm Spacing Characterization Techniques Using Synthetic Directionally Solidified Dendritic Microstructures, Metallurgical and... driven approach for quantifying materials uncertainty in creep deformation and failure of aerospace materials, Multi-scale Structural Mechanics and
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel.
In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
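The core sensitivity-based propagation step that TSUNAMI-style tools perform can be sketched with the first-order "sandwich rule", var(R) = S^T C S, where S holds the sensitivity coefficients of the response to each nuclear-data parameter and C is the parameter covariance matrix. The coefficient and covariance values below are illustrative assumptions, not real cross-section data.

```python
import numpy as np

# Hypothetical relative sensitivity coefficients of k_eff to three
# cross-section parameters (illustrative values only).
S = np.array([0.25, -0.10, 0.05])

# Hypothetical relative covariance matrix of the cross-section data.
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])

# Sandwich rule: propagate data covariance to response variance.
var_k = S @ C @ S
rel_unc_pct = 100.0 * np.sqrt(var_k)
print(f"relative uncertainty in k_eff ~ {rel_unc_pct:.3f}%")
```

The off-diagonal terms of `C` matter: correlated parameter uncertainties can either reinforce or cancel, depending on the signs of the sensitivities.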
Signal Processing for Determining Water Height in Steam Pipes with Dynamic Surface Conditions
NASA Technical Reports Server (NTRS)
Lih, Shyh-Shiuh; Lee, Hyeong Jae; Bar-Cohen, Yoseph
2015-01-01
An enhanced signal processing method based on the filtered Hilbert envelope of the auto-correlation function of the wave signal has been developed to monitor the height of condensed water through the steel wall of steam pipes with dynamic surface conditions. The developed signal processing algorithm can also be used to estimate the thickness of the pipe to determine the cut-off frequency for the low-pass filter applied to the Hilbert envelope. Testing and analysis results obtained using the developed technique under dynamic surface conditions are presented. A multiple-transducer array setup and methodology are proposed for both pulse-echo and pitch-catch signals to monitor fluctuations of the water height due to disturbances, water flow, and other anomalous conditions.
Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2012-01-01
The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability from model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.
Status of the Planet Formation Imager (PFI) concept
NASA Astrophysics Data System (ADS)
Ireland, Michael J.; Monnier, John D.; Kraus, Stefan; Isella, Andrea; Minardi, Stefano; Petrov, Romain; ten Brummelaar, Theo; Young, John; Vasisht, Gautam; Mozurkewich, David; Rinehart, Stephen; Michael, Ernest A.; van Belle, Gerard; Woillez, Julien
2016-08-01
The Planet Formation Imager (PFI) project aims to image the period of planet assembly directly, resolving structures as small as a giant planet's Hill sphere. These images will be required in order to determine the key mechanisms for planet formation at the time when processes of grain growth, protoplanet assembly, magnetic fields, disk/planet dynamical interactions and complex radiative transfer all interact - making some planetary systems habitable and others inhospitable. We will present the overall vision for the PFI concept, focusing on the key technologies and requirements that are needed to achieve the science goals. Based on these key requirements, we will define a cost envelope range for the design and highlight where the largest uncertainties lie at this conceptual stage.
Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; ...
2017-01-23
Accurate three-dimensional information of wind flow fields can be an important tool in not only visualizing complex flow but also understanding the underlying physical processes and improving flow modeling. However, a thorough analysis of the measurement uncertainties is required to properly interpret results. The XPIA (eXperimental Planetary boundary layer Instrumentation Assessment) field campaign conducted at the Boulder Atmospheric Observatory (BAO) in Erie, CO, from 2 March to 31 May 2015 brought together a large suite of in situ and remote sensing measurement platforms to evaluate complex flow measurement strategies. In this paper, measurement uncertainties for different single and multi-Doppler strategies using simple scan geometries (conical, vertical plane and staring) are investigated. The tradeoffs (such as time–space resolution vs. spatial coverage) among the different measurement techniques are evaluated using co-located measurements made near the BAO tower. The sensitivity of the single-/multi-Doppler measurement uncertainties to the averaging period is investigated using the sonic anemometers installed on the BAO tower as the standard reference. Finally, the radiometer measurements are used to partition the measurement periods as a function of atmospheric stability to determine their effect on measurement uncertainty. It was found that with an increase in spatial coverage and measurement complexity, the uncertainty in the wind measurement also increased. For multi-Doppler techniques, the increase in uncertainty for temporally uncoordinated measurements is possibly due to requiring additional assumptions of stationarity along with horizontal homogeneity and less representative line-of-sight velocity statistics. Lastly, it was found that wind speed measurement uncertainty was lower during stable conditions compared to unstable conditions.
NASA Astrophysics Data System (ADS)
Poppeliers, C.; Preston, L. A.
2017-12-01
Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group- and phase-velocity, a series of narrow-band filters are applied to surface wave seismograms. Frequency dependent arrival times of surface waves can then be identified from the resulting suite of narrow band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique has no method to estimate the uncertainty of the measured surface wave velocities, and subsequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized in regard to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group- and phase-velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of resolved Earth structure. As proof-of-concept, we apply our technique to four seismic ambient noise correlograms that were collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group- and phase-velocities, as well as the uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities that are obtained from inverting dispersion curves estimated from a conventional Gaussian filter bank.
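The conventional Gaussian filter-bank baseline mentioned above can be sketched as follows: narrow-band filter the record at each centre frequency, take the Hilbert-envelope peak as the group arrival time, and convert distance/time to group velocity. The synthetic dispersive signal, station separation, and filter width below are all assumptions for the sketch, not the authors' data or the MWT itself.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                 # sampling rate [Hz] (assumed)
t = np.arange(0, 60, 1 / fs)
dist_km = 30.0             # assumed station separation

# Synthetic dispersive surface wave: group velocity falls from 3.0 to 2.0 km/s
# as frequency rises from 0.2 to 1.0 Hz (illustrative dispersion curve).
freqs = np.linspace(0.2, 1.0, 40)
true_v = 3.0 - 1.0 * (freqs - 0.2) / 0.8
sig = sum(np.cos(2 * np.pi * f * (t - dist_km / v))
          * np.exp(-((t - dist_km / v) / 4.0) ** 2)
          for f, v in zip(freqs, true_v))

def group_velocity(f0, width=0.05):
    """Gaussian narrow-band filter at f0; envelope-peak arrival -> group velocity."""
    spec = np.fft.rfft(sig)
    fax = np.fft.rfftfreq(sig.size, 1 / fs)
    nb = np.fft.irfft(spec * np.exp(-((fax - f0) / width) ** 2), sig.size)
    t_peak = t[np.argmax(np.abs(hilbert(nb)))]
    return dist_km / t_peak

v_03 = group_velocity(0.3)
v_09 = group_velocity(0.9)
print(f"U(0.3 Hz) ~ {v_03:.2f} km/s, U(0.9 Hz) ~ {v_09:.2f} km/s")
```

The paper's point is that this baseline yields a velocity but no uncertainty; the MWT replaces the single Gaussian window with a family of orthogonal tapers so that the spread across tapers gives an error estimate.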
NASA Technical Reports Server (NTRS)
Mellish, J. A.
1980-01-01
Engine control techniques were established and new technology requirements were identified. The designs of the components and engine were prepared in sufficient depth to calculate engine and component weights and envelopes, turbopump efficiencies and recirculation leakage rates, and engine performance. Engine design assumptions are presented along with the structural design criteria.
Techniques for analyses of trends in GRUAN data
NASA Astrophysics Data System (ADS)
Bodeker, G. E.; Kremser, S.
2015-04-01
The Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN) provides reference quality RS92 radiosonde measurements of temperature, pressure and humidity. A key attribute of reference quality measurements, and hence GRUAN data, is that each datum has a well characterized and traceable estimate of the measurement uncertainty. The long-term homogeneity of the measurement records, and their well characterized uncertainties, make these data suitable for reliably detecting changes in global and regional climate on decadal time scales. Considerable effort is invested in GRUAN operations to (i) describe and analyse all sources of measurement uncertainty to the extent possible, (ii) quantify and synthesize the contribution of each source of uncertainty to the total measurement uncertainty, and (iii) verify that the evaluated net uncertainty is within the required target uncertainty. However, if the climate science community is not sufficiently well informed on how to capitalize on this added value, the significant investment in estimating meaningful measurement uncertainties is largely wasted. This paper presents and discusses the techniques that will need to be employed to reliably quantify long-term trends in GRUAN data records. A pedagogical approach is taken whereby numerical recipes for key parts of the trend analysis process are explored. The paper discusses the construction of linear least squares regression models for trend analysis, boot-strapping approaches to determine uncertainties in trends, dealing with the combined effects of autocorrelation in the data and measurement uncertainties in calculating the uncertainty on trends, best practice for determining seasonality in trends, how to deal with co-linear basis functions, and interpreting derived trends. 
Synthetic data sets are used to demonstrate these concepts which are then applied to a first analysis of temperature trends in RS92 radiosonde upper air soundings at the GRUAN site at Lindenberg, Germany (52.21° N, 14.12° E).
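The core recipe discussed above, a linear least squares trend with a bootstrap uncertainty that respects autocorrelation, can be sketched on synthetic data. The trend magnitude, AR(1) coefficient, measurement sigma, and block length below are illustrative assumptions, not GRUAN values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_months = 240                     # 20 years of monthly anomalies (assumed)
t = np.arange(n_months) / 12.0     # time in years

# Synthetic anomalies: 0.02 K/yr trend + AR(1) noise + per-datum measurement
# noise standing in for a traceable GRUAN-style uncertainty (here 0.1 K).
true_trend = 0.02
noise = np.zeros(n_months)
for i in range(1, n_months):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = true_trend * t + noise + rng.normal(0, 0.1, n_months)

# Ordinary least squares fit of intercept + linear trend.
X = np.column_stack([np.ones(n_months), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Moving-block bootstrap: resampling whole blocks preserves autocorrelation,
# so the resulting trend spread is not spuriously narrow.
block = 24
n_blocks = n_months // block
trends = []
for _ in range(2000):
    starts = rng.integers(0, n_months - block, n_blocks)
    idx = np.concatenate([np.arange(s, s + block) for s in starts])
    b = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    trends.append(b[1])
lo, hi = np.percentile(trends, [2.5, 97.5])
print(f"trend = {beta[1]:.4f} K/yr, 95% CI [{lo:.4f}, {hi:.4f}]")
```

A naive i.i.d. bootstrap on the same data would shuffle away the autocorrelation and understate the trend uncertainty, which is exactly the pitfall the paper warns about.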
Techniques for analyses of trends in GRUAN data
NASA Astrophysics Data System (ADS)
Bodeker, G. E.; Kremser, S.
2014-12-01
The Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN) provides reference quality RS92 radiosonde measurements of temperature, pressure and humidity. A key attribute of reference quality measurements, and hence GRUAN data, is that each datum has a well characterised and traceable estimate of the measurement uncertainty. The long-term homogeneity of the measurement records, and their well characterised uncertainties, make these data suitable for reliably detecting changes in global and regional climate on decadal time scales. Considerable effort is invested in GRUAN operations to (i) describe and analyse all sources of measurement uncertainty to the extent possible, (ii) quantify and synthesize the contribution of each source of uncertainty to the total measurement uncertainty, and (iii) verify that the evaluated net uncertainty is within the required target uncertainty. However, if the climate science community is not sufficiently well informed on how to capitalize on this added value, the significant investment in estimating meaningful measurement uncertainties is largely wasted. This paper presents and discusses the techniques that will need to be employed to reliably quantify long-term trends in GRUAN data records. A pedagogical approach is taken whereby numerical recipes for key parts of the trend analysis process are explored. The paper discusses the construction of linear least squares regression models for trend analysis, boot-strapping approaches to determine uncertainties in trends, dealing with the combined effects of autocorrelation in the data and measurement uncertainties in calculating the uncertainty on trends, best practice for determining seasonality in trends, how to deal with co-linear basis functions, and interpreting derived trends. 
Synthetic data sets are used to demonstrate these concepts which are then applied to a first analysis of temperature trends in RS92 radiosonde upper air soundings at the GRUAN site at Lindenberg, Germany (52.21° N, 14.12° E).
Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.
Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana
2012-05-15
Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For the ill-posed water quality model, the differences between the results were much wider, and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e.
number of iterations required to generate the probability distribution of parameters), it was found that SCEM-UA and AMALGAM produce results more quickly than GLUE in terms of the required number of simulations. However, GLUE requires the lowest modelling skills and is easy to implement. All non-Bayesian methods have problems with the way they accept behavioural parameter sets: e.g. GLUE, SCEM-UA and AMALGAM have subjective acceptance thresholds, while MICA usually has problems with its hypothesis of normality of residuals. It is concluded that modellers should select the method which is most suitable for the system they are modelling (e.g. complexity of the model's structure, including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study. Copyright © 2012 Elsevier Ltd. All rights reserved.
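GLUE, the simplest of the compared methods, can be sketched in a few lines: sample parameters, score each run with a likelihood measure, and keep "behavioural" sets above a subjective threshold. The toy linear-reservoir model, the Nash-Sutcliffe efficiency score, and the 0.7 threshold below are illustrative assumptions, not the stormwater model or settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def runoff_model(rain, k):
    """Toy linear-reservoir runoff model: q[i] = (1-k)*q[i-1] + k*rain[i]."""
    q = np.zeros_like(rain)
    for i in range(1, rain.size):
        q[i] = (1 - k) * q[i - 1] + k * rain[i]
    return q

# Synthetic "observations" generated with k_true = 0.3 plus noise.
rain = rng.gamma(2.0, 2.0, 200)
obs = runoff_model(rain, 0.3) + rng.normal(0, 0.2, 200)

# GLUE: sample parameters, keep "behavioural" sets above a likelihood threshold.
k_samples = rng.uniform(0.05, 0.95, 5000)
nse = np.array([1 - np.sum((runoff_model(rain, k) - obs) ** 2)
                    / np.sum((obs - obs.mean()) ** 2)
                for k in k_samples])
behavioural = k_samples[nse > 0.7]          # subjective acceptance threshold

lo, hi = np.percentile(behavioural, [2.5, 97.5])
print(f"{behavioural.size} behavioural sets, k 95% bounds [{lo:.2f}, {hi:.2f}]")
```

The subjectivity the paper criticises is visible here: raising or lowering the 0.7 threshold directly widens or narrows the reported parameter bounds.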
Collision judgment when using an augmented-vision head-mounted display device
Luo, Gang; Woods, Russell L; Peli, Eli
2016-01-01
Purpose: We have developed a device to provide an expanded visual field to patients with tunnel vision by superimposing minified edge images of the wide scene, in which objects appear closer to the heading direction than they really are. We conducted experiments in a virtual environment to determine if users would overestimate collision risks. Methods: Given simulated scenes of walking or standing with intention to walk towards a given direction (intended walking) in a shopping mall corridor, participants (12 normally sighted and 7 with tunnel vision) reported whether they would collide with obstacles appearing at different offsets from variable walking paths (or intended directions), with and without the device. The collision envelope (CE), a personal space based on perceived collision judgments, and judgment uncertainty (variability of response) were measured. When the device was used, combinations of two image scales (5× minified and 1:1) and two image types (grayscale or edge images) were tested. Results: Image type did not significantly alter collision judgment (p>0.7). Compared to the without-device baseline, minification did not significantly change the CE of normally sighted subjects for simulated walking (p=0.12), but increased CE by 30% for intended walking (p<0.001). Their uncertainty was not affected by minification (p>0.25). For the patients, neither CE nor uncertainty was affected by minification (p>0.13) in both walking conditions. Baseline CE and uncertainty were greater for patients than normally-sighted subjects in simulated walking (p=0.03), but the two groups were not significantly different in all other conditions. Conclusion: Users did not substantially overestimate collision risk, as the 5× minified images had only limited impact on collision judgments either during walking or before starting to walk. PMID:19458339
Collision judgment when using an augmented-vision head-mounted display device.
Luo, Gang; Woods, Russell L; Peli, Eli
2009-09-01
A device was developed to provide an expanded visual field to patients with tunnel vision by superimposing minified edge images of the wide scene, in which objects appear closer to the heading direction than they really are. Experiments were conducted in a virtual environment to determine whether users would overestimate collision risks. Given simulated scenes of walking or standing with intention to walk toward a given direction (intended walking) in a shopping mall corridor, participants (12 normally sighted and 7 with tunnel vision) reported whether they would collide with obstacles appearing at different offsets from variable walking paths (or intended directions), with and without the device. The collision envelope (CE), a personal space based on perceived collision judgments, and judgment uncertainty (variability of response) were measured. When the device was used, combinations of two image scales (5x minified and 1:1) and two image types (grayscale or edge images) were tested. Image type did not significantly alter collision judgment (P > 0.7). Compared to the without-device baseline, minification did not significantly change the CE of normally sighted subjects for simulated walking (P = 0.12), but increased CE by 30% for intended walking (P < 0.001). Their uncertainty was not affected by minification (P > 0.25). For the patients, neither CE nor uncertainty was affected by minification (P > 0.13) in both walking conditions. Baseline CE and uncertainty were greater for patients than normally sighted subjects in simulated walking (P = 0.03), but the two groups were not significantly different in all other conditions. Users did not substantially overestimate collision risk, as the 5x minified images had only limited impact on collision judgments either during walking or before starting to walk.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
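The recommended workflow, fit a power model of variance from replicate standard deviations and then use inverse-variance weights in the calibration regression, can be sketched as follows. The calibration function, noise levels, and replicate design below are illustrative assumptions, not the PAD/CSV/MCE data from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration: signal y = 2x with heteroskedastic noise following
# a power model of variance, sigma(y) = a * y_true**b (here a = 0.05, b = 1).
x = np.tile(np.array([1.0, 5.0, 10.0, 50.0, 100.0]), 20)   # 20 replicates/level
y_true = 2.0 * x
y = y_true + rng.normal(0, 0.05 * y_true)

# Estimate the variance function from replicate standard deviations by a
# log-log fit, then form weights w = 1/sigma^2.
levels = np.unique(x)
sds = np.array([y[x == xv].std(ddof=1) for xv in levels])
b_pow, log_a = np.polyfit(np.log(2.0 * levels), np.log(sds), 1)
sigma = np.exp(log_a) * (2.0 * x) ** b_pow
w = 1.0 / sigma ** 2

slope_w = np.sum(w * x * y) / np.sum(w * x * x)   # weighted fit through origin
slope_u = np.sum(x * y) / np.sum(x * x)           # unweighted fit through origin
print(f"weighted slope {slope_w:.3f}, unweighted slope {slope_u:.3f}")
```

The unweighted fit is dominated by the noisy high-concentration points; the weighted fit gives the precise low-signal replicates their proper influence, which is what improves detection limits.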
Flow Control Research at NASA Langley in Support of High-Lift Augmentation
NASA Technical Reports Server (NTRS)
Sellers, William L., III; Jones, Gregory S.; Moore, Mark D.
2002-01-01
The paper describes the efforts at NASA Langley to apply active and passive flow control techniques for improved high-lift systems, and advanced vehicle concepts utilizing powered high-lift techniques. The development of simplified high-lift systems utilizing active flow control is shown to provide significant weight and drag reduction benefits based on system studies. Active flow control focused on separation, and the development of advanced circulation control wings (CCW) utilizing unsteady excitation techniques, are discussed. The advanced CCW airfoils can provide multifunctional controls throughout the flight envelope. Computational and experimental data are shown to illustrate the benefits and issues with implementation of the technology.
SI-BEARING MOLECULES TOWARD IRC+10216: ALMA UNVEILS THE MOLECULAR ENVELOPE OF CW LEO.
Prieto, L Velilla; Cernicharo, J; Quintana-Lacaci, G; Agúndez, M; Castro-Carrizo, A; Fonfría, J P; Marcelino, N; Zúñiga, J; Requena, A; Bastida, A; Lique, F; Guélin, M
2015-06-01
We report the detection of SiS rotational lines in high-vibrational states as well as SiO and SiC2 lines in their ground vibrational state toward IRC+10216 during the Atacama Large Millimeter Array Cycle 0. The spatial distribution of these molecules shows compact emission for SiS and a more extended emission for SiO and SiC2, and also proves the existence of an increase in the SiC2 emission at the outer shells of the circumstellar envelope. We analyze the excitation conditions of the vibrationally excited SiS using the population diagram technique, and we use a large velocity gradient model to compare with the observations. We found moderate discrepancies between the observations and the models that could be explained if SiS lines detected are optically thick. Additionally, the line profiles of the detected rotational lines in the high energy vibrational states show a decreasing linewidth with increasing energy levels. This may be evidence that these lines could be excited only in the inner shells, i.e., the densest and hottest, of the circumstellar envelope of IRC+10216.
SI-BEARING MOLECULES TOWARD IRC+10216: ALMA UNVEILS THE MOLECULAR ENVELOPE OF CW LEO
Prieto, L. Velilla; Cernicharo, J.; Quintana-Lacaci, G.; Agúndez, M.; Castro-Carrizo, A.; Fonfría, J. P.; Marcelino, N.; Zúñiga, J.; Requena, A.; Bastida, A.; Lique, F.; Guélin, M.
2015-01-01
We report the detection of SiS rotational lines in high-vibrational states as well as SiO and SiC2 lines in their ground vibrational state toward IRC+10216 during the Atacama Large Millimeter Array Cycle 0. The spatial distribution of these molecules shows compact emission for SiS and a more extended emission for SiO and SiC2, and also proves the existence of an increase in the SiC2 emission at the outer shells of the circumstellar envelope. We analyze the excitation conditions of the vibrationally excited SiS using the population diagram technique, and we use a large velocity gradient model to compare with the observations. We found moderate discrepancies between the observations and the models that could be explained if SiS lines detected are optically thick. Additionally, the line profiles of the detected rotational lines in the high energy vibrational states show a decreasing linewidth with increasing energy levels. This may be evidence that these lines could be excited only in the inner shells, i.e., the densest and hottest, of the circumstellar envelope of IRC+10216. PMID:26688711
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2017-02-01
Nowadays, the vibration analysis of rotating machine signals is a well-established methodology, rooted in powerful tools offered, in particular, by the theory of cyclostationary (CS) processes. Among them, the squared envelope spectrum (SES) is probably the most popular to detect random CS components which are typical symptoms, for instance, of rolling element bearing faults. Recent research has shifted towards the extension of existing CS tools - originally devised in constant speed conditions - to the case of variable speed conditions. Many of these works combine the SES with computed order tracking after some preprocessing steps. The principal aim of this paper is to organize this dispersed research into a structured comprehensive framework. Three original features are furnished. First, a model of rotating machine signals is introduced which sheds light on the various components to be expected in the SES. Second, a critical comparison is made of three sophisticated methods, namely, the improved synchronous average, the cepstrum prewhitening, and the generalized synchronous average, used for suppressing the deterministic part. Also, a general envelope enhancement methodology which combines the latter two techniques with a time-domain filtering operation is revisited. All theoretical findings are experimentally validated on simulated and real-world vibration signals.
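The SES itself, the magnitude spectrum of the squared Hilbert envelope, can be sketched on a synthetic constant-speed bearing signal. The fault frequency, resonance frequency, impact decay, and noise level below are illustrative assumptions chosen only to produce the characteristic SES line at the fault repetition rate.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                     # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(5)

# Synthetic bearing-fault signal: repetitive impacts at an assumed 97 Hz fault
# frequency, each exciting a 3 kHz structural resonance, buried in noise.
fault_hz = 97.0
x = np.zeros(t.size)
for t0 in np.arange(0, 1.0, 1 / fault_hz):
    idx = (t >= t0) & (t < t0 + 0.005)
    x[idx] += np.sin(2 * np.pi * 3000 * (t[idx] - t0)) * np.exp(-(t[idx] - t0) / 0.001)
x += 0.5 * rng.standard_normal(t.size)

# Squared envelope spectrum: magnitude spectrum of the squared Hilbert envelope.
env2 = np.abs(hilbert(x)) ** 2
env2 -= env2.mean()                           # remove the DC component
ses = np.abs(np.fft.rfft(env2))
freqs = np.fft.rfftfreq(env2.size, 1 / fs)

band = (freqs > 20) & (freqs < 500)
peak = freqs[band][np.argmax(ses[band])]
print(f"dominant SES line ~ {peak:.0f} Hz")
```

The fault is invisible in the raw spectrum (the energy sits at the 3 kHz resonance), but the envelope demodulation moves the repetition rate down to a clear low-frequency line, which is why the SES is the standard bearing diagnostic. Under variable speed, the paper's topic, order tracking must be applied first so this line does not smear.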
Uncertainty of Videogrammetric Techniques used for Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tianshu; DeLoach, Richard
2002-01-01
The uncertainty of videogrammetric techniques used for the measurement of static aeroelastic wind tunnel model deformation and wind tunnel model pitch angle is discussed. Sensitivity analyses and geometrical considerations of uncertainty are augmented by analyses of experimental data in which videogrammetric angle measurements were taken simultaneously with precision servo accelerometers corrected for dynamics. An analysis of variance (ANOVA) to examine error dependence on angle of attack, sensor used (inertial or optical), and on tunnel state variables such as Mach number is presented. Experimental comparisons with a high-accuracy indexing table are presented. Small roll angles are found to introduce a zero-shift in the measured angles. It is shown experimentally that, provided the proper constraints necessary for a solution are met, a single-camera solution can be comparable to a 2-camera intersection result. The relative immunity of optical techniques to dynamics is illustrated.
ARM Best Estimate Data (ARMBE) Products for Climate Science for a Sustainable Energy Future (CSSEF)
Riihimaki, Laura; Gaustad, Krista; McFarlane, Sally
2014-06-12
This data set was created for the Climate Science for a Sustainable Energy Future (CSSEF) model testbed project; it extends the hourly average ARMBE dataset to other extended facility sites and includes uncertainty estimates. Uncertainty estimates were needed in order to use uncertainty quantification (UQ) techniques with the data.
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
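The phase-subsampling logic behind such sampling-uncertainty estimates can be illustrated with a toy series. The gamma-distributed "rainfall" below is synthetic, and the function name `sampling_rmse` is my own, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly rainfall record (30 days): intermittent, skewed amounts.
# Purely illustrative -- not the radar data set used in the study.
n_hours = 30 * 24
rain = rng.gamma(shape=0.2, scale=2.0, size=n_hours) * (rng.random(n_hours) < 0.15)

def sampling_rmse(series, interval):
    """RMS error of the mean rain rate estimated from one sample every
    `interval` hours, evaluated over all possible sampling phases."""
    estimates = np.array([series[offset::interval].mean()
                          for offset in range(interval)])
    return np.sqrt(np.mean((estimates - series.mean()) ** 2))

# Sampling uncertainty as a function of the regular sampling interval.
rmse_3h = sampling_rmse(rain, 3)
rmse_12h = sampling_rmse(rain, 12)
```

The study characterizes how such errors scale with space/time domain and rainfall statistics; this sketch only shows the direct (non-parametric) evaluation for one series.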
Bertrand-Krajewski, J L; Bardin, J P; Mourad, M; Béranger, Y
2003-01-01
Assessing the functioning and the performance of urban drainage systems on both rainfall event and yearly time scales is usually based on online measurements of flow rates and on samples of influent and effluent for some rainfall events per year. In order to draw pertinent scientific and operational conclusions from the measurement results, it is absolutely necessary to use appropriate methods and techniques in order to i) calibrate sensors and analytical methods, ii) validate raw data, iii) evaluate measurement uncertainties, iv) evaluate the number of rainfall events to sample per year in order to determine performance indicators with a given uncertainty. Based on previous work, the paper gives a synthetic review of the required methods and techniques, and illustrates their application to storage and settling tanks. Experiments show that, under controlled and careful experimental conditions, relative uncertainties are about 20% for flow rates in sewer pipes, 6-10% for volumes, 25-35% for TSS concentrations and loads, and 18-276% for TSS removal rates. In order to evaluate the annual pollutant interception efficiency of storage and settling tanks with a given uncertainty, efforts should first be devoted to decreasing the sampling uncertainty by increasing the number of sampled events.
Uncertainty of fast biological radiation dose assessment for emergency response scenarios.
Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens
2017-01-01
Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
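A minimal sketch of the Bayesian dose-estimation idea, assuming a Poisson dicentric count and a linear-quadratic calibration curve. The coefficients, cell number and count below are invented, not values from the MULTIBIODOSE/RENEB calibrations:

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical linear-quadratic calibration curve for dicentrics per cell:
# yield(D) = c + alpha*D + beta*D^2 (illustrative coefficients).
c, alpha, beta = 0.001, 0.02, 0.06

n_cells = 500    # cells scored in the (fast) assay
observed = 40    # dicentrics counted

# Grid posterior over dose with a flat prior on [0, 6] Gy.
doses = np.linspace(0.0, 6.0, 601)
dx = doses[1] - doses[0]
expected = n_cells * (c + alpha * doses + beta * doses ** 2)
posterior = poisson.pmf(observed, expected)
posterior /= posterior.sum() * dx           # normalize the density

# Point estimate and a 95% credible interval from the posterior.
dose_map = doses[np.argmax(posterior)]
cdf = np.cumsum(posterior) * dx
lo, hi = doses[np.searchsorted(cdf, 0.025)], doses[np.searchsorted(cdf, 0.975)]
```

The credible interval is the uncertainty statement that, per the abstract, should not be ignored when triage categories are assigned from fast assessments.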
Measurement of absolute gamma emission probabilities
NASA Astrophysics Data System (ADS)
Sumithrarachchi, Chandana S.; Rengan, Krish; Griffin, Henry C.
2003-06-01
The energies and emission probabilities (intensities) of gamma-rays emitted in radioactive decays of particular nuclides are the most important characteristics by which to quantify mixtures of radionuclides. Often, quantification is limited by uncertainties in measured intensities. A technique was developed to reduce these uncertainties. The method involves obtaining a pure sample of a nuclide using radiochemical techniques, and using appropriate fractions for beta and gamma measurements. The beta emission rates were measured using a liquid scintillation counter, and the gamma emission rates were measured with a high-purity germanium detector. Results were combined to obtain absolute gamma emission probabilities. All sources of uncertainties greater than 0.1% were examined. The method was tested with 38Cl and 88Rb.
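The combination step can be sketched as a simple ratio with Poisson counting uncertainties added in quadrature. Counts, times and the detector efficiency below are invented, and the beta counting efficiency is assumed to be 100% for simplicity (real liquid scintillation work must correct for it):

```python
import math

# Absolute gamma emission probability from paired measurements of the same
# source: P_gamma = R_gamma / R_beta (illustrative numbers only).
n_beta, t_beta = 1_000_000, 600.0    # liquid scintillation counts, live time (s)
n_gamma, t_gamma = 85_000, 600.0     # HPGe full-energy-peak counts, live time (s)
eff_gamma = 0.10                     # assumed HPGe full-energy-peak efficiency

r_beta = n_beta / t_beta                   # decay rate (beta eff. assumed 1.0)
r_gamma = n_gamma / t_gamma / eff_gamma    # efficiency-corrected gamma rate

p_gamma = r_gamma / r_beta                 # absolute emission probability

# Counting-statistics terms only, combined in quadrature.
rel_u = math.sqrt(1 / n_beta + 1 / n_gamma)
u_p = p_gamma * rel_u
```

With counts at this level the counting contribution is ~0.4%, which is why the authors audit every other uncertainty source above 0.1%.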
NASA Astrophysics Data System (ADS)
Sanz, Claude; Giusca, Claudiu; Morantz, Paul; Marin, Antonio; Chérif, Ahmed; Schneider, Jürgen; Mainaud-Durand, Hélène; Shore, Paul; Steffens, Norbert
2018-07-01
The accurate characterisation of a copper–beryllium wire with a diameter of 0.1 mm is one of the steps to increase the precision of future accelerators’ pre-alignment. Novelties in measuring the wire properties were found in order to overcome the difficulties brought by its small size. This paper focuses on an implementation of a chromatic-confocal sensor leading to a sub-micrometric uncertainty on the form measurements. Hence, this text reveals a high-accuracy metrology technique applicable to objects with small diameters: it details the methodology, describes a validation by comparison with a reference and specifies the uncertainty budget of this technique.
Joint inversion of regional and teleseismic earthquake waveforms
NASA Astrophysics Data System (ADS)
Baker, Mark R.; Doser, Diane I.
1988-03-01
A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.
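The weighting and a priori stabilization described here amount, for a single linearized step, to a damped weighted least-squares solve. The matrix `G`, data and uncertainties below are toy values, not seismograms:

```python
import numpy as np

# Minimize ||W(d - G m)||^2 + ||(m - m0)/s||^2, where W encodes data (noise)
# standard deviations and s the a priori parameter uncertainties.
G = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, 0.8]])
d = np.array([2.0, 2.7, 3.1])
sigma_d = np.array([0.1, 0.2, 0.1])   # data standard deviations
m0 = np.array([1.0, 1.0])             # a priori model
sigma_m = np.array([1.0, 1.0])        # a priori model uncertainties

W = np.diag(1.0 / sigma_d)
Cm_inv = np.diag(1.0 / sigma_m ** 2)

A = G.T @ W.T @ W @ G + Cm_inv
b = G.T @ W.T @ W @ (d - G @ m0)
m = m0 + np.linalg.solve(A, b)

# Linearized a posteriori covariance: inverse of the Hessian A.
Cm_post = np.linalg.inv(A)
post_std = np.sqrt(np.diag(Cm_post))
```

The a posteriori standard deviations are necessarily smaller than the a priori ones; comparing the two is exactly how unresolved parameters are identified.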
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
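Propagating damping uncertainty can be sketched with a one-subsystem SEA power balance, E = P_in/(omega*eta). The lognormal spread on eta and all numbers are assumptions for illustration, not the measured fuselage loss factors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal single-subsystem SEA power balance: P_in = omega * eta * E,
# so the band-averaged energy is E = P_in / (omega * eta).
omega = 2 * np.pi * 1000.0   # band centre frequency, rad/s
P_in = 1.0e-3                # injected power, W

# Damping loss factor with (assumed) lognormal uncertainty about eta = 0.01.
eta = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=20000)

E = P_in / (omega * eta)                 # subsystem energy per Monte Carlo draw
level_dB = 10 * np.log10(E / 1e-12)      # energy level re 1 pJ

mean_level = level_dB.mean()
spread_dB = level_dB.std()               # damping-induced spread of the response
```

In the paper this damping-induced variance is combined with the ensemble variance predicted by SEA itself to widen the prediction bounds.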
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods, used in pharmaceutical analysis, consists of several components. The analysis of the most important sources of the variability of quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effect of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
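The ANOVA test the authors describe can be illustrated on invented plate-count data with an analyst (reading/interpreting) effect; `scipy.stats.f_oneway` performs the one-way analysis:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)

# Hypothetical plate counts (CFU) for one factor: three analysts reading
# the same sample.  Analyst B is given a systematic reading bias.
analyst_a = rng.normal(100, 5, size=12)
analyst_b = rng.normal(115, 5, size=12)
analyst_c = rng.normal(100, 5, size=12)

f_stat, p_value = f_oneway(analyst_a, analyst_b, analyst_c)
analyst_effect = p_value < 0.05   # significant reading/interpreting effect
```

Estimating interactions between factors, as the paper does, would require a multi-way ANOVA rather than this one-way sketch.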
Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong
2012-01-01
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique’s sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots in the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET, using 36 equally spaced beam angles; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans’ sensitivity to uncertainties. Results: Without robust optimization, the DET method is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization to account for range and setup uncertainties improved the robustness of IMPT plans; however, the extent of improvement varied. Conclusions: IMPT’s sensitivity to uncertainties can be reduced by using robust optimization.
They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) perturbed dose distribution, which follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans’ ability to retain robustness despite the presence of uncertainties. PMID:22755694
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models or for models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness subsumed into randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals via α-cuts. An important property of this theory is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability.
A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is demonstrated by assessing the uncertainty propagation of parameter values in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
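The crisp LHS backbone of the method can be sketched as follows; the fuzzy layer (membership functions and α-cuts) is specific to FLHS and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n_samples, n_dims, rng):
    """Standard (crisp) Latin Hypercube Sampling on [0, 1]^d: each of the
    n_samples equal-probability strata of every dimension receives exactly
    one sample, then the strata are randomly paired across dimensions."""
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

samples = latin_hypercube(10, 2, rng)
```

Mapping the unit-interval samples through each input's inverse CDF yields samples from the stated PDFs; FLHS would additionally attach fuzzy memberships to the epistemic inputs.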
Bozler, Julianna; Nguyen, Huy Q; Rogers, Gregory C; Bosco, Giovanni
2014-12-30
Although the nuclear envelope is known primarily for its role as a boundary between the nucleus and cytoplasm in eukaryotes, it plays a vital and dynamic role in many cellular processes. Studies of nuclear structure have revealed tissue-specific changes in nuclear envelope architecture, suggesting that its three-dimensional structure contributes to its functionality. Despite the importance of the nuclear envelope, the factors that regulate and maintain nuclear envelope shape remain largely unexplored. The nuclear envelope makes extensive and dynamic interactions with the underlying chromatin. Given this inexorable link between chromatin and the nuclear envelope, it is possible that local and global chromatin organization reciprocally impact nuclear envelope form and function. In this study, we use Drosophila salivary glands to show that the three-dimensional structure of the nuclear envelope can be altered with condensin II-mediated chromatin condensation. Both naturally occurring and engineered chromatin-envelope interactions are sufficient to allow chromatin compaction forces to drive distortions of the nuclear envelope. Weakening of the nuclear lamina further enhanced envelope remodeling, suggesting that envelope structure is capable of counterbalancing chromatin compaction forces. Our experiments reveal that the nucleoplasmic reticulum is born of the nuclear envelope and remains dynamic in that they can be reabsorbed into the nuclear envelope. We propose a model where inner nuclear envelope-chromatin tethers allow interphase chromosome movements to change nuclear envelope morphology. Therefore, interphase chromatin compaction may be a normal mechanism that reorganizes nuclear architecture, while under pathological conditions, such as laminopathies, compaction forces may contribute to defects in nuclear morphology. Copyright © 2015 Bozler et al.
Demodulation techniques for the amplitude modulated laser imager
NASA Astrophysics Data System (ADS)
Mullen, Linda; Laux, Alan; Cochenour, Brandon; Zege, Eleonora P.; Katsev, Iosif L.; Prikhach, Alexander S.
2007-10-01
A new technique has been found that uses in-phase and quadrature phase (I/Q) demodulation to optimize the images produced with an amplitude-modulated laser imaging system. An I/Q demodulator was used to collect the I/Q components of the received modulation envelope. It was discovered that by adjusting the local oscillator phase and the modulation frequency, the backscatter and target signals can be analyzed separately via the I/Q components. This new approach enhances image contrast beyond what was achieved with a previous design that processed only the composite magnitude information.
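The I/Q recovery of envelope magnitude and phase can be sketched as follows. The tone parameters are invented, and averaging over whole periods stands in for the low-pass stage of a hardware demodulator:

```python
import numpy as np

# I/Q demodulation of a received modulation envelope: mixing with quadrature
# local oscillators and low-pass filtering recovers magnitude and phase.
fs = 5000.0
f_mod = 50.0                      # modulation frequency, Hz
t = np.arange(int(fs)) / fs       # 1 s = an integer number of periods

A_true, phi_true = 0.8, 0.6       # envelope amplitude and phase to recover
x = A_true * np.cos(2 * np.pi * f_mod * t + phi_true)

# Mix with in-phase and quadrature LOs; the mean over whole periods acts
# as an ideal low-pass filter.
i_comp = np.mean(2 * x * np.cos(2 * np.pi * f_mod * t))
q_comp = np.mean(-2 * x * np.sin(2 * np.pi * f_mod * t))

magnitude = np.hypot(i_comp, q_comp)   # recovered envelope amplitude
phase = np.arctan2(q_comp, i_comp)     # recovered envelope phase
```

Adjusting the LO phase, as described in the abstract, rotates the backscatter and target contributions between the I and Q channels so they can be examined separately.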
Evaluation of automobiles with alternative fuels utilizing multicriteria techniques
NASA Astrophysics Data System (ADS)
Brey, J. J.; Contreras, I.; Carazo, A. F.; Brey, R.; Hernández-Díaz, A. G.; Castro, A.
This work applies the non-parametric technique of Data Envelopment Analysis (DEA) to conduct a multicriteria comparison of some existing and under-development technologies in the automotive sector. The results indicate that some of the technologies under development, such as hydrogen fuel cell vehicles, can be classified as efficient when evaluated as a function of environmental and economic criteria, with greater importance being given to the environmental criteria. The article also demonstrates the need to improve the hydrogen-based technology, in comparison with the others, in aspects such as vehicle sale costs and fuel price.
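An input-oriented CCR DEA model of the kind used here can be solved per technology with a small linear program. The single input/output data below are invented, not the vehicle criteria from the study:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA (envelopment form), solved once per decision-making
# unit (DMU): min theta  s.t.  sum_j lam_j*x_j <= theta*x_k,
#                              sum_j lam_j*y_j >= y_k,  lam >= 0.
inputs = np.array([[2.0], [4.0], [8.0], [4.0]])    # e.g. cost per vehicle
outputs = np.array([[2.0], [3.0], [4.0], [1.0]])   # e.g. environmental score
n, m, s = inputs.shape[0], inputs.shape[1], outputs.shape[1]

efficiencies = []
for k in range(n):
    # Decision vector: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  lam . x - theta * x_k <= 0
    A_in = np.hstack([-inputs[[k]].T, inputs.T])        # shape (m, 1 + n)
    # Outputs: -lam . y <= -y_k
    A_out = np.hstack([np.zeros((s, 1)), -outputs.T])   # shape (s, 1 + n)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -outputs[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    efficiencies.append(res.x[0])   # theta = 1 means DEA-efficient
```

With several inputs and outputs, as in the vehicle comparison, only the two data arrays change; the per-DMU linear program is identical.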
Superresolution Microscopy of the Nuclear Envelope and Associated Proteins.
Xie, Wei; Horn, Henning F; Wright, Graham D
2016-01-01
Superresolution microscopy is undoubtedly one of the most exciting technologies since the invention of the optical microscope. Capable of nanometer-scale resolution to surpass the diffraction limit and coupled with the versatile labeling techniques available, it is revolutionizing the study of cell biology. Our understanding of the nucleus, the genetic and architectural center of the cell, has gained great advancements through the application of various superresolution microscopy techniques. This chapter describes detailed procedures of multichannel superresolution imaging of the mammalian nucleus, using structured illumination microscopy and single-molecule localization microscopy.
Fröba, Andreas P; Kremer, Heiko; Leipertz, Alfred
2008-10-02
The density, refractive index, interfacial tension, and viscosity of the ionic liquids (ILs) [EMIM][EtSO4] (1-ethyl-3-methylimidazolium ethylsulfate), [EMIM][NTf2] (1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide), [EMIM][N(CN)2] (1-ethyl-3-methylimidazolium dicyanamide), and [OMA][NTf2] (trioctylmethylammonium bis(trifluoromethylsulfonyl)imide) were studied as a function of temperature at atmospheric pressure, both by conventional techniques and by surface light scattering (SLS). A vibrating tube densimeter was used for the measurement of density at temperatures from (273.15 to 363.15) K, and the results have an expanded uncertainty (k = 2) of ±0.02%. Using an Abbe refractometer, the refractive index was measured for temperatures between (283.15 and 313.15) K with an expanded uncertainty (k = 2) of about ±0.0005. The interfacial tension was obtained from the pendant drop technique at a temperature of 293.15 K with an expanded uncertainty (k = 2) of ±1%. For higher and lower temperatures, the interfacial tension was estimated by an adequate prediction scheme based on the datum at 293.15 K and the temperature dependence of density. For the ILs studied within this work, to a first-order approximation, the quantity directly accessible by the SLS technique was the ratio of surface tension to dynamic viscosity. By combining the experimental results of the SLS technique with density and interfacial tension from conventional techniques, the dynamic viscosity could be obtained for temperatures between (273.15 and 333.15) K with an estimated expanded uncertainty (k = 2) of less than ±3%. The measured density, refractive index, and viscosity are represented by interpolating expressions, with differences between the experimental and calculated values that are comparable with, but always smaller than, the expanded uncertainties (k = 2).
Besides a comparison with the literature, the influence of structural variations on the thermophysical properties of the ILs is discussed in detail. The viscosities mostly agree with values reported in the literature within the combined estimated expanded uncertainties (k = 2) of the measurements, while our density and interfacial tension data differ by more than ±1% and ±5%.
Aboal, J R; Boquete, M T; Carballeira, A; Casanova, A; Debén, S; Fernández, J A
2017-05-01
In this study we examined 6080 data gathered by our research group during more than 20 years of research on the moss biomonitoring technique, in order to quantify the variability generated by different aspects of the protocol and to calculate the overall measurement uncertainty associated with the technique. The median variance of the concentrations of different pollutants measured in moss tissues attributed to the different methodological aspects was high, reaching values of 2851 (ng·g⁻¹)² for Cd (sample treatment), 35.1 (μg·g⁻¹)² for Cu (sample treatment), and 861.7 (ng·g⁻¹)² for Hg (material selection). These variances correspond to standard deviations that constitute 67%, 126% and 59% of the regional background levels of these elements in the study region. The overall measurement uncertainty associated with the worst experimental protocol (5 subsamples, refrigerated, washed, 5 × 5 m sampling area, and once-a-year sampling) was between 2 and 6 times higher than that associated with the optimal protocol (30 subsamples, dried, unwashed, 20 × 20 m sampling area, and once-a-week sampling), and between 1.5 and 7 times higher than that associated with the standardized protocol (30 subsamples and once-a-year sampling). The overall measurement uncertainty associated with the standardized protocol could generate variations of between 14 and 47% in the regional background levels of Cd, Cu, Hg, Pb and Zn in the study area, and much higher levels of variation at polluted sampling sites. We demonstrated that although the overall measurement uncertainty of the technique is still high, it can be reduced by using already well-defined aspects of the protocol.
Further standardization of the protocol together with application of the information on the overall measurement uncertainty would improve the reliability and comparability of the results of different biomonitoring studies, thus extending use of the technique beyond the context of scientific research. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dunne, Erin; Galbally, Ian E.; Cheng, Min; Selleck, Paul; Molloy, Suzie B.; Lawson, Sarah J.
2018-01-01
Understanding uncertainty is essential for utilizing atmospheric volatile organic compound (VOC) measurements in robust ways to develop atmospheric science. This study describes an inter-comparison of the VOC data, and the derived uncertainty estimates, measured with three independent techniques (PTR-MS, proton-transfer-reaction mass spectrometry; GC-FID-MS, gas chromatography with flame-ionization and mass spectrometric detection; and DNPH-HPLC, 2,4-dinitrophenylhydrazine derivatization followed by analysis by high-performance liquid chromatography) during routine monitoring as part of the Sydney Particle Study (SPS) campaign in 2012. Benzene, toluene, C8 aromatics, isoprene, formaldehyde and acetaldehyde were selected for the comparison, based on objective selection criteria from the available data. Bottom-up uncertainty analyses were undertaken for each compound and each measurement system. Top-down uncertainties were quantified via the inter-comparisons. In all seven comparisons, the correlations between independent measurement techniques were high, with R² values with a median of 0.92 (range 0.75-0.98) and small root mean square of the deviations (RMSD) of the observations from the regression line with a median of 0.11 (range 0.04-0.23 ppbv). These results give a high degree of confidence that for each comparison the response of the two independent techniques is dominated by the same constituents. The slope and intercept as determined by reduced major axis (RMA) regression give a different story. The slopes varied considerably with a median of 1.25 and a range of 1.16-2.01. The intercepts varied with a median of 0.04 and a range of -0.03 to 0.31 ppbv. An ideal comparison would give a slope of 1.00 and an intercept of 0.
Some sources of uncertainty that are poorly quantified by the bottom-up uncertainty analysis method were identified, including contributions of non-target compounds to the measurement of the target compound for benzene, toluene and isoprene by PTR-MS, as well as under-reporting of formaldehyde, acetaldehyde and acetone by the DNPH technique. In addition, this study identified a specific interference of liquid water with acetone measurements by the DNPH technique. The relationships reported for Sydney 2012 were incorporated into a larger analysis with 61 similar published inter-comparison studies for the same compounds. Overall, for the light aromatics, isoprene and the C1-C3 carbonyls, the uncertainty in a set of measurements varies by a factor of between 1.5 and 2. These uncertainties (~50%) are significantly higher than uncertainties estimated using standard propagation of error methods, which in this case were ~22% or less, and are the result of the presence of poorly understood or neglected processes that affect the measurement and its uncertainty. The uncertainties in VOC measurements identified here should be considered when assessing the reliability of VOC measurements from routine monitoring with individual, stand-alone instruments; when utilizing VOC data to constrain and inform air quality and climate models; when using VOC observations for human exposure studies; and when comparing with satellite retrievals.
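The reduced major axis (RMA) regression used for these slope/intercept comparisons is simple to compute: the slope is the signed ratio of the two standard deviations. A minimal sketch with invented mixing-ratio data:

```python
import numpy as np

def rma_regression(x, y):
    """Reduced major axis (geometric mean) regression: the symmetric fit
    appropriate when both variables carry measurement uncertainty, as in
    instrument inter-comparisons."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Two hypothetical instruments measuring the same mixing ratios (ppbv);
# instrument 2 is constructed to read 25% high with a 0.04 ppbv offset.
x = np.array([0.2, 0.5, 1.0, 1.5, 2.3, 3.1])
y = 1.25 * x + 0.04
slope, intercept = rma_regression(x, y)
```

Unlike ordinary least squares, RMA does not attenuate the slope when the x-instrument is noisy, which is why it is the standard choice for such comparisons.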
Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.
2017-12-01
Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. 
Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.
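The partial pooling at the heart of hierarchical (regional) coefficients can be illustrated with a conjugate normal-normal toy model; the variances are assumed known here, whereas a full hierarchical application like SPARROW estimates them from the data:

```python
import numpy as np

# Per-region sample estimates of some coefficient (invented values).
region_means = np.array([2.0, 3.5, 5.0, 4.2])
sigma2 = 0.5 ** 2    # within-region sampling variance (assumed known)
tau2 = 1.0 ** 2      # between-region (hierarchical) variance (assumed known)
mu = region_means.mean()   # grand mean (plug-in estimate)

# Posterior mean for each region: precision-weighted blend of its own
# estimate and the grand mean -- noisy regions are shrunk harder.
w = (1 / sigma2) / (1 / sigma2 + 1 / tau2)
pooled = w * region_means + (1 - w) * mu
```

Each regional estimate moves toward the grand mean by an amount set by the ratio of sampling noise to between-region spread, which is how hierarchical models stabilize poorly sampled regions without forcing a single global coefficient.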
Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma
2009-10-01
Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
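The quadrature combination of purity, mass and volume contributions can be sketched as follows; all component values are invented for illustration:

```python
import math

# Combined standard uncertainty of a certified solution concentration
# c = m * P / V, using the usual GUM quadrature of relative uncertainties.
purity = 0.998;   u_purity = 0.001    # mass fraction of the neat material
mass_mg = 10.0;   u_mass = 0.02       # balance uncertainty, mg
volume_mL = 10.0; u_volume = 0.01     # solvent addition (density-based), mL

conc = mass_mg * purity / volume_mL   # certified concentration, mg/mL

u_rel = math.sqrt((u_purity / purity) ** 2 +
                  (u_mass / mass_mg) ** 2 +
                  (u_volume / volume_mL) ** 2)
u_conc = conc * u_rel                 # combined standard uncertainty
U_conc = 2 * u_conc                   # expanded uncertainty, k = 2
```

Comparing a vendor's stated expanded uncertainty against such a budget shows quickly whether factors like residual water or solution density were actually included.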
25 CFR 90.43 - Canvass of election returns.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the inner envelope, the voter fails to sign the statement appearing on the outer envelope, and for failure to seal the inner envelope or enclose the inner envelope in the outer envelope. Votes cast for... all other ballots have been counted, the sealed inner envelopes containing the absentee ballots shall...
A technique for the assessment of fighter aircraft precision controllability
NASA Technical Reports Server (NTRS)
Sisk, T. R.
1978-01-01
Today's emerging fighter aircraft are maneuvering as well at normal accelerations of 7 to 8 g's as their predecessors did at 4 to 5 g's. This improved maneuvering capability has significantly expanded their operating envelope and made the task of evaluating handling qualities more difficult. This paper describes a technique for assessing the precision controllability of highly maneuverable aircraft, a technique that was developed to evaluate the effects of buffet intensity on gunsight tracking capability and found to be a useful tool for the general assessment of fighter aircraft handling qualities. It has also demonstrated its usefulness for evaluating configuration and advanced flight control system refinements. This technique is believed to have application to future aircraft dynamics and pilot-vehicle interface studies.
Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Morelli, Eugene A.
2014-01-01
Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.
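The multivariate orthogonal function approach can be sketched as follows; the regressors, coefficients, and data below are synthetic stand-ins, not the Impala flight-test model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flight data: a nonlinear "aerodynamic coefficient" depending on
# angle of attack (alpha) and elevator deflection (delta), plus sensor noise.
alpha = rng.uniform(-0.2, 0.4, 300)
delta = rng.uniform(-0.3, 0.3, 300)
coeff = 0.1 + 5.0*alpha - 8.0*alpha**2 + 0.7*delta + rng.normal(0, 0.01, 300)

# Candidate multivariate polynomial regressors (a simple stand-in for the
# orthogonal-function generation used in the paper).
X = np.column_stack([np.ones_like(alpha), alpha, alpha**2, delta, alpha*delta])

# Orthogonalize the regressors with a QR decomposition; each orthogonal
# column's contribution to the fit can then be judged independently, which
# is what makes onboard model-structure selection cheap.
Q, R = np.linalg.qr(X)
theta_q, *_ = np.linalg.lstsq(Q, coeff, rcond=None)
theta = np.linalg.solve(R, theta_q)  # back-transform to the original regressors

pred = X @ theta
residual_rms = np.sqrt(np.mean((coeff - pred) ** 2))
```

Because the orthogonalized fit reduces to independent projections, adding or dropping a candidate regressor does not require refitting the others, a property well suited to real-time onboard computation.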
Assessing and reducing hydrogeologic model uncertainty
USDA-ARS?s Scientific Manuscript database
NRC is sponsoring research that couples model abstraction techniques with model uncertainty assessment methods. Insights and information from this program will be useful in decision making by NRC staff, licensees and stakeholders in their assessment of subsurface radionuclide transport. All analytic...
Helioseismic and neutrino data-driven reconstruction of solar properties
NASA Astrophysics Data System (ADS)
Song, Ningqiang; Gonzalez-Garcia, M. C.; Villante, Francesco L.; Vinyoles, Nuria; Serenelli, Aldo
2018-06-01
In this work, we use Bayesian inference to quantitatively reconstruct the solar properties most relevant to the solar composition problem using as inputs the information provided by helioseismic and solar neutrino data. In particular, we use a Gaussian process to model the functional shape of the opacity uncertainty to gain flexibility and become as free as possible from prejudice in this regard. With these tools we first readdress the statistical significance of the solar composition problem. Furthermore, starting from a composition unbiased set of standard solar models (SSMs) we are able to statistically select those with solar chemical composition and other solar inputs which better describe the helioseismic and neutrino observations. In particular, we are able to reconstruct the solar opacity profile in a data-driven fashion, independently of any reference opacity tables, obtaining a 4 per cent uncertainty at the base of the convective envelope and 0.8 per cent at the solar core. When systematic uncertainties are included, results are 7.5 per cent and 2 per cent, respectively. In addition, we find that the values of most of the other inputs of the SSMs required to better describe the helioseismic and neutrino data are in good agreement with those adopted as the standard priors, with the exception of the astrophysical factor S11 and the microscopic diffusion rates, for which data suggests a 1 per cent and 30 per cent reduction, respectively. As an output of the study we derive the corresponding data-driven predictions for the solar neutrino fluxes.
Verification and Tuning of an Adaptive Controller for an Unmanned Air Vehicle
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.
2010-01-01
This paper focuses on the analysis and tuning of a controller based on the Adaptive Control Technology for Safe Flight (ACTS) architecture. The ACTS architecture consists of a nominal, non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. A framework unifying control verification and gain tuning is used to make the controller's ability to satisfy the closed-loop requirements more robust to uncertainty. In this paper we tune the gains of both controllers using this approach. Some advantages and drawbacks of adaptation are identified by performing a global robustness assessment of both the adaptive controller and its non-adaptive counterpart. The analyses used to determine these characteristics are based on evaluating the degradation in closed-loop performance resulting from uncertainties having increasing levels of severity. The specific adverse conditions considered can be grouped into three categories: aerodynamic uncertainties, structural damage, and actuator failures. These failures include partial and total loss of control effectiveness, locked-in-place control surface deflections, and engine-out conditions. The requirements considered are the peak structural loading, the ability of the controller to track pilot commands, the ability of the controller to keep the aircraft's state within the reliable flight envelope, and the handling/riding qualities of the aircraft. The nominal controller resulting from these tuning strategies was successfully validated using the NASA GTM Flight Test Vehicle.
NASA Astrophysics Data System (ADS)
Dobre, M.; Peruzzi, A.; Kalemci, M.; Van Geel, J.; Maeck, M.; Uytun, A.
2018-05-01
Recent international comparisons showed that there is still room for improvement in triple point of water (TPW) realization uncertainty. Large groups of cells manufactured, maintained and measured in similar conditions still show a spread in the realized TPW temperature that is larger than the best measurement uncertainties (25 µK). One cause is the time-dependent concentration of dissolved impurities in water. The origin of such impurities is the glass/quartz envelope dissolution during a cell lifetime. The effect is a difference in the triple point temperature proportional to the impurity concentration. In order to measure this temperature difference and to investigate the effect of different types of impurities, we manufactured doped cells with different concentrations of silicon (Si), boron (B), sodium (Na) and potassium (K), the main chemical components of the glass. To identify any influence of the filling process, two completely independent manufacturing procedures were followed in two different laboratories, both national metrology institutes (VSL, Netherlands and UME, Turkey). The cells' glass and filling water were also different, while the doping materials were identical. Measuring the temperature difference as a function of the liquid fraction is a method to obtain information about impurity concentrations in TPW. Only cells doped with 1 µmol·mol-1 B, Na and K proved to be suitable for measurements at different liquid fractions. We present here the results with related uncertainties and discuss the critical points in this experimental approach.
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. 
Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
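The "source-specific" dressing step described above can be sketched numerically: each streamflow ensemble member gets an independent draw from the hydrological-uncertainty distribution. The Gaussian error model and all numbers here are illustrative, not the Meuse/Rhine post-processors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Raw streamflow ensemble obtained by routing meteorological ensemble
# members through a hydrological model (synthetic numbers, m^3/s).
raw_ensemble = rng.normal(100.0, 8.0, size=50)

# "Dressing": add an independent draw from the (here Gaussian) hydrological
# uncertainty distribution to each ensemble member, so the dressed ensemble
# represents the total (meteorological + hydrological) uncertainty.
sigma_hydro = 5.0
n_dress = 20  # dressing draws per member
dressed = (raw_ensemble[:, None]
           + rng.normal(0.0, sigma_hydro, size=(raw_ensemble.size, n_dress))).ravel()

# Dressing widens the predictive distribution: total variance is roughly
# the sum of the meteorological and hydrological variances.
```

The "lumped" alternative would instead fit a single error distribution to deterministic forecasts against verifying observations, absorbing both uncertainty sources at once.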
PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabharwall, Piyush; Skifton, Richard; Stoots, Carl
2013-12-01
Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This image displacement is indicated by the location of the largest peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. This study is developing a reliable method, and a computer code implementing it, to automatically analyze the uncertainty and sensitivity of the measured data. The main objective of this study is to develop a well-established uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors. In this study, the uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and from the PIV system (including particle motion, image distortion, and data processing). Then, each uncertainty source is mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.
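The FFT-based cross-correlation at the heart of PIV can be sketched with synthetic interrogation windows (circular correlation of random imagery with a known shift, not MIR data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic 32x32 interrogation windows: the second is the first shifted
# by a known particle displacement of (dy, dx) = (3, 5) pixels.
win = rng.random((32, 32))
true_dy, true_dx = 3, 5
shifted = np.roll(np.roll(win, true_dy, axis=0), true_dx, axis=1)

# Circular cross-correlation via the correlation theorem:
# corr = IFFT( conj(FFT(a)) * FFT(b) ).  The location of the largest peak
# gives the most likely displacement between the two exposures.
corr = np.fft.ifft2(np.fft.fft2(win).conj() * np.fft.fft2(shifted)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est_dy, est_dx = peak
```

In practice the integer peak location is refined with sub-pixel (e.g., Gaussian) peak fitting, and the peak-fit residual is one of the uncertainty sources such a quantification method must model.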
Morphology-based three-dimensional segmentation of coronary artery tree from CTA scans
NASA Astrophysics Data System (ADS)
Banh, Diem Phuc T.; Kyprianou, Iacovos S.; Paquerault, Sophie; Myers, Kyle J.
2007-03-01
We developed an algorithm based on a rule-based threshold framework to segment the coronary arteries from computed tomography angiography (CTA) data. Computerized segmentation of the coronary arteries is a challenging procedure due to the presence of diverse anatomical structures surrounding the heart on cardiac CTA data. The proposed algorithm incorporates various levels of image processing and organ information including region, connectivity and morphology operations. It consists of three successive stages. The first stage involves the extraction of the three-dimensional scaffold of the heart envelope. This stage is semiautomatic, requiring a reader to review the CTA scans and manually select points along the heart envelope in slices. These points are further processed using a surface spline-fitting technique to automatically generate the heart envelope. The second stage consists of segmenting the left heart chambers and coronary arteries using grayscale threshold, size and connectivity criteria. This is followed by applying morphology operations to further detach the left and right coronary arteries from the aorta. In the final stage, the 3D vessel tree is reconstructed and labeled using an Isolated Connected Threshold technique. The algorithm was developed and tested on a patient coronary artery CTA that was graciously shared by the Department of Radiology of the Massachusetts General Hospital. The test showed that our method consistently segmented the vessels above 79% of the maximum gray-level and automatically extracted 55 of the 58 coronary segments that can be seen on the CTA scan by a reader. These results are an encouraging step toward our objective of generating high resolution models of the male and female heart that will be subsequently used as phantoms for medical imaging system optimization studies.
Source analysis of auditory steady-state responses in acoustic and electric hearing.
Luke, Robert; De Vos, Astrid; Wouters, Jan
2017-02-15
Speech is a complex signal containing a broad variety of acoustic information. For accurate speech reception, the listener must perceive modulations over a range of envelope frequencies. Perception of these modulations is particularly important for cochlear implant (CI) users, as all commercial devices use envelope coding strategies. Prolonged deafness affects the auditory pathway. However, little is known of how cochlear implantation affects the neural processing of modulated stimuli. This study investigates and contrasts the neural processing of envelope rate modulated signals in acoustic and CI listeners. Auditory steady-state responses (ASSRs) are used to study the neural processing of amplitude modulated (AM) signals. A beamforming technique is applied to determine the increase in neural activity relative to a control condition, with particular attention paid to defining the accuracy and precision of this technique relative to other tomographies. In a cohort of 44 acoustic listeners, the location, activity and hemispheric lateralisation of ASSRs is characterised while systematically varying the modulation rate (4, 10, 20, 40 and 80Hz) and stimulation ear (right, left and bilateral). We demonstrate a complex pattern of laterality depending on both modulation rate and stimulation ear that is consistent with, and extends, existing literature. We present a novel extension to the beamforming method which facilitates source analysis of electrically evoked auditory steady-state responses (EASSRs). In a cohort of 5 right implanted unilateral CI users, the neural activity is determined for the 40Hz rate and compared to the acoustic cohort. Results indicate that CI users activate typical thalamic locations for 40Hz stimuli. However, complementary to studies of transient stimuli, the CI population has atypical hemispheric laterality, preferentially activating the contralateral hemisphere. Copyright © 2016. Published by Elsevier Inc.
Okada, Takaharu; Uto, Koichiro; Sasai, Masao; Lee, Chun Man; Ebara, Mitsuhiro; Aoyagi, Takao
2013-06-18
In this study, we created a nanoscale layer of hyaluronic acid (HA) on the inactivated Hemagglutinating Virus of Japan envelope (HVJ-E) via a layer-by-layer (LbL) assembly technique for CD44-targeted delivery. HVJ-E was selected as the template virus because it has shown a tumor-suppressing ability by eliciting inflammatory cytokine production in dendritic cells. Although it has been necessary to increase the tumor-targeting ability and reduce nonspecific binding, because HVJ-E fuses with virtually all cells and induces hemagglutination in the bloodstream, complete modification of single-envelope-type viruses with HA has been difficult. Therefore, we studied the surface ζ potential of HVJ-E at different pH values and carefully examined the deposition conditions for the first layer using three cationic polymers: poly-L-lysine (PLL), chitosan (CH), and glycol chitosan (GC). GC-coated HVJ-E particles showed the highest dispersibility under physiological pH and salt conditions without aggregation. An HA layer was then prepared via alternating deposition of HA and GC. The successive decoration of multilayers on HVJ-E has been confirmed by dynamic light scattering (DLS), ζ potentials, and transmission electron microscopy (TEM). An enzymatic degradation assay revealed that only the outermost HA layer was selectively degraded by hyaluronidase. However, entire layers were destabilized at lower pH. Therefore, the HA/GC-coated HVJ-E described here can be thought of as a potential bomb for cancer immunotherapy because of its ability to target CD44 as well as the explosion of nanodecorated HA/GC layers at endosomal pH, while preventing nonspecific binding at physiological pH and salt conditions such as in the bloodstream or normal tissues.
Robinson, Jason L; Fordyce, James A
2017-01-01
Among the greatest challenges facing the conservation of plants and animal species in protected areas are threats from a rapidly changing climate. An altered climate creates both challenges and opportunities for improving the management of protected areas in networks. Increasingly, quantitative tools like species distribution modeling are used to assess the performance of protected areas and predict potential responses to changing climates for groups of species, within a predictive framework. At larger geographic domains and scales, protected area network units have spatial geoclimatic properties that can be described in the gap analysis typically used to measure or aggregate the geographic distributions of species (stacked species distribution models, or S-SDM). We extend the use of species distribution modeling techniques in order to model the climate envelope (or "footprint") of individual protected areas within a network of protected areas distributed across the 48 conterminous United States and managed by the US National Park System. In our approach we treat each protected area as the geographic range of a hypothetical endemic species, then use MaxEnt and 5 uncorrelated BioClim variables to model the geographic distribution of the climatic envelope associated with each protected area unit (modeling the geographic area of park units as the range of a species). We describe the individual and aggregated climate envelopes predicted by a large network of 163 protected areas and briefly illustrate how macroecological measures of geodiversity can be derived from our analysis of the landscape ecological context of protected areas. To estimate trajectories of change in the temporal distribution of climatic features within a protected area network, we projected the climate envelopes of protected areas in current conditions onto a dataset of predicted future climatic conditions. 
Our results suggest that the climate envelopes of some parks may be locally unique or have narrow geographic distributions, and are thus prone to future shifts away from the climatic conditions in these parks in current climates. In other cases, some parks are broadly similar to large geographic regions surrounding the park or have climatic envelopes that may persist into near-term climate change. Larger parks predict larger climatic envelopes in current conditions, but on average the predicted areas of climate envelopes are smaller in our single future-conditions scenario. Individual units in a protected area network may vary in the potential for climate adaptation, and adaptive management strategies for the network should account for the landscape contexts of the geodiversity or climate diversity within individual units. Conservation strategies, including maintaining connectivity, assessing the feasibility of assisted migration and other landscape restoration or enhancements, can be optimized using analysis methods to assess the spatial properties of protected area networks in biogeographic and macroecological contexts.
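The climate-envelope idea can be illustrated with a simpler BioClim-style rectilinear envelope (a percentile bounding box per climate variable) rather than the MaxEnt models the study actually fits; all values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic climate values (e.g., 5 BioClim-like variables) at cells inside
# a hypothetical park, and at candidate cells across a larger region.
park_climate = rng.normal(loc=[10, 200, 15, 5, 60], scale=[2, 30, 3, 1, 10],
                          size=(500, 5))
region_climate = rng.normal(loc=[11, 210, 16, 5, 62], scale=[4, 60, 6, 2, 20],
                            size=(5000, 5))

# BioClim-style rectilinear envelope: the 5th-95th percentile box of the
# park's climate; region cells falling inside the box on every variable are
# predicted to share the park's climate envelope.
lo = np.percentile(park_climate, 5, axis=0)
hi = np.percentile(park_climate, 95, axis=0)
inside = np.all((region_climate >= lo) & (region_climate <= hi), axis=1)
envelope_fraction = inside.mean()
```

Projecting the same box onto a future climate raster, instead of `region_climate`, gives the trajectory-of-change comparison described in the abstract.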
Compensation of significant parametric uncertainties using sliding mode online learning
NASA Astrophysics Data System (ADS)
Schnetter, Philipp; Kruger, Thomas
An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft often is not valid throughout the complete flight envelope, aerodynamic parameters used for model based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors that, in combination with the distinctive susceptibility of small UAS towards atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC-learning is derived from variable structure theory. It considers a neural network and its training as a control problem. It is shown that stability can be guaranteed by dynamically calculating the learning rates, thus increasing robustness against external disturbances and system failures. With the resulting higher speed of convergence, a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Gair, Jonathan R.
2014-12-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
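The Gaussian process interpolation of the "waveform difference" can be sketched in one dimension; the kernel, hyperparameters, and the smooth function standing in for the model difference are all illustrative, not the paper's trained GP:

```python
import numpy as np

def rbf_kernel(x1, x2, length=0.5, amp=1.0):
    """Squared-exponential covariance between 1-D parameter values."""
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

# Training set: the "waveform difference" (accurate minus approximate model)
# evaluated at a handful of parameter values -- here a made-up smooth function.
x_train = np.linspace(0.0, 1.0, 6)
d_train = np.sin(4.0 * x_train) * 0.1

# GP regression: posterior mean and variance of the difference at new
# parameter values; this posterior plays the role of the prior over model
# error that is analytically marginalized in the likelihood.
x_test = np.linspace(0.0, 1.0, 101)
K = rbf_kernel(x_train, x_train) + 1e-10 * np.eye(x_train.size)
K_star = rbf_kernel(x_test, x_train)
alpha = np.linalg.solve(K, d_train)
mean = K_star @ alpha
var = (rbf_kernel(x_test, x_test).diagonal()
       - np.einsum('ij,ji->i', K_star, np.linalg.solve(K, K_star.T)))
```

The GP posterior variance grows between training templates, so parameter regions far from any accurate simulation automatically receive a broader model-uncertainty prior.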
A Precise Calibration Technique for Measuring High Gas Temperatures
NASA Technical Reports Server (NTRS)
Gokoglu, Suleyman A.; Schultz, Donald F.
2000-01-01
A technique was developed for direct measurement of gas temperatures in the range of 2050 K to 2700 K with improved accuracy and reproducibility. The technique utilized the low emittance of certain fibrous materials, and the uncertainty of the technique was limited by the uncertainty in the melting points of the materials, i.e., +/-15 K. The materials were pure, thin, metal-oxide fibers whose diameters varied from 60 microns to 400 microns in the experiments. The sharp increase in the emittance of the fibers upon melting was utilized as an indication of reaching a known gas temperature. The accuracy of the technique was confirmed both by calculated low emittance values of transparent fibers, of order 0.01, up to a few degrees below their melting point and by the fiber-diameter independence of the results. This melting-point temperature was approached by increments not larger than 4 K, which was accomplished by controlled increases of reactant flow rates in hydrogen-air and/or hydrogen-oxygen flames. As examples of the applications of the technique, the gas-temperature measurements were used: (a) for assessing the uncertainty in inferring gas temperatures from thermocouple measurements, and (b) for calibrating an IR camera to measure gas temperatures. The technique offers an excellent calibration reference for other gas-temperature measurement methods to improve their accuracy and reliably extend their temperature range of applicability.
A Precise Calibration Technique for Measuring High Gas Temperatures
NASA Technical Reports Server (NTRS)
Gokoglu, Suleyman A.; Schultz, Donald F.
1999-01-01
A technique was developed for direct measurement of gas temperatures in the range of 2050 K to 2700 K with improved accuracy and reproducibility. The technique utilized the low emittance of certain fibrous materials, and the uncertainty of the technique was limited by the uncertainty in the melting points of the materials, i.e., +/-15 K. The materials were pure, thin, metal-oxide fibers whose diameters varied from 60 microns to 400 microns in the experiments. The sharp increase in the emittance of the fibers upon melting was utilized as an indication of reaching a known gas temperature. The accuracy of the technique was confirmed both by calculated low emittance values of transparent fibers, of order 0.01, up to a few degrees below their melting point and by the fiber-diameter independence of the results. This melting-point temperature was approached by increments not larger than 4 K, which was accomplished by controlled increases of reactant flow rates in hydrogen-air and/or hydrogen-oxygen flames. As examples of the applications of the technique, the gas-temperature measurements were used: (a) for assessing the uncertainty in inferring gas temperatures from thermocouple measurements, and (b) for calibrating an IR camera to measure gas temperatures. The technique offers an excellent calibration reference for other gas-temperature measurement methods to improve their accuracy and reliably extend their temperature range of applicability.
Signal processing methods for in-situ creep specimen monitoring
NASA Astrophysics Data System (ADS)
Guers, Manton J.; Tittmann, Bernhard R.
2018-04-01
Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
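The analytic-envelope approach to group-velocity time-of-flight mentioned above can be sketched with a synthetic wave packet; an FFT-based Hilbert transform stands in for the one used in the monitoring system, and all signal parameters below are made up:

```python
import numpy as np

def analytic_envelope(signal):
    """Analytic envelope via the FFT-based Hilbert transform
    (equivalent to abs(scipy.signal.hilbert(signal)) for even-length input)."""
    n = signal.size
    spec = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0  # even n assumed
    return np.abs(np.fft.ifft(spec * h))

# Synthetic guided-wave packet: a Gaussian-windowed tone burst arriving at t0.
fs, f0, t0 = 1.0e6, 50.0e3, 200e-6   # sample rate (Hz), carrier (Hz), arrival (s)
t = np.arange(4096) / fs
burst = np.exp(-((t - t0) / 30e-6) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

# Group-velocity time-of-flight: the envelope peak marks wave-packet arrival.
tof_envelope = t[np.argmax(analytic_envelope(burst))]
```

On clean data the envelope peak sits at the packet center; the erratic behavior the authors report arises when noise or overlapping modes distort the envelope, which is why raw-waveform peak tracking proved more consistent in their tests.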
NASA Astrophysics Data System (ADS)
Sultana, S.; Islam, S.; Mamun, A. A.; Schlickeiser, R.
2018-01-01
A theoretical and numerical investigation has been carried out on amplitude modulated heavy nucleus-acoustic envelope solitons (HNAESs) in a degenerate relativistic quantum plasma (DRQP) system containing relativistically degenerate electrons and light nuclei, and non-degenerate mobile heavy nuclei. The cubic nonlinear Schrödinger equation, describing the nonlinear dynamics of the heavy nucleus-acoustic waves (HNAWs), is derived by employing a multi-scale perturbation technique. The dispersion relation for the HNAWs is derived, and the criteria for the occurrence of modulational instability of the HNAESs are analyzed. The localized structures (viz., envelope solitons and associated rogue waves) are found to be formed in the DRQP system under consideration. The basic features of the amplitude modulated HNAESs and associated rogue waves formed in realistic DRQP systems are briefly discussed.
Effect of Moisture Content on Thermal Properties of Porous Building Materials
NASA Astrophysics Data System (ADS)
Kočí, Václav; Vejmelková, Eva; Čáchová, Monika; Koňáková, Dana; Keppert, Martin; Maděra, Jiří; Černý, Robert
2017-02-01
The thermal conductivity and specific heat capacity of characteristic types of porous building materials are determined in the whole range of moisture content from dry to fully water-saturated state. A transient pulse technique is used in the experiments, in order to avoid the influence of moisture transport on measured data. The investigated specimens include cement composites, ceramics, plasters, and thermal insulation boards. The effect of moisture-induced changes in thermal conductivity and specific heat capacity on the energy performance of selected building envelopes containing the studied materials is then analyzed using computational modeling of coupled heat and moisture transport. The results show that increased moisture content is a substantial negative factor affecting both the thermal properties of materials and the energy balance of envelopes, which underlines the necessity of using moisture-dependent thermal parameters of building materials in energy-related calculations.
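The consequence of moisture-dependent conductivity for envelope energy balance can be illustrated with a steady-state conduction sketch; the linear k(w) model and all numbers are illustrative, not the paper's measured data:

```python
def heat_flux(k_dry, dk_dw, w, thickness, dT):
    """Steady-state heat flux q = k(w) * dT / d through a single layer,
    with an illustrative linear moisture model k(w) = k_dry + dk_dw * w."""
    return (k_dry + dk_dw * w) * dT / thickness

# Hypothetical ceramic layer: conductivity rises from 0.30 W/(m*K) dry by
# 0.02 W/(m*K) per percent moisture content; 0.3 m wall, 20 K across it.
q_dry = heat_flux(0.30, 0.02, 0.0, 0.3, 20.0)
q_wet = heat_flux(0.30, 0.02, 10.0, 0.3, 20.0)   # 10 % moisture content
extra_loss = (q_wet - q_dry) / q_dry
```

Even this crude model shows heat loss rising by a large fraction at realistic moisture contents, which is why energy calculations using only dry-state parameters underestimate envelope losses.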
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Labany, S. K., E-mail: skellabany@hotmail.com; Zedan, N. A., E-mail: nesreenplasma@yahoo.com; El-Taibany, W. F., E-mail: eltaibany@hotmail.com, E-mail: eltaibany@du.edu.eg
The nonplanar amplitude modulation of dust acoustic (DA) envelope solitary waves in a strongly coupled dusty plasma (SCDP) has been investigated. By using a reductive perturbation technique, a modified nonlinear Schrödinger equation (NLSE) including the effects of geometry, polarization, and ion superthermality is derived. The modulational instability (MI) of the nonlinear DA wave envelopes is investigated in both planar and nonplanar geometries. There are two stable regions for the DA wave propagation, strongly affected by polarization and ion superthermality. Moreover, it is found that the nonlinear DA waves in spherical geometry are the most structurally stable. The larger growth rate of the nonlinear DA MI is observed in the cylindrical geometry. The salient characteristics of the MI in the nonplanar geometries cannot be found in the planar one. The DA wave propagation and the NLSE solutions are investigated both analytically and numerically.
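For reference, the planar cubic NLSE underlying this and the heavy nucleus-acoustic study above takes the standard form below; P and Q stand generically for the dispersion and nonlinearity coefficients (the papers' actual coefficients carry the geometry, polarization, and superthermality corrections):

```latex
% Cubic NLSE for the modulated wave envelope \Psi(\xi,\tau)
i\,\frac{\partial \Psi}{\partial \tau}
  + P\,\frac{\partial^{2} \Psi}{\partial \xi^{2}}
  + Q\,|\Psi|^{2}\Psi = 0 .
% Modulational instability occurs for PQ > 0: a plane wave of amplitude
% \Psi_{0}, perturbed at wavenumber K, grows at the rate
\Gamma = |P|\,K \sqrt{\frac{2Q}{P}\,|\Psi_{0}|^{2} - K^{2}},
\qquad 0 < K < K_{c} = \sqrt{\frac{2Q}{P}}\,|\Psi_{0}| .
```

For PQ < 0 the plane wave is modulationally stable and the equation supports dark rather than bright envelope solitons; the sign of PQ is thus what delimits the stable regions discussed in the abstract.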
Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
2013-01-01
We propose an algorithm to directly synthesize in time-domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate this trade-off between the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve a near-optimum detection performance.
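To make the two design criteria concrete, a Zadoff-Chu sequence is a classical example of a waveform that is constant-envelope (unit modulus sample by sample) with an ideal periodic autocorrelation. It stands in here only as a familiar reference point, not the paper's detection-optimized waveform:

```python
import numpy as np

# Zadoff-Chu sequence for odd length N and root u coprime with N:
#   x[n] = exp(-j*pi*u*n*(n+1)/N)
# Every sample has magnitude 1 (constant envelope) and the periodic
# autocorrelation is zero at all nonzero lags.
def zadoff_chu(N, u=1):
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N = 63
x = zadoff_chu(N)
print(np.allclose(np.abs(x), 1.0))          # constant envelope check

# Periodic autocorrelation via the FFT (Wiener-Khinchin):
r = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2)
print(abs(r[0]), np.max(np.abs(r[1:])))     # mainlobe N, sidelobes ~0
```

The optimized waveform in the paper trades a little of this autocorrelation perfection for detection performance against signal-dependent interference.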
Qiu, Yi; Wei, Xiaoming; Du, Shuxin; Wong, Kenneth K Y; Tsia, Kevin K; Xu, Yiqing
2018-04-16
We propose a passively mode-locked fiber optical parametric oscillator assisted by optical time-stretch. Thanks to the recently developed optical time-stretch technique, the onset oscillating spectral components can be temporally dispersed across the pump envelope and compete for the parametric gain with the other parts of the onset oscillating sidebands within the pump envelope. By matching the amount of dispersion in the optical time-stretch to the pulse width of the quasi-CW pump and oscillating one of the parametric sidebands inside the fiber cavity, we numerically show that the fiber parametric oscillator can be operated in a single-pulse regime. By varying the amount of intracavity dispersion, we further verify that the origin of this single-pulse mode-locking regime is the optical pulse stretching and compression.
Processing of spectral and amplitude envelope of animal vocalizations in the human auditory cortex.
Altmann, Christian F; Gomes de Oliveira Júnior, Cícero; Heinemann, Linda; Kaiser, Jochen
2010-08-01
In daily life, we usually identify sounds effortlessly and efficiently. Two properties are particularly salient and of importance for sound identification: the sound's overall spectral envelope and its temporal amplitude envelope. In this study, we aimed at investigating the representation of these two features in the human auditory cortex by using a functional magnetic resonance imaging adaptation paradigm. We presented pairs of sound stimuli derived from animal vocalizations that preserved the time-averaged frequency spectrum of the animal vocalizations and the amplitude envelope. We presented the pairs in four different conditions: (a) pairs with the same amplitude envelope and mean spectral envelope, (b) same amplitude envelope, but different mean spectral envelope, (c) different amplitude envelope, but same mean spectral envelope and (d) both different amplitude envelope and mean spectral envelope. We found fMRI adaptation effects for both the mean spectral envelope and the amplitude envelope of animal vocalizations in overlapping cortical areas in the bilateral superior temporal gyrus posterior to Heschl's gyrus. Areas sensitive to the amplitude envelope extended further anteriorly along the lateral superior temporal gyrus in the left hemisphere, while areas sensitive to the spectral envelope extended further anteriorly along the right lateral superior temporal gyrus. Posterior tonotopic areas within the left superior temporal lobe displayed sensitivity for the mean spectrum. Our findings suggest involvement of primary auditory areas in the representation of spectral cues and encoding of general spectro-temporal features of natural sounds in non-primary posterior and lateral superior temporal cortex. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
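The core idea of a probabilistic power analysis can be sketched with a Monte Carlo draw over uncertain inputs to a toy solar-array power model. The model and all parameter distributions below are illustrative assumptions, not the SPACE model or its inputs:

```python
import numpy as np

# Monte Carlo propagation of input uncertainty through a toy
# power-capability model: power = area * efficiency * flux * degradation.
rng = np.random.default_rng(42)
n = 100_000
area = rng.normal(100.0, 2.0, n)         # array area, m^2 (assumed)
efficiency = rng.normal(0.14, 0.005, n)  # cell efficiency (assumed)
flux = rng.normal(1367.0, 10.0, n)       # solar flux, W/m^2 (assumed)
degradation = rng.uniform(0.95, 1.0, n)  # aging/seasonal factor (assumed)

power = area * efficiency * flux * degradation  # deliverable power, W

# Instead of one deterministic number, the analysis yields a distribution:
mean = power.mean()
p5, p95 = np.percentile(power, [5, 95])
print(round(mean), round(p5), round(p95))
```

The 5th-percentile capability, rather than the single deterministic value, is what a probabilistic analysis would report as a conservative planning number.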
Introduction to Flight Test Engineering (Introduction aux techniques des essais en vol)
2005-07-01
Topics include calculation of aircraft parameters in the frequency domain (Fast Fourier Transform); data analysis with dedicated software (power spectral density, Fast Fourier Transform, transfer function analysis, frequency response analysis, etc.); and expansion of the flight envelope by operating the airplane at increasing ranges, representing increasing risk, of engine operation, airspeeds both fast and slow, and altitude.
ERIC Educational Resources Information Center
Kantabutra, Sangchan
2009-01-01
This paper examines urban-rural effects on public upper-secondary school efficiency in northern Thailand. In the study, efficiency was measured by a nonparametric technique, data envelopment analysis (DEA). Urban-rural effects were examined through a Mann-Whitney nonparametric statistical test. Results indicate that urban schools appear to have…
A Bag Full of Newspaper Clippings and Other Tricks of the ESL Trade. TECHNIQUES.
ERIC Educational Resources Information Center
Minicz, Elizabeth Watson
1985-01-01
English as a second language (ESL) teachers find the newspaper a terrific source for easy-to-prepare reusable materials. Travel ads with coupons from the travel section can be cut out for beginning level students to complete and mail in envelopes they address. Students can find places on maps. In addition, intermediate and advanced students can…
A Novel Approach to Measuring Efficiency of Scientific Research Projects: Data Envelopment Analysis.
Dilts, David M; Zell, Adrienne; Orwoll, Eric
2015-10-01
Measuring the efficiency of resource allocation for the conduct of scientific projects in medical research is difficult due to, among other factors, the heterogeneity of resources supplied (e.g., dollars or FTEs) and outcomes expected (e.g., grants, publications). While this is an issue in medical science, it has been approached successfully in other fields by using data envelopment analysis (DEA). DEA has a number of advantages over other techniques as it simultaneously uses multiple heterogeneous inputs and outputs to determine which projects are performing most efficiently, referred to as being at the efficiency frontier, when compared to others in the data set. This research uses DEA for the evaluation of translational science projects supported by the Oregon Clinical and Translational Research Institute (OCTRI), an NCATS Clinical & Translational Science Award (CTSA) recipient. These results suggest that the primary determinant of overall project efficiency at OCTRI is the amount of funding, with smaller amounts of funding providing more efficiency than larger funding amounts. These results, and the use of DEA, highlight both the success of using this technique in helping determine medical research efficiency and the factors to consider when distributing funds for new projects at CTSAs. © 2015 Wiley Periodicals, Inc.
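The DEA efficiency score for each decision-making unit (DMU, here a project) is obtained from a small linear program. A minimal input-oriented, constant-returns-to-scale (CCR) sketch with toy single-input/single-output data, not the OCTRI data, looks like this:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA, envelopment form, for DMU o:
#   min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0.
# Toy data: rows = DMUs (projects), X = inputs (funding), Y = outputs.
X = np.array([[2.0], [4.0], [3.0]])   # one input per DMU (illustrative)
Y = np.array([[2.0], [2.0], [1.5]])   # one output per DMU (illustrative)

def ccr_efficiency(o):
    n_dmu = X.shape[0]
    # Decision variables: [theta, lam_1..lam_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n_dmu)]
    # Inputs:  sum_j lam_j * x_j - theta * x_o <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    # Outputs: -sum_j lam_j * y_j <= -y_o
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n_dmu)
    return res.x[0]

scores = [round(ccr_efficiency(o), 3) for o in range(3)]
print(scores)  # score 1.0 means the DMU lies on the efficiency frontier
```

Real DEA studies (including this one and the hospital study below) use multiple inputs and outputs and often variable returns to scale, which adds a convexity constraint sum(lam) = 1 to the same LP.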
The estimation of probable maximum precipitation: the case of Catalonia.
Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel
2008-12-01
A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
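The enveloping frequency-factor idea (in the spirit of Hershfield's classical statistical PMP method) can be sketched as follows: standardize each station's largest annual maximum, take the envelope of those factors over all stations, and apply it back to each station's mean and standard deviation. The data here are synthetic, not the 145 INM series:

```python
import numpy as np

# Hershfield-style enveloping frequency factor for PMP estimation:
#   PMP_station = mean + k_env * std
# where k_env envelopes, over all stations, the standardized largest
# annual-maximum value (computed with that value withheld from the
# mean/std, as in the classical procedure).
rng = np.random.default_rng(0)
stations = [rng.gumbel(60.0, 15.0, 40) for _ in range(20)]  # mm/day, synthetic

def frequency_factor(series):
    i = np.argmax(series)
    rest = np.delete(series, i)               # series without its maximum
    return (series[i] - rest.mean()) / rest.std(ddof=1)

k_env = max(frequency_factor(s) for s in stations)  # enveloping value

pmp = [s.mean() + k_env * s.std(ddof=1) for s in stations]
print(round(k_env, 2), round(min(pmp)), round(max(pmp)))
```

The per-station PMP values produced this way would then feed the spatial (Cressman) analysis step described in the abstract.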
NASA Astrophysics Data System (ADS)
Branger, Flora; Dramais, Guillaume; Horner, Ivan; Le Boursicaud, Raphaël; Le Coz, Jérôme; Renard, Benjamin
2015-04-01
Continuous river discharge data are crucial for the study and management of floods. In most river discharge monitoring networks, these data are obtained at gauging stations, where the stage-discharge relation is modelled with a rating curve to derive discharge from the measurement of water level in the river. Rating curves are usually established using individual ratings (or gaugings). However, using traditional gauging methods during flash floods is challenging for many reasons, including hazardous flow conditions (for both equipment and people), short duration of the flood events, transient flows during the time needed to perform the gauging, etc. The lack of gaugings implies that the rating curve is often extrapolated well beyond the gauged range for the highest floods, inducing large uncertainties in the computed discharges. We deployed two remote techniques for gauging floods and improving stage-discharge relations for high flow conditions at several hydrometric stations throughout the Ardèche river catchment in France: (1) permanent video-recording stations enabling the implementation of the LS-PIV (Large Scale Particle Image Velocimetry) image analysis technique; and (2) mobile gaugings using handheld Surface Velocity Radars (SVR). These gaugings were used to estimate the rating curve and its uncertainty using the Bayesian method BaRatin (Le Coz et al., 2014). Importantly, this method explicitly accounts for the uncertainty of individual gaugings, which is especially relevant for remote gaugings since their uncertainty is generally much higher than that of standard intrusive gauging methods. Then, the uncertainty of streamflow records was derived by combining the uncertainty of the rating curve and the uncertainty of stage records. We assessed the impact of these methodological developments for peak flow estimation and for flood descriptors at various time steps.
The combination of field measurement innovation and statistical developments allows efficiently quantifying and reducing the uncertainties of flood peak estimates and flood descriptors at gauging stations. The noncontact streamgauging techniques used in our field campaign strategy have complementary interests. Permanent LSPIV stations, once installed and calibrated, can monitor floods automatically and perform many gaugings during a single event, thus documenting the rise, peak and recession of floods. SVR gaugings are more "one shot" gaugings but can be deployed quickly and at minimal cost over a large territory. Both of these noncontact techniques contribute to a significant reduction of uncertainty on peak hydrographs and flood descriptors at different time steps for a given catchment. Le Coz, J.; Renard, B.; Bonnifait, L.; Branger, F. & Le Boursicaud, R. (2014), 'Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: A Bayesian approach', Journal of Hydrology 509, 573-587.
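A much-simplified stand-in for the rating-curve estimation step is a weighted least-squares fit of the usual power law Q = a·(h − b)^c to uncertain gaugings. BaRatin is a full Bayesian treatment with hydraulic priors; here the gauging uncertainty only enters as weights, and the parameter covariance gives a rough uncertainty measure. All data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law rating curve Q = a*(h - b)^c fitted to gaugings with
# 10% relative uncertainty (a crude proxy for remote-gauging errors).
def rating(h, a, b, c):
    return a * np.clip(h - b, 1e-9, None) ** c

rng = np.random.default_rng(1)
h = np.linspace(0.5, 3.0, 15)                 # stage, m (synthetic)
q_true = 12.0 * (h - 0.2) ** 1.6              # "true" discharge, m^3/s
sigma = 0.10 * q_true                         # gauging uncertainty
q_obs = q_true + rng.normal(0.0, sigma)

popt, pcov = curve_fit(rating, h, q_obs, p0=[10.0, 0.1, 1.5],
                       sigma=sigma, absolute_sigma=True)
a, b, c = popt
perr = np.sqrt(np.diag(pcov))                 # 1-sigma parameter errors
print(round(a, 1), round(b, 2), round(c, 2))
```

Extrapolation beyond the gauged stage range inflates the uncertainty band rapidly, which is exactly why the high-flow remote gaugings described above are so valuable.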
Dynamic network data envelopment analysis for university hospitals evaluation
Lobo, Maria Stella de Castro; Rodrigues, Henrique de Castro; André, Edgard Caires Gazzola; de Azeredo, Jônatas Almeida; Lins, Marcos Pereira Estellita
2016-01-01
OBJECTIVE: To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS: Data envelopment analysis, a linear programming technique, creates a best practice frontier by comparing observed production given the amount of resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables belonging to more than one dimension (in the model, medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model, financing budget) to analyze frontier shift in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC), 2010-2013. RESULTS: The mean scores for health care, teaching and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best performance year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits; 34.0% in admissions; 12.0% in undergraduate students; 13.0% in multi-professional residents; 48.0% in graduate students; 7.0% in research projects; besides a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care and no variation in research. CONCLUSIONS: The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best practice frontier. PMID:27191158
NASA Astrophysics Data System (ADS)
Flory, Curt A.; Musgrave, Charles B.; Zhang, Zhiyong
2008-05-01
A number of physical processes involving quantum dots depend critically upon the “evanescent” electron eigenstate wave function that extends outside of the material surface into the surrounding region. These processes include electron tunneling through quantum dots, as well as interactions between multiple quantum dot structures. In order to unambiguously determine these evanescent fields, appropriate boundary conditions have been developed to connect the electronic solutions interior to the semiconductor quantum dot to exterior vacuum solutions. In standard envelope function theory, the interior wave function consists of products of band edge and envelope functions, and both must be considered when matching to the external solution. While the envelope functions satisfy tractable equations, the band edge functions are generally not known. In this work, symmetry arguments in the spherically symmetric approximation are used in conjunction with the known qualitative behavior of bonding and antibonding orbitals to catalog the behavior of the band edge functions at the unit cell boundary. This physical approximation allows consolidation of the influence of the band edge functions to two simple surface parameters that are incorporated into the boundary conditions and are straightforwardly computed by using numerical first-principles quantum techniques. These new boundary conditions are employed to analyze an isolated spherically symmetric semiconductor quantum dot in vacuum within the analytical model of Sercel and Vahala [Phys. Rev. Lett. 65, 239 (1990); Phys. Rev. B 42, 3690 (1990)]. Results are obtained for quantum dots made of GaAs and InP, which are compared with ab initio calculations that have appeared in the literature.
Yu, Xiaozhi; Ren, Jindong; Zhang, Qian; Liu, Qun; Liu, Honghao
2017-04-01
Reach envelopes are very useful for the design and layout of controls. In building reach envelopes, one of the key problems is to represent the reach limits accurately and conveniently. Spherical harmonics have proved to be an accurate and convenient method for fitting reach capability envelopes. However, extensive study is required on which components of spherical harmonics are needed in fitting the envelope surfaces. For applications in the vehicle industry, an inevitable issue is to construct reach limit surfaces that account for the seating positions of the drivers, and it is desirable to use population envelopes rather than individual envelopes. However, it is relatively inconvenient to acquire reach envelopes via a test considering the seating positions of the drivers. In addition, the acquired envelopes are usually unsuitable for use with other vehicle models because they depend on the current cab packaging parameters. Therefore, it is of great significance to construct reach envelopes for real vehicle conditions based on individual capability data considering seating positions. Moreover, traditional reach envelopes provide little information regarding the assessment of reach difficulty. The application of reach envelopes will improve design quality by providing difficulty-rating information about reach operations. In this paper, using laboratory data of seated reach with consideration of the subjective difficulty ratings, the method of modeling reach envelopes is studied based on spherical harmonics. The surface fitting using spherical harmonics is conducted for circumstances both with and without seat adjustments. For use with an adjustable seat, the seating position model is introduced to re-locate the test data. The surface fitting is conducted for both population and individual reach envelopes, as well as for boundary envelopes.
Comparison of the envelopes of the adjustable seat and the SAE J287 control reach envelope shows that the latter is nearly at the middle difficulty level. It is also found that the ability of spherical-harmonic reach envelope models to express the shape of the reach limits depends both on the terms in the model expression and on the data used to fit the envelope surfaces. Copyright © 2016 Elsevier Ltd. All rights reserved.
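The fitting step itself, expanding a star-shaped reach-limit surface r(azimuth, polar) in a truncated real spherical-harmonic basis and solving for the coefficients by least squares, can be sketched generically. The basis truncation (lmax = 2) and the synthetic "reach radius" data are illustrative, not the seated-reach measurements:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

# Real spherical harmonic Y_lm built from associated Legendre functions
# (standard orthonormal real convention).
def real_Y(l, m, az, pol):
    am = abs(m)
    N = np.sqrt((2 * l + 1) / (4 * np.pi)
                * factorial(l - am) / factorial(l + am))
    P = lpmv(am, l, np.cos(pol))
    if m == 0:
        return N * P
    if m > 0:
        return np.sqrt(2) * N * P * np.cos(am * az)
    return np.sqrt(2) * N * P * np.sin(am * az)

def basis(lmax, az, pol):
    return np.column_stack([real_Y(l, m, az, pol)
                            for l in range(lmax + 1)
                            for m in range(-l, l + 1)])

rng = np.random.default_rng(2)
az = rng.uniform(0, 2 * np.pi, 500)        # azimuth of reach samples
pol = np.arccos(rng.uniform(-1, 1, 500))   # polar angle of reach samples
r = 0.8 + 0.3 * np.cos(pol)                # synthetic "reach radius", m

A = basis(2, az, pol)
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
max_resid = float(np.max(np.abs(A @ coef - r)))
print(round(max_resid, 9))  # this synthetic surface lies exactly in the basis
```

Which (l, m) terms to keep, the open question the abstract raises, amounts to choosing the columns of `A`.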
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Finally, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.
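The basic move, replacing a single calibrated parameter set with a distribution of parameters that reproduces specimen-to-specimen scatter, can be sketched with a toy linear-elastic/linear-hardening model. The material values and the assumed parameter distributions are illustrative, not the paper's calibration:

```python
import numpy as np

# Variability-aware plasticity sketch: draw (yield stress, hardening
# modulus) pairs from fitted distributions and generate a band of
# stress-strain realizations instead of one deterministic curve.
rng = np.random.default_rng(5)
E = 200e3                        # Young's modulus, MPa (assumed)
strain = np.linspace(0.0, 0.02, 200)

def response(sy, H):
    # Linear elastic up to the yield strain, then linear hardening.
    ey = sy / E
    return np.where(strain < ey, E * strain, sy + H * (strain - ey))

# 500 realizations of the material with sampled variability:
realizations = np.array([
    response(rng.normal(350.0, 15.0), rng.normal(1500.0, 200.0))
    for _ in range(500)
])
band = realizations.std(axis=0)  # spread of the response at each strain
print(round(float(band[0]), 3), round(float(band[-1]), 1))
```

A design check against this band (e.g., a low percentile of the stress at working strain) is what makes the methodology "robust" in the sense the abstract describes.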
Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A
2011-07-01
The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do. © 2011 Acoustical Society of America
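The Monte Carlo idea can be sketched for the simplest case, propagating an assumed uncertainty on a measured surface impedance into the normal-incidence absorption coefficient. The impedance value and its uncertainty below are illustrative, not the paper's PU-probe uncertainty budget:

```python
import numpy as np

# Monte Carlo propagation into alpha = 1 - |R|^2 with the
# normal-incidence reflection coefficient R = (Z - rho*c)/(Z + rho*c).
rng = np.random.default_rng(3)
rho_c = 413.0                    # characteristic impedance of air, Pa*s/m
n = 200_000
# Measured surface impedance with assumed uncertainty on both parts:
Z = rng.normal(900.0, 50.0, n) + 1j * rng.normal(-300.0, 50.0, n)

R = (Z - rho_c) / (Z + rho_c)
alpha = 1.0 - np.abs(R) ** 2

print(round(alpha.mean(), 3), round(alpha.std(), 3))
```

Repeating this for a weakly absorbing sample (|Z| much larger than rho*c) shows the relative uncertainty on alpha blowing up, which is the low-absorption difficulty the abstract reports.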
Satellite stratospheric aerosol measurement validation
NASA Technical Reports Server (NTRS)
Russell, P. B.; Mccormick, M. P.
1984-01-01
The validity of the stratospheric aerosol measurements made by the satellite sensors SAM II and SAGE was tested by comparing their results with each other and with results obtained by other techniques (lidar, dustsonde, filter, and impactor). The latter type of comparison required the development of special techniques that convert the quantity measured by the correlative sensor (e.g. particle backscatter, number, or mass) to that measured by the satellite sensor (extinction) and quantitatively estimate the uncertainty in the conversion process. The results of both types of comparisons show agreement within the measurement and conversion uncertainties. Moreover, the satellite uncertainty is small compared to aerosol natural variability (caused by seasonal changes, volcanoes, sudden warmings, and vortex structure). It was concluded that the satellite measurements are valid.
Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Williams, Mark L
2007-01-01
Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.
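Once the sensitivity profiles are computed, TSUNAMI-style tools fold them with nuclear-data covariances via the "sandwich rule", var(k)/k² = Sᵀ C S. A minimal numerical sketch with a made-up sensitivity vector and relative covariance matrix:

```python
import numpy as np

# Sandwich rule for the data-induced k-eff uncertainty:
#   (relative variance of k) = S^T C S
# S: relative sensitivities dk/k per dsigma/sigma for three hypothetical
#    nuclide-reaction pairs; C: their relative covariance matrix.
S = np.array([0.30, -0.05, 0.12])
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-4]])

rel_var = S @ C @ S
print(round(float(np.sqrt(rel_var)) * 100, 3))  # % uncertainty in k-eff
```

The off-diagonal covariance terms matter: the anticorrelated second sensitivity slightly increases the total here because its cross term with the first has the opposite sign of its own contribution.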
Lacey, Ronald E; Faulkner, William Brock
2015-07-01
This work applied a propagation of uncertainty method to typical total suspended particulate (TSP) sampling apparatus in order to estimate the overall measurement uncertainty. The objectives of this study were to estimate the uncertainty for three TSP samplers, develop an uncertainty budget, and determine the sensitivity of the total uncertainty to environmental parameters. The samplers evaluated were the TAMU High Volume TSP Sampler at a nominal volumetric flow rate of 1.42 m³ min⁻¹ (50 CFM), the TAMU Low Volume TSP Sampler at a nominal volumetric flow rate of 17 L min⁻¹ (0.6 CFM), and the EPA TSP Sampler at nominal volumetric flow rates of 1.1 and 1.7 m³ min⁻¹ (39 and 60 CFM). Under nominal operating conditions the overall measurement uncertainty was found to vary from 6.1×10⁻⁶ g m⁻³ to 18.0×10⁻⁶ g m⁻³, which represented an uncertainty of 1.7% to 5.2% of the measurement. Analysis of the uncertainty budget determined that three of the instrument parameters contributed significantly to the overall uncertainty: the uncertainty in the pressure drop measurement across the orifice meter during both calibration and testing, and the uncertainty of the airflow standard used during calibration of the orifice meter. Five environmental parameters occurring during field measurements were considered for their effect on overall uncertainty: ambient TSP concentration, volumetric airflow rate, ambient temperature, ambient pressure, and ambient relative humidity. Of these, only ambient TSP concentration and volumetric airflow rate were found to have a strong effect on the overall uncertainty. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically. This work addresses measurement uncertainty of TSP samplers used in ambient conditions.
Estimation of uncertainty in gravimetric measurements is of particular interest, since as ambient particulate matter (PM) concentrations approach regulatory limits, the uncertainty of the measurement is essential in determining the sample size and the probability of type II errors in hypothesis testing. This is an important factor in determining if ambient PM concentrations exceed regulatory limits. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically.
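For the multiplicative measurement model underlying gravimetric sampling, C = Δm/(Q·t), first-order (GUM-style) propagation reduces to adding relative variances. The numerical values below are illustrative, not the paper's budget:

```python
import numpy as np

# First-order propagation of uncertainty for C = dm / (Q * t).
# For a pure product/quotient model the relative variances add:
#   (u_C/C)^2 = (u_dm/dm)^2 + (u_Q/Q)^2 + (u_t/t)^2
dm, u_dm = 2.0e-3, 2.0e-5    # collected mass, g (assumed 1% uncertainty)
Q,  u_Q  = 1.42,   0.03      # volumetric flow rate, m^3/min (assumed)
t,  u_t  = 1440.0, 1.0       # sampling time, min (assumed)

C = dm / (Q * t)
rel_u = np.sqrt((u_dm / dm) ** 2 + (u_Q / Q) ** 2 + (u_t / t) ** 2)
print(f"{C:.2e} g/m^3, +/- {100 * rel_u:.1f}%")
```

With these assumed inputs the flow-rate term dominates the budget, mirroring the paper's finding that the orifice-meter (flow) calibration terms drive the overall uncertainty.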
NASA Astrophysics Data System (ADS)
Chapman, George B.; Johnson, Glenn; Burdick, Robert
1991-09-01
The CounterMeasure Association Technique (CMAT), developed for the Air Force, is used to automatically recommend countermeasure and maneuver responses to a pilot while under missile attack. The overall system is discussed, as well as several key technical components. These components include the use of fuzzy sets to specify data uncertainty, the use of mimic nets to train the CMAT algorithm to make the same resource optimization tradeoffs as those made in a library of training scenarios, and the use of several data compression techniques to store the countermeasure effectiveness database.
Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector
NASA Astrophysics Data System (ADS)
Lenel, U. R.; Davies, D. G. S.; Moore, M. A.
An analytical technique (Analysis with Uncertain Quantities), developed at Fulmer, is used to examine the sensitivity of the outcome to uncertainties in input quantities, in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
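The velocity-area (midsection) computation that the IVE operates on is simple to state: Q = Σ vᵢ·dᵢ·wᵢ over the verticals of the cross section. A sketch with illustrative data (the IVE itself then infers uncertainty from the scatter between adjacent verticals, which is not reproduced here):

```python
import numpy as np

# Midsection velocity-area discharge: each vertical contributes
# velocity * depth * width, with width taken as half the distance to
# each neighbouring vertical.
station = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # distance across, m
depth   = np.array([0.0, 0.8, 1.2, 1.3, 0.9, 0.0])  # m
vel     = np.array([0.0, 0.5, 0.8, 0.9, 0.6, 0.0])  # mean velocity, m/s

w = np.empty_like(station)
w[1:-1] = (station[2:] - station[:-2]) / 2.0
w[0] = (station[1] - station[0]) / 2.0
w[-1] = (station[-1] - station[-2]) / 2.0

Q = np.sum(vel * depth * w)
print(round(float(Q), 3))  # total discharge, m^3/s
```

Because each vertical's contribution is an independent term in the sum, at-site interpolation residuals between neighbouring verticals give the IVE its handle on the random component of the measurement uncertainty.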
Applying large datasets to developing a better understanding of air leakage measurement in homes
Walker, I. S.; Sherman, M. H.; Joh, J.; ...
2013-03-01
Air tightness is an important property of building envelopes. It is a key factor in determining infiltration and related wall-performance properties such as indoor air quality, maintainability and moisture balance. Air leakage in U.S. houses consumes roughly 1/3 of the HVAC energy but provides most of the ventilation used to control IAQ. There are several methods for measuring air tightness that may result in different values and sometimes quite different uncertainties. The two main approaches trade off bias and precision errors and thus result in different outcomes for accuracy and repeatability. To interpret results from the two approaches, various questions need to be addressed, such as the need to measure the flow exponent, the need to make both pressurization and depressurization measurements, and the role of wind in determining the accuracy and precision of the results. This article uses two large datasets of blower door measurements to reach the following conclusions. For most tests the pressure exponent should be measured, but for wind speeds greater than 6 m/s a fixed pressure exponent reduces experimental error. The variability in reported pressure exponents is mostly due to changes in envelope leakage characteristics. Finally, it is preferable to test in both pressurization and depressurization modes due to significant differences between the results in these two modes.
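Blower-door data are conventionally reduced with the power law Q = C·ΔP^n, and "measuring the flow exponent" means fitting n (typically near 0.65) rather than assuming it. A log-log least-squares sketch with synthetic data:

```python
import numpy as np

# Fit the envelope leakage power law Q = C * dP^n from multi-point
# blower-door data via a straight-line fit in log-log space.
rng = np.random.default_rng(4)
dP = np.array([10., 20., 30., 40., 50., 60.])             # Pa
Q = 110.0 * dP ** 0.65 * rng.normal(1.0, 0.01, dP.size)   # m^3/h, synthetic

n, logC = np.polyfit(np.log(dP), np.log(Q), 1)
C = np.exp(logC)
print(round(C, 1), round(n, 3))

# Air changes per hour at 50 Pa for a hypothetical 400 m^3 house:
ach50 = C * 50.0 ** n / 400.0
print(round(ach50, 2), "ACH50")
```

Under windy conditions the scatter in the individual (ΔP, Q) points grows, which is why the article finds that fixing n above 6 m/s wind speed reduces the experimental error.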
The Interiors of Jupiter and Saturn
NASA Astrophysics Data System (ADS)
Helled, Ravit
2018-05-01
Probing the interiors of the giant planets in our Solar System is not an easy task. This requires a set of observations combined with theoretical models that are used to infer the planetary composition and its depth dependence. The masses of Jupiter and Saturn are 318 and 96 Earth masses, respectively, and for a few decades we have known that they mostly consist of hydrogen and helium. It is the mass of heavy elements (all elements heavier than helium) that is not well determined, as well as its distribution within the planets. While the heavy elements are not the dominating materials in Jupiter and Saturn, they are the key to our understanding of their formation and evolution histories. The planetary internal structure is inferred by fitting the available observational constraints, including the planetary masses, radii, 1-bar temperatures, rotation rates, and gravitational fields. Then, using theoretical equations of state (EOSs) for hydrogen, helium, their mixtures, and heavier elements (typically rocks and/or ices), a structure model is developed. However, there is no unique solution for the planetary structure, and the results depend on the EOSs used and the model assumptions imposed by the modeler. Standard interior models of Jupiter and Saturn include three main regions: (1) the central region (core) that consists of heavy elements, (2) an inner metallic hydrogen envelope that is helium rich, and (3) an outer molecular hydrogen envelope depleted in helium. The distribution of heavy elements can be either homogeneous or discontinuous between the two envelopes. Major model assumptions that can affect the derived internal structure include the number of layers, the heat transport mechanism within the planet (and its entropy), the nature of the core (compact vs. diluted), and the location/pressure where the envelopes are divided.
Alternative structure models assume a less distinct division between the layers and/or a more gradual, non-homogeneous distribution of the heavy elements. The fact that the behavior of hydrogen at high pressures and temperatures is not perfectly known, and that helium separates from hydrogen in the deep interior, adds sources of uncertainty to the interior models. Today, with accurate measurements of the gravitational fields of Jupiter and Saturn from the Juno and Cassini missions, structure models can be further constrained. At the same time, these measurements introduce new challenges and open questions for planetary modelers.
Quantifying uncertainty in read-across assessment – an algorithmic approach - (SOT)
Read-across is a popular data gap filling technique within category and analogue approaches for regulatory purposes. Acceptance of read-across remains an ongoing challenge with several efforts underway for identifying and addressing uncertainties. Here we demonstrate an algorithm...
Screening-level estimates of mass discharge uncertainty from point measurement methods
The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...
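The coefficient of variation of a transect-based mass discharge estimate can be illustrated with a small Monte Carlo sketch. The lognormal fields, their parameters, and the assumption of independent sampling points are all illustrative; the analytical solutions described above account for structure this toy model ignores.

```python
import numpy as np

def mass_discharge_cv(n_points, n_trials=2000, seed=0):
    """Monte Carlo sketch of the coefficient of variation (CV) of a
    transect-based mass discharge estimate: concentration C and Darcy
    flux q at each sampling point are drawn from hypothetical lognormal
    fields, and discharge is the area-weighted sum of C*q."""
    rng = np.random.default_rng(seed)
    cell_area = 1.0  # m^2 represented by each point (assumed uniform grid)
    estimates = np.empty(n_trials)
    for i in range(n_trials):
        C = rng.lognormal(mean=0.0, sigma=1.0, size=n_points)   # g/m^3
        q = rng.lognormal(mean=-2.0, sigma=0.5, size=n_points)  # m/day
        estimates[i] = cell_area * np.sum(C * q)
    return estimates.std() / estimates.mean()
```

Running this with increasing point densities shows the expected behavior: the CV of the discharge estimate shrinks roughly as the inverse square root of the number of independent sampling points.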
A Matter of Millimeters: Defining the Processes for Critical Clearances on Curiosity
NASA Technical Reports Server (NTRS)
Florow, Brandon
2013-01-01
The Mars Science Laboratory (MSL) mission presents an immense packaging problem in that it takes a rover the size of a car, with a sky-crane landing system, and packs it tightly into a spacecraft. This creates many areas of close and critical clearances. Critical clearances are defined as hardware-to-hardware or hardware-to-envelope clearances that fall below a pre-established, location-dependent threshold and pose a risk of hardware-to-hardware contact during events such as launch, entry, landing, and operations. Close clearances, on the other hand, are defined as any clearance values chosen to be tracked that are larger than the critical-clearance threshold for their region. Close clearances may be tracked for various reasons, including uncertainty in design, large expected dynamic motion, etc.
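The bookkeeping described above amounts to binning each measured clearance against a region-dependent threshold. A minimal sketch, with hypothetical clearance names and thresholds (not MSL's actual values):

```python
def classify_clearances(clearances_mm, critical_threshold_mm, track_below_mm):
    """Split measured clearance values (mm) into 'critical' and 'close' bins:
    critical = below the region's pre-established threshold;
    close    = above the critical threshold but still chosen for tracking
               (here, anything below track_below_mm)."""
    critical = {name: v for name, v in clearances_mm.items()
                if v < critical_threshold_mm}
    close = {name: v for name, v in clearances_mm.items()
             if critical_threshold_mm <= v < track_below_mm}
    return critical, close

# Hypothetical clearances for one region with a 3 mm critical threshold.
measured = {"arm-to-deck": 4.2, "wheel-to-belly": 1.5, "mast-to-shell": 9.0}
critical, close = classify_clearances(measured, critical_threshold_mm=3.0,
                                      track_below_mm=8.0)
```

In this example "wheel-to-belly" is flagged critical, "arm-to-deck" is tracked as close, and "mast-to-shell" falls outside both bins.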
[Quantification of prostate movements during radiotherapy].
Artignan, X; Rastkhah, M; Balosso, J; Fourneret, P; Gilliot, O; Bolla, M
2006-11-01
Decreasing treatment uncertainties is one of the most important challenges in radiation oncology. Numerous techniques are available to quantify prostate motion and to visualize prostate location day after day before each irradiation: CT scan, cone-beam CT scan, ultrasound, prostatic markers, etc. Knowledge of prostate motion is necessary to define the minimal margin around the target volume needed to avoid mispositioning during treatment sessions. Different kinds of prostate movement have been studied and are reported in the present work: on the one hand, those of large amplitude extending throughout the whole treatment period; on the other hand, those of shorter amplitude occurring during a treatment session. The long-lasting movements are mostly anterior-posterior (3 mm standard deviation), and secondarily in the cranial-caudal (1-2 mm standard deviation) and lateral directions (0.5-1 mm standard deviation). They are mostly due to the state of rectal filling, and to a lesser extent to bladder filling or the position of the lower limbs. The shorter movements occurring during a treatment session, by contrast, are mostly variations of position around a steady point represented by the apex. Once again, the state of rectal filling is the principal cause. Thus, during the 20 minutes of a treatment session, including patient positioning, a movement of less than 3 mm can be expected when the rectum is empty. Ideally, real-time imaging tools should allow accurate localization of the prostate and adaptation of the dosimetry before each treatment session, within a time envelope not exceeding 20 minutes.
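Standard deviations like those reported above are typically converted into a planning margin with a margin recipe. A widely used one is the van Herk formula, M = 2.5Σ + 0.7σ, where Σ and σ combine the per-direction systematic and random standard deviations in quadrature; the split of the reported values into systematic versus random components below is purely illustrative.

```python
import math

def ctv_to_ptv_margin(systematic_sd_mm, random_sd_mm):
    """van Herk margin recipe: M = 2.5*Sigma + 0.7*sigma, where Sigma
    and sigma are the quadrature sums of the systematic and random
    standard deviations (mm) contributed by each error source."""
    Sigma = math.sqrt(sum(s * s for s in systematic_sd_mm))
    sigma = math.sqrt(sum(s * s for s in random_sd_mm))
    return 2.5 * Sigma + 0.7 * sigma

# Illustrative anterior-posterior case: 3 mm systematic, 3 mm random.
margin_ap = ctv_to_ptv_margin([3.0], [3.0])  # 2.5*3 + 0.7*3 = 9.6 mm
```

The recipe makes the clinical trade-off explicit: systematic errors, which shift every fraction the same way, are weighted far more heavily than random session-to-session errors.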
NASA Technical Reports Server (NTRS)
Kahn, Ralph A.; Gaitley, Barbara J.; Martonchik, John V.; Diner, David J.; Crean, Kathleen A.; Holben, Brent
2005-01-01
Performance of the Multiangle Imaging Spectroradiometer (MISR) early postlaunch aerosol optical thickness (AOT) retrieval algorithm is assessed quantitatively over land and ocean by comparison with a 2-year measurement record of globally distributed AERONET Sun photometers. There are sufficient coincident observations to stratify the data set by season and expected aerosol type. In addition to reporting uncertainty envelopes, we identify trends and outliers, and investigate their likely causes, with the aim of refining algorithm performance. Overall, about 2/3 of the MISR-retrieved AOT values fall within [0.05 or 20% x AOT] of the Aerosol Robotic Network (AERONET) values. More than a third are within [0.03 or 10% x AOT]. Correlation coefficients are highest for maritime stations (approximately 0.9) and lowest for dusty sites (though still above approximately 0.7). Retrieved spectral slopes closely match Sun photometer values for biomass-burning and continental aerosol types. Detailed comparisons suggest that adding to the algorithm climatology more absorbing spherical particles, more realistic dust analogs, and a richer selection of multimodal aerosol mixtures would reduce the remaining discrepancies for MISR retrievals over land; in addition, refining instrument low-light-level calibration could reduce or eliminate a small but systematic offset in maritime AOT values. On the basis of cases for which current particle models are representative, a second-generation MISR aerosol retrieval algorithm incorporating these improvements could provide AOT accuracy unprecedented for a spaceborne technique.
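The envelope statistic quoted above can be sketched directly: count the fraction of coincident retrievals whose absolute difference from AERONET falls within the envelope, here interpreted as the larger of the absolute and relative tolerances. The sample AOT values below are made up for illustration.

```python
import numpy as np

def fraction_within_envelope(misr_aot, aeronet_aot, abs_tol=0.05, rel_tol=0.20):
    """Fraction of MISR AOT retrievals whose absolute difference from
    coincident AERONET values lies within max(abs_tol, rel_tol * AOT),
    the [abs_tol or rel_tol x AOT] envelope form used in the comparison."""
    misr_aot = np.asarray(misr_aot, dtype=float)
    aeronet_aot = np.asarray(aeronet_aot, dtype=float)
    envelope = np.maximum(abs_tol, rel_tol * aeronet_aot)
    return float(np.mean(np.abs(misr_aot - aeronet_aot) <= envelope))

# Illustrative coincident pairs (not real MISR/AERONET data).
frac = fraction_within_envelope([0.10, 0.30, 0.50], [0.12, 0.20, 0.48])
```

For these made-up pairs, two of the three retrievals fall inside the [0.05 or 20% x AOT] envelope, so the function returns 2/3.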
NASA Astrophysics Data System (ADS)
Aad, G.; Abajyan, T.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Addy, T. N.; Adelman, J.; Adomeit, S.; Adye, T.; Aefsky, S.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmad, A.; Ahmadov, F.; Aielli, G.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alam, M. A.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Allwood-Spiers, S. E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alonso, F.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Ammosov, V. V.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Arfaoui, S.; Arguin, J.-F.; Argyropoulos, S.; Arik, E.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ask, S.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Astbury, A.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Auge, E.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Bacci, C.; Bach, A. M.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Backus Mayes, J.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bailey, D. C.; Bain, T.; Baines, J. T.; Baker, O. 
K.; Baker, S.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Banfi, D.; Bangert, A.; Bansal, V.; Bansil, H. S.; Barak, L.; Baranov, S. P.; Barber, T.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batkova, L.; Batley, J. R.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belloni, A.; Beloborodova, O. L.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernard, C.; Bernat, P.; Bernhard, R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertolucci, F.; Besana, M. I.; Besjes, G. J.; Bessidskaia, O.; Besson, N.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Bittner, B.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blocki, J.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Boddy, C. R.; Boehler, M.; Boek, J.; Boek, T. T.; Boelaert, N.; Bogaerts, J. A.; Bogdanchikov, A. 
G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bolnet, N. M.; Bomben, M.; Bona, M.; Boonekamp, M.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozovic-Jelisavcic, I.; Bracinik, J.; Branchini, P.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Broggi, F.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, G.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.; Bucci, F.; Buchholz, P.; Buckingham, R. M.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Budick, B.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bundock, A. C.; Bunse, M.; Burckhart, H.; Burdin, S.; Burgess, T.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, V.; Bussey, P.; Buszello, C. P.; Butler, B.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Buttinger, W.; Buzatu, A.; Byszewski, M.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Caloi, R.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarri, P.; Cameron, D.; Caminada, L. M.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, A. A.; Carter, J. 
R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Caso, C.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caughron, S.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chan, K.; Chang, P.; Chapleau, B.; Chapman, J. D.; Charfeddine, D.; Charlton, D. G.; Chavda, V.; Chavez Barajas, C. A.; Cheatham, S.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, X.; Chen, Y.; Cheng, Y.; Cheplakov, A.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiefari, G.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Chouridou, S.; Chow, B. K. B.; Christidi, I. A.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciocio, A.; Cirilli, M.; Cirkovic, P.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Clarke, R. N.; Cleland, W.; Clemens, J. C.; Clement, B.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coelli, S.; Coffey, L.; Cogan, J. G.; Coggeshall, J.; Colas, J.; Cole, B.; Cole, S.; Colijn, A. P.; Collins-Tooth, C.; Collot, J.; Colombo, T.; Colon, G.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Conidi, M. C.; Connelly, I. A.; Consonni, S. M.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Courneyea, L.; Cowan, G.; Cox, B. 
E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Crispin Ortuzar, M.; Cristinziani, M.; Crosetti, G.; Cuciuc, C.-M.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Daniells, A. C.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darlea, G. L.; Darmora, S.; Dassoulas, J. A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davignon, O.; Davison, A. R.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; de Graat, J.; De Groot, N.; de Jong, P.; De La Taille, C.; De la Torre, H.; De Lorenzi, F.; De Nooij, L.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; De Zorzi, G.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dechenaux, B.; Dedovich, D. V.; Degenhardt, J.; Del Peso, J.; Del Prete, T.; Delemontex, T.; Deliot, F.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demilly, A.; Demirkoz, B.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deviveiros, P. O.; Dewhurst, A.; DeWilde, B.; Dhaliwal, S.; Dhullipudi, R.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; do Vale, M. A. B.; Do Valle Wemans, A.; Doan, T. K. O.; Dobos, D.; Dobson, E.; Dodd, J.; Doglioni, C.; Doherty, T.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. 
A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dotti, A.; Dova, M. T.; Doyle, A. T.; Dris, M.; Dubbert, J.; Dube, S.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudziak, F.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Dwuznik, M.; Ebke, J.; Edson, W.; Edwards, C. A.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Engelmann, R.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernis, G.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienne, F.; Etienvre, A. I.; Etzion, E.; Evangelakou, D.; Evans, H.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Favareto, A.; Fayard, L.; Federic, P.; Fedin, O. L.; Fedorko, W.; Fehling-Kaschek, M.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, J.; Fisher, M. J.; Fitzgerald, E. A.; Flechl, M.; Fleck, I.; Fleischmann, P.; Fleischmann, S.; Fletcher, G. T.; Fletcher, G.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Florez Bustos, A. C.; Flowerdew, M. 
J.; Fonseca Martin, T.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fox, H.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; French, S. T.; Friedrich, C.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gadatsch, S.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gandrajula, R. P.; Gao, J.; Gao, Y. S.; Garay Walls, F. M.; Garberson, F.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gianotti, F.; Gibbard, B.; Gibson, S. M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gillman, A. R.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giovannini, P.; Giraud, P. F.; Giugni, D.; Giuliani, C.; Giunta, M.; Gjelsten, B. K.; Gkialas, I.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glazov, A.; Glonti, G. L.; Goblirsch-Kolb, M.; Goddard, J. R.; Godfrey, J.; Godlewski, J.; Goeringer, C.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez Silva, M. L.; Gonzalez-Sevilla, S.; Goodson, J. J.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. 
I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Gozpinar, S.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Gramling, J.; Gramstad, E.; Grancagnolo, F.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Gray, H. M.; Gray, J. A.; Graziani, E.; Grebenyuk, O. G.; Greenwood, Z. D.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grigalashvili, N.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grishkevich, Y. V.; Grivaz, J.-F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Groth-Jensen, J.; Grout, Z. J.; Grybel, K.; Guescini, F.; Guest, D.; Gueta, O.; Guicheney, C.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Gunther, J.; Guo, J.; Gupta, S.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guttman, N.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haefner, P.; Hageboeck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Hall, D.; Halladjian, G.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamer, M.; Hamilton, A.; Hamilton, S.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Hanke, P.; Hansen, J. R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hansson, P.; Hara, K.; Hard, A. S.; Harenberg, T.; Harkusha, S.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, P. F.; Hartjes, F.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Hejbal, J.; Helary, L.; Heller, C.; Heller, M.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Hensel, C.; Herbert, G. H.; Hernandez, C. M.; Hernández Jiménez, Y.; Herrberg-Schubert, R.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. 
P.; Hickling, R.; Higón-Rodriguez, E.; Hill, J. C.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, D.; Hofmann, J. I.; Hohlfeld, M.; Holmes, T. R.; Hong, T. M.; Hooft van Huysduynen, L.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huettmann, A.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Hurwitz, M.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idarraga, J.; Ideal, E.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Iliadis, D.; Ilic, N.; Inamaru, Y.; Ince, T.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ivashin, A. V.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, J. N.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansen, H.; Janssen, J.; Janus, M.; Jared, R. C.; Jarlskog, G.; Jeanty, L.; Jeng, G.-Y.; Jen-La Plante, I.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Jha, M. K.; Ji, H.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Joffe, D.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jung, C. A.; Jungst, R. M.; Jussel, P.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kadlecik, P.; Kado, M.; Kagan, H.; Kagan, M.; Kajomovitz, E.; Kalinin, S.; Kama, S.; Kanaya, N.; Kaneda, M.; Kaneti, S.; Kanno, T.; Kantserov, V. 
A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kar, D.; Karakostas, K.; Karastathis, N.; Karnevskiy, M.; Karpov, S. N.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasieczka, G.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Katre, A.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Kazarinov, M. Y.; Keeler, R.; Keener, P. T.; Kehoe, R.; Keil, M.; Keller, J. S.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Keung, J.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Khomich, A.; Khoo, T. J.; Khoriauli, G.; Khoroshilov, A.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kitamura, T.; Kittelmann, T.; Kiuchi, K.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klinkby, E. B.; Klioutchnikova, T.; Klok, P. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koenig, S.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Koletsou, I.; Koll, J.; Komar, A. A.; Komori, Y.; Kondo, T.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotov, S.; Kotov, V. M.; Kotwal, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. 
K.; Kravchenko, A.; Kreiss, S.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruker, T.; Krumnack, N.; Krumshteyn, Z. V.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurochkin, Y. A.; Kurumida, R.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwee, R.; La Rosa, A.; La Rotonda, L.; Labarga, L.; Lablak, S.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laier, H.; Laisne, E.; Lambourne, L.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Larner, A.; Lassnig, M.; Laurelli, P.; Lavorini, V.; Lavrijsen, W.; Laycock, P.; Le, B. T.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmacher, M.; Lehmann Miotto, G.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Lendermann, V.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leone, R.; Leonhardt, K.; Leontsinis, S.; Leroy, C.; Lessard, J.-R.; Lester, C. G.; Lester, C. M.; Levêque, J.; Levin, D.; Levinson, L. J.; Lewis, A.; Lewis, G. H.; Leyko, A. M.; Leyton, M.; Li, B.; Li, B.; Li, H.; Li, H. L.; Li, S.; Li, X.; Liang, Z.; Liao, H.; Liberti, B.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Limper, M.; Lin, S. C.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, J. 
B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Livermore, S. S. A.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loddenkoetter, T.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Lombardo, V. P.; Long, J. D.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Loscutoff, P.; Losty, M. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Ludwig, D.; Ludwig, I.; Luehring, F.; Lukas, W.; Luminari, L.; Lund, E.; Lundberg, J.; Lundberg, O.; Lund-Jensen, B.; Lungwitz, M.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Maček, B.; Machado Miguens, J.; Macina, D.; Mackeprang, R.; Madar, R.; Madaras, R. J.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeno, M.; Maeno, T.; Magnoni, L.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Mahmoud, S.; Mahout, G.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J. A.; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mapelli, L.; March, L.; Marchand, J. F.; Marchese, F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marques, C. N.; Marroquim, F.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, B.; Martin, J. P.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, H.; Martinez, M.; Martin-Haugh, S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. 
L.; Massa, I.; Massol, N.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Matsunaga, H.; Matsushita, T.; Mättig, P.; Mättig, S.; Mattmann, J.; Mattravers, C.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazzaferro, L.; Mazzanti, M.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; Mclaughlan, T.; McMahon, S. J.; McPherson, R. A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meehan, S.; Meera-Lebbai, R.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Meloni, F.; Mendoza Navas, L.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Meric, N.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Merritt, H.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer, J.; Michal, S.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. A.; Miñano Moya, M.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Mitsui, S.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Moeller, V.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Molfetas, A.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Mora Herrera, C.; Moraes, A.; Morange, N.; Morel, J.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, M.; Morii, M.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moser, H. G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, K.; Mueller, T.; Mueller, T.; Muenstermann, D.; Munwes, Y.; Murillo Quijada, J. A.; Murray, W. J.; Mussche, I.; Musto, E.; Myagkov, A. 
G.; Myska, M.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagel, M.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Nanava, G.; Napier, A.; Narayan, R.; Nash, M.; Nattermann, T.; Naumann, T.; Navarro, G.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negri, G.; Negrini, M.; Nektarijevic, S.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neusiedl, A.; Neves, R. M.; Nevski, P.; Newcomer, F. M.; Newman, P. R.; Nguyen, D. H.; Nguyen Thi Hong, V.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolics, K.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Norberg, S.; Nordberg, M.; Novakova, J.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nuncio-Quiroz, A.-E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; O'Brien, B. J.; O'Grady, F.; O'Neil, D. C.; O'Shea, V.; Oakes, L. B.; Oakham, F. G.; Oberlack, H.; Ocariz, J.; Ochi, A.; Ochoa, M. I.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohshima, T.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olchevski, A. G.; Olivares Pino, S. A.; Oliveira, M.; Oliveira Damazio, D.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ouellette, E. A.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Owen, S.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagan Griso, S.; Paganis, E.; Pahl, C.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. 
B.; Panagiotopoulou, E.; Panduro Vazquez, J. G.; Pani, P.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pashapour, S.; Pasqualucci, E.; Passaggio, S.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pearce, J.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Cavalcanti, T.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, J.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petteni, M.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Piec, S. M.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Pingel, A.; Pinto, B.; Pizio, C.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Poddar, S.; Podlyski, F.; Poettgen, R.; Poggioli, L.; Pohl, D.; Pohl, M.; Polesello, G.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pomeroy, D.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Portell Bueso, X.; Pospelov, G. E.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Prabhu, R.; Pralavorio, P.; Pranko, A.; Prasad, S.; Pravahan, R.; Prell, S.; Price, D.; Price, J.; Price, L. 
E.; Prieur, D.; Primavera, M.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przybycien, M.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qian, J.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Quilty, D.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rammes, M.; Randle-Conde, A. S.; Rangel-Smith, C.; Rao, K.; Rauscher, F.; Rave, T. C.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reinsch, A.; Reisin, H.; Reisinger, I.; Relich, M.; Rembser, C.; Ren, Z. L.; Renaud, A.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richter, R.; Ridel, M.; Rieck, P.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ritsch, E.; Riu, I.; Rivoltella, G.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Rocha de Lima, J. G.; Roda, C.; Roda Dos Santos, D.; Rodrigues, L.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Romeo, G.; Romero Adam, E.; Rompotis, N.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, A.; Rose, M.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rumyantsev, L.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Rutherfoord, J. P.; Ruthmann, N.; Ruzicka, P.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sacerdoti, S.; Saddique, A.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B. 
M.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, T.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarkisyan-Grinbaum, E.; Sarrazin, B.; Sartisohn, G.; Sasaki, O.; Sasaki, Y.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Sauvan, E.; Sauvan, J. B.; Savard, P.; Savinov, V.; Savu, D. O.; Sawyer, C.; Sawyer, L.; Saxon, D. H.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaelicke, A.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, C.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schroer, N.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwegler, Ph.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Schwoerer, M.; Sciacca, F. G.; Scifo, E.; Sciolla, G.; Scott, W. G.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekula, S. J.; Selbach, K. E.; Seliverstov, D. M.; Sellers, G.; Seman, M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Seuster, R.; Severini, H.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shank, J. T.; Shao, Q. T.; Shapiro, M.; Shatalov, P. 
B.; Shaw, K.; Sherwood, P.; Shimizu, S.; Shimojima, M.; Shin, T.; Shiyakova, M.; Shmeleva, A.; Shochet, M. J.; Short, D.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simoniello, R.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sircar, A.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinnari, L. A.; Skottowe, H. P.; Skovpen, K. Yu.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snow, J.; Snyder, S.; Sobie, R.; Socher, F.; Sodomka, J.; Soffer, A.; Soh, D. A.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solfaroli Camillocci, E.; Solodkov, A. A.; Solovyanov, O. V.; Solovyev, V.; Soni, N.; Sood, A.; Sopko, V.; Sopko, B.; Sosebee, M.; Soualah, R.; Soueid, P.; Soukharev, A. M.; South, D.; Spagnolo, S.; Spanò, F.; Spearman, W. R.; Spighi, R.; Spigo, G.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R. D.; Stahlman, J.; Stamen, R.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stern, S.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Stucci, S. A.; Stugu, B.; Stumer, I.; Stupak, J.; Sturm, P.; Styles, N. 
A.; Su, D.; Su, J.; Subramania, HS.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, Y.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tamsett, M. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanasijczuk, A. J.; Tani, K.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, C.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thoma, S.; Thomas, J. P.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thong, W. M.; Thun, R. P.; Tian, F.; Tibbetts, M. J.; Tic, T.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Topilin, N. D.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Tran, H. L.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Triplett, N.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. 
I.; Tsulaia, V.; Tsung, J.-W.; Tsuno, S.; Tsybychev, D.; Tua, A.; Tudorache, A.; Tudorache, V.; Tuggle, J. M.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Urbaniec, D.; Urquijo, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Berg, R.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vassilakopoulos, V. I.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Virzi, J.; Vitells, O.; Viti, M.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, A.; Vokac, P.; Volpi, G.; Volpi, M.; Volpini, G.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vos, M.; Voss, R.; Vossebeld, J. 
H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, W.; Wagner, P.; Wahrmund, S.; Wakabayashi, J.; Walch, S.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Walsh, B.; Wang, C.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, X.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watanabe, I.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, A. T.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weigell, P.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wendland, D.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Whittington, D.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wijeratne, P. A.; Wildauer, A.; Wildt, M. A.; Wilhelm, I.; Wilkens, H. G.; Will, J. Z.; Williams, H. H.; Williams, S.; Willis, W.; Willocq, S.; Wilson, J. A.; Wilson, A.; Wingerter-Seez, I.; Winkelmann, S.; Winklmeier, F.; Wittgen, M.; Wittig, T.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wong, W. C.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wraight, K.; Wright, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xiao, M.; Xu, C.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yamada, M.; Yamaguchi, H.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, U. K.; Yang, Y.; Yanush, S.; Yao, L.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yen, A. L.; Yildirim, E.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. 
R.; Yu, J.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zaytsev, A.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zevi della Porta, G.; Zhang, D.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, X.; Zhang, Z.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Zinonos, Z.; Ziolkowski, M.; Zitoun, R.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zutshi, V.; Zwalinski, L.
2015-01-01
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton-proton collision data at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.7 fb⁻¹. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-kt algorithm with distance parameter R = 0.4 or R = 0.6, and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a Z boson, over a wide range of jet transverse momenta and pseudorapidities. The effect of multiple proton-proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty, less than 1%, is found in the central calorimeter region for jets at intermediate transverse momentum; for central jets at lower transverse momentum, the uncertainty is about 3%. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton-proton collisions and test-beam data, which also provide the estimate for jets in the TeV regime. The calibration of forward jets is derived from dijet balance measurements. The resulting uncertainty reaches its largest value of 6% for low transverse momentum jets in the forward region. Additional JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5-3%.
A low tritium hydride bed inventory estimation technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, J.E.; Shanahan, K.L.; Baker, R.A.
2015-03-15
Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds intended for low-concentration tritium gas were not fitted with the instrumentation needed to perform the steady-state, flowing-gas calorimetric inventory measurement method, and low tritium beds contain less than the detection limit of the In-Bed Accountability (IBA) technique used for tritium inventory. This paper describes two techniques for estimating the tritium content and its uncertainty for low tritium content beds, for use in the facility's physical inventory (PI). PIs are performed periodically to assess the quantity of nuclear material used in a facility. The first approach, the mid-point approximation (MPA) method, assumes the bed is half full and uses a gas composition measurement to estimate the tritium inventory and uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.
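The mid-point approximation lends itself to a short sketch (the bed capacity, tritium fraction, and uncertainty figures below are invented for illustration, not Savannah River Site values): assume the bed is half full, scale by the measured tritium fraction, and combine the relative uncertainties in quadrature.

```python
def mpa_inventory(bed_capacity_mol, tritium_fraction,
                  fill_uncertainty=0.5, fraction_uncertainty=0.02):
    """Mid-point approximation (MPA) sketch: assume the bed is half full.

    Inventory = (capacity / 2) * tritium fraction from gas analysis.
    The dominant uncertainty is the half-full assumption itself, modelled
    here as a relative bound on the fill level; the gas-analysis
    uncertainty enters as a second relative term in quadrature.
    """
    inventory = 0.5 * bed_capacity_mol * tritium_fraction
    rel_unc = (fill_uncertainty ** 2 +
               (fraction_uncertainty / tritium_fraction) ** 2) ** 0.5
    return inventory, inventory * rel_unc

# Hypothetical 100 mol bed with 10% tritium in the gas phase
inv, unc = mpa_inventory(bed_capacity_mol=100.0, tritium_fraction=0.10)
```

The large relative uncertainty is exactly why the second, PCT-based approach in the paper is attractive: it replaces the half-full guess with a property-based fill estimate.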
Non-Parametric Collision Probability for Low-Velocity Encounters
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2007-01-01
An implicit, but not necessarily obvious, assumption in all current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If the dynamics are mis-modeled in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval, and the paper gives examples that illustrate this point. It argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests techniques for addressing the issues just described. One proposal is the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of the paper is a description of how the time-wise de-correlation of uncertainty increases the probability of collision.
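A minimal illustration of the Monte Carlo approach the abstract advocates (the dynamics, noise levels, and hard-body radius here are all invented for illustration): each trial random-walks the relative position with process noise, so the position uncertainty de-correlates over time rather than staying perfectly correlated, and the collision probability over the interval is the fraction of trials whose separation ever falls below the hard-body radius.

```python
import math
import random

def mc_collision_probability(n_trials=2000, steps=50, dt=1.0,
                             hard_body_radius=5.0, sigma0=50.0,
                             process_noise=2.0, seed=1):
    """Estimate collision probability for a low-velocity encounter.

    Each trial draws an initial relative position (1-sigma = sigma0 metres)
    and random-walks it with per-step process noise, so the position
    uncertainty de-correlates over the interval instead of remaining
    perfectly correlated as in covariance-propagation methods.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # initial relative position (miss distance) in the encounter plane
        x = rng.gauss(20.0, sigma0)
        y = rng.gauss(20.0, sigma0)
        for _ in range(steps):
            # process noise: mis-modelled dynamics perturb the trajectory
            x += rng.gauss(0.0, process_noise) * math.sqrt(dt)
            y += rng.gauss(0.0, process_noise) * math.sqrt(dt)
            if math.hypot(x, y) < hard_body_radius:
                hits += 1
                break
    return hits / n_trials
```

Increasing `process_noise` widens the set of trajectories that wander inside the hard-body radius at some point in the interval, which is the de-correlation effect the paper describes.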
Comprehensive analysis of transport aircraft flight performance
NASA Astrophysics Data System (ADS)
Filippone, Antonio
2008-04-01
This paper reviews the state of the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines and intermediate passenger capacity (394 passengers in two classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise. 
A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.
NASA Astrophysics Data System (ADS)
Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang
2014-08-01
Flexible air-breathing hypersonic vehicles feature significant uncertainties that pose major challenges to robust controller design. In this paper, four major categories of uncertainty are analyzed: uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model that lumps the first three uncertainties together is explored, which is beneficial for controller synthesis; the fourth uncertainty is additionally considered in the stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. In particular, the stability of the nonlinear ESO is discussed from a Liénard system perspective. Finally, simulations demonstrate the control performance and the uncertainty rejection ability of the robust scheme.
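The extended state observer idea used in each subsystem controller can be sketched with a second-order linear ESO for a scalar plant (the plant, observer bandwidth, and disturbance below are invented toy values, not the vehicle model): the observer augments the state with the lumped disturbance and drives both estimates with the output error.

```python
def eso_step(z1, z2, y, u, b, dt, w0=10.0):
    """One Euler step of a 2nd-order linear ESO for the plant x' = f + b*u.

    z1 tracks the measured state y; z2 tracks the lumped (total)
    disturbance f. Observer gains are placed at bandwidth w0, i.e.
    l1 = 2*w0 and l2 = w0**2 (double pole at -w0).
    """
    e = y - z1                          # output estimation error
    z1 += dt * (z2 + b * u + 2.0 * w0 * e)
    z2 += dt * (w0 ** 2 * e)
    return z1, z2

# Demo: constant unknown disturbance f = 3 acting on x' = f + b*u, u = 0
x, z1, z2 = 0.0, 0.0, 0.0
f_true, u, b, dt = 3.0, 0.0, 1.0, 0.001
for _ in range(2000):
    x += dt * (f_true + b * u)          # true plant, disturbance unknown to ESO
    z1, z2 = eso_step(z1, z2, x, u, b, dt)
# z2 now approximates the unknown disturbance f_true
```

In a TLC-plus-ESO loop, the estimate `z2` would be fed back to cancel the lumped uncertainty; here only the estimation half is shown.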
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is a powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of the model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them in the uncertainty analysis, we can obtain more consistent results than using prior information for the input data: the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. 
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
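The Bayesian parameter estimation step can be sketched with a generic Metropolis sampler (a toy scalar parameter and Gaussian likelihood stand in for Delft3D and the Duck94 observations; all numbers are invented):

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.3, seed=0):
    """Generic 1-D Metropolis sampler returning draws from log_post."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)       # random-walk proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy stand-in for a model-data misfit: true parameter near 2.0
obs = [2.1, 1.9, 2.05, 1.95]

def log_post(theta):
    # Gaussian likelihood (sigma = 0.1) with a flat prior
    return -sum((o - theta) ** 2 for o in obs) / (2 * 0.1 ** 2)

posterior = metropolis(log_post, x0=0.0)
kept = posterior[1000:]                      # discard burn-in
mean = sum(kept) / len(kept)                 # posterior mean estimate
```

In the actual study, `log_post` would wrap a Delft3D run and the residuals against measured wave heights and currents; the sampled posterior then feeds the Monte Carlo uncertainty analysis of the outputs.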
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. 
The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
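The bootstrap magnitude procedure can be sketched as follows (the magnitude estimates below are invented; in the actual method each replicate re-runs the three intensity-based techniques on resampled intensity data):

```python
import random

def bootstrap_magnitude(magnitudes, n_boot=2000, seed=3):
    """Preferred magnitude and 68% range from bootstrap resampling.

    Each bootstrap replicate resamples the magnitude estimates with
    replacement and takes its median; the preferred magnitude is the
    median of the replicate medians, and the 68% bounds enclose the
    central 68% of them.
    """
    rng = random.Random(seed)
    n = len(magnitudes)
    medians = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(magnitudes) for _ in range(n))
        medians.append(sample[n // 2])
    medians.sort()
    lo = medians[int(0.16 * n_boot)]
    hi = medians[int(0.84 * n_boot)]
    return medians[n_boot // 2], (lo, hi)

# Invented magnitude estimates, e.g. from several techniques/data subsets
mags = [5.8, 5.9, 6.0, 6.1, 5.95, 6.05, 5.85, 6.15]
m_pref, (m_lo, m_hi) = bootstrap_magnitude(mags)
```

The same resampling logic, applied to epicentral coordinates instead of magnitudes, yields the bootstrap spatial density from which the paper's location confidence contours are drawn.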
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, P. K., E-mail: premkdubey@gmail.com; Kumar, Yudhisther; Gupta, Reeta
2014-05-15
The Radiation Force Balance (RFB) technique is well established and most widely used for the measurement of the total ultrasonic power radiated by an ultrasonic transducer. The technique is used as a primary standard for calibration of ultrasonic transducers with relatively fair uncertainty in the low power (below 1 W) regime. In this technique, uncertainty increases comparatively in the range of a few watts, where effects such as thermal heating of the target, cavitation, and acoustic streaming dominate. In addition, error in the measurement of ultrasonic power is also caused by movement of the absorber under the relatively high radiated force that occurs at high power levels. In this article a new technique is proposed which does not measure the balance output while the transducer is energized, as is done in RFB. It utilizes the change in buoyancy of the absorbing target due to local thermal heating: the linear thermal expansion of the target changes its apparent mass in water through the buoyancy change. This forms the basis for the measurement of ultrasonic power, particularly in the watts range. The proposed method comparatively reduces the uncertainty caused by various ultrasonic effects that occur at high power, such as overshoot due to the momentum of the target at higher radiated force. The functionality of the technique has been tested and compared with the existing internationally recommended RFB technique.
NASA Astrophysics Data System (ADS)
Dubey, P. K.; Kumar, Yudhisther; Gupta, Reeta; Jain, Anshul; Gohiya, Chandrashekhar
2014-05-01
The Radiation Force Balance (RFB) technique is well established and most widely used for the measurement of the total ultrasonic power radiated by an ultrasonic transducer. The technique is used as a primary standard for calibration of ultrasonic transducers with relatively fair uncertainty in the low power (below 1 W) regime. In this technique, uncertainty increases comparatively in the range of a few watts, where effects such as thermal heating of the target, cavitation, and acoustic streaming dominate. In addition, error in the measurement of ultrasonic power is also caused by movement of the absorber under the relatively high radiated force that occurs at high power levels. In this article a new technique is proposed which does not measure the balance output while the transducer is energized, as is done in RFB. It utilizes the change in buoyancy of the absorbing target due to local thermal heating: the linear thermal expansion of the target changes its apparent mass in water through the buoyancy change. This forms the basis for the measurement of ultrasonic power, particularly in the watts range. The proposed method comparatively reduces the uncertainty caused by various ultrasonic effects that occur at high power, such as overshoot due to the momentum of the target at higher radiated force. The functionality of the technique has been tested and compared with the existing internationally recommended RFB technique.
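The buoyancy-based power estimate rests on a simple chain of proportionalities, sketched here with invented material properties (this is not the article's calibration, and the target parameters are hypothetical): absorbed energy heats the target, thermal expansion changes its volume, and the volume change appears as an apparent mass change on the balance.

```python
def ultrasonic_power_from_buoyancy(delta_mass_kg, duration_s,
                                   target_mass_kg, specific_heat,
                                   volume_m3, beta_per_K,
                                   rho_water=1000.0):
    """Infer absorbed ultrasonic power from the apparent-mass change.

    Chain of effects (all material properties are illustrative):
      absorbed energy P*t -> temperature rise dT = P*t / (m*c_p)
      -> volume change dV = V * beta * dT
      -> buoyancy change dm = rho_w * dV (apparent mass falls by dm)
    Inverting the chain: P = dm * m * c_p / (rho_w * V * beta * t).
    """
    return (delta_mass_kg * target_mass_kg * specific_heat /
            (rho_water * volume_m3 * beta_per_K * duration_s))

# Hypothetical absorber: 0.2 kg, c_p = 1500 J/(kg K), V = 1e-4 m^3,
# volumetric expansion beta = 2e-4 /K; 20 mg apparent-mass change in 60 s
power_w = ultrasonic_power_from_buoyancy(2e-5, 60.0, 0.2, 1500.0,
                                         1e-4, 2e-4)
```

The sketch ignores heat loss to the water bath during insonation, which a real calibration would have to bound or correct for.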
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainty are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
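The Latin hypercube sampling mentioned above can be illustrated with a basic variant (not the "Improved Distributed Sampling" refinement): each parameter range is split into as many equal-probability strata as samples, one draw is taken per stratum, and the strata are shuffled independently per dimension so the sample fills the space more evenly than plain random sampling.

```python
import random

def latin_hypercube(n_samples, bounds, seed=7):
    """Latin hypercube sample: one point per stratum in each dimension.

    bounds: list of (low, high) per parameter. Each dimension is split
    into n_samples equal strata; one uniform draw is taken per stratum,
    then the strata are shuffled independently per dimension.
    """
    rng = random.Random(seed)
    dims = []
    for low, high in bounds:
        width = (high - low) / n_samples
        col = [low + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(col)
        dims.append(col)
    return list(zip(*dims))  # n_samples points, one tuple per sample

# Two hypothetical model parameters with different ranges
points = latin_hypercube(10, [(0.0, 1.0), (100.0, 200.0)])
```

Each marginal is guaranteed one sample per decile of its range, which is why Latin hypercube designs need far fewer model runs than crude Monte Carlo for comparable coverage.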
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov chain Monte Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
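The collocation idea behind PCEs can be sketched in one dimension: model outputs are projected onto probabilists' Hermite polynomials of a standard normal variable by least squares at sampled collocation points. The quadratic "model" below is a hypothetical stand-in; a real application would wrap a hydrological simulator:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander  # probabilists' Hermite He_k

def fit_pce(model, order, n_colloc=200, rng=None):
    """Fit a 1-D PCE  y(x) ~ sum_k c_k He_k(x), x ~ N(0,1),
    by least squares at randomly sampled Gaussian collocation points."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n_colloc)
    V = hermevander(x, order)               # columns [He_0(x), ..., He_order(x)]
    c, *_ = np.linalg.lstsq(V, model(x), rcond=None)
    return c

# Toy response to one uncertain parameter: 1*He0 + 2*He1 + 0.5*He2 exactly
model = lambda x: 1.0 + 2.0 * x + 0.5 * (x**2 - 1.0)
c = fit_pce(model, order=3, rng=1)
```

Because the toy model lies in the span of the basis, the recovered coefficients are exact; the leading coefficient `c[0]` is then the output mean, which is one reason PCEs are a cheap proxy for Monte Carlo statistics.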
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. In this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for system modeling, the lambda-tau method is utilized to formulate mathematical expressions for the failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system are assumed to follow an exponential distribution, i.e., to have constant failure/repair rates. Sensitivity analysis is also performed and the impact on the system mean time between failures (MTBF) is addressed by varying the other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
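The fuzzification step can be illustrated with triangular fuzzy numbers and alpha-cut interval arithmetic: for a series system the fuzzy failure rate is the interval sum of the component rates at each alpha level. The component rates and ±20% spreads below are hypothetical, not values from the study:

```python
import numpy as np

def tfn_alpha_cut(tfn, alpha):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return (a + alpha * (m - a), b - alpha * (b - m))

def series_failure_rate_cut(tfns, alpha):
    """Series-system failure rate is the sum of component rates; with fuzzy
    rates the alpha-cut endpoints add by interval arithmetic."""
    cuts = [tfn_alpha_cut(t, alpha) for t in tfns]
    return (sum(c[0] for c in cuts), sum(c[1] for c in cuts))

lam1 = (0.8e-3, 1.0e-3, 1.2e-3)   # hypothetical fuzzy rate, per hour
lam2 = (1.6e-3, 2.0e-3, 2.4e-3)
lo, hi = series_failure_rate_cut([lam1, lam2], alpha=0.5)
```

At alpha = 1 the cut collapses to the crisp modal value, which is how the fuzzy result nests the conventional point estimate.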
UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E
A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...
Major uncertainties remain in our ability to identify the key reactions and primary oxidation products of volatile hydrocarbons that contribute to ozone formation in the troposphere. To reduce these uncertainties, computational chemistry, mechanistic and process analysis techniqu...
Uncertainty in georeferencing current and historic plant locations
McEachern, K.; Niessen, K.
2009-01-01
With shrinking habitats, weed invasions, and climate change, repeated surveys are becoming increasingly important for rare plant conservation and ecological restoration. We often need to relocate historical sites or provide locations for newly restored sites. Georeferencing is the technique of giving geographic coordinates to the location of a site. Georeferencing has been done historically using verbal descriptions or field maps that accompany voucher collections. New digital technology gives us more exact techniques for mapping and storing location information. Error still exists, however, and even georeferenced locations can be uncertain, especially if error information is not included with the observation. We review the concept of uncertainty in georeferencing and compare several institutional database systems for cataloging error and uncertainty with georeferenced locations. These concepts are widely discussed among geographers, but ecologists and restorationists need to become more aware of issues related to uncertainty to improve our use of spatial information in field studies. © 2009 by the Board of Regents of the University of Wisconsin System.
Gaussian process regression for sensor networks under localization uncertainty
Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming
2013-01-01
In this paper, we formulate Gaussian process regression with observations under the localization uncertainty due to the resource-constrained sensor networks. In our formulation, effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are proposed to be approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. Such approximation techniques have been carefully tailored to our problems and their approximation error and complexity are analyzed. Simulation study demonstrates that the proposed approaches perform much better than approaches without considering the localization uncertainty properly. Finally, we have applied the proposed approaches on the experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof of concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
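A minimal sketch of the Monte Carlo approximation described above: the GP posterior mean is averaged over random draws of the true sensor locations. The squared-exponential kernel, hyperparameters, and 1-D toy field are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gp_mean(X, y, Xs, ell=1.0, sn=0.1):
    """Posterior mean of a zero-mean GP with a squared-exponential kernel,
    for 1-D inputs X (training) and Xs (test)."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + sn**2 * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

def gp_mean_loc_uncertain(X_nom, y, Xs, loc_sd, n_mc=200, rng=0):
    """Monte Carlo approximation: average the GP posterior mean over sampled
    true sensor locations X_nom + Gaussian localization noise."""
    rng = np.random.default_rng(rng)
    draws = [gp_mean(X_nom + loc_sd * rng.standard_normal(X_nom.shape), y, Xs)
             for _ in range(n_mc)]
    return np.mean(draws, axis=0)

X = np.linspace(0.0, 4.0, 20)      # nominal (reported) sensor positions
y = np.sin(X)                      # toy scalar field sampled at the sensors
mu = gp_mean_loc_uncertain(X, y, np.array([1.0, 2.5]), loc_sd=0.05)
```

With `loc_sd = 0` this reduces to the standard GP prediction; increasing it smears the posterior mean, which is the effect the paper's "quick-and-dirty" baselines ignore.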
Construction of protocellular structures under simulated primitive earth conditions
NASA Astrophysics Data System (ADS)
Yanagawa, Hiroshi; Ogawa, Yoko; Kojima, Kiyotsugu; Ito, Masahiko
1988-09-01
We have developed experimental approaches for the construction of protocellular structures under simulated primitive earth conditions and studied their formation and characteristics. Three types of envelopes are considered as candidates for protocellular structures: protein envelopes, lipid envelopes, and lipid-protein envelopes. Simple protein envelopes and lipid envelopes are presumed to have originated at an early stage of chemical evolution, interacted mutually, and then evolved into more complex envelopes composed of both lipids and proteins. Three kinds of protein envelopes were constructed in situ from amino acids under simulated primitive earth conditions such as a fresh-water tide pool, a warm sea, and a submarine hydrothermal vent. One protein envelope was formed from a mixture of amino acid amides at 80 °C using multiple hydration-dehydration cycles. Marigranules, protein envelope structures, were produced from mixtures of glycine and acidic, basic and aromatic amino acids at 105 °C in a modified sea medium enriched with essential transition elements. Thermostable microspheres were also formed from a mixture of glycine, alanine, valine, and aspartic acid at 250 °C and above. The microspheres did not form at lower temperatures and consisted of silicates and peptide-like polymers containing imide bonds and amino acid residues enriched in valine. Amphiphilic proteins with molecular weights of 2000 were necessary for the formation of the protein envelopes. Stable lipid envelopes were formed from different dialkyl phospholipids and fatty acids. Large, stable, lipid-protein envelopes were formed from egg lecithin and the solubilized marigranules. Polycations such as polylysine and polyhistidine, or basic proteins such as lysozyme and cytochrome c, also stabilized the lipid-protein envelopes.
Destructive effects of butyrate on the cell envelope of Helicobacter pylori.
Yonezawa, Hideo; Osaki, Takako; Hanawa, Tomoko; Kurata, Satoshi; Zaman, Cynthia; Woo, Timothy Derk Hoong; Takahashi, Motomichi; Matsubara, Sachie; Kawakami, Hayato; Ochiai, Kuniyasu; Kamiya, Shigeru
2012-04-01
Helicobacter pylori can be found in the oral cavity and is mostly detected by the use of PCR techniques. Growth of H. pylori is influenced by various factors in the mouth, such as the oral microflora, saliva and other antimicrobial substances, all of which make colonization of the oral cavity by H. pylori difficult. In the present study, we analysed the effect of the cell supernatant of a representative periodontal bacterium Porphyromonas gingivalis on H. pylori and found that the cell supernatant destroyed the H. pylori cell envelope. As P. gingivalis produces butyric acid, we focused our research on the effects of butyrate and found that it significantly inhibited the growth of H. pylori. H. pylori cytoplasmic proteins and DNA were detected in the extracellular environment after treatment with butyrate, suggesting that the integrity of the cell envelope was compromised and indicating that butyrate has a bactericidal effect on H. pylori. In addition, levels of extracellular H. pylori DNA increased following treatment with the cell supernatant of butyric acid-producing bacteria, indicating that the cell supernatant also has a bactericidal effect and that this may be due to its butyric acid content. In conclusion, butyric acid-producing bacteria may play a role in affecting H. pylori colonization of the oral cavity.
Phase-Locked Responses to Speech in Human Auditory Cortex are Enhanced During Comprehension
Peelle, Jonathan E.; Gross, Joachim; Davis, Matthew H.
2013-01-01
A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners’ ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction. PMID:22610394
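The acoustic amplitude envelope referred to here is conventionally extracted via the analytic signal (Hilbert transform). The sketch below builds the analytic signal directly with the FFT (valid for even-length real input) and recovers a 5 Hz modulation rate from a synthetic amplitude-modulated tone; the signal parameters are illustrative, not the study's stimuli:

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal, constructed with the FFT
    (equivalent to scipy.signal.hilbert for even-length real input)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[n // 2] = 1.0
    h[1:n // 2] = 2.0          # double positive frequencies, drop negatives
    return np.abs(np.fft.ifft(X * h))

fs = 1000
t = np.arange(fs) / fs
# 100 Hz carrier, amplitude-modulated at 5 Hz (within the 4-7 Hz band of interest)
x = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 100 * t)
env = envelope(x)
f = np.fft.rfftfreq(fs, 1 / fs)
spec = np.abs(np.fft.rfft(env - env.mean()))
peak = f[np.argmax(spec)]      # dominant envelope modulation frequency
```

The envelope spectrum peaks at the 5 Hz modulation rate, the same kind of low-frequency envelope content against which cerebro-acoustic phase locking is measured.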
Analysis Techniques, Materials, and Methods for Treatment of Thermal Bridges in Building Envelopes
2013-08-01
effects of the R-value for a given increment of time ... Crystals on a post-conditioned Aspen Aerogel ... aerogel on specific sites compared to conventional polyurethane foam insulation. Figures 55 and 56 show two examples of preliminary parametric ... Aerogel, and (4) Honeywell's polyurethane. Table 14 lists the four tested insulation materials, their experimental thermal properties (derived
Coherent-Phase Monitoring Of Cavitation In Turbomachines
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1996-01-01
Digital electronic signal-processing system analyzes outputs of accelerometers mounted on turbomachine to detect vibrations characteristic of cavitation. Designed to overcome limitations imposed by interference from discrete components. System digitally implements technique called "coherent-phase wide-band demodulation" (CPWBD), using phase-only (PO) filtering along with envelope detection to search for unique coherent-phase relationship associated with cavitation and to minimize influence of large-amplitude discrete components.
The structure of common-envelope remnants
NASA Astrophysics Data System (ADS)
Hall, Philip D.
2015-05-01
We investigate the structure and evolution of the remnants of common-envelope evolution in binary star systems. In a common-envelope phase, two stars become engulfed in a gaseous envelope and, under the influence of drag forces, spiral to smaller separations. They may merge to form a single star or the envelope may be ejected to leave the stars in a shorter period orbit. This process explains the short orbital periods of many observed binary systems, such as cataclysmic variables and low-mass X-ray binary systems. Despite the importance of these systems, and of common-envelope evolution to their formation, it remains poorly understood. Specifically, we are unable to confidently predict the outcome of a common-envelope phase from the properties at its onset. After presenting a review of work on stellar evolution, binary systems, common-envelope evolution and the computer programs used, we describe the results of three computational projects on common-envelope evolution. Our work specifically relates to the methods and prescriptions which are used for predicting the outcome. We use the Cambridge stellar-evolution code STARS to produce detailed models of the structure and evolution of remnants of common-envelope evolution. We compare different assumptions about the uncertain end-of-common-envelope structure and envelope mass of remnants that successfully eject their common envelopes. In the first project, we use detailed remnant models to investigate whether planetary nebulae are predicted after common-envelope phases initiated by low-mass red giants. We focus on the requirement that a remnant evolves rapidly enough to photoionize the nebula and compare the predictions for different ideas about the structure at the end of a common-envelope phase. We find that planetary nebulae are possible for some prescriptions for the end-of-common-envelope structure.
In our second contribution, we compute a large set of single-star models and fit new formulae to the core radii of evolved stars. These formulae can be used to better compute the outcome of common-envelope evolution with rapid evolution codes. We find that the new formulae are necessary for accurate predictions of the properties of post-common envelope systems. Finally, we use detailed remnant models of massive stars to investigate whether hydrogen may be retained after a common-envelope phase to the point of core-collapse and so be observable in supernovae. We find that this is possible and thus common-envelope evolution may contribute to the formation of Type IIb supernovae.
Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane
NASA Technical Reports Server (NTRS)
Gera, Joseph; Bosworth, John T.
1987-01-01
Novel flight test and analysis techniques in the flight dynamics and handling qualities area are described. These techniques were utilized at NASA Ames-Dryden during the initial flight envelope clearance of the X-29A aircraft. It is shown that the open-loop frequency response of an aircraft with highly relaxed static stability can be successfully computed on the ground from telemetry data. Postflight closed-loop frequency response data were obtained from pilot-generated frequency sweeps and it is found that the current handling quality requirements for high-maneuverability aircraft are generally applicable to the X-29A.
Combining Satellite Ocean Color and Hydrodynamic Model Uncertainties in Bio-Optical Forecasts
2014-04-03
observed chlorophyll distribution for that day (MODIS image for October 17, 2011), without regard to sign, i.e., |Figs. 11(c)-11(a)|. Black pixels indicate ... time using the current field from the model. Uncertainties in both the satellite chlorophyll values and the currents from the circulation model impact ... ensemble techniques to partition the chlorophyll uncertainties into components due to atmospheric correction and bio-optical inversion. By combining
Gajewski, Byron J.; Lee, Robert; Dunton, Nancy
2012-01-01
Data Envelopment Analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency (Hollingsworth, 2008), but a long-standing concern is that DEA assumes that data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if it is not realized (Gajewski, Lee, Bott, Piamjariyakul and Taunton, 2009; Ruggiero, 2004). We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® (NDNQI®) to estimate nursing units’ efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible. PMID:23328796
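In the simplest single-input, single-output case the deterministic CCR DEA efficiency reduces to each unit's output/input ratio scaled by the best observed ratio, which makes the idea easy to sketch. The nursing-unit numbers below are hypothetical; the Bayesian treatment of measurement error described above would place posterior distributions over such inputs rather than treating them as exact:

```python
import numpy as np

def dea_ccr_single(inputs, outputs):
    """Input-oriented CCR efficiency for the single-input, single-output case:
    the DEA linear program collapses to each unit's output/input ratio,
    normalized by the best ratio on the efficient frontier."""
    ratio = np.asarray(outputs, float) / np.asarray(inputs, float)
    return ratio / ratio.max()

# Hypothetical decision-making units: (nursing hours, patients served)
hours    = [100.0, 120.0, 90.0]
patients = [50.0,  72.0,  36.0]
eff = dea_ccr_single(hours, patients)
```

Unit 2 defines the frontier (efficiency 1); the others are scored against it. With multiple inputs/outputs the ratios become a linear program per unit, and measurement error in `hours` or `patients` propagates directly into these scores.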
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, D. A., E-mail: david.walsh@stfc.ac.uk; Snedden, E. W.; Jamison, S. P.
The time-resolved detection of ultrashort pulsed THz-band electric field temporal profiles without an ultrashort laser probe is demonstrated. A non-linear interaction between a narrow-bandwidth optical probe and the THz pulse transposes the THz spectral intensity and phase information to the optical region, thereby generating an optical pulse whose temporal electric field envelope replicates the temporal profile of the real THz electric field. This optical envelope is characterised via an autocorrelation based FROG (frequency resolved optical gating) measurement, hence revealing the THz temporal profile. The combination of a narrow-bandwidth, long duration, optical probe, and self-referenced FROG makes the technique inherently immune to timing jitter between the optical probe and THz pulse and may find particular application where the THz field is not initially generated via ultrashort laser methods, such as the measurement of longitudinal electron bunch profiles in particle accelerators.
Byrne, Richard D; Larijani, Banafshé; Poccia, Dominic L
2016-01-01
FRET-FLIM techniques have wide application in the study of protein and protein-lipid interactions in cells. We have pioneered an imaging platform for accurate detection of functional states of proteins and their interactions in fixed cells. This platform, two-site-amplified Förster resonance energy transfer (a-FRET), allows greater signal generation while retaining minimal noise thus enabling application of fluorescence lifetime imaging microscopy (FLIM) to be routinely deployed in different types of cells and tissue. We have used the method described here, time-resolved FRET monitored by two-photon FLIM, to demonstrate the direct interaction of Phospholipase Cγ (PLCγ) by Src Family Kinase 1 (SFK1) during nuclear envelope formation and during male and female pronuclear membrane fusion in fertilized sea urchin eggs. We describe here a generic method that can be applied to monitor any proteins of interest.
The Implications of Encoder/Modulator/ Phased Array Designs for Future Broadband LEO Communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Jensen, Chris A.; Terry, John D.
1997-01-01
In this paper we summarize the effects of modulation and channel coding on the design of wide-angle-scan, broadband, phased array antennas. In the paper we perform several trade studies. First, we investigate the amplifier back-off requirement as a function of the variability of the modulation envelope. Specifically, we contrast constant and non-constant envelope modulations, as well as single and multiple carrier schemes. Additionally, we address the issues and concerns of using pulse shaping filters with the above modulation types. Second, we quantify the effects of beam steering on the quality of data recovery using selected modulation techniques. In particular, we show that the frequency response of the array introduces intersymbol interference for broadband signals and that the mode of operation for the beam steering controller may introduce additional burst or random errors. Finally, we show that the encoder/modulator design must be performed in conjunction with the phased array antenna design.
A stabilized optical frequency comb based on an Er-doped fiber femtosecond laser
NASA Astrophysics Data System (ADS)
Xia, Chuanqing; Wu, Tengfei; Zhao, Chunbo; Xing, Shuai
2018-03-01
An optical frequency comb based on a 250 MHz home-made Er-doped fiber femtosecond laser is presented in this paper. The Er-doped fiber laser has a ring cavity and operates mode-locked in the femtosecond regime using the technique of nonlinear polarization rotation. The pulse duration is 118 fs and the spectral width is 30 nm. A part of the femtosecond laser output is amplified in an Er-doped fiber amplifier before propagating through a piece of highly nonlinear fiber to expand the spectrum. The carrier-envelope offset frequency of the comb, which has a signal-to-noise ratio of more than 35 dB, is extracted by means of f-2f beating. We demonstrate that both the carrier-envelope offset frequency and the repetition frequency remain phase-locked to a rubidium atomic clock simultaneously for 2 hours. Frequency-stabilized fiber combs will be increasingly applied in optical metrology, attosecond pulse generation, and absolute distance measurement.
O'Neill, Liam; Dexter, Franklin
2005-11-01
We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.
The F-18 High Alpha Research Vehicle: A High-Angle-of-Attack Testbed Aircraft
NASA Technical Reports Server (NTRS)
Regenie, Victoria; Gatlin, Donald; Kempel, Robert; Matheny, Neil
1992-01-01
The F-18 High Alpha Research Vehicle is the first thrust-vectoring testbed aircraft used to study the aerodynamics and maneuvering available in the poststall flight regime and to provide the data for validating ground prediction techniques. The aircraft includes a flexible research flight control system and full research instrumentation. The capability to control the vehicle at angles of attack up to 70 degrees is also included. This aircraft was modified by adding a pitch and yaw thrust-vectoring system. No significant problems occurred during the envelope expansion phase of the program. This aircraft has demonstrated excellent control in the wing rock region and increased rolling performance at high angles of attack. Initial pilot reports indicate that the increased capability is desirable although some difficulty in judging the size and timing of control inputs was observed. The aircraft, preflight ground testing and envelope expansion flight tests are described.
First uncertainty evaluation of the FoCS-2 primary frequency standard
NASA Astrophysics Data System (ADS)
Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L. G.; Schenker, D.; Thomann, P.; Südmeyer, T.
2018-06-01
We report the uncertainty evaluation of the Swiss continuous primary frequency standard FoCS-2 (Fontaine Continue Suisse). Unlike other primary frequency standards which are working with clouds of cold atoms, this fountain uses a continuous beam of cold caesium atoms bringing a series of metrological advantages and specific techniques for the evaluation of the uncertainty budget. Recent improvements of FoCS-2 have made possible the evaluation of the frequency shifts and of their uncertainties in the order of . When operating in an optimal regime a relative frequency instability of is obtained. The relative standard uncertainty reported in this article, , is strongly dominated by the statistics of the frequency measurements.
Schwarz, L.K.; Runge, M.C.
2009-01-01
Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.
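The age-from-length step can be sketched as a discrete Bayes update: a prior over ages is combined with a Gaussian length-at-age likelihood. The von Bertalanffy curve and its parameters below are a hypothetical stand-in for the fitted Schnute model, and the uniform prior stands in for the population age structure:

```python
import numpy as np

def age_posterior(length, ages, prior, mean_len, sd_len):
    """Discrete Bayes: p(age | length) proportional to p(length | age) * p(age),
    with a Gaussian length-at-age likelihood (stand-in for the Schnute model)."""
    like = np.exp(-0.5 * ((length - mean_len(ages)) / sd_len) ** 2)
    post = like * prior
    return post / post.sum()

ages = np.arange(0, 31)                       # candidate ages, years
prior = np.full(len(ages), 1.0 / len(ages))   # uniform age prior (illustrative)
growth = lambda a: 300.0 * (1.0 - np.exp(-0.2 * a))  # hypothetical growth curve, cm
post = age_posterior(260.0, ages, prior, growth, sd_len=15.0)
age_map = ages[np.argmax(post)]
```

The full posterior, not just the maximum, carries the quantified age uncertainty that the abstract emphasizes.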
Feasibility Study of Endo- and Exo-skeletal Framed Structures with Envelopes for LTA Platforms
2011-02-15
A pathway for design and fabrication of Endo- and Exoskeleton framed elliptical envelopes was demonstrated. Envelope sizes of 2 ft x 0.5 ft and 5 ft x ... Keywords: Lighter than air, Endoskeleton, Exoskeleton, Helium-filled envelope, Design, Fabrication. Authors: Robert Sadler and Raghu Panduranga, ARIS Inc., 115-C, South ...
Cortical processing of dynamic sound envelope transitions.
Zhou, Yi; Wang, Xiaoqin
2010-12-08
Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for stability and tracking control of flight systems in the presence of parametric uncertainties. This paper offers a design technique of adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the finite-time convergence idea, the recommended control method guarantees that the states of the quad-rotor UAV converge to the origin at a finite-time convergence rate. Furthermore, an adaptive-tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV at any moment. Finally, simulation results are presented to demonstrate the effectiveness of the offered technique compared to previous methods.
NASA Astrophysics Data System (ADS)
Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo
2017-08-01
Surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to the generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted for comparison with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty, and (2) the ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
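One common way to build such an ensemble is to weight each stand-alone surrogate inversely to its cross-validation error, so better models dominate the combined prediction. The sketch below uses hypothetical CV errors in place of trained MGGP/KRG/SVR models:

```python
import numpy as np

def ensemble_weights(cv_errors):
    """Weight each stand-alone surrogate inversely to its cross-validation
    error; weights sum to one."""
    inv = 1.0 / np.asarray(cv_errors, float)
    return inv / inv.sum()

def ensemble_predict(preds, weights):
    """Weighted average of the stand-alone surrogate predictions.
    preds: (n_models, n_points) array of per-model predictions."""
    return np.asarray(preds, float).T @ weights

# Hypothetical CV RMSEs for three surrogates (e.g., MGGP, KRG, SVR)
w = ensemble_weights([0.2, 0.1, 0.4])
yhat = ensemble_predict([[1.0], [1.2], [0.8]], w)
```

More elaborate schemes fit the weights by minimizing ensemble CV error directly, but the inverse-error heuristic already captures the idea of letting two stand-alone surrogates complement each other.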
UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.
2012-01-01
UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.
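As a small illustration of the failure-probability computations such a toolbox automates, here is a plain Monte Carlo sketch for a hypothetical limit state (this is the baseline method, not UQTools' homothetic-deformation technique):

```python
import numpy as np

# Monte Carlo estimate of a failure probability P[g(p) < 0] for a
# hypothetical limit state g with one normally distributed parameter.
# Failure occurs when p exceeds 3, so the true value is 1 - Phi(3).
rng = np.random.default_rng(1)
p = rng.normal(0.0, 1.0, 1_000_000)    # uncertain parameter ~ N(0, 1)
g = 3.0 - p                            # limit-state function
p_fail = np.mean(g < 0.0)              # estimated failure probability
```

The true value is about 1.35e-3; set-bounding techniques like those in UQTools aim to bound such probabilities far more cheaply than brute-force sampling.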
DOT National Transportation Integrated Search
2011-06-14
This paper presents a novel analytical approach to and techniques for translating characteristics of uncertainty in predicting sector entry times and times in sector for individual flights into characteristics of uncertainty in predicting one-minute ...
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier-series-based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.
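The partial-sum Fourier dispersion idea can be sketched as follows; the band shape, term count, and normalization here are assumptions made for illustration, not the study's exact scheme.

```python
import numpy as np

# Dispersion sketch: a random-phase partial Fourier sum, scaled to unit
# peak, perturbs a near-field signature anywhere inside an assumed
# +/- uncertainty band, so random error shapes are sampled rather than
# a single bias offset. Band width and 8-term count are hypothetical.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 400)           # normalized signature coordinate
band = 0.05 * (1.0 + x)                  # assumed half-width of the band
series = sum(rng.uniform(-1, 1) * np.sin(k * np.pi * x + rng.uniform(0, 2 * np.pi))
             for k in range(1, 9))       # 8-term random partial sum
series /= np.max(np.abs(series))         # normalize to unit peak
delta = band * series                    # one dispersed perturbation sample
```

Each draw of `delta` is a candidate error realization; propagating many such draws to the ground yields the dispersed loudness statistics the abstract contrasts with a pure bias representation.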
Świderski, Zdzisław; Miquel, Jordi; Azzouz-Maache, Samira; Pétavy, Anne-Françoise
2017-07-01
The origin, differentiation and functional ultrastructure of oncospheral or egg envelopes in Echinococcus multilocularis Leuckart, 1863 were studied by transmission electron microscopy (TEM) and cytochemistry. The purpose of our study is to describe the formation of the four primary embryonic envelopes, namely vitelline capsule, outer envelope, inner envelope and oncospheral membrane, and their transformation into the oncospheral or egg envelopes surrounding the mature hexacanth. This transformation takes place in the preoncospheral phase of embryonic development. The vitelline capsule and oncospheral membrane are thin membranes, while the outer and inner envelopes are thick cytoplasmic layers formed by two specific types of blastomeres: the outer envelope by cytoplasmic fusion of two macromeres and the inner envelope by cytoplasmic fusion of three mesomeres. Both outer and inner envelopes are therefore cellular in origin and syncytial in nature. During the advanced phase of embryonic development, the outer and inner envelopes undergo great modifications. The outer envelope remains as a metabolically active layer involved in the storage of glycogen and lipids for the final stages of egg development and survival. The inner envelope is the most important protective layer because of its thick layer of embryophoric blocks that assures oncospheral protection and survival. This embryophore is the principal layer of mature eggs, affording physical and physiological protection for the differentiated embryo or oncosphere, since the outer envelope is stripped from the egg before it is liberated. The embryophore is very thick and impermeable, consisting of polygonal blocks of an inert keratin-like protein held together by a cementing substance. The embryophore therefore assures extreme resistance of eggs, enabling them to withstand a wide range of environmental temperatures and physicochemical conditions.
NASA Astrophysics Data System (ADS)
Akram, Muhammad Farooq Bin
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that, when knowledge about the problem is lacking, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantification of epistemic uncertainty in the technology valuation process.
The proposed technique seeks to offset some of the problems faced by using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for quantification of epistemic uncertainty was selected: a large-scale combined cycle power generation system. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled the capture of higher-order technology interactions and improved the prediction of system performance.
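Dempster's rule of combination, the evidence-pooling step underlying the Dempster-Shafer approach described above, can be sketched on a two-hypothesis frame; the expert mass assignments below are hypothetical.

```python
# Dempster's rule on the frame {high, low}; "Theta" denotes total
# ignorance (mass on the whole frame). Conflicting mass (empty
# intersections) is discarded and the rest renormalized by 1 - K.
def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == "Theta":
                inter = b            # Theta intersected with b is b
            elif b == "Theta":
                inter = a
            elif a == b:
                inter = a
            else:
                inter = None         # empty intersection: conflict
            if inter is None:
                conflict += wa * wb
            else:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

expert1 = {"high": 0.6, "low": 0.1, "Theta": 0.3}   # hypothetical masses
expert2 = {"high": 0.5, "low": 0.2, "Theta": 0.3}
pooled = dempster(expert1, expert2)
```

Unlike forcing each expert to state a full probability distribution, mass left on Theta records genuine ignorance, which is the point made in the abstract.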
Doppler centroid estimation ambiguity for synthetic aperture radars
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1989-01-01
A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.
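The multiple-PRF idea can be sketched with illustrative numbers (not SIR-C parameters): each PRF observes the Doppler centroid only modulo that PRF, and consistency across PRFs with no common divisor singles out the absolute value.

```python
# Ambiguity resolution sketch: candidates from the first PRF's ladder are
# scored by their total modular distance to every measurement; the true
# centroid is the uniquely consistent candidate. Numbers are illustrative.
def resolve_centroid(measured, prfs, max_ambiguity=8):
    # measured[i] = true centroid modulo prfs[i]
    best, best_err = None, float("inf")
    for n in range(-max_ambiguity, max_ambiguity + 1):
        f = measured[0] + n * prfs[0]          # candidate absolute centroid
        err = sum(min((f - m) % p, (m - f) % p) for m, p in zip(measured, prfs))
        if err < best_err:
            best, best_err = f, err
    return best

true_fd = 3721.0                               # hypothetical centroid, Hz
prfs = [1400.0, 1550.0]                        # PRFs with no common divisor
measured = [true_fd % p for p in prfs]         # ambiguous observations
estimate = resolve_centroid(measured, prfs)
```

With a common divisor the candidate ladders would coincide at many points, which is why the abstract stresses PRFs not containing a common divisor.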
Nonstationary envelope process and first excursion probability
NASA Technical Reports Server (NTRS)
Yang, J.
1972-01-01
A definition of the envelope of nonstationary random processes is proposed. The establishment of the envelope definition makes it possible to simulate the nonstationary random envelope directly. Envelope statistics, such as the density function, joint density function, moment function, and level crossing rate, which are relevant to analyses of catastrophic failure, fatigue, and crack propagation in structures, are derived. Applications of the envelope statistics to the prediction of structural reliability under random loadings are discussed in detail.
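In the stationary special case, the envelope such definitions generalize is the magnitude of the analytic signal. A minimal FFT-based sketch (assuming an even-length record) follows:

```python
import numpy as np

# Envelope via the analytic signal: zero the negative frequencies, double
# the positive ones, inverse-transform, and take the magnitude. For a pure
# amplitude-A carrier the envelope is the constant A.
def envelope(sig):
    n = len(sig)                   # assumed even
    spec = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0              # double positive frequencies
    h[n // 2] = 1.0                # Nyquist bin kept once
    return np.abs(np.fft.ifft(spec * h))

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
sig = 2.0 * np.cos(2 * np.pi * 60 * t)   # amplitude-2 carrier
env = envelope(sig)
```

Level-crossing and density statistics like those derived in the paper are then statistics of this magnitude process rather than of the raw signal.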
Performance analysis of multiple PRF technique for ambiguity resolution
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
For short wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar in different PRF's in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of PRF's. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, amplitude demodulation alone discards the polarity information of the wavefield, increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt layer model and the SEG/EAGE 2-D Salt model using a low-cut source (frequency components below 4 Hz were truncated). The results demonstrate the effectiveness of the method.
Discharge lamp with reflective jacket
MacLennan, Donald A.; Turner, Brian P.; Kipling, Kent
2001-01-01
A discharge lamp includes an envelope, a fill which emits light when excited disposed in the envelope, a source of excitation power coupled to the fill to excite the fill and cause the fill to emit light, and a reflector disposed around the envelope and defining an opening, the reflector being configured to reflect some of the light emitted by the fill back into the fill while allowing some light to exit through the opening. The reflector may be made from a material having a coefficient of thermal expansion similar to that of the envelope and which is closely spaced to the envelope. The envelope material may be quartz and the reflector material may be either silica or alumina. The reflector may be formed as a jacket having a rigid structure which does not adhere to the envelope. The lamp may further include an optical element spaced from the envelope and configured to reflect an unwanted component of light which exited the envelope back into the envelope through the opening in the reflector. Light which can be beneficially recaptured includes selected wavelength regions, a selected polarization, and selected angular components.
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
The uncertainty room: strategies for managing uncertainty in a surgical waiting room.
Stone, Anne M; Lammers, John C
2012-01-01
To describe experiences of uncertainty and management strategies for staff working with families in a hospital waiting room. A 288-bed, nonprofit community hospital in a Midwestern city. Data were collected during individual, semistructured interviews with 3 volunteers, 3 technical staff members, and 1 circulating nurse (n = 7), and during 40 hours of observation in a surgical waiting room. Interview transcripts were analyzed using constant comparative techniques. The surgical waiting room represents the intersection of several sources of uncertainty that families experience. Findings also illustrate the ways in which staff manage the uncertainty of families in the waiting room by communicating support. Staff in surgical waiting rooms are responsible for managing family members' uncertainty related to insufficient information. Practically, this study provided some evidence that staff are expected to help manage the uncertainty that is typical in a surgical waiting room, further highlighting the important role of communication in improving family members' experiences.
A tool for efficient, model-independent management optimization under uncertainty
White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.
2018-01-01
To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
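The FOSM chance-constraint mechanism can be sketched in one step: the deterministic bound is tightened by z standard deviations for a chosen risk level. The numbers below are illustrative, not PESTPP-OPT output.

```python
from math import erf, sqrt

# Chance-constraint sketch: a model-simulated constraint value carries an
# FOSM standard deviation sigma, so the deterministic bound b is replaced
# by b - z*sigma, where z is the standard-normal quantile for the chosen
# risk level. Inverse CDF by bisection avoids any external dependency.
def z_for_risk(risk, lo=-10.0, hi=10.0):
    target = 1.0 - risk
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b, sigma, risk = 100.0, 4.0, 0.05          # bound, FOSM std, 5% risk (assumed)
b_chance = b - z_for_risk(risk) * sigma    # tightened chance-constraint bound
```

The optimizer then solves the linear program against `b_chance` instead of `b`, so the optimal solution carries the requested reliability against model-derived constraint uncertainty.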
Simulating a binary system that experiences the grazing envelope evolution
NASA Astrophysics Data System (ADS)
Shiber, Sagiv; Soker, Noam
2018-06-01
We conduct three-dimensional hydrodynamical simulations, and show that when a secondary star launches jets while performing spiral-in motion into the envelope of a giant star, the envelope is inflated, some mass is ejected by the jets, and the common envelope phase is postponed. We simulate this grazing envelope evolution (GEE) under the assumption that the secondary star accretes mass from the envelope of the asymptotic giant branch (AGB) star and launches jets. In these simulations we do not yet include the gravitational energy that is released by the spiraling-in binary system. Neither do we include the spinning of the envelope. Considering these omissions, we conclude that our results support the idea that jets might play a crucial role in the common envelope evolution or in preventing it.
2010-01-01
Background Detection of autoantibodies giving nuclear rim pattern by immunofluorescence (anti-nuclear envelope antibodies - ANEA) in sera from patients with primary biliary cirrhosis (PBC) is a useful tool for the diagnosis and prognosis of the disease. Differences in the prevalence of ANEA in PBC sera so far reported have been attributed to the methodology used for the detection as well as to ethnic/geographical variations. Therefore, we evaluated the prevalence of ANEA in sera of Greek patients with PBC by using methods widely used by clinical laboratories and a combination of techniques and materials. Methods We screened 103 sera by immunoblotting on nuclear envelopes and indirect immunofluorescence (IIF) using cells and purified nuclei. Reactivities against specific autoantigens were assessed using purified proteins, ELISA, immunoprecipitation and mass spectrometry. Results We found higher prevalence of ANEA when sera were assayed by IIF on purified nuclei or cultured cells (50%) compared to Hep2 commercially available slides (15%). Anti-gp210 antibodies were identified in 22.3% and 33% of sera using ELISA for the C-terminal of gp210 or both ELISA and immunoprecipitation, respectively. Immunoblotting on nuclear envelopes revealed that immunoreactivity for the 210 kDa zone is related to anti-gp210 antibodies (p < 0.0001). Moreover, we found that sera had antibodies for lamins A (6.8%), B (1%) and C (1%) and LBR (8.7%), whereas none at all had detectable anti-p62 antibodies. Conclusions The prevalence of ANEA or anti-gp210 antibodies is under-estimated in PBC sera which are analyzed by conventional commercially available IIF or ELISA, respectively. Therefore, new substrates for IIF and ELISA should be included by clinical laboratories in the analysis of ANEA in autoimmune sera. PMID:20205958
Bagdonaite, Ieva; Nordén, Rickard; Joshi, Hiren J.; King, Sarah L.; Vakhrushev, Sergey Y.; Olofsson, Sigvard; Wandall, Hans H.
2016-01-01
Herpesviruses are among the most complex and widespread viruses, infection and propagation of which depend on envelope proteins. These proteins serve as mediators of cell entry as well as modulators of the immune response and are attractive vaccine targets. Although envelope proteins are known to carry glycans, little is known about the distribution, nature, and functions of these modifications. This is particularly true for O-glycans; thus we have recently developed a “bottom up” mass spectrometry-based technique for mapping O-glycosylation sites on herpes simplex virus type 1. We found wide distribution of O-glycans on herpes simplex virus type 1 glycoproteins and demonstrated that elongated O-glycans were essential for the propagation of the virus. Here, we applied our proteome-wide discovery platform for mapping O-glycosites on representative and clinically significant members of the herpesvirus family: varicella zoster virus, human cytomegalovirus, and Epstein-Barr virus. We identified a large number of O-glycosites distributed on most envelope proteins in all viruses and further demonstrated conserved patterns of O-glycans on distinct homologous proteins. Because glycosylation is highly dependent on the host cell, we tested varicella zoster virus-infected cell lysates and clinically isolated virus and found evidence of consistent O-glycosites. These results present a comprehensive view of herpesvirus O-glycosylation and point to the widespread occurrence of O-glycans in regions of envelope proteins important for virus entry, formation, and recognition by the host immune system. This knowledge enables dissection of specific functional roles of individual glycosites and, moreover, provides a framework for design of glycoprotein vaccines with representative glycosylation. PMID:27129252
Proteomics of the Autographa californica Nucleopolyhedrovirus Budded Virions
Wang, RanRan; Deng, Fei; Hou, Dianhai; Zhao, Yong; Guo, Lin; Wang, Hualin; Hu, Zhihong
2010-01-01
Baculoviruses produce two progeny phenotypes during their replication cycles. The occlusion-derived virus (ODV) is responsible for initiating primary infection in the larval midgut, and the budded virus (BV) phenotype is responsible for the secondary infection. The proteomics of several baculovirus ODVs have been revealed, but so far, no extensive analysis of BV-associated proteins has been conducted. In this study, the protein composition of the BV of Autographa californica nucleopolyhedrovirus (AcMNPV), the type species of baculoviruses, was analyzed by various mass spectrometry (MS) techniques, including liquid chromatography-triple quadrupole linear ion trap (LC-Qtrap), liquid chromatography-quadrupole time of flight (LC-Q-TOF), and matrix-assisted laser desorption ionization-time of flight (MALDI-TOF). SDS-PAGE and MALDI-TOF analyses showed that the three most abundant proteins of the AcMNPV BV were GP64, VP39, and P6.9. A total of 34 viral proteins associated with the AcMNPV BV were identified by the indicated methods. Thirteen of these proteins, PP31, AC58/59, AC66, IAP-2, AC73, AC74, AC114, AC124, chitinase, polyhedron envelope protein (PEP), AC132, ODV-E18, and ODV-E56, were identified for the first time to be BV-associated proteins. Western blot analyses showed that ODV-E18 and ODV-E25, which were previously thought to be ODV-specific proteins, were also present in the envelope fraction of BV. In addition, 11 cellular proteins were found to be associated with the AcMNPV BV by both LC-Qtrap and LC-Q-TOF analyses. Interestingly, seven of these proteins were also identified in other enveloped viruses, suggesting that many enveloped viruses may commonly utilize certain conserved cellular pathways. PMID:20444894
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
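The common technique the study benchmarks against can be sketched as follows; the calibration coefficients and pixel values below are hypothetical, not measured EBT3 data.

```python
import numpy as np

# Red-channel net optical density with a non-linear dose calibration of
# the widely used form d = a*netOD + b*netOD**n. Coefficients a, b, n and
# the scanner pixel values are assumed for illustration only.
def net_od(pv_exposed, pv_unexposed):
    return np.log10(pv_unexposed / pv_exposed)

a, b, n = 1000.0, 3000.0, 2.5               # assumed calibration constants
pv0 = 40000.0                               # unexposed red-channel pixel value
pv = np.array([36000.0, 30000.0, 22000.0])  # exposed film scans (darker = more dose)
nod = net_od(pv, pv0)
dose = a * nod + b * nod ** n               # calibrated dose, cGy
```

Darker film gives larger netOD and hence larger dose, and the power term captures the saturation of film response at high doses that a purely linear fit misses.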
Design Optimization of Composite Structures under Uncertainty
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
2003-01-01
Design optimization under uncertainty is computationally expensive and is also challenging in terms of alternative formulation. The work under the grant focused on developing methods for design against uncertainty that are applicable to composite structural design with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and simultaneous design of structure and inspection periods for fail-safe structures.
Singh, Nahar; Ahuja, Tarushee; Ojha, Vijay Narain; Soni, Daya; Tripathy, S Swarupa; Leito, Ivo
2013-01-01
As a result of rapid industrialization several chemical forms of organic and inorganic mercury are constantly introduced to the environment and affect humans and animals directly. All forms of mercury have toxic effects; therefore accurate measurement of mercury is of prime importance especially in suspended particulate matter (SPM) collected through high volume sampler (HVS). In the quantification of mercury in SPM samples several steps are involved from sampling to final result. The quality, reliability and confidence level of the analyzed data depend upon the measurement uncertainty of the whole process. Evaluation of measurement uncertainty of results is one of the requirements of the standard ISO/IEC 17025:2005 (European Standard EN IS/ISO/IEC 17025:2005, issue1:1-28, 2006). In the present study the uncertainty estimation in mercury determination in suspended particulate matter (SPM) has been carried out using the cold vapor Atomic Absorption Spectrometer-Hydride Generator (AAS-HG) technique followed by a wet chemical digestion process. For the calculation of uncertainty, we have considered many general potential sources of uncertainty. After the analysis of data from seven diverse sites of Delhi, it has been concluded that the mercury concentration varies from 1.59 ± 0.37 to 14.5 ± 2.9 ng/m³ with 95% confidence level (k = 2).
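The quadrature combination behind a result such as 14.5 ± 2.9 ng/m³ can be sketched as follows; the individual uncertainty sources and their magnitudes are assumed for illustration, not the study's actual budget.

```python
from math import sqrt

# GUM-style uncertainty budget sketch: relative standard uncertainties
# from assumed sources are combined in quadrature, then expanded with
# coverage factor k = 2 for approximately 95% confidence.
rel_u = {                       # hypothetical relative standard uncertainties
    "calibration": 0.06,
    "digestion recovery": 0.05,
    "sampled air volume": 0.04,
    "repeatability": 0.03,
}
value = 14.5                                    # mercury result, ng/m^3
u_combined = value * sqrt(sum(u * u for u in rel_u.values()))
U_expanded = 2.0 * u_combined                   # k = 2 expanded uncertainty
```

Combining in quadrature assumes the sources are independent; correlated sources would need covariance terms in the sum.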
Hayashi, Kiyotada; Nagumo, Yoshifumi; Domoto, Akiko
2016-11-15
In comparative life cycle assessments of agricultural production systems, analyses of both the trade-offs between environmental impacts and crop productivity and of the uncertainties specific to agriculture such as fluctuations in greenhouse gas (GHG) emissions and crop yields are crucial. However, these two issues are usually analyzed separately. In this paper, we present a framework to link trade-off and uncertainty analyses; correlated uncertainties are integrated into environment-productivity trade-off analyses. We compared three rice production systems in Japan: a system using a pelletized, nitrogen-concentrated organic fertilizer made from poultry manure using closed-air composting techniques (high-N system), a system using a conventional organic fertilizer made from poultry manure using open-air composting techniques (low-N system), and a system using a chemical compound fertilizer (conventional system). We focused on two important sources of uncertainties in paddy rice cultivation: methane emissions from paddy fields and crop yields. We found trade-offs between the conventional and high-N systems and the low-N system and the existence of positively correlated uncertainties in the conventional and high-N systems. We concluded that our framework is effective in recommending the high-N system compared with the low-N system, although the performance of the former is almost the same as the conventional system. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yishen; Zhou, Zhi; Liu, Cong
2016-08-01
As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factor and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are being scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.
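Why scheduling against scenarios rather than a point forecast pays off can be sketched with a toy two-scenario commitment problem; all costs, capacities, and probabilities below are hypothetical.

```python
# Toy stochastic unit commitment sketch: decide whether to commit one
# dispatchable unit before wind is realized, then pay for generation and
# a high penalty for any unserved demand. Stochastic UC picks the
# commitment minimizing expected cost over the wind scenarios.
scenarios = [(0.5, 60.0), (0.5, 20.0)]         # (probability, wind output MW)
demand, unit_cap = 100.0, 50.0                 # system demand, unit capacity
commit_cost, energy_cost, shortfall_cost = 500.0, 10.0, 100.0  # $/period, $/MW

def expected_cost(commit):
    total = 0.0
    for prob, wind in scenarios:
        gen = min(unit_cap, demand - wind) if commit else 0.0
        short = max(0.0, demand - wind - gen)  # unserved demand
        total += prob * (commit * commit_cost + energy_cost * gen
                         + shortfall_cost * short)
    return total

best_cost, best_commit = min((expected_cost(c), c) for c in (0, 1))
```

Here committing the unit roughly halves the expected cost because the shortfall penalty in the low-wind scenario dominates; the same logic, over many units and thousands of scenarios, is what makes decomposition techniques necessary at scale.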
76 FR 47564 - Procurement List; Additions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
...-117-9886--Envelope, Bubble Padded, 14-1/2" x 20". NSN: 8105-00-290-0340--Envelope, Macerated Paper Padded, 6" x 10". NSN: 8105-00-290-0343--Envelope, Macerated Paper Padded, 8-1/2" x 12". NSN: 8105-00-281-1168--Envelope, Macerated Paper Padded, 9-1/2" x 14-1/2". NSN: 8105-00-281-1436--Envelope...
Blanton, Brian; Dresback, Kendra; Colle, Brian; Kolar, Randy; Vergara, Humberto; Hong, Yang; Leonardo, Nicholas; Davidson, Rachel; Nozick, Linda; Wachtendorf, Tricia
2018-04-25
Hurricane track and intensity can change rapidly in unexpected ways, thus making predictions of hurricanes and related hazards uncertain. This inherent uncertainty often translates into suboptimal decision-making outcomes, such as unnecessary evacuation. Representing this uncertainty is thus critical in evacuation planning and related activities. We describe a physics-based hazard modeling approach that (1) dynamically accounts for the physical interactions among hazard components and (2) captures hurricane evolution uncertainty using an ensemble method. This loosely coupled model system provides a framework for probabilistic water inundation and wind speed levels for a new, risk-based approach to evacuation modeling, described in a companion article in this issue. It combines the Weather Research and Forecasting (WRF) meteorological model, the Coupled Routing and Excess STorage (CREST) hydrologic model, and the ADvanced CIRCulation (ADCIRC) storm surge, tide, and wind-wave model to compute inundation levels and wind speeds for an ensemble of hurricane predictions. Perturbations to WRF's initial and boundary conditions and different model physics/parameterizations generate an ensemble of storm solutions, which are then used to drive the coupled hydrologic + hydrodynamic models. Hurricane Isabel (2003) is used as a case study to illustrate the ensemble-based approach. The inundation, river runoff, and wind hazard results are strongly dependent on the accuracy of the mesoscale meteorological simulations, which improves with decreasing lead time to hurricane landfall. The ensemble envelope brackets the observed behavior while providing "best-case" and "worst-case" scenarios for the subsequent risk-based evacuation model. © 2018 Society for Risk Analysis.
Rapid shelf-wide cooling response of a stratified coastal ocean to hurricanes.
Seroka, Greg; Miles, Travis; Xu, Yi; Kohut, Josh; Schofield, Oscar; Glenn, Scott
2017-06-01
Large uncertainty in the predicted intensity of tropical cyclones (TCs) persists compared to the steadily improving skill in the predicted TC tracks. This intensity uncertainty has its most significant implications in the coastal zone, where TC impacts to populated shorelines are greatest. Recent studies have demonstrated that rapid ahead-of-eye-center cooling of a stratified coastal ocean can have a significant impact on hurricane intensity forecasts. Using observation-validated, high-resolution ocean modeling, the stratified coastal ocean cooling processes observed in two U.S. Mid-Atlantic storms were investigated: Hurricane Irene (2011), with an inshore Mid-Atlantic Bight (MAB) track during the late-summer stratified coastal ocean season, and Tropical Storm Barry (2007), with an offshore track during early summer. For both storms, the critical ahead-of-eye-center depth-averaged force balance across the entire MAB shelf included an onshore wind stress balanced by an offshore pressure gradient. This resulted in onshore surface currents opposing offshore bottom currents that enhanced surface-to-bottom current shear and turbulent mixing across the thermocline, resulting in the rapid cooling of the surface layer ahead-of-eye-center. Because the same baroclinic and mixing processes occurred for two storms on opposite ends of the track and seasonal stratification envelope, the response appears robust. It will be critical to forecast these processes and their implications for a wide range of future storms using realistic 3-D coupled atmosphere-ocean models to lower the uncertainty in predictions of TC intensities and impacts and enable coastal populations to better respond to increasing rapid intensification threats in an era of rising sea levels.
Uncertainties in atmospheric muon-neutrino fluxes arising from cosmic-ray primaries
NASA Astrophysics Data System (ADS)
Evans, Justin; Garcia Gamez, Diego; Porzio, Salvatore Davide; Söldner-Rembold, Stefan; Wren, Steven
2017-01-01
We present an updated calculation of the uncertainties on the atmospheric muon-neutrino flux arising from cosmic-ray primaries. For the first time, we include recent measurements of the cosmic-ray primaries collected since 2005. We apply a statistical technique that allows the determination of correlations between the parameters of the Gaisser, Stanev, Honda, and Lipari primary-flux parametrization and the incorporation of these correlations into the uncertainty on the muon-neutrino flux. We obtain an uncertainty related to the primary cosmic rays of around (5-15)%, depending on energy, which is about a factor of 2 smaller than the previously determined uncertainty. The hadron production uncertainty is added in quadrature to obtain the total uncertainty on the neutrino flux, which is reduced by ≈5 % . To take into account an unexpected hardening of the spectrum of primaries above energies of 100 GeV observed in recent measurements, we propose an alternative parametrization and discuss its impact on the neutrino flux uncertainties.
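Adding the hadron-production uncertainty "in quadrature" to the primary-flux uncertainty is a root-sum-square of independent components. A one-line sketch with illustrative numbers (not the paper's values):

```python
import math

def combine_in_quadrature(*components):
    """Root-sum-square combination of independent uncertainty components."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative fractional uncertainties: 10% from primaries, 8% from hadron production
total = combine_in_quadrature(0.10, 0.08)   # sqrt(0.01 + 0.0064) ~ 12.8%
```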
NASA Astrophysics Data System (ADS)
Singh, Vinay Kumar; Dalal, U. D.
2017-10-01
We present an optical OFDM system for Visible Light Communication (VLC), intended for indoor applications, that uses a nonconventional transform, the Fast Hartley Transform, together with an effective method for reducing the peak-to-average power ratio (PAPR) of the OFDM signal based on frequency modulation, which yields a constant-envelope (CE) signal. The proposed system is described by a complete mathematical model and verified by simulation. The nonconventional transform makes the system computationally attractive because it does not require the Hermitian symmetry constraint to yield real signals. Frequency modulation of the baseband signal converts its random peaks into a CE signal, alleviating the nonlinearity of the LED used in the link for electrical-to-optical conversion. The PAPR is reduced to 2 dB by this technique. The impact of the modulation index on system performance is also investigated; an optimum modulation depth of 30% gives the best results. The additional phase discontinuity incurred by the demodulated signal at the receiver is also significantly reduced, and a comparison of this improvement against the previously known phase-modulation technique for combating PAPR is presented. Based on channel metrics, we evaluate system performance and report an improvement of 1.2 dB at the FEC threshold. The proposed system is simple in design and computationally efficient, and it can be incorporated into present VLC systems without much alteration, making it a cost-effective solution.
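The PAPR reduction claimed above rests on the fact that a frequency-modulated carrier has constant modulus, so its peak and average powers coincide. A sketch of that effect using a generic IFFT baseband and an arbitrary modulation index (not the authors' FHT-based system):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
n = 1024
# Multicarrier baseband: the IFFT of many independent symbols has large random peaks
symbols = rng.choice([-1.0, 1.0], size=n) + 1j * rng.choice([-1.0, 1.0], size=n)
ofdm = np.fft.ifft(symbols)

# Frequency-modulating the (real) baseband puts the information in the phase,
# so the envelope is constant by construction (0.3 is an arbitrary index here)
phase = 2 * np.pi * 0.3 * np.cumsum(ofdm.real / np.abs(ofdm.real).max())
fm = np.exp(1j * phase)

high = papr_db(ofdm)   # several dB of PAPR
low = papr_db(fm)      # 0 dB: constant envelope
```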
Davey, Sanjeev; Raghav, Santosh Kumar; Singh, Jai Vir; Davey, Anuradha; Singh, Nirankar
2015-01-01
Background: The evaluation of primary healthcare services provided by the health training centers of a private medical college has not been studied in comparison with government health facilities in the Indian context. Data envelopment analysis (DEA) is a technique of operations research that can be applied to health facilities to identify efficient operating practices and strategies for relatively efficient or inefficient health centers by calculating their efficiency scores. Materials and Methods: This study was carried out using the DEA technique with basic radial models (constant returns to scale, CRS) in linear programming via the free online DEAOS software, among four decision-making units (DMUs), comparing the efficiency of two private health centers of a private medical college of India with two public health centers in district Muzaffarnagar of the state of Uttar Pradesh. The input and output records of all these health facilities (two private and two government) over a 6-month period, from 1st Jan 2014 to 1st July 2014, were used to determine their efficiency scores. Results: The efficiency scores of primary healthcare services in terms of presence of doctors (100 vs 30%) and presence of health staff (100 vs 92%) were significantly better for government health facilities than for private health facilities (P < 0.0001). Conclusions: The evaluation of primary healthcare service delivery by the DEA technique reveals that the government health facilities group was more efficient in delivering primary healthcare services than the private training health facilities group, which can be further clarified by more in-depth studies in the future. PMID:26435598
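The CRS (CCR) radial model behind such efficiency scores is a small linear program solved once per DMU. A sketch using SciPy rather than the DEAOS software, on a toy one-input/one-output data set (not the study's health-facility data):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR (constant returns to scale) DEA score of DMU k.
    X: (n_dmus, n_inputs) inputs, Y: (n_dmus, n_outputs) outputs.
    Solves: min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
                             sum_j lam_j y_j >= y_k,  lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    A_in = np.c_[-X[k][:, None], X.T]               # inputs scaled by theta
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # outputs at least y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Toy data: DMU 0 produces 2 units of output per input, DMU 1 only 1 per input
X = np.array([[1.0], [2.0]])
Y = np.array([[2.0], [2.0]])
scores = [ccr_efficiency(X, Y, k) for k in range(2)]   # ~ [1.0, 0.5]
```

DMU 0 sits on the efficiency frontier (score 1); DMU 1 could produce its output with half its input, hence 0.5.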
NASA Technical Reports Server (NTRS)
Kojima, Jun; Nguyen, Quang-Viet
2007-01-01
An alternative optical thermometry technique that utilizes the low-resolution (order 10 cm(exp -1)) pure-rotational spontaneous Raman scattering of air is developed to aid single-shot multiscalar measurements in turbulent combustion studies. Temperature measurements are realized by correlating the measured envelope bandwidth of the pure-rotational manifold of the N2/O2 spectrum with a theoretical prediction of a species-weighted bandwidth. By coupling this thermometry technique with conventional vibrational Raman scattering for species determination, we demonstrate quantitative spatially resolved, single-shot measurements of the temperature and fuel/oxidizer concentrations in a high-pressure turbulent CH4-air flame. Our technique not only provides an effective means of validating other temperature measurement methods, but also serves as a secondary thermometry technique in cases where the anti-Stokes vibrational N2 Raman signals are too low for a conventional vibrational temperature analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T
2015-01-01
Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case, and the published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) committee investigated the use of acoustic transit time (ATT), acoustic scintillation (AS), and current meter (CM) methods in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainty, namely those associated with turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters, are considered for evaluation. Since the velocity measurements in a short converging intake are associated with complex nonlinear and time-varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance, while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (the random flow generation technique [8]), initially developed to establish upstream or initial conditions in large-eddy simulation (LES) and direct numerical simulation (DNS), is used to statistically determine the uncertainties associated with turbulence and velocity fluctuations.
This technique is then combined with a bi-cubic spline interpolation method, which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently, the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors consider the statistics of the flow rates generated with bi-cubic interpolation and sensor simulations to be combined uncertainties that already account for the effects of all three uncertainty sources. A preliminary analysis based on current meter data obtained through an upgrade acceptance test of a single unit located in a mainstem plant is presented.
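The bi-cubic-spline velocity-area integration step can be sketched as fitting a spline to gridded current-meter readings and integrating it over the section. The grid and velocities below are invented, not Kootenay Canal data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def discharge_from_grid(y, z, v):
    """Velocity-area discharge: fit a bi-cubic spline to current-meter point
    velocities sampled on a (y, z) grid and integrate it over the section."""
    spline = RectBivariateSpline(y, z, v, kx=3, ky=3)
    return spline.integral(y[0], y[-1], z[0], z[-1])

# Invented 5x5 grid over a 10 m wide by 4 m deep section, uniform 2 m/s flow
y = np.linspace(0.0, 10.0, 5)
z = np.linspace(0.0, 4.0, 5)
v = np.full((5, 5), 2.0)
Q = discharge_from_grid(y, z, v)   # 2 m/s * 40 m^2 = 80 m^3/s
```

In a Monte Carlo uncertainty study, `v` would be perturbed per realization (e.g., with synthetic turbulence) and the spread of `Q` taken as the combined uncertainty.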
An improved switching converter model. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Shortt, D. J.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.
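The averaging half of such a converter model can be sketched for a buck converter in continuous conduction mode: the switch dynamics are replaced by their duty-cycle-weighted average, and the simulation tracks the envelope of the output voltage, which settles at duty * Vin. This is a generic illustration, not the thesis's discrete average model:

```python
def averaged_buck(vin, duty, L, C, R, Ts, steps):
    """Forward-Euler integration of the state-space *averaged* buck converter
    in continuous conduction mode. The averaged states follow the envelope of
    the output voltage rather than the switching ripple."""
    i, v = 0.0, 0.0                    # averaged inductor current, capacitor voltage
    for _ in range(steps):
        di = (duty * vin - v) / L      # averaged inductor equation
        dv = (i - v / R) / C           # capacitor / load equation
        i += Ts * di
        v += Ts * dv
    return v

# 12 V input at 50% duty should settle near 6 V
vout = averaged_buck(vin=12.0, duty=0.5, L=1e-3, C=1e-4, R=10.0,
                     Ts=1e-6, steps=50000)
```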
Flight test derived heating math models for critical locations on the orbiter during reentry
NASA Technical Reports Server (NTRS)
Hertzler, E. K.; Phillips, P. W.
1983-01-01
An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.
Characterizing sources of uncertainty from global climate models and downscaling techniques
Wootten, Adrienne; Terando, Adam; Reich, Brian J.; Boyles, Ryan; Semazzi, Fred
2017-01-01
In recent years climate model experiments have been increasingly oriented towards providing information that can support local and regional adaptation to the expected impacts of anthropogenic climate change. This shift has magnified the importance of downscaling as a means to translate coarse-scale global climate model (GCM) output to a finer scale that more closely matches the scale of interest. Applying this technique, however, introduces a new source of uncertainty into any resulting climate model ensemble. Here we present a method, based on a previously established variance decomposition method, to partition and quantify the uncertainty in climate model ensembles that is attributable to downscaling. We apply the method to the Southeast U.S. using five downscaled datasets that represent both statistical and dynamical downscaling techniques. The combined ensemble is highly fragmented, in that only a small portion of the complete set of downscaled GCMs and emission scenarios are typically available. The results indicate that the uncertainty attributable to downscaling approaches ~20% for large areas of the Southeast U.S. for precipitation and ~30% for extreme heat days (> 35°C) in the Appalachian Mountains. However, attributable quantities are significantly lower for time periods when the full ensemble is considered but only a sub-sample of all models are available, suggesting that overconfidence could be a serious problem in studies that employ a single set of downscaled GCMs. We conclude with recommendations to advance the design of climate model experiments so that the uncertainty that accrues when downscaling is employed is more fully and systematically considered.
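The variance-partitioning idea, splitting ensemble spread into GCM and downscaling contributions, can be sketched with a simple ANOVA-style decomposition on a full factorial ensemble. This toy additive table is illustrative, not the cited method's exact estimator:

```python
import numpy as np

def partition_variance(proj):
    """ANOVA-style split of ensemble variance for a full factorial ensemble.
    proj: (n_gcms, n_methods) array of a projected quantity, with every
    GCM x downscaling-method combination present. Returns the fractions of
    total variance attributable to GCMs, to downscaling, and to the residual
    (interaction) term."""
    total = np.var(proj)
    gcm = np.var(proj.mean(axis=1))     # spread of GCM means
    down = np.var(proj.mean(axis=0))    # spread of downscaling-method means
    resid = total - gcm - down
    return gcm / total, down / total, resid / total

# Toy additive table: 2 GCMs x 3 downscaling methods
proj = np.array([[0.0, 1.0, 2.0],
                 [2.0, 3.0, 4.0]])
f_gcm, f_down, f_resid = partition_variance(proj)   # 0.6, 0.4, 0.0
```

With a fragmented (incomplete) ensemble, missing combinations must be imputed or sub-sampled before such a split, which is where the overconfidence risk discussed above enters.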
Feature Extraction for Bearing Prognostics and Health Management (PHM) - A Survey (Preprint)
2008-05-01
Envelope analysis • Cepstrum analysis • Higher-order spectrum • Short-time Fourier transform (STFT) • Wigner-Ville distribution (WVD) • Empirical mode... techniques are the short-time Fourier transform (STFT), the Wigner-Ville distribution, and the wavelet transform. In this paper we categorize wavelets... diagnosis have shown in many publications, for example, [22]. b) Wigner-Ville distribution: The afore-mentioned STFT is conceptually simple. However
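Envelope analysis, the first technique in the list above, demodulates a resonance with the Hilbert transform and inspects the envelope's spectrum for the fault repetition rate. A minimal sketch on a synthetic amplitude-modulated signal (not real bearing data):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Envelope analysis: take the magnitude of the analytic signal (Hilbert
    transform), remove its mean, and return the spectrum of the envelope."""
    env = np.abs(hilbert(x))
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, spec

fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
# A 3 kHz structural resonance amplitude-modulated at a 120 Hz "defect" rate
x = (1.0 + 0.8 * np.cos(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
fault_hz = freqs[np.argmax(spec)]   # peak of the envelope spectrum, near 120 Hz
```

The raw spectrum of `x` shows only the 3 kHz carrier and sidebands; the envelope spectrum exposes the modulation rate directly, which is why the technique suits bearing diagnostics.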
Wave envelope technique for multimode wave guide problems
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Sudharsanan, S. I.
1986-01-01
A fast method for solving wave guide problems is proposed. In particular, the guide is considered to be inhomogeneous allowing propagation of waves of higher order modes. Such problems have been handled successfully for acoustic wave propagation problems with single mode and finite length. This paper extends this concept to electromagnetic wave guides with several modes and infinite length. The method is described and results of computations are presented.
NASA Technical Reports Server (NTRS)
Harrison, Phil; LaVerde, Bruce; Teague, David
2009-01-01
Although applications of Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel. Finally, two approaches for converting the statistical average response results output from an SEA analysis into a more useful envelope of response spectra, appropriate for specifying design and test vibration levels for a new vehicle, were evaluated and compared.
A Novel Approach to Measuring Efficiency of Scientific Research Projects: Data Envelopment Analysis
Zell, Adrienne; Orwoll, Eric
2015-01-01
Abstract Purpose: Measuring the efficiency of resource allocation for the conduct of scientific projects in medical research is difficult due to, among other factors, the heterogeneity of resources supplied (e.g., dollars or FTEs) and outcomes expected (e.g., grants, publications). While this is an issue in medical science, it has been approached successfully in other fields by using data envelopment analysis (DEA). DEA has a number of advantages over other techniques, as it simultaneously uses multiple heterogeneous inputs and outputs to determine which projects are performing most efficiently (referred to as being at the efficiency frontier) when compared to others in the data set. Method: This research uses DEA for the evaluation of translational science projects supported by the Oregon Clinical and Translational Research Institute (OCTRI), an NCATS Clinical & Translational Science Award (CTSA) recipient. Results: These results suggest that the primary determinant of overall project efficiency at OCTRI is the amount of funding, with smaller amounts of funding providing more efficiency than larger funding amounts. Conclusion: These results, and the use of DEA, highlight both the success of using this technique in helping determine medical research efficiency and the factors to consider when distributing funds for new projects at CTSAs. PMID:26243147
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an l1 penalty term on the parameter vector to the traditional l2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems, which produces some model parameters that are exactly zero and therefore yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
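The l1-penalised least-squares problem the LASSO solves can be sketched with plain iterative soft-thresholding (ISTA) on synthetic data; the flight-test data and the study's actual solver are not assumed:

```python
import numpy as np

def lasso_ista(X, y, lam, iters=2000):
    """LASSO by iterative soft-thresholding (ISTA): minimise
    ||y - X b||^2 / (2n) + lam * ||b||_1.
    The l1 penalty drives some parameters exactly to zero, which is what
    makes the LASSO usable for structure detection."""
    n = X.shape[0]
    b = np.zeros(X.shape[1])
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft threshold
    return b

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))        # 10 candidate model terms
true = np.zeros(10)
true[0], true[3] = 2.0, -1.5               # only two terms really active
y = X @ true + 0.01 * rng.standard_normal(1000)
b = lasso_ista(X, y, lam=0.2)
selected = np.nonzero(np.abs(b) > 1e-3)[0]  # detected model structure
```

The recovered coefficients are shrunk toward zero by roughly `lam` (the usual LASSO bias); a common follow-up is to refit ordinary least squares on the selected terms only.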
Birkefeld, Anja Britta; Bertermann, Rüdiger; Eckert, Hellmut; Pfleiderer, Bettina
2003-01-01
To investigate aging processes of silicone gel breast implants, which may include migration of free unreacted material from the gel and rubber to local (e.g. connective tissue capsule) or distant sites in the body, chemical alteration of the polymer and infiltration of body compounds, various approaches of multinuclear nuclear magnetic resonance (NMR) experiments (29Si, 13C, 1H) were evaluated. While 29Si, 13C, and 1H solid-state magic angle spinning (MAS) NMR techniques performed on virgin and explanted envelopes of silicone prostheses provided only limited information, high-resolution liquid-state NMR techniques of CDCl(3) extracts were highly sensitive analytical tools for the detection of aging related changes in the materials. Using 2D 1H, 1H correlation spectroscopy (COSY) and 29Si, 1H heteronuclear multiple bond coherence (HMBC) experiments with gradient selection, it was possible to detect lipids (mainly phospholipids) as well as silicone oligomer species in explanted envelopes and gels. Silicone oligomers were also found in connective tissue capsules, indicating that cyclic polysiloxanes can migrate from intact implants to adjacent and distant sites. Furthermore, lipids can permeate the implant and modify its chemical composition. Copyright 2002 Elsevier Science Ltd.
Kenouche, S; Perrier, M; Bertin, N; Larionova, J; Ayadi, A; Zanca, M; Long, J; Bezzi, N; Stein, P C; Guari, Y; Cieslak, M; Godin, C; Goze-Bac, C
2014-12-01
Nondestructive studies of physiological processes in agronomic products require increasingly higher spatial and temporal resolutions. Nuclear Magnetic Resonance (NMR) imaging is a non-invasive technique providing physiological and morphological information on biological tissues. The aim of this study was to design a robust and accurate quantitative measurement method based on NMR imaging combined with contrast agent (CA) for mapping and quantifying water transport in growing cherry tomato fruits. A multiple flip-angle Spoiled Gradient Echo (SGE) imaging sequence was used to evaluate the intrinsic parameters maps M0 and T1 of the fruit tissues. Water transport and paths flow were monitored using Gd(3+)/[Fe(CN)6](3-)/D-mannitol nanoparticles as a tracer. This dynamic study was carried out using a compartmental modeling. The CA was preferentially accumulated in the surrounding tissues of columella and in the seed envelopes. The total quantities and the average volume flow of water estimated are: 198 mg, 1.76 mm(3)/h for the columella and 326 mg, 2.91 mm(3)/h for the seed envelopes. We demonstrate in this paper that the NMR imaging technique coupled with efficient and biocompatible CA in physiological medium has the potential to become a major tool in plant physiology research. Copyright © 2014 Elsevier Inc. All rights reserved.
Screening mail for powders using terahertz technology
NASA Astrophysics Data System (ADS)
Kemp, Mike
2011-11-01
Following the 2001 Anthrax letter attacks in the USA, there has been a continuing interest in techniques that can detect or identify so-called 'white powder' concealed in envelopes. Electromagnetic waves (wavelengths 100-500 μm) in the terahertz frequency range penetrate paper and have short enough wavelengths to provide good resolution images; some materials also have spectroscopic signatures in the terahertz region. We report on an experimental study into the use of terahertz imaging and spectroscopy for mail screening. Spectroscopic signatures of target powders were measured and, using a specially designed test rig, a number of imaging methods based on reflection, transmission and scattering were investigated. It was found that, contrary to some previous reports, bacterial spores do not appear to have any strong spectroscopic signatures which would enable them to be identified. Imaging techniques based on reflection imaging and scattering are ineffective in this application, due to the similarities in optical properties between powders of interest and paper. However, transmission imaging using time-of-flight of terahertz pulses was found to be a very simple and sensitive method of detecting small quantities (25 mg) of powder, even in quite thick envelopes. An initial feasibility study indicates that this method could be used as the basis of a practical mail screening system.
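The time-of-flight sensitivity is easy to quantify: a dielectric layer of refractive index n and thickness d delays a terahertz pulse by (n - 1)d/c relative to free space. Illustrative numbers, not the paper's measured powders:

```python
def thz_pulse_delay_ps(n, thickness_mm):
    """Extra time-of-flight (in ps) a terahertz pulse accumulates crossing a
    layer of refractive index n, relative to the same path in free space."""
    c_mm_per_ps = 0.2998           # speed of light in mm/ps
    return (n - 1.0) * thickness_mm / c_mm_per_ps

# A hypothetical 1 mm powder layer with n = 1.5: about 1.7 ps of added delay,
# well within the resolution of pulsed time-domain terahertz systems
delay = thz_pulse_delay_ps(1.5, 1.0)
```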
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design-under-uncertainty efforts. In this study, multifidelity describes both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both show that the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
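Non-intrusive polynomial chaos by point collocation amounts to a least-squares fit of orthogonal-polynomial coefficients to model samples, from which the output mean and variance follow analytically. A sketch for a single uniform input with Legendre polynomials (a toy model, not the airfoil or aircraft problems):

```python
import numpy as np
from numpy.polynomial import legendre

def nipc_point_collocation(model, xi, order):
    """Non-intrusive polynomial chaos by point collocation: least-squares fit
    of Legendre-chaos coefficients to model evaluations at samples of the
    uncertain input xi ~ Uniform(-1, 1)."""
    V = legendre.legvander(xi, order)          # columns P_0(xi) .. P_order(xi)
    c, *_ = np.linalg.lstsq(V, model(xi), rcond=None)
    mean = c[0]                                # only P_0 has nonzero expectation
    k = np.arange(1, order + 1)
    var = np.sum(c[1:] ** 2 / (2 * k + 1))     # E[P_k^2] = 1/(2k+1) for U(-1,1)
    return mean, var

rng = np.random.default_rng(2)
xi = rng.uniform(-1, 1, 50)                    # oversampled collocation points
mean, var = nipc_point_collocation(lambda x: x ** 2, xi, order=4)
# Exact moments of x^2 for x ~ U(-1,1): mean 1/3, variance 4/45
```

A multifidelity variant would fit a low-fidelity expansion plus a (cheaper-to-resolve) correction expansion from a few high-fidelity samples.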
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
NASA Technical Reports Server (NTRS)
Wang, T.; Simon, T. W.
1988-01-01
Development of a recent experimental program to investigate the effects of streamwise curvature on boundary layer transition required making a bendable, heated and instrumented test wall, a rather nonconventional surface. The present paper describes this surface, the design choices made in its development and how uncertainty analysis was used, beginning early in the test program, to make such design choices. Published uncertainty analysis techniques were found to be of great value; but, it became clear that another step, one herein called the pre-test analysis, would aid the program development. Finally, it is shown how the uncertainty analysis was used to determine whether the test surface was qualified for service.
Sousa, Ivanildo P; Carvalho, Carlos A M; Ferreira, Davis F; Weissmüller, Gilberto; Rocha, Gustavo M; Silva, Jerson L; Gomes, Andre M O
2011-01-21
Alphaviruses are enveloped arboviruses. The viral envelope is derived from the host cell and is positioned between two icosahedral protein shells (T = 4). Because the viral envelope contains glycoproteins involved in cell recognition and entry, the integrity of the envelope is critical for the success of the early events of infection. Differing levels of cholesterol in different hosts lead to the production of alphaviruses with distinct levels of this sterol loaded in the envelope. Using Mayaro virus, a New World alphavirus, we investigated the role of cholesterol on the envelope of alphavirus particles assembled in either mammalian or mosquito cells. Our results show that although quite different in their cholesterol content, Mayaro virus particles obtained from both cells share a similar high level of lateral organization in their envelopes. This organization, as well as viral stability and infectivity, is severely compromised when cholesterol is depleted from the envelope of virus particles isolated from mammalian cells, but virus particles isolated from mosquito cells are relatively unaffected by cholesterol depletion. We suggest that it is not cholesterol itself, but rather the organization of the viral envelope, that is critical for the biological activity of alphaviruses.
Energizing the last phase of common-envelope removal
NASA Astrophysics Data System (ADS)
Soker, Noam
2017-11-01
We propose a scenario where a companion that is about to exit a common-envelope evolution (CEE) with a giant star accretes mass from the remaining envelope outside its deep orbit and launches jets that facilitate the removal of the remaining envelope. The jets that the accretion disc launches collide with the envelope and form hot bubbles that energize the envelope. Due to gravitational interaction with the envelope, which might reside in a circumbinary disc, the companion migrates farther in, but the inner boundary of the circumbinary disc continues to feed the accretion disc. While near the equatorial plane mass leaves the system at a very low velocity, along the polar directions velocities are very high. When the primary is an asymptotic giant branch star, this type of flow forms a bipolar nebula with very narrow waists. We compare this envelope-removal process with four other last-phase common-envelope-removal processes. We also note that the accreted gas from the envelope outside the orbit in the last phase of the CEE might carry with it angular momentum that is anti-aligned to the orbital angular momentum. We discuss the implications for the possibly anti-aligned spins of the merging black hole event GW170104.
Sano, Kaori; Kawaguchi, Mari; Katano, Keita; Tomita, Kenji; Inokuchi, Mayu; Nagasawa, Tatsuki; Hiroi, Junya; Kaneko, Toyoji; Kitagawa, Takashi; Fujimoto, Takafumi; Arai, Katsutoshi; Tanaka, Masaru; Yasumasu, Shigeki
2017-05-01
Teleost egg envelope generally consists of a thin outer layer and a thick inner layer. The inner layer of the Pacific herring egg envelope is further divided into distinct inner layers I and II. In our previous study, we cloned four zona pellucida (ZP) proteins (HgZPBa, HgZPBb, HgZPCa, and HgZPCb) from Pacific herring, two of which (HgZPBa and HgZPCa) were synthesized in the liver and two (HgZPBb and HgZPCb) in the ovary. In this study, we raised antibodies against these four proteins to identify their locations using immunohistochemistry. Our results suggest that inner layer I is constructed primarily of HgZPBa and Ca, whereas inner layer II consists primarily of HgZPBa. HgZPBb and Cb were minor components of the envelope. Therefore, the egg envelope of Pacific herring is primarily composed of liver-synthesized ZP proteins. A comparison of the thickness of the fertilized egg envelopes of 55 species suggested that egg envelopes derived from liver-synthesized ZP proteins tended to be thicker in demersal eggs than those in pelagic eggs, whereas egg envelopes derived from ovarian-synthesized ZP proteins had no such tendency. Our comparison suggests that the prehatching period of an egg with a thick egg envelope is longer than that of an egg with a thin egg envelope. We hypothesized that acquisition of liver-synthesized ZP proteins during evolution conferred the ability to develop a thick egg envelope, which allowed species with demersal eggs to adapt to mechanical stress in the prehatching environment by thickening the egg envelope, while pelagic egg envelopes have remained thin. © 2017 Wiley Periodicals, Inc.
Arthos, James; Rubbert, Andrea; Rabin, Ronald L.; Cicala, Claudia; Machado, Elizabeth; Wildt, Kathryne; Hanbach, Meredith; Steenbeke, Tavis D.; Swofford, Ruth; Farber, Joshua M.; Fauci, Anthony S.
2000-01-01
The capacity of human immunodeficiency virus (HIV) and simian immunodeficiency virus (SIV) envelopes to transduce signals through chemokine coreceptors on macrophages was examined by measuring the ability of recombinant envelope proteins to mobilize intracellular calcium stores. Both HIV and SIV envelopes mobilized calcium via interactions with CCR5. The kinetics of these responses were similar to those observed when macrophages were treated with MIP-1β. Distinct differences in the capacity of envelopes to mediate calcium mobilization were observed. Envelopes derived from viruses capable of replicating in macrophages mobilized relatively high levels of calcium, while envelopes derived from viruses incapable of replicating in macrophages mobilized relatively low levels of calcium. The failure to efficiently mobilize calcium was not restricted to envelopes derived from CXCR4-utilizing isolates but also included envelopes derived from CCR5-utilizing isolates that fail to replicate in macrophages. We characterized one CCR5-utilizing isolate, 92MW959, which entered macrophages but failed to replicate. A recombinant envelope derived from this virus mobilized low levels of calcium. When macrophages were inoculated with 92MW959 in the presence of MIP-1α, viral replication was observed, indicating that a CC chemokine-mediated signal provided the necessary stimulus to allow the virus to complete its replication cycle. Although the role that envelope-CCR5 signal transduction plays in viral replication is not yet understood, it has been suggested that envelope-mediated signals facilitate early postfusion events in viral replication. The data presented here are consistent with this hypothesis and suggest that the differential capacity of viral envelopes to signal through CCR5 may influence their ability to replicate in macrophages. PMID:10864653
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. High-performance computing techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
Rotorcraft control system design for uncertain vehicle dynamics using quantitative feedback theory
NASA Technical Reports Server (NTRS)
Hess, R. A.
1994-01-01
Quantitative Feedback Theory describes a frequency-domain technique for the design of multi-input, multi-output control systems which must meet time or frequency domain performance criteria when specified uncertainty exists in the linear description of the vehicle dynamics. This theory is applied to the design of the longitudinal flight control system for a linear model of the BO-105C rotorcraft. Uncertainty in the vehicle model is due to the variation in the vehicle dynamics over a range of airspeeds from 0-100 kts. For purposes of exposition, the vehicle description contains no rotor or actuator dynamics. The design example indicates the manner in which significant uncertainty exists in the vehicle model. The advantage of using a sequential loop closure technique to reduce the cost of feedback is demonstrated by example.
NASA Astrophysics Data System (ADS)
Kowalewski, M. G.; Janz, S. J.
2015-02-01
Methods of absolute radiometric calibration of backscatter ultraviolet (BUV) satellite instruments are compared as part of an effort to minimize pre-launch calibration uncertainties. An internally illuminated integrating sphere source has been used for the Shuttle Solar BUV, Total Ozone Mapping Spectrometer, Ozone Mapping Instrument, and Global Ozone Monitoring Experiment 2 using standardized procedures traceable to national standards. These sphere-based spectral responsivities agree to within the derived combined standard uncertainty of 1.87% relative to calibrations performed using an external diffuser illuminated by standard irradiance sources, the customary spectral radiance responsivity calibration method for BUV instruments. The combined standard uncertainty for these calibration techniques as implemented at the NASA Goddard Space Flight Center’s Radiometric Calibration and Development Laboratory is shown to be less than 2% at 250 nm when using a single traceable calibration standard.
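As a sketch of how a combined standard uncertainty such as the figures above is formed: independent uncertainty components are conventionally combined in quadrature (root-sum-square, per the GUM). The component values below are hypothetical, not the actual budget of the Radiometric Calibration and Development Laboratory.

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-square (quadrature) combination of independent standard uncertainties."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical relative standard uncertainties (%) for a radiance responsivity
# calibration: sphere radiance standard, transfer repeatability, stray light,
# wavelength registration.
components = [1.5, 0.8, 0.6, 0.4]
u_c = combined_standard_uncertainty(components)
print(f"combined standard uncertainty: {u_c:.2f}%")  # 1.85%
```

Because the combination is in quadrature, the largest component dominates: halving any of the smaller terms barely moves the total.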
NASA Astrophysics Data System (ADS)
Hamidi, Mohammadreza; Shahanaghi, Kamran; Jabbarzadeh, Armin; Jahani, Ehsan; Pousti, Zahra
2017-12-01
Every production plant needs an estimate of its production level, and many parameters can affect that estimate. In this paper, we seek an appropriate estimate of the production level for an industrial factory called Barez operating in an uncertain environment. We consider a part of the production line that has different production times for different kinds of products, which introduces both environmental and system uncertainty. To solve the problem, we simulated the line, and because of the uncertainty in the production times, fuzzy simulation was used. The required fuzzy numbers were estimated using the bootstrap technique. The results were used in the production planning process by factory experts and have had satisfying consequences. The experts' opinions about the efficiency of using this methodology are also included.
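A minimal sketch of estimating fuzzy numbers by bootstrap, as mentioned above. The triangular fuzzy number built from percentiles of bootstrap means, and the cycle-time data, are illustrative assumptions, not the paper's actual procedure or data.

```python
import random

def bootstrap_triangular_fuzzy(samples, n_boot=2000, seed=42):
    """Estimate a triangular fuzzy number (low, mode, high) for the mean of
    observed production times via bootstrap resampling of the sample mean."""
    rng = random.Random(seed)
    n = len(samples)
    means = sorted(
        sum(rng.choice(samples) for _ in range(n)) / n for _ in range(n_boot)
    )
    low = means[int(0.025 * n_boot)]   # 2.5th percentile of bootstrap means
    mode = means[n_boot // 2]          # median as the modal value
    high = means[int(0.975 * n_boot)]  # 97.5th percentile
    return low, mode, high

# Hypothetical cycle times (minutes) for one product type on the line.
times = [4.1, 4.4, 3.9, 4.6, 4.2, 4.8, 4.0, 4.3, 4.5, 4.2]
low, mode, high = bootstrap_triangular_fuzzy(times)
print(f"triangular fuzzy number: ({low:.2f}, {mode:.2f}, {high:.2f})")
```

The resulting triple can then be fed to a fuzzy simulation as the membership-function support and peak for that product's processing time.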
Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties
NASA Astrophysics Data System (ADS)
Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.
2014-10-01
We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of some travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first a station calibration by the computation of the hardware delays of the local GPS equipment; second the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. This last technique is called a ‘link’ calibration, with respect to the other one, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.
Autistic Heterogeneity: Linking Uncertainties and Indeterminacies
Hollin, Gregory
2017-01-01
Abstract Autism is a highly uncertain entity and little is said about it with any degree of certainty. Scientists must, and do, work through these uncertainties in the course of their work. Scientists explain uncertainty in autism research through discussion of epistemological uncertainties which suggest that diverse methods and techniques make results hard to reconcile, ontological uncertainties which suggest doubt over taxonomic coherence, but also through reference to autism’s indeterminacy which suggests that the condition is inherently heterogeneous. Indeed, indeterminacy takes two forms—an inter-personal form which suggests that there are fundamental differences between individuals with autism and an intra-personal form which suggests that no one factor is able to explain all features of autism within a given individual. What is apparent in the case of autism is that scientists put uncertainty and indeterminacy into discussion with one another and, rather than a well-policed epistemic-ontic boundary, there is a movement between, and an entwinement of, the two. Understanding scientists’ dialogue concerning uncertainty and indeterminacy is of importance for understanding autism and autistic heterogeneity but also for understanding uncertainty and ‘uncertainty work’ within science more generally. PMID:28515574
Generating partially correlated noise—A comparison of methods
Hartmann, William M.; Cho, Yun Jin
2011-01-01
There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization/lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. PMID:21786899
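For concreteness, a sketch of the two-generator method named above: the second channel mixes a shared generator and an independent one so that the expected cross correlation equals the target, while any finite-length realization varies around it (the variability the study quantifies).

```python
import numpy as np

def two_generator_noise(n, rho, rng):
    """Two-generator method: channel 2 shares generator n1 with weight rho,
    mixed with an independent generator n2 so both channels have unit variance."""
    n1 = rng.standard_normal(n)
    n2 = rng.standard_normal(n)
    y1 = n1
    y2 = rho * n1 + np.sqrt(1.0 - rho**2) * n2  # expected correlation = rho
    return y1, y2

rng = np.random.default_rng(0)
y1, y2 = two_generator_noise(100_000, 0.6, rng)
r = np.corrcoef(y1, y2)[0, 1]
print(f"sample correlation: {r:.3f}")  # close to the 0.6 target
```

With fewer degrees of freedom (shorter noises), the sample correlation scatters more widely around the target, which is exactly the stimulus uncertainty at issue in binaural experiments.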
NASA Astrophysics Data System (ADS)
LUO, Jianchun; WANG, Yunyu; YANG, Jun; RAN, Hong; PENG, Xiaodong; HUANG, Ming; FENG, Hao; LIU, Meijun
2018-03-01
Vulnerability assessment of the power grid is of great significance in current research. Power systems face many kinds of uncertainty, and the disturbances these uncertainties cause have become one of the main factors restricting the safe operation of the grid. To address this problem, a set of indices is established that reflects both the system's ability to withstand disturbances and the effect on the system when a node is out of operation. On this basis, a new comprehensive method for assessing node vulnerability is proposed, using super-efficiency data envelopment analysis to integrate the indices. Finally, simulation results on the IEEE 30-bus system indicate that the proposed model is rational and valid.
Robust Gain-Scheduled Fault Tolerant Control for a Transport Aircraft
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Gregory, Irene
2007-01-01
This paper presents an application of robust gain-scheduled control concepts using a linear parameter-varying (LPV) control synthesis method to design fault tolerant controllers for a civil transport aircraft. To apply the robust LPV control synthesis method, the nonlinear dynamics must be represented by an LPV model, which is developed using the function substitution method over the entire flight envelope. The developed LPV model associated with the aerodynamic coefficient uncertainties represents nonlinear dynamics including those outside the equilibrium manifold. Passive and active fault tolerant controllers (FTC) are designed for the longitudinal dynamics of the Boeing 747-100/200 aircraft in the presence of elevator failure. Both FTC laws are evaluated in the full nonlinear aircraft simulation in the presence of the elevator fault and the results are compared to show pros and cons of each control law.
Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.
2016-01-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354
A brief review of 210Pb sediment dating models and uncertainties in a world of global change
NASA Astrophysics Data System (ADS)
Sanchez-Cabeza, J. A.; Ruiz-Fernandez, A. C.
2016-12-01
Irrespective of the model names used, assumptions and (usually forgotten) uncertainties, the fact is that 210Pb sediment dating is an increasingly relevant tool in our world of global change. 210Pb dating results are needed to assess historical trends of sea level rise, quantify blue carbon fluxes and reconstruct environmental records of biogeochemical proxies for diverse processes in the aquatic ecosystems (such as ocean acidification, hypoxia and pollution). Although in the past 210Pb profiles departing from "ideal" decay trends were usually discarded, all profiles have useful information. In this work we review the principles and assumptions of the most common 210Pb dating models, and propose a logical formulation and classification of the models. 210Pb dating models provide two kinds of results: chronologies (i.e. age models) and accumulation rates. In many cases, the use of sediment and/or mass accumulation rates (SAR and MAR) is needed to assess environmental fluxes or, simply, to describe changes, such as catchment erosion or saltmarsh accretion. Although quadratic uncertainty propagation is a well-known technique, it requires that all variables be fully independent and involves demanding mathematical expressions that might lead to wrong results. We present here a Monte Carlo method that makes the calculation easier and, likely, error-free. Not unexpectedly, the most important uncertainty sources are measurement uncertainties, which impose limitations on common techniques such as gamma spectrometry. 210Pb chronology does not cover all anthropogenic impacts, such as those caused by ancient civilizations, so radiocarbon also plays an important role in this kind of work. We also conceptually revise the limitations of both techniques and encourage scientists to link both dating techniques with an open mind.
Acknowledgements: projects CONACYT PDCPN2013-01/214349 and CB2010/153492, UNAM PAPIIT-IN203313, PRODEP network "Aquatic contamination: levels and effects" (year 3).
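The Monte Carlo alternative to quadratic propagation advocated above can be sketched as follows. The example propagates uncertainty through a product, MAR = SAR × dry bulk density; the input values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sediment accumulation rate (SAR, cm/yr) and dry bulk density
# (g/cm^3), each with its standard uncertainty.
sar, u_sar = 0.25, 0.03
rho, u_rho = 0.80, 0.05

# First-order (quadratic) propagation for the product MAR = SAR * rho:
mar = sar * rho
u_quad = mar * np.sqrt((u_sar / sar) ** 2 + (u_rho / rho) ** 2)

# Monte Carlo propagation: sample the inputs, push every draw through the
# model, and take the standard deviation of the results.
n = 200_000
mar_mc = rng.normal(sar, u_sar, n) * rng.normal(rho, u_rho, n)
u_mc = mar_mc.std()

print(f"quadratic: {u_quad:.4f}, Monte Carlo: {u_mc:.4f}  (g cm^-2 yr^-1)")
```

For this well-behaved product the two answers nearly coincide; the Monte Carlo route pays off when the model is nonlinear or the inputs are correlated, where the quadratic formula's independence assumption breaks down.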
Chemical Characterization of an Envelope B/D Sample from Hanford Tank 241-AZ-102
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.S.
2000-08-23
A sample from Hanford waste tank 241-AZ-102 was received at the Savannah River Technology Center (SRTC) and chemically characterized. The sample, containing supernate and a small amount of sludge solids, was analyzed as-received. The filtered supernatant liquid, the total dried solids of the sample, and the washed insoluble solids obtained from filtration of the sample were analyzed. A mass balance calculation of the three fractions of the sample analyzed indicates that the analytical results are relatively self-consistent for major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. The actinide isotopes of plutonium, americium, and curium present analytical challenges due to the low concentrations of these species and the potential for introduction of small amounts of contamination during sample handling, resulting in large uncertainties. A direct comparison to previous analyses of material from tank 241-AZ-102 showed good agreement for the filtered supernatant liquid. However, the comparison of solids data showed poor agreement. The poor agreement between the current results for the solids samples and previous analyses most likely results from the uncertainties associated with obtaining small solids samples from a large non-homogenized waste tank.
NASA Astrophysics Data System (ADS)
McKenney, D.; Pedlar, J.
2011-12-01
Climate is one of the major influences on forests and much effort has gone into projecting the impacts of rapid climate change on forest distribution and productivity. Such efforts are premised on the notion that the current generation of Global Climate Models (GCMs) provide reasonably accurate representations of future climate. But what is the appropriate level of faith to put in these projections when making relatively fine-scale resource management decisions such as the movement of plant genetic material? In this talk we review recent outcomes of climate envelope models for North American tree species that suggest optimal climate regimes could move on average ~700km within the next 100 years. Newer generation GCMs seem to confirm these results but much uncertainty remains for practical decision-making. Despite these uncertainties, assisted migration has been suggested as a climate change adaptation tool wherein populations of trees are moved up to a few hundred kilometers north (or a few hundred meters upslope) to keep pace with the anticipated changes in optimal climate regimes. A continent-wide web based tool (SEEDWHERE) is presented, which assists in identifying appropriate translocation distances for assisted migration initiatives. We finish with some suggestions for future work on the topic of forest regeneration decisions under an evolving and uncertain future climate.
Gu, Yingxin; Howard, Daniel M.; Wylie, Bruce K.; Zhang, Li
2012-01-01
Flux tower networks (e.g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e.g., net ecosystem exchange), water vapor (e.g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate changes. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the varieties of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.
Envelopes in eclipsing binary stars
NASA Technical Reports Server (NTRS)
Huang, S.
1972-01-01
Theoretical research on eclipsing binaries is presented. The specific areas of investigation are the following: (1) the relevance of envelopes to the study of the light curves of eclipsing binaries, (2) the disk envelope, and (3) the spherical envelope.
Ramp time synchronization. [for NASA Deep Space Network
NASA Technical Reports Server (NTRS)
Hietzke, W.
1979-01-01
A new method of intercontinental clock synchronization has been developed and proposed for possible use by NASA's Deep Space Network (DSN), using a two-way/three-way radio link with a spacecraft. Analysis of preliminary data indicates that the real-time method has an uncertainty of 0.6 microsec, and it is very likely that further work will decrease the uncertainty. Also, the method is compatible with a variety of nonreal-time analysis techniques, which may reduce the uncertainty down to the tens of nanosecond range.
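The essence of a two-way radio-link time comparison like the one above can be sketched as follows; this idealized version assumes symmetric path delays, whereas real DSN processing must budget for asymmetries (hence the quoted 0.6 microsecond uncertainty).

```python
def two_way_offset(t1, t2, t3, t4):
    """Clock offset of station B relative to A from one two-way exchange:
    A transmits at t1 (A clock), B receives at t2 and replies at t3 (B clock),
    A receives at t4 (A clock). Assumes equal uplink and downlink delays."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

# Simulated truth: B's clock is 5.0 microseconds ahead of A's; one-way light
# time 1.2 s (all values hypothetical).
offset, delay = 5.0e-6, 1.2
t1 = 100.0
t2 = t1 + delay + offset    # reception stamped on B's (offset) clock
t3 = t2 + 0.01              # B turnaround time
t4 = t3 - offset + delay    # reply reception stamped back on A's clock
print(two_way_offset(t1, t2, t3, t4))  # recovers approximately 5.0e-6
```

Note that the symmetric path delay cancels exactly in the difference, which is why two-way methods can synchronize intercontinental clocks without knowing the propagation time precisely.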
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
Measuring functional connectivity using MEG: Methodology and comparison with fcMRI
Brookes, Matthew J.; Hale, Joanne R.; Zumer, Johanna M.; Stevenson, Claire M.; Francis, Susan T.; Barnes, Gareth R.; Owen, Julia P.; Morris, Peter G.; Nagarajan, Srikantan S.
2011-01-01
Functional connectivity (FC) between brain regions is thought to be central to the way in which the brain processes information. Abnormal connectivity is thought to be implicated in a number of diseases. The ability to study FC is therefore a key goal for neuroimaging. Functional connectivity (fc) MRI has become a popular tool to make connectivity measurements but the technique is limited by its indirect nature. A multimodal approach is therefore an attractive means to investigate the electrodynamic mechanisms underlying hemodynamic connectivity. In this paper, we investigate resting state FC using fcMRI and magnetoencephalography (MEG). In fcMRI, we exploit the advantages afforded by ultra high magnetic field. In MEG we apply envelope correlation and coherence techniques to source space projected MEG signals. We show that beamforming provides an excellent means to measure FC in source space using MEG data. However, care must be taken when interpreting these measurements since cross talk between voxels in source space can potentially lead to spurious connectivity and this must be taken into account in all studies of this type. We show good spatial agreement between FC measured independently using MEG and fcMRI; FC between sensorimotor cortices was observed using both modalities, with the best spatial agreement when MEG data are filtered into the β band. This finding helps to reduce the potential confounds associated with each modality alone: while it helps reduce the uncertainties in spatial patterns generated by MEG (brought about by the ill posed inverse problem), addition of electrodynamic metric confirms the neural basis of fcMRI measurements. Finally, we show that multiple MEG based FC metrics allow the potential to move beyond what is possible using fcMRI, and investigate the nature of electrodynamic connectivity. 
Our results extend those from previous studies and add weight to the argument that neural oscillations are intimately related to functional connectivity and the BOLD response. PMID:21352925
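A minimal sketch of the band-limited envelope correlation technique applied above to source-space MEG signals: each signal is converted to its analytic signal, and the correlation of the amplitude envelopes is computed. The synthetic amplitude-modulated signals here are illustrative, not MEG data.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction:
    zero the negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs, dur = 200.0, 5.0
t = np.arange(0, dur, 1.0 / fs)
env = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # shared slow amplitude envelope
x = env * np.sin(2 * np.pi * 20.0 * t)          # two band-limited carriers,
y = env * np.sin(2 * np.pi * 20.0 * t + 1.0)    # phase-shifted relative to x
ex = np.abs(analytic_signal(x))                 # amplitude envelopes
ey = np.abs(analytic_signal(y))
r = np.corrcoef(ex, ey)[0, 1]
print(f"envelope correlation: {r:.3f}")  # high despite the carrier phase offset
```

Envelope correlation is deliberately insensitive to carrier phase, which makes it robust to the arbitrary sign and phase of beamformer-projected sources; by the same token, spatial leakage between nearby voxels can still inflate it, as the abstract cautions.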
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
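The Monte Carlo parameter variation technique described above can be sketched generically: each channel voltage is perturbed by its one-sigma Gaussian error, every perturbed set is pushed through the unfold, and the spread of the resulting fluxes gives the error bar. The `unfold` below is a stand-in weighted sum, not the actual Dante unfold algorithm, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical nominal channel voltages (V) and one-sigma errors (calibration
# and unfold uncertainty folded into a single Gaussian per channel).
v_nom = np.array([1.2, 0.9, 0.7, 0.5, 0.3])
v_sig = 0.05 * v_nom                     # 5% per channel, illustrative

def unfold(v):
    """Stand-in for the spectral unfold: flux as a weighted channel sum."""
    weights = np.array([0.8, 1.1, 1.4, 1.9, 2.5])
    return float(weights @ v)

# One thousand test voltage sets, each processed by the unfold.
fluxes = np.array([
    unfold(v_nom + v_sig * rng.standard_normal(v_nom.size))
    for _ in range(1000)
])
print(f"flux = {fluxes.mean():.3f} +/- {fluxes.std():.3f}")
```

The attraction of the approach is that the error bar emerges from the same algorithm used for the measurement itself, so correlations introduced by the unfold are captured automatically rather than estimated analytically.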
Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer
NASA Astrophysics Data System (ADS)
Schulte, Horst
2016-09-01
A new quantification method of uncertain models for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. This approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term that establishes and maintains a motion on a so-called sliding surface. The injection signal is evaluated directly to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four-degree-of-freedom model of the NREL 5-MW reference turbine containing uncertainties.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
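The idea of treating the ℓ1 regularization path as a model space can be illustrated in the orthonormal-design special case, where the lasso solution reduces to soft-thresholding of the OLS coefficients. The BIC weighting below is a simple stand-in for the paper's MC3 posterior exploration; all data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Orthonormal design: the lasso solution is soft-thresholding of the
# OLS coefficients, which makes the whole regularization path explicit.
n, p = 100, 5
X, _ = np.linalg.qr(rng.normal(size=(n, p)))   # columns are orthonormal
beta_true = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

b_ols = X.T @ y
def lasso(lam):
    # Soft-thresholding = exact lasso solution for orthonormal X.
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0.0)

# Treat the models along the path as a model space and weight them by
# BIC, a stand-in for the paper's MC3 posterior exploration.
lambdas = np.linspace(0.0, 1.5, 30)
betas = np.array([lasso(lam) for lam in lambdas])
rss = ((y - betas @ X.T) ** 2).sum(axis=1)
k = (betas != 0).sum(axis=1)
bic = n * np.log(rss / n) + k * np.log(n)
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

beta_avg = w @ betas   # model-averaged coefficient estimates
```

The averaged coefficients concentrate on the truly active variables, which is the point of averaging over the path rather than picking a single point on it.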
Parametric Robust Control and System Identification: Unified Approach
NASA Technical Reports Server (NTRS)
Keel, L. H.
1996-01-01
During the period of this support, a new control system design and analysis method was studied. This approach deals with control systems containing uncertainties that are represented in terms of transfer function parameters. Such a representation of the control system is common, and many physical parameter variations fall into this type of uncertainty. Techniques developed here are capable of providing nonconservative analysis of such control systems with parameter variations. We have also developed techniques to deal with control systems when their state space representations are given rather than transfer functions; in this case, the plant parameters appear as entries of the state space matrices. Finally, a system modeling technique to construct such systems from raw input-output frequency-domain data has been developed.
Uncertainties in s -process nucleosynthesis in low mass stars determined from Monte Carlo variations
NASA Astrophysics Data System (ADS)
Cescutti, G.; Hirschi, R.; Nishimura, N.; den Hartogh, J. W.; Rauscher, T.; Murphy, A. St J.; Cristallo, S.
2018-05-01
The main s-process taking place in low mass stars produces about half of the elements heavier than iron. It is therefore very important to determine the impact of nuclear physics uncertainties on this process. We have performed extensive nuclear reaction network calculations using individual and temperature-dependent uncertainties for reactions involving elements heavier than iron, within a Monte Carlo framework. Using this technique, we determined the uncertainty in the main s-process abundance predictions due to nuclear uncertainties linked to weak interactions and neutron captures on elements heavier than iron. We also identified the key nuclear reactions dominating these uncertainties. We found that β-decay rate uncertainties affect only a few nuclides near s-process branchings, whereas most of the uncertainty in the final abundances is caused by uncertainties in neutron capture rates, either directly producing or destroying the nuclide of interest. Combined total nuclear uncertainties due to reactions on heavy elements are in general small (less than 50%). Three key reactions, nevertheless, stand out because they significantly affect the uncertainties of a large number of nuclides: 56Fe(n,γ), 64Ni(n,γ), and 138Ba(n,γ). We discuss the prospect of reducing uncertainties in these key reactions with future experiments.
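The Monte Carlo variation of reaction rates can be illustrated on a toy two-step capture chain with a closed-form solution; the chain, the rates, and the 30% log-normal uncertainty factor are illustrative, not evaluated nuclear data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy capture chain A -> B -> C, solved in closed form; rates and the
# uncertainty factor are illustrative, not evaluated nuclear data.
k1, k2, t = 1.0, 0.5, 2.0
def abundance_C(k1, k2, t):
    return 1.0 - (k2 * np.exp(-k1 * t) - k1 * np.exp(-k2 * t)) / (k2 - k1)

# Monte Carlo rate variation: sample an independent log-normal
# uncertainty factor for each rate and collect the final abundances.
factor = 1.3
ks = np.exp(rng.normal(0.0, np.log(factor), size=(10_000, 2)))
samples = abundance_C(k1 * ks[:, 0], k2 * ks[:, 1], t)

lo, med, hi = np.percentile(samples, [16, 50, 84])   # 1-sigma interval
```

The percentile spread of the final abundance plays the role of the paper's abundance uncertainty, and varying one rate at a time would identify the "key reaction" dominating it.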
NASA Astrophysics Data System (ADS)
Behr, Y.; Cua, G. B.; Clinton, J. F.; Racine, R.; Meier, M.; Cauzzi, C.
2013-12-01
The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland, western Greece and Istanbul. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, Romania, and Iceland are planned or underway. The possible use cases for an EEW system will be determined by the speed and reliability of earthquake source parameter estimates. A thorough understanding of both is therefore essential to evaluate the usefulness of VS. For California, we present state-wide theoretical alert times for hypothetical earthquakes by analyzing time delays introduced by the different components in the VS EEW system. Taking advantage of the fully probabilistic formulation of the VS algorithm we further present an improved way to describe the uncertainties of every magnitude estimate by evaluating the width and shape of the probability density function that describes the relationship between waveform envelope amplitudes and magnitude. We evaluate these new uncertainty values for past seismicity in California through off-line playbacks and compare them to the previously defined static definitions of uncertainty based on real-time detections. 
Our results indicate where VS alerts are most useful in California and also suggest where most effective improvements to the VS EEW system can be made.
Studies on the System Regulating Proton Movement across the Chloroplast Envelope 1
Peters, Jeanne S.; Berkowitz, Gerald A.
1991-01-01
Studies were undertaken to further characterize the spinach (Spinacea oleracea) chloroplast envelope system, which facilitates H+ movement into and out of the stroma and, hence, modulates photosynthetic activity by regulating stromal pH. It was demonstrated that high envelope-bound Mg2+ causes stromal acidification and photosynthetic inhibition. High envelope-bound Mg2+ was also found to necessitate the activity of a digitoxin- and oligomycin-sensitive ATPase for the maintenance of high stromal pH and photosynthesis in the illuminated chloroplast. In chloroplasts that had high envelope Mg2+ and inhibited envelope ATPase activity, 2-(diethylamino)-N-(2,6-dimethylphenyl)acetamide was found to raise stromal pH and stimulate photosynthesis. 2-(Diethylamino)-N-(2,6-dimethylphenyl)acetamide is an amine anesthetic that is known to act as a monovalent cation channel blocker in mammalian systems. We postulate that the system regulating cation and H+ fluxes across the plastid envelope includes a monovalent cation channel in the envelope, some degree of (envelope-bound Mg2+ modulated) H+ flux linked to monovalent cation antiport, and ATPase-dependent H+ efflux. PMID:16668116
Medicine and the Silent Oracle: An Exercise in Uncertainty
ERIC Educational Resources Information Center
Belling, Catherine
2006-01-01
This article describes a simple in-class exercise in reading and writing that, by asking participants to write their own endings for a short narrative taken from the "Journal of the American Medical Association," prompts them to reflect on the problem of uncertainty in medicine and to apply the literary-critical techniques of close…
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction.
Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
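The core APM idea, propagating a Gaussian uncertainty through a Gaussian pencil beam in closed form rather than by sampling, can be sketched in one dimension; the beam width, position, and range uncertainty below are hypothetical.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10.0, 10.0, 201)
mu, sigma = 0.0, 2.0    # hypothetical pencil-beam position and width
sigma_r = 1.0           # assumed one-sigma range uncertainty

# Closed form (the APM idea): a Gaussian beam under a Gaussian shift
# uncertainty stays Gaussian, with the variances adding.
sigma_tot = np.sqrt(sigma**2 + sigma_r**2)
expected = (sigma / sigma_tot) * gauss(x, mu, sigma_tot)

# Sampling reference, i.e. what APM avoids:
rng = np.random.default_rng(0)
shifts = rng.normal(0.0, sigma_r, 5000)
sampled = np.mean([gauss(x, mu + s, sigma) for s in shifts], axis=0)

max_err = np.abs(expected - sampled).max()
```

The closed-form expectation matches the 5000-sample reference, which is why APM can replace scenario sampling at constant cost per fraction.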
NASA Astrophysics Data System (ADS)
Raza, Syed Ali; Zaighum, Isma; Shah, Nida
2018-02-01
This paper examines the relationship between economic policy uncertainty (EPU) and the equity premium in G7 countries over a period of monthly data from January 1989 to December 2015, using a novel technique, quantile-on-quantile (QQ) regression, proposed by Sim and Zhou (2015). Based on the QQ approach, we estimate how the quantiles of economic policy uncertainty affect the quantiles of the equity premium. This provides a more comprehensive insight into the overall dependence structure between the equity premium and economic policy uncertainty than traditional techniques like OLS or quantile regression. Overall, our empirical evidence suggests the existence of a negative association between the equity premium and EPU predominantly in all G7 countries, especially in the extreme low and extreme high tails. However, differences exist among countries and across different quantiles of EPU and the equity premium within each country. This heterogeneity among countries is due to differences in dependency on economic policy, other stock markets, and the linkages with other countries' equity markets.
Active Subspaces for Wind Plant Surrogate Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Ryan N; Quick, Julian; Dykes, Katherine L
Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find that a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
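A minimal sketch of the active subspaces computation: estimate the matrix C = E[∇f ∇fᵀ] from gradient samples and eigendecompose it. The toy model below is built to vary along a single direction, mimicking the paper's finding of one dominant direction; the dimension and the function are illustrative, not a wind plant model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy plant-power model f(x) = sin(w·x): by construction it varies only
# along the direction w, so a single active-subspace direction should
# dominate. Dimension and names are illustrative.
dim = 10                       # e.g. axial induction factors of 10 turbines
w = rng.normal(size=dim)
w /= np.linalg.norm(w)

def grad_f(x):                 # gradient of f(x) = sin(w @ x)
    return np.cos(w @ x) * w

# Estimate C = E[grad f grad f^T] from samples and eigendecompose.
samples = rng.uniform(-1.0, 1.0, size=(500, dim))
C = sum(np.outer(g, g) for g in map(grad_f, samples)) / 500
eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues

leading = eigvecs[:, -1]       # spans the one-dimensional active subspace
alignment = abs(leading @ w)   # close to 1 when the subspace is recovered
```

A surrogate in the scalar coordinate `leading @ x` then replaces a surrogate over all `dim` inputs, which is the dimension reduction the abstract describes.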
3D interferometric shape measurement technique using coherent fiber bundles
NASA Astrophysics Data System (ADS)
Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen
2017-06-01
In situ 3-D shape measurements of fast rotating objects in a cutting lathe with submicron shape uncertainty are desired; they can be achieved by simultaneous distance and velocity measurements. Conventional tactile methods, such as coordinate measurement machines, support only ex situ measurements. Optical measurement techniques such as triangulation and conoscopic holography offer only the distance, so the absolute diameter cannot be retrieved directly. In comparison, laser Doppler distance sensors (P-LDD sensors) enable simultaneous, in situ distance and velocity measurements for monitoring the cutting process in a lathe. In order to achieve shape measurement uncertainties below 1 μm, a P-LDD sensor with dual-camera-based scattered light detection has been investigated. Coherent fiber bundles (CFB) are employed to forward the scattered light towards the cameras, which will enable a compact and passive sensor head in the future. Compared with a photodetector-based sensor, the dual-camera-based sensor allows the measurement uncertainty to be decreased by an order of magnitude. As a result, the total uncertainty of absolute 3-D shape measurements can be reduced to about 100 nm.
An uncertainty model of acoustic metamaterials with random parameters
NASA Astrophysics Data System (ADS)
He, Z. C.; Hu, J. Y.; Li, Eric
2018-01-01
Acoustic metamaterials (AMs) are man-made composite materials, but random uncertainties due to manufacturing and material errors are unavoidable in their application and lead to variance in their physical responses. In this paper, an uncertainty model based on the change of variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of physical responses of AMs with random parameters. Three types of physical responses, including the band structure, mode shapes, and frequency response function of AMs, are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using the first-order Taylor series expansion and perturbation technique. Then, based on the linear function relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are successfully validated against the Monte Carlo method.
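The first-order perturbation plus change-of-variable step can be sketched for a scalar response of one Gaussian random parameter; the response function and the parameter statistics below are illustrative, not an acoustic metamaterial model.

```python
import numpy as np

# Toy scalar response of one Gaussian random parameter; the function
# and the parameter statistics are illustrative, not an AM model.
def f(x):
    return np.sqrt(x) / (1.0 + x)

mu, sd = 4.0, 0.1              # assumed parameter mean and standard deviation
h = 1e-6
dfdx = (f(mu + h) - f(mu - h)) / (2 * h)   # f'(mu) by central differences

# First-order perturbation: y ~ f(mu) + f'(mu)(x - mu) is linear in x,
# so the change-of-variable technique gives the response PDF directly.
def pdf_x(x):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def pdf_y(y):
    x = (y - f(mu)) / dfdx + mu            # invert the linearized map
    return pdf_x(x) / abs(dfdx)

# Validate against Monte Carlo, as the paper does:
rng = np.random.default_rng(2)
ys = f(rng.normal(mu, sd, 200_000))
hist, edges = np.histogram(ys, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = np.abs(hist - pdf_y(centers)).max()
```

For a vector of FE responses the same linearization is applied componentwise, which is what makes the CVPS approach cheap compared to full Monte Carlo.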
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a nonlinear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files, and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analysis. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analysis. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
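The null-space Monte Carlo draw at the heart of this workflow can be sketched with plain NumPy (this is not the pyNSMC API): project random parameter vectors onto the null space of the Jacobian, so that every realization preserves the calibrated fit to the data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linearized inverse problem: 3 observations, 8 parameters,
# so the Jacobian J has a 5-dimensional null space.
J = rng.normal(size=(3, 8))
p_cal = rng.normal(size=8)             # a "calibrated" parameter vector

# Null-space basis from the SVD: right singular vectors beyond rank(J).
_, _, Vt = np.linalg.svd(J)
V2 = Vt[3:].T                          # 8 x 5 null-space basis

# Null-space Monte Carlo draw: project random parameter vectors onto the
# null space and add them to the calibrated vector; the (linearized)
# fit to the observations is unchanged for every realization.
draws = rng.normal(size=(100, 8))
ensemble = p_cal + (draws @ V2) @ V2.T

residual_change = np.abs(J @ (ensemble - p_cal).T).max()   # ~ 0
```

In the real workflow each realization is then re-run through the forward model and, if necessary, recalibrated, because the null space is only exact for the linearized problem.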
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2014-11-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirements. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently for the REA ensemble average (45%) than for the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
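A simplified, one-pass REA-style weighting can be sketched as follows; the full REA method also includes a model-convergence criterion and iterates, and all numbers here are illustrative.

```python
import numpy as np

# Hypothetical irrigation-requirement predictions (mm) from six ET models
# and an observed reference; all numbers are illustrative.
predictions = np.array([380.0, 420.0, 395.0, 510.0, 405.0, 388.0])
observed = 400.0
eps = 25.0        # assumed natural-variability threshold (mm)

# Simplified one-pass REA-style reliability: models whose bias is below
# the natural variability get full weight; others are down-weighted
# in proportion to eps / |bias|.
bias = np.abs(predictions - observed)
weights = np.minimum(1.0, eps / np.maximum(bias, 1e-12))
weights /= weights.sum()

rea_avg = weights @ predictions       # reliability-weighted average
equal_avg = predictions.mean()        # equally weighted ensemble average
```

Down-weighting the outlying model pulls the ensemble average toward the reference, which is the mechanism behind the reported uncertainty reduction.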
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources, including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces, are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
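The random-error part of such an uncertainty estimator can be sketched as a 1-D Monte Carlo toy: perturb the displacement data, repeat the smoothing and stress calculation, and take the pointwise standard deviation. The smoothing kernel and the "stress" operator below are illustrative stand-ins for the contour method's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D toy cut plane: true displacements plus measurement noise; the
# smoothing kernel and "stress" operator are illustrative stand-ins.
n = 101
x = np.linspace(0.0, 1.0, n)
displacement = 1e-3 * np.sin(np.pi * x)
sigma_noise = 1e-5                     # assumed displacement noise (one-sigma)

def smooth(d, k=7):                    # moving-average smoothing
    return np.convolve(d, np.ones(k) / k, mode="same")

def stress(d):                         # stand-in linear displacement-to-stress map
    return np.gradient(np.gradient(d, x), x)

# Monte Carlo: repeat the whole analysis on perturbed data and take the
# pointwise standard deviation as a spatially varying uncertainty map.
results = np.array([stress(smooth(displacement + rng.normal(0.0, sigma_noise, n)))
                    for _ in range(500)])
u_map = results.std(axis=0)            # one value per point on the cross-section
```

The output is the 1-D analogue of the paper's spatially varying uncertainty map: one uncertainty value per point where stress is determined.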
Conversion and matched filter approximations for serial minimum-shift keyed modulation
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.
1982-01-01
Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using serial filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10⁻⁶.
Strategies and Challenges in Simultaneous Augmentation Mastopexy.
Spring, Michelle A; Hartmann, Emily C; Stevens, W Grant
2015-10-01
Simultaneous breast augmentation and mastopexy is a common procedure often considered one of the most difficult cosmetic breast surgeries. One-stage augmentation mastopexy was first described more than 50 years ago. The challenge lies in the fact that the surgery has multiple opposing goals: to increase the volume of the breast, enhance its shape, and simultaneously decrease the skin envelope. Successful outcomes can be expected with proper planning, technique, and patient education. This article focuses on common indications for simultaneous augmentation mastopexy, techniques for safe and effective combined procedures, challenges of the procedure, and potential complications. Copyright © 2015 Elsevier Inc. All rights reserved.
Carrier-envelope phase dynamics and noise analysis in octave-spanning Ti:sapphire lasers.
Matos, Lia; Mücke, Oliver D; Chen, Jian; Kärtner, Franz X
2006-03-20
We investigate the carrier-envelope phase dynamics of octave-spanning Ti:sapphire lasers and perform a complete noise analysis of the carrier-envelope phase stabilization. We model the effect of the laser dynamics on the residual carrier-envelope phase noise by deriving a transfer function representation of the octave-spanning frequency comb. The modeled phase noise and the experimental results show excellent agreement. This greatly enhances our capability of predicting the dependence of the residual carrier-envelope phase noise on the feedback loop filter, the carrier-envelope frequency control mechanism, and the pump laser used.
Blaise, Sandra; Ruggieri, Alessia; Dewannieux, Marie; Cosset, François-Loic; Heidmann, Thierry
2004-01-01
A member of the HERV-W family of human endogenous retroviruses (HERV) had previously been demonstrated to encode a functional envelope which can form pseudotypes with human immunodeficiency virus type 1 virions and confer infectivity on the resulting retrovirus particles. Here we show that a second envelope protein sorted out by a systematic search for fusogenic proteins that we made among all the HERV coding envelope genes and belonging to the HERV-FRD family can also make pseudotypes and confer infectivity. We further show that the orthologous envelope genes that were isolated from simians—from New World monkeys to humans—are also functional in the infectivity assay, with one singular exception for the gibbon HERV-FRD gene, which is found to be fusogenic in a cell-cell fusion assay, as observed for the other simian envelopes, but which is not infectious. Sequence comparison of the FRD envelopes revealed a limited number of mutations among simians, and one point mutation—located in the TM subunit—was shown to be responsible for the loss of infectivity of the gibbon envelope. The functional characterization of the identified envelopes is strongly indicative of an ancestral retrovirus infection and endogenization, with some of the envelope functions subsequently retained in evolution. PMID:14694139
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H.-W.; Chang, N.-B., E-mail: nchang@mail.ucf.ed; Chen, J.-C.
2010-07-15
Owing to insufficient land resources, incinerators are considered in many countries, such as Japan and Germany, to be the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation provides a balance between the simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromised assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world.
NASA Astrophysics Data System (ADS)
Parsons, S. G.; Hermes, J. J.; Marsh, T. R.; Gänsicke, B. T.; Tremblay, P.-E.; Littlefair, S. P.; Sahman, D. I.; Ashley, R. P.; Green, M.; Rattanasoon, S.; Dhillon, V. S.; Burleigh, M. R.; Casewell, S. L.; Buckley, D. A. H.; Braker, I. P.; Irawati, P.; Dennihy, E.; Rodríguez-Gil, P.; Winget, D. E.; Winget, K. I.; Bell, Keaton J.; Kilic, Mukremin
2017-10-01
Using data from the extended Kepler mission in K2 Campaign 10, we identify two eclipsing binaries containing white dwarfs with cool companions that have extremely short orbital periods of only 71.2 min (SDSS J1205-0242, a.k.a. EPIC 201283111) and 72.5 min (SDSS J1231+0041, a.k.a. EPIC 248368963). Despite their short periods, both systems are detached with small, low-mass companions, in one case a brown dwarf and in the other case either a brown dwarf or a low-mass star. We present follow-up photometry and spectroscopy of both binaries, as well as phase-resolved spectroscopy of the brighter system, and use these data to place preliminary estimates on the physical and binary parameters. SDSS J1205-0242 is composed of a 0.39 ± 0.02 M⊙ helium-core white dwarf that is totally eclipsed by a 0.049 ± 0.006 M⊙ (51 ± 6MJ) brown-dwarf companion, while SDSS J1231+0041 is composed of a 0.56 ± 0.07 M⊙ white dwarf that is partially eclipsed by a companion of mass ≲0.095 M⊙. In the case of SDSS J1205-0242, we look at the combined constraints from common-envelope evolution and brown-dwarf models; the system is compatible with similar constraints from other post-common-envelope binaries, given the current parameter uncertainties, but has potential for future refinement.
This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
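The Kalman-filter treatment of model/observation bias mentioned above reduces, in the scalar case, to a single variance-weighted update. The following is an illustrative sketch under invented values, not the paper's implementation:

```python
def kalman_fuse(model_value, model_var, obs_value, obs_var):
    """One scalar Kalman update: fuse a model prediction with an observation,
    each carrying its own error variance. The gain K weights the observation
    more heavily when the model is the more uncertain of the two."""
    K = model_var / (model_var + obs_var)            # Kalman gain
    fused = model_value + K * (obs_value - model_value)
    fused_var = (1.0 - K) * model_var                # fused estimate is less uncertain
    return fused, fused_var

# model predicts 10.0 (variance 4.0); observation reports 12.0 (variance 1.0)
fused, fused_var = kalman_fuse(10.0, 4.0, 12.0, 1.0)  # pulled toward the observation
```

Treating the bias as error free corresponds to the limit `obs_var = 0`, in which the observation is accepted outright.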
NASA Astrophysics Data System (ADS)
Guillaume, Joseph H. A.; Helgeson, Casey; Elsawah, Sondoss; Jakeman, Anthony J.; Kummu, Matti
2017-08-01
Uncertainty is recognized as a key issue in water resources research, among other sciences. Discussions of uncertainty typically focus on tools and techniques applied within an analysis, e.g., uncertainty quantification and model validation. But uncertainty is also addressed outside the analysis, in writing scientific publications. The language that authors use conveys their perspective of the role of uncertainty when interpreting a claim—what we call here "framing" the uncertainty. This article promotes awareness of uncertainty framing in four ways. (1) It proposes a typology of eighteen uncertainty frames, addressing five questions about uncertainty. (2) It describes the context in which uncertainty framing occurs. This is an interdisciplinary topic, involving philosophy of science, science studies, linguistics, rhetoric, and argumentation. (3) We analyze the use of uncertainty frames in a sample of 177 abstracts from the Water Resources Research journal in 2015. This helped develop and tentatively verify the typology, and provides a snapshot of current practice. (4) We make provocative recommendations to achieve a more influential, dynamic science. Current practice in uncertainty framing might be described as carefully considered incremental science. In addition to uncertainty quantification and degree of belief (present in ~5% of abstracts), uncertainty is addressed by a combination of limiting scope, deferring to further work (~25%) and indicating evidence is sufficient (~40%)—or uncertainty is completely ignored (~8%). There is a need for public debate within our discipline to decide in what context different uncertainty frames are appropriate. Uncertainty framing cannot remain a hidden practice evaluated only by lone reviewers.
Anticipated uncertainty budgets of PRARETIME and T2L2 techniques as applied to ExTRAS
NASA Technical Reports Server (NTRS)
Thomas, Claudine; Wolf, Peter; Uhrich, Pierre J. M.; Schaefer, W.; Nau, H.; Veillet, Christian
1995-01-01
The Experiment on Timing Ranging and Atmospheric Soundings, ExTRAS, was conceived jointly by the European Space Agency, ESA, and the Russian Space Agency, RSA. It is also designated the 'Hydrogen-maser in Space/Meteor-3M project'. The launch of the satellite is scheduled for early 1997. The package, to be flown on board a Russian meteorological satellite, includes ultra-stable frequency and time sources, namely two active and auto-tuned hydrogen masers. Communication between the on-board hydrogen masers and the ground station clocks is effected by a microwave link, using the Precise Range And Range-rate Equipment (PRARETIME) technique in a version modified for time transfer, and by an optical link, which uses the Time Transfer by Laser Link (T2L2) method. Both the PRARETIME and T2L2 techniques operate in a two-directional mode, which makes it possible to carry out accurate transmissions without precise knowledge of the satellite and station positions. Owing to the exceptional quality of the on-board clocks and the high performance of the communication techniques with the satellite, satellite clock monitoring and ground clock synchronization are anticipated to be performed with uncertainties below 0.5 ns (1 sigma). Uncertainty budgets and related comments are presented.
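The reason two-directional operation removes the need for precise satellite and station positions can be seen in a minimal worked example (an illustrative sketch, not the PRARETIME/T2L2 processing chain; the timestamps are invented): with a symmetric path, the signal delay cancels and only the clock offset survives.

```python
def two_way_offset(t1, t2, t3, t4):
    """Two-way time transfer: t1 = ground transmit (ground clock),
    t2 = satellite receive (satellite clock), t3 = satellite transmit,
    t4 = ground receive. Returns the satellite-minus-ground clock offset,
    assuming the up-link and down-link path delays are equal."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

# example: true clock offset 10 ns, one-way path delay 5 ms (unknown to the user)
delay = 5e-3
offset_true = 10e-9
t1 = 0.0
t2 = t1 + delay + offset_true    # arrival, read on the satellite clock
t3 = t2 + 1e-3                   # satellite replies 1 ms later
t4 = (t3 - offset_true) + delay  # arrival, read back on the ground clock
est = two_way_offset(t1, t2, t3, t4)   # recovers ~10 ns; the 5 ms delay cancels
```

The millisecond-scale range delay, which would dominate a one-way measurement, drops out entirely, leaving residual uncertainty set by path asymmetries and instrumentation rather than by orbit knowledge.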
Assessing uncertainties in superficial water provision by different bootstrap-based techniques
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo Mario
2014-05-01
An assessment of water security can incorporate several water-related concepts, characterizing the interactions between societal needs, ecosystem functioning, and hydro-climatic conditions. The superficial freshwater provision level depends on the methods chosen for 'Environmental Flow Requirement' estimations, which integrate the sources of uncertainty in the understanding of how water-related threats to aquatic ecosystem security arise. Here, we develop an uncertainty assessment of superficial freshwater provision based on different bootstrap techniques (non-parametric resampling with replacement). To illustrate this approach, we use an agricultural basin (291 km2) within the Cantareira water supply system in Brazil, monitored by one daily streamflow gage (24-year period). The original streamflow time series has been randomly resampled for different numbers of repetitions or sample sizes (N = 500; ...; 1000), and then used with the conventional bootstrap approach and variations of it, such as the 'nearest neighbor bootstrap' and the 'moving blocks bootstrap'. We have analyzed the impact of sampling uncertainty on five Environmental Flow Requirement methods, based on: flow duration curves or probability of exceedance (Q90%, Q75% and Q50%); the 7-day 10-year low-flow statistic (Q7,10); and the presumptive standard (80% of the natural monthly mean flow). The bootstrap technique has also been used to compare those 'Environmental Flow Requirement' (EFR) methods among themselves, considering the difference between the bootstrap estimates and the "true" EFR characteristic, which has been computed by averaging the EFR values of the five methods and using the entire streamflow record at the monitoring station. This study evaluates the bootstrapping strategies, the representativeness of streamflow series for EFR estimates and their confidence intervals, in addition to providing an overview of the performance differences between the EFR methods.
The uncertainties arising from the EFR method assessment will be propagated through water security indicators related to water scarcity and vulnerability, with the aim of providing meaningful support to end-users and water managers incorporating uncertainty into the decision-making process.
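The core resampling idea can be sketched briefly (an illustrative sketch with synthetic data, not the authors' analysis of the Cantareira record; the lognormal series and sample sizes are invented): a conventional bootstrap and a simple moving-blocks variant both yield a confidence interval for a flow-duration statistic such as Q90.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic daily streamflow record, standing in for the 24-year gauge series
flows = rng.lognormal(mean=1.0, sigma=0.6, size=2000)

def q90(x):
    # Q90 = flow equalled or exceeded 90% of the time, i.e. the 10th percentile
    return np.percentile(x, 10)

def bootstrap_ci(x, stat, n_boot=500, block=1):
    """95% percentile-bootstrap CI for `stat`. block=1 gives the conventional
    i.i.d. bootstrap; block>1 gives a simple moving-blocks bootstrap that
    preserves short-range autocorrelation within each resampled block."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        if block == 1:
            sample = rng.choice(x, size=n, replace=True)
        else:
            starts = rng.integers(0, n - block + 1, size=n // block + 1)
            sample = np.concatenate([x[s:s + block] for s in starts])[:n]
        estimates.append(stat(sample))
    return np.percentile(estimates, [2.5, 97.5])

lo, hi = bootstrap_ci(flows, q90)                 # conventional bootstrap
lo_mb, hi_mb = bootstrap_ci(flows, q90, block=30) # moving-blocks variant
```

Comparing the width of the two intervals is one way to gauge how much serial dependence in the flow record inflates the sampling uncertainty of an EFR statistic.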