Advanced Fuel Cycle Economic Sensitivity Analysis
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and the nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Advancing sensitivity analysis to precisely characterize temporal parameter dominance
NASA Astrophysics Data System (ADS)
Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola
2016-04-01
Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominance changes seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological models.
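As a loose illustration of the FAST technique named above, the sketch below apportions output variance to the integer frequency assigned to each parameter's periodic search curve. The toy model, frequencies, and sample count are illustrative choices of ours, not the setup used in the study.

```python
import numpy as np

def fast_first_order(model, freqs, n_samples=1001, n_harmonics=4):
    """Crude FAST estimator of first-order sensitivity indices.

    Each parameter follows a periodic search curve with its own integer
    frequency; the share of output variance found at that frequency (and
    its first few harmonics) estimates the parameter's first-order index.
    """
    s = 2.0 * np.pi * np.arange(n_samples) / n_samples
    # Triangle-wave search curves mapping s onto [0, 1] for every parameter.
    x = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi
    y = model(x)
    # Fourier coefficients of the model output along the search curve.
    k = np.arange(1, n_samples // 2)
    a = 2.0 / n_samples * np.cos(np.outer(k, s)) @ y
    b = 2.0 / n_samples * np.sin(np.outer(k, s)) @ y
    power = 0.5 * (a ** 2 + b ** 2)   # variance carried by each frequency
    total = power.sum()
    indices = []
    for w in freqs:
        # power[j] holds frequency j + 1, so frequency p*w sits at index p*w - 1.
        harmonics = [p * w - 1 for p in range(1, n_harmonics + 1)]
        indices.append(power[harmonics].sum() / total)
    return np.array(indices)

# Toy model y = x1 + 0.5*x2: analytic first-order indices are 0.8 and 0.2.
S = fast_first_order(lambda x: x[0] + 0.5 * x[1], freqs=[11, 35])
```

The frequencies must be chosen so that their low harmonics do not overlap; 11 and 35 satisfy this for the four harmonics used here.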
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
NASA Astrophysics Data System (ADS)
Martynov, Denis
The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. This thesis also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime, where collection of data becomes possible. The states of longitudinal and angular controls of interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data.
Recent advances in steady compressible aerodynamic sensitivity analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene J.-W.; Jones, Henry E.
1992-01-01
Sensitivity analysis methods are classified as belonging to one of two broad categories: the discrete (quasi-analytical) approach and the continuous approach. The two approaches differ in the order in which discretization and differentiation of the governing equations and boundary conditions are undertaken. The discussion focuses on the discrete approach. Basic equations are presented, and the major difficulties are reviewed in some detail, as are the proposed solutions. Recent research activity concerned with the continuous approach is also discussed.
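The discrete (quasi-analytical) approach can be illustrated on a minimal linear state equation: discretize first, then differentiate the discrete residual R(u, x) = A(x)u - f = 0 and solve a linear system for the state sensitivity. The operator and design variable below are invented for illustration.

```python
import numpy as np

# Discrete sensitivity: from (dR/du) du/dx = -dR/dx we get
#   A du/dx = -(dA/dx) u
# for the state equation A(x) u = f with A(x) = x * K.

rng = np.random.default_rng(0)
K = rng.random((5, 5)) + 5.0 * np.eye(5)     # stand-in "stiffness" operator
f = rng.random(5)

def solve(x):
    return np.linalg.solve(x * K, f)          # state equation A(x) u = f

x0 = 2.0
u = solve(x0)
dA_dx = K                                     # dA/dx for A(x) = x * K
du_dx = np.linalg.solve(x0 * K, -dA_dx @ u)   # quasi-analytical sensitivity

# Finite-difference check of the same derivative.
h = 1e-6
du_fd = (solve(x0 + h) - solve(x0 - h)) / (2 * h)
```

For this particular A(x), the exact sensitivity is du/dx = -u/x, which both computations reproduce; in a real flow code the same linear solve is performed with the flow Jacobian in place of A.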
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
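The forward-plus-reverse strategy for second derivatives has a simple structure: obtain the gradient once (as reverse-mode AD would), then differentiate that gradient along each design direction. The sketch below mimics this with a hand-coded gradient and forward differencing standing in for ADIFOR; the toy objective and design point are hypothetical.

```python
import numpy as np

def f(z):
    # Toy scalar objective of two design variables.
    return z[0] ** 2 * z[1] + np.sin(z[1])

def grad(z):
    # The reverse-mode (adjoint) result, written out by hand for this toy case.
    return np.array([2.0 * z[0] * z[1], z[0] ** 2 + np.cos(z[1])])

def hessian(z, h=1e-6):
    # Differencing the adjoint gradient mimics the forward-over-reverse
    # combination: one cheap directional pass per design variable.
    n = len(z)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(z + e) - grad(z - e)) / (2 * h)
    return H

z0 = np.array([1.5, 0.7])
H = hessian(z0)
```

The recovered Hessian is symmetric and matches the analytic second derivatives of f, which is the consistency check one would also apply to the AD-generated result.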
NASA Astrophysics Data System (ADS)
Guse, Björn; Pfannerstill, Matthias; Gafurov, Abror; Fohrer, Nicola; Gupta, Hoshin
2016-04-01
The hydrologic response variable most often used in sensitivity analysis is discharge, which provides an integrated value of all catchment processes. The typical sensitivity analysis evaluates how changes in the model parameters affect the model output. However, because discharge aggregates the effects of all hydrological processes, the sensitivity signal of a particular model parameter can be strongly masked. A more advanced form of sensitivity analysis would be achieved if we could investigate how the sensitivity of a certain modelled process variable relates to the changes in a parameter. Based on this, the controlling parameters for different hydrological components could be detected. Towards this end, we apply the approach of temporal dynamics of parameter sensitivity (TEDPAS) to calculate the daily sensitivities for different model outputs with the FAST method. The temporal variations in parameter dominance are then analysed for both the modelled hydrological components themselves, and also for the rates of change (derivatives) in the modelled hydrological components. The daily parameter sensitivities are then compared with the modelled hydrological components using regime curves. Application of this approach shows that when the corresponding modelled process is investigated instead of discharge, we obtain both an increased indication of parameter sensitivity, and also a clear pattern showing how the seasonal patterns of parameter dominance change over time for each hydrological process. By relating these results with the model structure, we can see that the sensitivity of model parameters is influenced by the function of the parameter. While capacity parameters show more sensitivity to the modelled hydrological component, flux parameters tend to have a higher sensitivity to rates of change in the modelled hydrological component. By better disentangling the information hidden in the discharge values, we can use sensitivity analyses to obtain a clearer signal.
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
Sensitivity analysis of infectious disease models: methods, advances and their application.
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V
2013-09-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods-scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method-and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
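Of the methods surveyed, Morris screening is the easiest to sketch: average absolute one-at-a-time elementary effects over many base points (a radial variant of the design is used here for brevity). The three-parameter toy model is our own, not the transmission models studied.

```python
import numpy as np

def morris_mu_star(model, n_params, n_traj=50, delta=0.25, seed=1):
    """Tiny Morris-style screening: mean absolute elementary effect (mu*)
    per parameter, using a radial one-at-a-time design."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)  # base point in the unit cube
        y0 = model(x)
        for i in rng.permutation(n_params):       # one-at-a-time steps
            x_step = x.copy()
            x_step[i] += delta
            ee[t, i] = (model(x_step) - y0) / delta
    return np.abs(ee).mean(axis=0)                # mu*: screening measure

# Toy model: strong linear x0, weak nonlinear x1, inert x2.
mu = morris_mu_star(lambda x: 5 * x[0] + 0.5 * x[1] ** 2 + 0 * x[2], 3)
```

Parameters with mu* near zero (here x2) can be fixed before running more expensive variance-based methods such as Sobol'.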
Advanced Sensitivity Analysis of the Danish Eulerian Model in Parallel and Grid Environment
NASA Astrophysics Data System (ADS)
Ostromsky, Tz.; Dimov, I.; Marinov, P.; Georgieva, R.; Zlatev, Z.
2011-11-01
A three-stage sensitivity analysis approach, based on the analysis-of-variances technique for calculating Sobol's global sensitivity indices and on computationally efficient Monte Carlo integration techniques, is considered and applied to a large-scale air pollution model, the Danish Eulerian Model. In the first stage it is necessary to carry out a set of computationally expensive numerical experiments and to extract the necessary sensitivity analysis data. The output is used to construct mesh-functions of ozone concentration ratios to be used in the next stages for evaluating the necessary variances. Here we use a version of the model specially adapted for this purpose, called SA-DEM. It has been successfully implemented and run on the most powerful parallel supercomputer in Bulgaria, the IBM Blue Gene/P. A more advanced version, capable of using efficiently the full capacity of this powerful supercomputer, is described in this paper, followed by some performance analysis of the numerical experiments. Another source of computational power for solving such a tough numerical problem is the computational grid, which is why another version of SA-DEM has been adapted to exploit efficiently the capacity of our Grid infrastructure. The numerical results from both the parallel and Grid implementations are presented, compared and analysed.
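Sobol' first-order indices of the kind computed in these stages can be estimated with a standard Saltelli-style Monte Carlo scheme, sketched here on an additive toy model (whose analytic indices are 0.8 and 0.2) rather than on the Danish Eulerian Model.

```python
import numpy as np

def sobol_first_order(model, n_params, n=20000, seed=2):
    """Monte Carlo estimate of Sobol' first-order indices using two
    independent sample matrices A and B (Saltelli scheme)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_params))
    B = rng.uniform(size=(n, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))        # total output variance
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # A with column i taken from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Additive toy model: y = x0 + 0.5*x1, with x uniform on [0, 1].
S = sobol_first_order(lambda X: X[:, 0] + 0.5 * X[:, 1], 2)
```

The scheme costs n*(n_params + 2) model evaluations, which is why the abstract's emphasis on efficient Monte Carlo integration and supercomputing matters for a large-scale model.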
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1995-01-01
Three recent developments in the sensitivity analysis for thermomechanical postbuckling response of composite panels are reviewed. The three developments are: (1) effective computational procedure for evaluating hierarchical sensitivity coefficients of the various response quantities with respect to the different laminate, layer, and micromechanical characteristics; (2) application of reduction methods to the sensitivity analysis of the postbuckling response; and (3) accurate evaluation of the sensitivity coefficients to transverse shear stresses. Sample numerical results are presented to demonstrate the effectiveness of the computational procedures presented. Some of the future directions for research on sensitivity analysis for the thermomechanical postbuckling response of composite and smart structures are outlined.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.
NASA Astrophysics Data System (ADS)
McKinney, S. W.
2015-12-01
Effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, leaving out benefits of the many unincluded methods. Many UQ and SA methods are useful for analyzing models with specific characteristics; therefore, programming these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased relative to each unique model.
Advances in Sensitivity Analysis Capabilities with SCALE 6.0 and 6.1
Rearden, Bradley T; Petrie Jr, Lester M; Williams, Mark L
2010-01-01
The sensitivity and uncertainty analysis sequences of SCALE compute the sensitivity of k-eff to each constituent multigroup cross section using perturbation theory based on forward and adjoint transport computations with several available codes. Versions 6.0 and 6.1 of SCALE, released in 2009 and 2010, respectively, include important additions to the TSUNAMI-3D sequence, which computes forward and adjoint solutions in multigroup with the KENO Monte Carlo codes. Previously, sensitivity calculations were performed with the simple and efficient geometry capabilities of KENO V.a, but now calculations can also be performed with the generalized geometry code KENO-VI. TSUNAMI-3D requires spatial refinement of the angular flux moment solutions for the forward and adjoint calculations. These refinements are most efficiently achieved with the use of a mesh accumulator. For SCALE 6.0, a more flexible mesh accumulator capability has been added to the KENO codes, enabling varying granularity of the spatial refinement to optimize the calculation for different regions of the system model. The new mesh capabilities allow the efficient calculation of larger models than were previously possible. Additional improvements in the TSUNAMI calculations were realized in the computation of implicit effects of resonance self-shielding on the final sensitivity coefficients. Multigroup resonance self-shielded cross sections are accurately computed with SCALE's robust deterministic continuous-energy treatment for the resolved and thermal energy range and with Bondarenko shielding factors elsewhere, including the unresolved resonance range. However, the sensitivities of the self-shielded cross sections to the parameters input to the calculation are quantified using only full-range Bondarenko factors.
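The relative sensitivity coefficients that TSUNAMI produces have the form S = (sigma/k)(dk/dsigma). As a minimal stand-in for the transport-based calculation, the sketch below evaluates them for a one-group infinite-medium multiplication factor, where the coefficients are known in closed form; the group constants are hypothetical.

```python
# One-group infinite-medium model: k_inf = nu*Sigma_f / (Sigma_c + Sigma_f).
# For this model the relative sensitivities are known analytically:
#   S_f = Sigma_c / Sigma_a  (fission),  S_c = -Sigma_c / Sigma_a  (capture).

nu, sig_f, sig_c = 2.43, 0.06, 0.04      # hypothetical one-group constants

def k_inf(sf, sc):
    return nu * sf / (sc + sf)

k0 = k_inf(sig_f, sig_c)
S_f = sig_c / (sig_c + sig_f)            # analytic sensitivity to Sigma_f

# Central finite-difference check of S = (sigma/k)(dk/dsigma).
h = 1e-7
S_f_fd = (sig_f / k0) * (k_inf(sig_f + h, sig_c) - k_inf(sig_f - h, sig_c)) / (2 * h)
```

In TSUNAMI the same coefficients are obtained from forward and adjoint flux solutions rather than by differencing, which is what makes full energy- and nuclide-resolved sensitivity profiles affordable.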
NASA Astrophysics Data System (ADS)
Wagener, Thorsten; Pianosi, Francesca
2016-04-01
Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.
Dreicer, J.S.
1999-07-15
During the past year this component of the Advanced Nuclear Measurements LDRD-DR has focused on emerging safeguards problems and proliferation risk by investigating problems in two domains. The first is related to the analysis, quantification, and characterization of existing inventories of fissile materials, in particular, the minor actinides (MA) formed in the commercial fuel cycle. Understanding material forms and quantities helps identify and define future measurement problems, instrument requirements, and assists in prioritizing safeguards technology development. The second problem (dissertation research) has focused on the development of a theoretical foundation for sensor array anomaly detection. Remote and unattended monitoring or verification of safeguards activities is becoming a necessity due to domestic and international budgetary constraints. However, the ability to assess the trustworthiness of a sensor array has not been investigated. This research is developing an anomaly detection methodology to assess the sensor array.
The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Skinner, Nathan P; Kurpad, Shekar N; Schmit, Brian D; Budde, Matthew D
2015-11-01
Diffusion-weighted imaging (DWI) is a powerful tool to investigate the microscopic structure of the central nervous system (CNS). Diffusion tensor imaging (DTI), a common model of the DWI signal, has a demonstrated sensitivity to detect microscopic changes as a result of injury or disease. However, DTI and other similar models have inherent limitations that reduce their specificity for certain pathological features, particularly in tissues with complex fiber arrangements. Methods such as double pulsed field gradient (dPFG) and q-vector magic angle spinning (qMAS) have been proposed to specifically probe the underlying microscopic anisotropy without interference from the macroscopic tissue organization. This is particularly important for the study of acute injury, where abrupt changes in the microscopic morphology of axons and dendrites manifest as focal enlargements known as beading. The purpose of this work was to assess the relative sensitivity of DWI measures to beading in the context of macroscopic fiber organization and edema. Computational simulations of DWI experiments in normal and beaded axons demonstrated that, although DWI models can be highly specific for the simulated pathologies of beading and volume fraction changes in coherent fiber pathways, their sensitivity to a single idealized pathology is considerably reduced in crossing and dispersed fibers. However, dPFG and qMAS have a high sensitivity for beading, even in complex fiber tracts. Moreover, in tissues with coherent arrangements, such as the spinal cord or nerve fibers in which tract orientation is known a priori, a specific dPFG sequence variant decreases the effects of edema and improves specificity for beading. Collectively, the simulation results demonstrate that advanced DWI methods, particularly those which sample diffusion along multiple directions within a single acquisition, have improved sensitivity to acute axonal injury over conventional DTI metrics and hold promise for more
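Among the conventional DTI metrics discussed, fractional anisotropy (FA) is the most common; a minimal computation from tensor eigenvalues is sketched below. The eigenvalue sets are hypothetical examples, not values from the simulations above.

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA of a diffusion tensor from its three eigenvalues: the normalized
    standard deviation of the eigenvalues, ranging 0 (isotropic) to 1."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()                                  # mean diffusivity
    return np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))

# Hypothetical eigenvalues (um^2/ms): a coherent fiber vs. near-isotropic tissue.
fa_fiber = fractional_anisotropy([1.7, 0.3, 0.3])
fa_gray = fractional_anisotropy([0.8, 0.75, 0.75])
```

Because FA collapses the full tensor to one number, pathologies such as beading in crossing fibers can leave it nearly unchanged, which motivates the dPFG and qMAS measures discussed above.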
Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.
1988-01-01
AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, the cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2-dimensional cross-section sensitivity and uncertainty code SENSIBL. In the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.
Energy Science and Technology Software Center (ESTSC)
1992-02-20
SENSIT, MUSIG, and COMSEN are a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.
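The MUSIG-style maximum likelihood estimation can be sketched with a probit model of the latent threshold population: each specimen responds when its normally distributed threshold falls below the applied stress. The data, grid ranges, and grid-search optimizer below are illustrative stand-ins, not MUSIG's actual algorithm.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(mu, sd, data):
    # data: (stress level, responded?) pairs; thresholds ~ Normal(mu, sd).
    ll = 0.0
    for level, responded in data:
        p = norm_cdf((level - mu) / sd)     # P(threshold <= level)
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        ll += math.log(p if responded else 1.0 - p)
    return ll

def fit(data):
    # Coarse grid search stands in for a proper ML optimizer.
    best = None
    for mu in (m / 100.0 for m in range(200, 801)):
        for sd in (s / 100.0 for s in range(5, 301, 5)):
            ll = log_likelihood(mu, sd, data)
            if best is None or ll > best[0]:
                best = (ll, mu, sd)
    return best[1], best[2]

# Hypothetical drop-weight results: (height, 1 = reaction observed).
data = [(3.0, 0), (3.5, 0), (4.0, 0), (4.5, 1), (4.5, 0),
        (5.0, 1), (5.0, 0), (5.5, 1), (6.0, 1), (6.5, 1)]
mu_hat, sd_hat = fit(data)
```

With mixed responses around 4.5-5.0 and clean responses elsewhere, the fitted mean lands near the midpoint of the mixed zone; confidence regions (as MUSIG computes) would come from the curvature of this same likelihood.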
Advanced protein crystal growth programmatic sensitivity study
NASA Technical Reports Server (NTRS)
1992-01-01
The purpose of this study is to define the costs of various APCG (Advanced Protein Crystal Growth) program options and to determine the parameters which, if changed, impact the costs and goals of the programs, and to what extent. This was accomplished by developing and evaluating several alternate programmatic scenarios for the microgravity Advanced Protein Crystal Growth program transitioning from the present shuttle activity to the man-tended Space Station to the permanently manned Space Station. These scenarios include selected variations in such sensitivity parameters as development and operational costs, schedules, technology issues, and crystal growth methods. This final report provides information that will aid in planning the Advanced Protein Crystal Growth Program.
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2001-01-01
The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, effects of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.
Rearden, B.T.; Anderson, W.J.; Harms, G.A.
2005-08-15
Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO2 with 235U enrichments of 5 wt% or greater. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given constraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.
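One similarity measure used in TSUNAMI-style assessments is the correlation coefficient c_k between an application's and an experiment's sensitivity profiles, weighted by cross-section covariance data. A minimal sketch, with invented three-group profiles and a diagonal covariance, is:

```python
import numpy as np

def ck(s_app, s_exp, cov):
    """Correlation of the cross-section-induced k-eff uncertainties shared
    by an application and an experiment; 1.0 means fully similar."""
    num = s_app @ cov @ s_exp
    return num / np.sqrt((s_app @ cov @ s_app) * (s_exp @ cov @ s_exp))

# Hypothetical 3-group sensitivity profiles and cross-section covariance.
cov = np.diag([0.04, 0.01, 0.02])
s_application = np.array([0.30, 0.50, 0.20])
s_experiment = np.array([0.28, 0.47, 0.25])
c = ck(s_application, s_experiment, cov)
```

Experiments with c_k close to 1 for a given application are the ones that contribute most to its validation basis, which is the kind of ranking used to prefer one experiment series over the other.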
Fan, L; He, C; Jiang, L; Bi, Y; Dong, Y; Jia, Y
2016-04-01
This review focuses on the causes of sensitive skin and elaborates on the relationship between skin sensitivity and skin irritations and allergies, which has puzzled cosmetologists. Here, an overview is presented of the research on active ingredients in cosmetic products for sensitive skin (anti-sensitive ingredients), which is followed by a discussion of their experimental efficacy. Moreover, several evaluation methods for the efficacy of anti-sensitive ingredients are classified and summarized. Through this review, we aim to provide the cosmetic industry with a better understanding of sensitive skin, which could in turn provide some theoretical guidance to the research on targeted cosmetic products. PMID:26444676
Sensitivity Analysis Without Assumptions
VanderWeele, Tyler J.
2016-01-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
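The bounding-factor arithmetic described above is compact enough to sketch directly. The two formulas below (the joint bounding factor and the minimum confounding strength, often called the E-value) follow the abstract's description; the function names are our own:

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Joint bounding factor B for an unmeasured confounder with
    exposure-confounder risk ratio rr_eu and confounder-outcome
    risk ratio rr_ud (both > 1)."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr):
    """Minimum strength that BOTH risk ratios must reach for an unmeasured
    confounder to fully explain away an observed risk ratio rr > 1."""
    return rr + math.sqrt(rr * (rr - 1.0))
```

For an observed risk ratio of 3.9, `e_value(3.9)` is about 7.3: a confounder associated with both exposure and outcome by risk ratios of at least 7.3 could explain away the estimate, while any weaker confounder could not, since its bounding factor would fall short of 3.9.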
Witholder, R.E.
1980-04-01
The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.
RESRAD parameter sensitivity analysis
Cheng, J.J.; Yu, C.; Zielen, A.J.
1991-08-01
Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F., Jr.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
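The sensitivity and elasticity quantities contrasted here can be computed with the standard eigenvector formulas of matrix population theory. The sketch below uses a made-up two-stage Leslie matrix, not the killer-whale data analyzed in the paper:

```python
import numpy as np

def dominant(M):
    """Dominant eigenvalue and its (real) eigenvector."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    return vals[k].real, vecs[:, k].real

def sensitivity_elasticity(A):
    lam, w = dominant(A)          # w: stable stage distribution (right eigenvector)
    _, v = dominant(A.T)          # v: reproductive values (left eigenvector)
    S = np.outer(v, w) / (v @ w)  # S[i, j] = d(lambda) / d(A[i, j])
    E = (A / lam) * S             # elasticities: proportional sensitivities
    return lam, S, E

# hypothetical 2-stage Leslie matrix: fecundity 1.5, juvenile survival 0.5
A = np.array([[0.0, 1.5],
              [0.5, 0.0]])
lam, S, E = sensitivity_elasticity(A)
```

A useful property visible here is that the elasticities always sum to one, which is exactly why elasticity analysis invites comparisons across demographic rates on very different absolute scales.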
Advances in Identifying Beryllium Sensitization and Disease
Middleton, Dan; Kowalski, Peter
2010-01-01
Beryllium is a lightweight metal with unique qualities related to stiffness, corrosion resistance, and conductivity. While there are many useful applications, researchers in the 1930s and 1940s linked beryllium exposure to a progressive occupational lung disease. Acute beryllium disease is a pulmonary irritant response to high exposure levels, whereas chronic beryllium disease (CBD) typically results from a hypersensitivity response to lower exposure levels. A blood test, the beryllium lymphocyte proliferation test (BeLPT), was an important advance in identifying individuals who are sensitized to beryllium (BeS) and thus at risk for developing CBD. While there is no true “gold standard” for BeS, basic epidemiologic concepts have been used to advance our understanding of the different screening algorithms. PMID:20195436
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2002-01-01
The Laser Interferometer Space Antenna (LISA) for the detection of gravitational waves is a very long baseline interferometer that will measure changes in the length of a five-million-kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far-field phase varies when the telescope parameters change as a result of small temperature changes.
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
Sensitivity analysis in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1984-01-01
Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.
Geothermal well cost sensitivity analysis: current status
Carson, C.C.; Lin, Y.T.
1980-01-01
The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost-reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent, but that in specific wells the advances considered can result in significant cost reductions.
Advanced PFBC transient analysis
White, J.S.; Bonk, D.L.
1997-05-01
Transient modeling and analysis of advanced Pressurized Fluidized Bed Combustion (PFBC) systems is a research area that is currently under investigation by the US Department of Energy's Federal Energy Technology Center (FETC). The objective of the effort is to identify key operating parameters that affect plant performance and then quantify the basic response of major sub-systems to changes in operating conditions. PC-TRAX™, a commercially available dynamic software program, was chosen and applied in this modeling and analysis effort. This paper describes the development of a series of TRAX-based transient models of advanced PFBC power plants. These power plants burn coal or other suitable fuel in a PFBC, and the high temperature flue gas supports low-Btu fuel gas or natural gas combustion in a gas turbine topping combustor. When it is utilized, the low-Btu fuel gas is produced in a bubbling bed carbonizer. High temperature, high pressure combustion products exiting the topping combustor are expanded in a modified gas turbine to generate electrical power. Waste heat from the system is used to raise and superheat steam for a reheat steam turbine bottoming cycle that generates additional electrical power. Basic control/instrumentation models were developed and modeled in PC-TRAX and used to investigate off-design plant performance. System performance for various transient conditions and control philosophies was studied.
Wu, Jenny Chia-Yun; Hakama, Matti; Anttila, Ahti; Yen, Amy Ming-Fang; Malila, Nea; Sarkeala, Tytti; Auvinen, Anssi; Chiu, Sherry Yueh-Hsia; Chen, Hsiu-Hsi
2010-07-01
Estimating the natural history parameters of breast cancer not only elucidates disease progression but also helps in assessing the impact of inter-screening interval, sensitivity, and attendance rate on reducing advanced breast cancer. We applied three-state and five-state Markov models to data from a two-yearly routine mammography screening programme in Finland between 1988 and 2000. The mean sojourn time (MST) was computed from the estimated transition parameters. Computer simulation was implemented to examine the effect of inter-screening interval, sensitivity, and attendance rate on reducing advanced breast cancers. In the three-state model, the MST was 2.02 years, and the sensitivity for detecting preclinical breast cancer was 84.83%. In the five-state model, the MST was 2.21 years for localized tumors and 0.82 years for non-localized tumors. Annual, biennial, and triennial screening programs can reduce advanced cancers by 53%, 37%, and 28%, respectively. The effectiveness of intensive screening with poor attendance is the same as that of infrequent screening with a high attendance rate. We demonstrated how to estimate the natural history parameters using a service screening program and applied these parameters to assess the impact of inter-screening interval, sensitivity, and attendance rate on reducing advanced cancer. The proposed method contributes to further cost-effectiveness analysis; however, these findings should be validated with longer-term follow-up data. PMID:20054645
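The role the inter-screening interval plays can be illustrated with a toy simulation. This is not the authors' five-state Markov model: it assumes exponential preclinical sojourn times (mean set to the abstract's three-state MST of 2.02 years) and the reported 84.83% test sensitivity, and it simply counts how often a tumor is caught at a screen before surfacing clinically:

```python
import random

def screen_detected_fraction(mst=2.02, sens=0.8483, interval=2.0, n=50_000, seed=1):
    """Fraction of cancers caught at a screen rather than surfacing clinically,
    assuming exponential sojourn times with mean mst and uniform onset."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        sojourn = rng.expovariate(1.0 / mst)  # time in the screen-detectable phase
        t = rng.uniform(0.0, interval)        # time from onset to the next screen
        while t < sojourn:                    # still preclinical when a screen occurs?
            if rng.random() < sens:
                hits += 1
                break
            t += interval                     # false negative; wait for the next round
    return hits / n
```

Shortening the interval from three years to one raises the screen-detected fraction substantially, mirroring the abstract's ordering of annual, biennial, and triennial programs.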
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. PMID:21930515
Recent advances in sensitized mesoscopic solar cells.
Grätzel, Michael
2009-11-17
-intensive high vacuum and materials purification steps that are currently employed in the fabrication of all other thin-film solar cells. Organic materials are abundantly available, so that the technology can be scaled up to the terawatt scale without running into feedstock supply problems. This gives organic-based solar cells an advantage over the two major competing thin-film photovoltaic devices, i.e., CdTe and CuIn(As)Se, which use highly toxic materials of low natural abundance. However, a drawback of the current embodiment of OPV cells is that their efficiency is significantly lower than that of single- and multicrystalline silicon as well as CdTe and CuIn(As)Se cells. Also, polymer-based OPV cells are very sensitive to water and oxygen and, hence, need to be carefully sealed to avoid rapid degradation. The research discussed within the framework of this Account aims at identifying and providing solutions to the efficiency problems that the OPV field is still facing. The discussion focuses on mesoscopic solar cells, in particular, dye-sensitized solar cells (DSCs), which have been developed in our laboratory and remain the focus of our investigations. The efficiency problem is being tackled using molecular science and nanotechnology. The sensitizer constitutes the heart of the DSC, using sunlight to pump electrons from a lower to a higher energy level, generating in this fashion an electric potential difference, which can be exploited to produce electric work. Currently, there is a quest for sensitizers that achieve effective harnessing of the red and near-IR part of sunlight, converting these photons to electricity better than the currently used generation of dyes. Progress in this area has been significant over the past few years, resulting in a boost in the conversion efficiency of the DSC that will be reviewed. PMID:19715294
Sensitive oil industry: users of advanced technology
NASA Astrophysics Data System (ADS)
Lindsey, Rhonda P.; Barnes, James L.
1999-01-01
The oil industry exemplifies mankind's search for resources in a harsh environment here on earth. Traditionally, the oil industry has created technological solutions to increasingly difficult exploration, drilling, and production activities as the need has arisen. The depths to which a well must be drilled to produce the finite hydrocarbon resources are increasing, as is the harshness of the surface environments in which oil and gas activities take place. Information is the key to success: not information that is hours old or incomplete, but 'real-time' data that responds to the variable environment downhole and allows prediction and prevention. The difference that information makes can be the difference between a successfully drilled well and a blowout that causes permanent damage to the reservoir and may reduce the value of the reserves downhole. It can likewise be the difference between recovering 22 percent of the hydrocarbon reserves in a profitable field and recovering none of them because of an uneconomic bottom line. Sensors of every type are essential in the new oil and gas industry, and they must be rugged, accurate, affordable, and long lived. This is true not just for the sophisticated majors exploring the very deep waters of the world but for the thousands of independent producers who provide the lion's share of the oil and gas produced in the US domestic market. The Department of Energy has been instrumental in keeping reserves from being lost by funding advancements in sensor technology. Due to sponsorship by the Federal Government, the combined efforts of researchers in the National Laboratories, academic institutions, and industry research centers are producing increasingly accurate tools capable of functioning in extreme conditions with economics acceptable to the accountants of the industry. Three examples of such sensors developed with Federal funding are given.
Nursing-sensitive indicators: a concept analysis
Heslop, Liza; Lu, Sai
2014-01-01
Aim: To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background: The concept of 'nursing-sensitive indicators' is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design: Concept analysis. Data sources: Using 'clinical indicators' or 'quality of nursing care' as subject headings and incorporating keyword combinations of 'acute care' and 'nurs*', CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English-language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods: A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results: The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included hours of nursing care per patient day and nurse staffing. Outcome attributes related to patient care included the prevalence of pressure ulcers, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion: This concept analysis may be used as a basis to advance understanding of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388
An analysis of sensitivity tests
Neyer, B.T.
1992-03-06
A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, μ, and the standard deviation, σ) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
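The likelihood-ratio region idea can be sketched on go/no-go data with a probit response model. This is only an illustration of the approach, not the author's algorithm: the data, grid, and resolution below are made up, and the 5.991 cutoff is the 95% chi-square critical value with two degrees of freedom:

```python
import math

def phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglik(mu, sigma, data):
    # data: (stimulus level, 1 = response / 0 = no response) pairs
    ll = 0.0
    for x, y in data:
        p = min(max(phi((x - mu) / sigma), 1e-12), 1.0 - 1e-12)
        ll += math.log(p if y else 1.0 - p)
    return ll

def lr_confidence_region(data, mus, sigmas, chi2_crit=5.991):
    """All (mu, sigma) grid points NOT rejected by the likelihood ratio test."""
    grid = [(m, s, loglik(m, s, data)) for m in mus for s in sigmas]
    ll_max = max(ll for _, _, ll in grid)
    return [(m, s) for m, s, ll in grid if 2.0 * (ll_max - ll) <= chi2_crit]

# made-up go/no-go outcomes at increasing stimulus levels
data = [(3.0, 0), (3.5, 0), (4.0, 0), (4.5, 1), (5.0, 0),
        (5.5, 1), (6.0, 1), (6.5, 1), (7.0, 1)]
region = lr_confidence_region(data,
                              [0.25 * i for i in range(41)],
                              [0.1 + 0.1 * i for i in range(30)])
```

The grid search makes the extra computational cost mentioned above concrete: every candidate (μ, σ) pair requires a full likelihood evaluation over the data.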
Pain sensitivity profiles in patients with advanced knee osteoarthritis.
Frey-Law, Laura A; Bohr, Nicole L; Sluka, Kathleen A; Herr, Keela; Clark, Charles R; Noiseux, Nicolas O; Callaghan, John J; Zimmerman, M Bridget; Rakel, Barbara A
2016-09-01
The development of patient profiles to subgroup individuals on a variety of variables has gained attention as a potential means to better inform clinical decision making. Patterns of pain sensitivity response specific to quantitative sensory testing (QST) modality have been demonstrated in healthy subjects. It has not been determined whether these patterns persist in a knee osteoarthritis population. In a sample of 218 participants, 19 QST measures along with pain, psychological factors, self-reported function, and quality of life were assessed before total knee arthroplasty. Component analysis was used to identify commonalities across the 19 QST assessments to produce standardized pain sensitivity factors. Cluster analysis then grouped individuals who exhibited similar patterns of standardized pain sensitivity component scores. The QST resulted in 4 pain sensitivity components: heat, punctate, temporal summation, and pressure. Cluster analysis resulted in 5 pain sensitivity profiles: a "low pressure pain" group, an "average pain" group, and 3 "high pain" sensitivity groups who were sensitive to different modalities (punctate, heat, and temporal summation). Pain and function differed between pain sensitivity profiles, along with sex distribution; however, no differences in osteoarthritis grade, medication use, or psychological traits were found. Residualizing QST data by age and sex resulted in similar components and pain sensitivity profiles. Furthermore, these profiles are surprisingly similar to those reported in healthy populations, which suggests that individual differences in pain sensitivity are a robust finding even in an older population with significant disease. PMID:27152688
NASA Technical Reports Server (NTRS)
Greenberg, Marc W.; Laing, William
2013-01-01
An Economic Analysis (EA) is a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. An EA helps guide decisions on the "worth" of pursuing an action that departs from status quo ... an EA is the crux of decision-support.
Advanced tests for skin and respiratory sensitization assessment.
Rovida, Costanza; Martin, Stefan F; Vivier, Manon; Weltzien, Hans Ulrich; Roggen, Erwin
2013-01-01
Sens-it-iv is an FP6 Integrated Project that finished in March 2011 after 66 months of activity, thanks to 12 million € of funding. The ultimate goal of the Sens-it-iv project was the development of a set of in vitro methods for the assessment of the skin and respiratory sensitization potential of chemicals and proteins. The level of development was intended to be at the point to enter the pre-validation phase. At the end of the project it can be concluded that the goal has been largely accomplished. Several advanced methods were evaluated extensively, and for some of them a detailed Standard Operating Procedure (SOP) was established. Other, less advanced methods also contributed to our understanding of the mechanisms driving sensitization. The present contribution, which has been prepared with the support of CAAT-Europe, represents a short summary of what was discussed during the 3-day end congress of the Sens-it-iv project in Brussels. It presents a list of methods that are ready for skin sensitization hazard assessment. Potency evaluation and the possibility of distinguishing skin from respiratory sensitizers are also well advanced. PMID:23665811
Sensitivity and Uncertainty Analysis Shell
Energy Science and Technology Software Center (ESTSC)
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using the Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyses the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
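The loose-coupling pattern described above (a sampler drives a user-supplied process model, then summarizes the outputs) can be sketched generically. SUNS itself exchanges samples through files; this toy stands in for that coupling, and the model and input distributions are made up:

```python
import random
import statistics

def model(x):
    # stand-in for the user's process code: any deterministic function of its inputs
    return 2.0 * x["a"] + x["b"] ** 2

def monte_carlo(model, samplers, n=20_000, seed=42):
    """Draw n input samples, run the loosely coupled model, summarize the output."""
    rng = random.Random(seed)
    outputs = [model({name: draw(rng) for name, draw in samplers.items()})
               for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

samplers = {
    "a": lambda r: r.gauss(1.0, 0.1),    # uncertain input a ~ Normal(1, 0.1)
    "b": lambda r: r.uniform(0.0, 1.0),  # uncertain input b ~ Uniform(0, 1)
}
mean, sd = monte_carlo(model, samplers)
```

Because the model is treated as a black box, the only integration work is deciding which inputs are sampled, which matches the shell's stated workflow.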
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
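The notion of an "insensitive" location can be made concrete with a crude Monte Carlo stand-in (this is not the authors' algorithm): seed a fault at one location, run the original and faulty versions on random inputs, and count disagreements. The example program and fault below are hypothetical:

```python
import random

def location_sensitivity(program, mutant, n=2000, seed=0):
    """Estimate how often a fault seeded at one location alters the output
    under random black box inputs."""
    rng = random.Random(seed)
    inputs = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    return sum(program(x) != mutant(x) for x in inputs) / n

program = lambda x: max(x - 5.0, 0.0)  # correct code
mutant  = lambda x: max(x - 6.0, 0.0)  # off-by-one fault at the subtraction
rate = location_sensitivity(program, mutant)
# the clamp masks the fault whenever x < 5, so the location looks "insensitive":
# random black box testing reveals this fault on only ~25% of inputs
```

A location whose rate is near zero is exactly the kind of place where a clean random-testing run gives little assurance.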
Stiff DAE integrator with sensitivity analysis capabilities
Energy Science and Technology Software Center (ESTSC)
2007-11-26
IDAS is a general-purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
Point Source Location Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Cox, J. Allen
1986-11-01
This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pill box distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.
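The standard centroid technique mentioned above is simple to sketch. The snippet below builds a synthetic Gaussian blur spot sampled on a square-pixel grid and recovers the source location from intensity-weighted pixel coordinates; the generalized centroid variant and the other blur-spot shapes are not shown:

```python
import math

def gaussian_spot(x0, y0, sigma=1.0, half=4):
    """Synthetic blur spot: a Gaussian sampled on a (2*half+1)^2 pixel grid."""
    return {(i, j): math.exp(-((i - x0) ** 2 + (j - y0) ** 2) / (2.0 * sigma ** 2))
            for i in range(-half, half + 1) for j in range(-half, half + 1)}

def centroid(pixels):
    """Standard centroid estimate of source location from pixel intensities."""
    total = sum(pixels.values())
    cx = sum(i * v for (i, j), v in pixels.items()) / total
    cy = sum(j * v for (i, j), v in pixels.items()) / total
    return cx, cy

cx, cy = centroid(gaussian_spot(0.3, -0.2))
```

Location accuracy studies like the one in this paper then repeat such an estimate over many sub-pixel source positions, blur-spot shapes, and detector geometries.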
Advances in total scattering analysis
Proffen, Thomas E; Kim, Hyunjeong
2008-01-01
In recent years the analysis of the total scattering pattern has become an invaluable tool for studying disordered crystalline and nanocrystalline materials. Traditional crystallographic structure determination is based on Bragg intensities and yields the long-range average atomic structure. By including diffuse scattering in the analysis, the local and medium-range atomic structure can be unravelled. Here we give an overview of recent experimental advances, using X-ray as well as neutron scattering, and of current trends in the modelling of total scattering data.
A review of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
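The simplest approach in this taxonomy, varying parameters one at a time, can be sketched in a few lines. The normalization below (relative output change divided by relative input change) is one common convention, and the toy model is ours:

```python
def oat_sensitivity(model, base, frac=0.01):
    """One-at-a-time screening: perturb each parameter by a fixed fraction and
    report a normalized index (relative output change / relative input change)."""
    y0 = model(base)
    indices = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1.0 + frac)
        indices[name] = ((model(perturbed) - y0) / y0) / frac
    return indices

# toy model Y = a * b**2 / c, whose exact indices are 1, 2, and -1
base = {"a": 2.0, "b": 3.0, "c": 4.0}
indices = oat_sensitivity(lambda p: p["a"] * p["b"] ** 2 / p["c"], base)
```

The index reproduces the exponents of a power-law model, which is why this normalization is convenient for ranking parameter influence; its well-known limitation is that it ignores interactions between parameters.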
Design sensitivity analysis of nonlinear structural response
NASA Technical Reports Server (NTRS)
Cardoso, J. B.; Arora, J. S.
1987-01-01
A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
Recent developments in structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Adelman, Howard M.
1988-01-01
Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs a response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out such sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • identification of unknown parameters, and • identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivity is solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to the “black box” approach above with only a couple of runs to cover a large uncertainty region. Because only small numbers of runs are required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by the coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps with respect to the physical parameters of interest, the simulation can be run at optimized time and space steps without affecting confidence in the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and achieving high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents the forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps with respect to other physical parameters of interest, the simulation is allowed
Coal Transportation Rate Sensitivity Analysis
2005-01-01
On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.
Sensitivity analysis for solar plates
NASA Technical Reports Server (NTRS)
Aster, R. W.
1986-01-01
A review of economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 was prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.
Sensitivity analysis for solar plates
NASA Astrophysics Data System (ADS)
Aster, R. W.
1986-02-01
A review of economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 was prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
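The first technique listed, locally weighted regression, can be illustrated with a minimal LOESS-style smoother written directly in NumPy. This is a generic sketch with an assumed tricube kernel and bandwidth, not the stepwise procedure used in the paper:

```python
import numpy as np

def loess_point(x0, x, y, bandwidth=0.15):
    # Local linear fit at x0 with tricube weights (a minimal LOESS step).
    u = np.clip(np.abs(x - x0) / bandwidth, 0.0, 1.0)
    w = (1.0 - u ** 3) ** 3                    # tricube kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                              # fitted value at x0

# Smooth a noisy sine curve point by point
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)
smoothed = np.array([loess_point(xi, x, y) for xi in x])
```

The nonparametric fit recovers the nonlinear input-output relationship that, as the abstract notes, linear or rank regression would misrepresent.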
Adjoint sensitivity analysis of an ultrawideband antenna
Stephanson, M B; White, D A
2011-07-28
The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.
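The appeal of the adjoint method is that one extra linear solve yields the full gradient, regardless of how many design parameters there are. A minimal sketch for a generic discretized system A(p)x = b with objective J = c^T x (a toy tridiagonal stand-in, not the finite element antenna solver):

```python
import numpy as np

def adjoint_gradient(p, n=50):
    # Toy system A(p) x = b with one design parameter p entering the diagonal.
    # Objective J(p) = c^T x.  One forward solve plus one adjoint solve gives
    # dJ/dp = -lam^T (dA/dp) x, where A^T lam = c.
    A = (np.diag(np.full(n, 2.0 + p))
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
    b = np.ones(n)
    c = np.ones(n)

    x = np.linalg.solve(A, b)        # forward solve
    lam = np.linalg.solve(A.T, c)    # adjoint solve
    dA_dp = np.eye(n)                # derivative of A with respect to p

    J = c @ x
    dJ_dp = -lam @ (dA_dp @ x)       # adjoint sensitivity formula
    return J, dJ_dp
```

A finite-difference check on J(p) confirms the adjoint gradient; with many parameters, the same two solves would still suffice, which is the method's advantage for tolerance and design studies.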
Sensitivity Analysis in the Model Web
NASA Astrophysics Data System (ADS)
Jones, R.; Cornford, D.; Boukouvalas, A.
2012-04-01
The Model Web, and in particular the uncertainty-enabled Model Web being developed in the UncertWeb project, aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular, model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web, it is likely that users will not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed, to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular, the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance-based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too quickly for this to be necessary, the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In
ADVANCED POWER SYSTEMS ANALYSIS TOOLS
Robert R. Jensen; Steven A. Benson; Jason D. Laumb
2001-08-31
The use of Energy and Environmental Research Center (EERC) modeling tools and improved analytical methods has provided key information for optimizing advanced power system design and operating conditions for efficiency, producing minimal air pollutant emissions, and utilizing a wide range of fossil fuel properties. This project was divided into four tasks: demonstration of the ash transformation model, upgrading of spreadsheet tools, enhancements to analytical capabilities using scanning electron microscopy (SEM), and improvements to the slag viscosity model. The ash transformation model, Atran, was used to predict the size and composition of ash particles, which have a major impact on their fate in the combustion system. To optimize Atran, key factors such as mineral fragmentation and coalescence and the heterogeneous and homogeneous interactions of the organically associated elements must be considered as they apply to the operating conditions. The resulting model's ash composition compares favorably to measured results. Enhancements to existing EERC spreadsheet applications included upgrading interactive spreadsheets to calculate the thermodynamic properties of fuels, reactants, products, and steam, with Newton-Raphson algorithms to perform calculations on mass, energy, and elemental balances, isentropic expansion of steam, and gasifier equilibrium conditions. Derivative calculations can be performed to estimate fuel heating values, adiabatic flame temperatures, emission factors, comparative fuel costs, and per-unit carbon taxes from fuel analyses. Using state-of-the-art computer-controlled scanning electron microscopes and associated microanalysis systems, a method was developed to determine viscosity using grey-scale binning of the acquired SEM image. The backscattered electron image can be subdivided into various grey-scale ranges that can be analyzed separately. Since the grey scale's intensity is
Recent Advances in Multidisciplinary Analysis and Optimization, part 3
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M. (Editor)
1989-01-01
This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.
Recent Advances in Multidisciplinary Analysis and Optimization, part 1
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M. (Editor)
1989-01-01
This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.
Recent Advances in Multidisciplinary Analysis and Optimization, part 2
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M. (Editor)
1989-01-01
This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.
Sensitivity analysis and application in exploration geophysics
NASA Astrophysics Data System (ADS)
Tang, R.
2013-12-01
In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that is unavoidably contaminated by various noises and is sampled at a limited number of observation sites. Furthermore, due to the inherent non-uniqueness of the geophysical inverse problem, the result is ambiguous, and it is not clear which parts of the model are well resolved by the data. The interpretation of the result is therefore difficult. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also called the Jacobian matrix or sensitivity matrix, is comprised of the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by application of the perturbation approach and reciprocity theory. By calculating the sensitivity matrix we obtain visualized sensitivity plots, which place the solution under scrutiny: the less-resolved parts are indicated and should not be relied upon in interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem in exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic, and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to find. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of different survey sensitivities with respect to the
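The sensitivity matrix described above, the partial derivatives of predicted data with respect to model parameters, can be approximated by perturbing one parameter at a time, which mirrors the "direct modeling of the theoretical response for a given model perturbation" route. The forward model below is a hypothetical stand-in, not an MT response function:

```python
import numpy as np

def sensitivity_matrix(forward, m, dm=1e-6):
    # Finite-difference Jacobian J[i, j] = d d_i / d m_j of the forward
    # operator `forward` (model parameters m -> predicted data d).
    d0 = forward(m)
    J = np.zeros((d0.size, m.size))
    for j in range(m.size):
        mp = m.copy()
        mp[j] += dm                             # perturb one parameter
        J[:, j] = (forward(mp) - d0) / dm       # column = response change
    return J

# Hypothetical toy forward model, purely for illustration:
def forward(m):
    return np.array([m[0] + m[1], m[0] * m[1], m[0] ** 2])

J = sensitivity_matrix(forward, np.array([1.0, 2.0]))
```

Columns of J with uniformly small magnitude flag parameters the data barely constrain, which is exactly the diagnostic the abstract proposes for interpretation and for joint-survey design.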
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, James C., III; Taylor, Arthur C., III; Barnwell, Richard W.; Newman, Perry A.; Hou, Gene J.-W.
1999-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics (CFD). The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid CFD algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid CFD in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach by treating initial conditions as parameters and by calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
Comparative Sensitivity Analysis of Muscle Activation Dynamics.
Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach by treating initial conditions as parameters and by calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
A numerical comparison of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequences of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is judged on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare the parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
Bayesian sensitivity analysis of a nonlinear finite element model
NASA Astrophysics Data System (ADS)
Becker, W.; Oakley, J. E.; Surace, C.; Gili, P.; Rowson, J.; Worden, K.
2012-10-01
A major problem in uncertainty and sensitivity analysis is that the computational cost of propagating probabilistic uncertainty through large nonlinear models can be prohibitive when using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator, which is a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed for a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression. The GP is particularly suited to uncertainty analysis since it is able to emulate a wide class of models, and accounts for its own emulation uncertainty. Additionally, uncertainty and sensitivity measures can be estimated analytically, given certain assumptions. The GP approach is explained in detail here, and a case study of a finite element model of an airship is used to demonstrate the method. It is concluded that the GP is a very attractive way of performing uncertainty and sensitivity analysis on large models, provided that the dimensionality is not too high.
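The emulator idea is simple to state in code: condition a Gaussian process on a small set of model runs, then query the cheap posterior mean instead of the expensive model. The sketch below uses an assumed squared-exponential kernel and a sine function standing in for the "true" model; it is a generic illustration, not the airship case study:

```python
import numpy as np

def gp_posterior_mean(X, y, Xs, length=0.15, noise=1e-6):
    # Gaussian-process emulator with a squared-exponential (RBF) kernel:
    # posterior mean at test inputs Xs, conditioned on training runs (X, y).
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(X.size)     # jitter for numerical stability
    return k(Xs, X) @ np.linalg.solve(K, y)

# Twelve "expensive" model runs are enough to emulate this smooth response
X = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * X)                    # pretend these are costly runs
Xs = np.linspace(0.0, 1.0, 101)
mean = gp_posterior_mean(X, y, Xs)
```

Once the emulator is trained, Monte Carlo uncertainty and sensitivity estimates can be drawn from it at negligible cost, which is the "greatly reduced computational cost" the abstract refers to.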
Advanced Technology Lifecycle Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Mankins, John C.
2004-01-01
Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. The integrator also estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is
Pressure-Sensitive Paints Advance Rotorcraft Design Testing
NASA Technical Reports Server (NTRS)
2013-01-01
The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center's Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure-sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. With PSP, each pixel recorded by the camera becomes an optical pressure
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
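The variance-based (Sobol) indices that VARS subsumes can be estimated with a simple pick-freeze Monte Carlo scheme. The sketch below applies that generic estimator to an assumed linear toy model; it illustrates the benchmark the abstract compares against, not the VARS or STAR-VARS algorithm itself:

```python
import numpy as np

def first_order_indices(model, d, n=20000, seed=0):
    # Pick-freeze Monte Carlo estimator of Sobol first-order indices S_i
    # for a model with d independent Uniform(0, 1) input factors.
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # share factor i with A, resample the rest
        S[i] = np.mean(yA * (model(ABi) - yB)) / var
    return S

# Linear test model: analytic indices are proportional to the squared weights
w = np.array([4.0, 2.0, 1.0])
S = first_order_indices(lambda X: X @ w, d=3)
```

For this additive model the analytic answer is w**2 / sum(w**2); the quadratic growth of model evaluations with n and d is the "poor computational efficiency" that motivates VARS.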
Pediatric Pain, Predictive Inference, and Sensitivity Analysis.
ERIC Educational Resources Information Center
Weiss, Robert
1994-01-01
Coping style and the effects of a counseling intervention on pain tolerance were studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…
Advanced materials: Information and analysis needs
Curlee, T.R.; Das, S.; Lee, R.; Trumble, D.
1990-09-01
This report presents the findings of a study to identify the types of information and analysis that are needed for advanced materials. The project was sponsored by the US Bureau of Mines (BOM). It includes a conceptual description of information needs for advanced materials and the development and implementation of a questionnaire on the same subject. This report identifies twelve fundamental differences between advanced and traditional materials and discusses the implications of these differences for data and analysis needs. Advanced and traditional materials differ significantly in terms of physical and chemical properties. Advanced material properties can be customized more easily. The production of advanced materials may differ from traditional materials in terms of inputs, the importance of by-products, the importance of different processing steps (especially fabrication), and scale economies. The potential for change in advanced materials characteristics and markets is greater and is derived from the marriage of radically different materials and processes. In addition to the conceptual study, a questionnaire was developed and implemented to assess the opinions of people who are likely users of BOM information on advanced materials. The results of the questionnaire, which was sent to about 1000 people, generally confirm the propositions set forth in the conceptual part of the study. The results also provide data on the categories of advanced materials and the types of information that are of greatest interest to potential users. 32 refs., 1 fig., 12 tabs.
Ultra-sensitive transducer advances micro-measurement range
NASA Technical Reports Server (NTRS)
Rogallo, V. L.
1964-01-01
An ultrasensitive piezoelectric transducer, that converts minute mechanical forces into electrical impulses, measures the impact of micrometeoroids against space vehicles. It has uniform sensitivity over the entire target area and a high degree of stability.
NIR sensitivity analysis with the VANE
NASA Astrophysics Data System (ADS)
Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.
2016-05-01
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
Sensitivity analysis for magnetic induction tomography.
Soleimani, Manuchehr; Jersey-Willuhn, Karen
2004-01-01
This work focuses on sensitivity analysis of magnetic induction tomography in terms of theoretical modelling and numerical implementation. We will explain a new and efficient method to determine the Jacobian matrix, directly from the results of the forward solution. The results presented are for the eddy current approximation, and are given in terms of magnetic vector potential, which is computationally convenient, and which may be extracted directly from the FE solution of the forward problem. Examples of sensitivity maps for an opposite sensor geometry are also shown. PMID:17271947
Advanced nanoscale separations and mass spectrometry for sensitive high-throughput proteomics
Shen, Yufeng; Smith, Richard D.
2005-06-01
We review recent developments in separations and mass spectrometric instrumentation for sensitive and high-throughput proteomic analyses. These efforts have been primarily focused on the development of high-efficiency (separation peak capacity of ~10^3) nanoscale liquid chromatography (nanoLC; e.g., flow rates extending down to ~20 nL/min at optimal separation linear velocities through narrow packed capillaries) in combination with advanced mass spectrometry (MS), including high-sensitivity and high-resolution Fourier transform ion cyclotron resonance (FTICR) MS. This technology enables MS analysis of low nanogram-level proteomic samples (i.e., nanoscale proteomics) with individual protein identification sensitivity at the low zeptomole level. The resultant protein measurement dynamic range can reach 10^6 for nanogram-sized proteomic samples, while more abundant proteins can be detected from complex sub-picogram proteome samples. The average proteome identification throughput using MS/MS is >200 proteins/h for a ~3 h analysis. These qualities provide the foundation for proteomics studies of single cells or small populations of cells. The instrumental robustness required for automated, high-quality routine nanoscale proteomic analyses is also discussed.
Rotary absorption heat pump sensitivity analysis
NASA Astrophysics Data System (ADS)
Bamberger, J. A.; Zalondek, F. R.
1990-03-01
Conserve Resources, Incorporated is currently developing an innovative, patented absorption heat pump. The heat pump uses rotation and thin-film technology to enhance the absorption process and to provide a more efficient, compact system. The results of a sensitivity analysis of rotary absorption heat pump (RAHP) performance, conducted to further the development of a 1-ton RAHP, are presented. The objective of the uncertainty analysis was to determine the sensitivity of RAHP steady-state performance to uncertainties in design parameters. Prior to conducting the uncertainty analysis, a computer model was developed to describe the performance of the RAHP thermodynamic cycle. The RAHP performance is based on many interrelated factors, not all of which could be investigated during the sensitivity analysis. Confirmatory measurements of LiBr/H2O properties during absorber/generator operation will provide experimental verification that the system is operating as designed. Quantities to be measured include: flow rate in the absorber and generator, film thickness, recirculation rate, and the effects of rotational speed on these parameters.
Advanced analysis methods in particle physics
Bhat, Pushpalatha C.; /Fermilab
2010-10-01
Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.
Advanced Power System Analysis Capabilities
NASA Technical Reports Server (NTRS)
1997-01-01
As a continuing effort to assist in the design and characterization of space power systems, the NASA Lewis Research Center's Power and Propulsion Office developed a powerful computerized analysis tool called System Power Analysis for Capability Evaluation (SPACE). This year, SPACE was used extensively in analyzing detailed operational timelines for the International Space Station (ISS) program. SPACE was developed to analyze the performance of space-based photovoltaic power systems such as that being developed for the ISS. It is a highly integrated tool that combines numerous factors in a single analysis, providing a comprehensive assessment of the power system's capability. Factors particularly critical to the ISS include the orientation of the solar arrays toward the Sun and the shadowing of the arrays by other portions of the station.
Diagnostic Analysis of Middle Atmosphere Climate Sensitivity
NASA Astrophysics Data System (ADS)
Zhu, X.; Cai, M.; Swartz, W. H.; Coy, L.; Yee, J.; Talaat, E. R.
2013-12-01
Both the middle atmosphere climate sensitivity associated with the cooling trend and its uncertainty due to a complex system of drivers increase with altitude. Furthermore, the combined effect of middle atmosphere cooling due to long-lived greenhouse gases and ozone is also associated with natural climate variations due to solar activity. To understand and predict climate change from a global perspective, we use the recently developed climate feedback-response analysis method (CFRAM) to identify and isolate the signals from the external forcing and from different feedback processes in the middle atmosphere climate system. By use of the JHU/APL middle atmosphere radiation algorithm, the CFRAM is applied to the model output fields of the high-altitude GEOS-5 climate model in the middle atmosphere to delineate the individual contributions of radiative forcing to middle atmosphere climate sensitivity.
NASA Astrophysics Data System (ADS)
Xue, Y.; Forman, B. A.
2013-12-01
Snow is a significant contributor to the Earth's hydrologic cycle, energy cycle, and climate system. Further, up to 80% of the freshwater supply in the western United States originates as snow (and ice). Characterization of the mass of snow, or snow water equivalent (SWE), across regional and continental scales has commonly been conducted using satellite-based passive microwave (PMW) brightness temperatures (Tb) within a SWE retrieval algorithm. However, SWE retrievals often suffer from deficiencies related to deep snow, wet snow, snow evolution, snow aging, overlying vegetation, surface and internal ice lenses, depth hoar, and sub-grid scale lakes. As an alternative to SWE retrievals, this study explores the potential for using PMW Tb and machine learning within a data assimilation framework. An artificial neural network (ANN) is presented for eventual use as an observation operator to map the land surface model states into Tb space. This study explores the sensitivity of an ANN as a computationally efficient measurement model operator for the prediction of PMW Tb across North America. The analysis employs normalized sensitivity coefficients and a one-at-a-time approach such that each of the 11 different inputs could be examined separately in order to quantify the impact of perturbations to each input on the multi-frequency, multi-polarization Tb output from the ANN. Spatiotemporal variability in the Tb predictions across regional spatial scales and seasonal timescales is investigated from 2002 to 2011. Preliminary results suggest ANN-based Tb predictions are sensitive to certain snow states, such as SWE, snow density, and snow temperature in non-vegetated or sparsely vegetated regions. Further, the sensitivity of the ANN prediction of ΔTb = Tb,18v* − Tb,36v* to changes in SWE suggests the likelihood for success when the ANN is eventually implemented into a data assimilation framework. Despite the promise in these initial results, challenges remain in enhancing ANN sensitivity
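The one-at-a-time normalized sensitivity coefficients described above can be sketched generically. The linear `predict` function below is a hypothetical stand-in for the study's ANN, not the actual model:

```python
import numpy as np

def normalized_oat_sensitivity(predict, x0, rel_step=0.01):
    """One-at-a-time normalized sensitivity coefficients
    S_i = (dy/dx_i) * (x_i / y), estimated by perturbing each input alone."""
    y0 = predict(x0)
    S = np.zeros(len(x0))
    for i in range(len(x0)):
        x = x0.copy()
        dx = rel_step * x0[i]       # relative perturbation of input i only
        x[i] += dx
        S[i] = (predict(x) - y0) / dx * (x0[i] / y0)
    return S

# Hypothetical linear stand-in mapping two snow states to a Tb in kelvin
predict = lambda x: 250.0 + 2.0 * x[0] - 0.5 * x[1]
print(normalized_oat_sensitivity(predict, np.array([10.0, 4.0])))
```

Normalizing by x_i/y makes the coefficients dimensionless, so sensitivities of inputs with very different units (e.g., SWE in mm vs. temperature in K) can be compared directly.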
Lee, Mei Ching; Hinderer, Katherine A; Friedmann, Erika
2015-08-01
Ethnic minority groups are less engaged than Caucasian American adults in advance care planning (ACP). Knowledge deficits, language, and culture are barriers to ACP. Limited research exists on ACP and advance directives in the Chinese American adult population. Using a pre-posttest, repeated measures design, the current study explored the effectiveness of a nurse-led, culturally sensitive ACP seminar for Chinese American adults on (a) knowledge, completion, and discussion of advance directives; and (b) the relationship between demographic variables, advance directive completion, and ACP discussions. A convenience sample of 72 urban, community-dwelling Chinese American adults (mean age=61 years) was included. Knowledge, advance directive completion, and ACP discussions increased significantly after attending the nurse-led seminar (p<0.01). Increased age correlated with advance directive completion and ACP discussions; female gender correlated with ACP discussions. Nursing education in a community setting increased advance directive knowledge and ACP engagement in Chinese American adults. PMID:25912237
Sensitivity analysis of coexistence in ecological communities: theory and application.
Barabás, György; Pásztor, Liz; Meszéna, Géza; Ostling, Annette
2014-12-01
Sensitivity analysis, the study of how ecological variables of interest respond to changes in external conditions, is a theoretically well-developed and widely applied approach in population ecology. Though the application of sensitivity analysis to predicting the response of species-rich communities to disturbances also has a long history, derivation of a mathematical framework for understanding the factors leading to robust coexistence has only been a recent undertaking. Here we suggest that this development opens up a new perspective, providing advances ranging from the applied to the theoretical. First, it yields a framework to be applied in specific cases for assessing the extinction risk of community modules in the face of environmental change. Second, it can be used to determine trait combinations allowing for coexistence that is robust to environmental variation, and limits to diversity in the presence of environmental variation, for specific community types. Third, it offers general insights into the nature of communities that are robust to environmental variation. We apply recent community-level extensions of mathematical sensitivity analysis to example models for illustration. We discuss the advantages and limitations of the method, and some of the empirical questions the theoretical framework could help answer. PMID:25252135
The Theoretical Foundation of Sensitivity Analysis for GPS
NASA Astrophysics Data System (ADS)
Shikoska, U.; Davchev, D.; Shikoski, J.
2008-10-01
In this paper the equations of sensitivity analysis are derived and theoretical underpinnings for the analyses are established. The paper presents land-vehicle navigation concepts and a definition for sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis for the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations of sensitivity analysis are re-derived for a linearized Kalman filter.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivity are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Advanced Placement: Model Policy Components. Policy Analysis
ERIC Educational Resources Information Center
Zinth, Jennifer
2016-01-01
Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
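One simple way to screen dispersed inputs for influence on requirement satisfaction is to compare success rates between the lower and upper halves of each input. This is a sketch of that screening idea on synthetic data, not the Critical Factors Tool itself:

```python
import numpy as np

def split_success_sensitivity(X, passed):
    """For each dispersed input, compare the requirement success rate when the
    input is below vs. above its median; a large gap flags a driving factor."""
    med = np.median(X, axis=0)
    gaps = []
    for i in range(X.shape[1]):
        lo = passed[X[:, i] <= med[i]].mean()  # success rate, lower half
        hi = passed[X[:, i] > med[i]].mean()   # success rate, upper half
        gaps.append(abs(hi - lo))
    return np.array(gaps)

# Synthetic Monte Carlo run: only input 0 drives requirement satisfaction
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(5000, 3))
passed = X[:, 0] < 0.5
print(split_success_sensitivity(X, passed))
```

Inputs whose split produces a large gap are candidates for the deeper pairwise-interaction analysis the abstract describes.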
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (ESTSC)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
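Ranking inputs by their contribution to output variance after a Monte Carlo run is commonly approximated with squared correlation coefficients. SATOOL's internal method is not described here, so the following is only a generic post-Monte-Carlo sketch:

```python
import numpy as np

def rank_variance_contributors(X, y):
    """Rank inputs by squared Pearson correlation with the output: a common
    proxy for each input's share of the output variance in a linear setting."""
    r2 = np.array([np.corrcoef(X[:, i], y)[0, 1] ** 2
                   for i in range(X.shape[1])])
    order = np.argsort(r2)[::-1]  # most important input first
    return order, r2

# Synthetic Monte Carlo samples: input 0 dominates the output variance
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=2000)
order, r2 = rank_variance_contributors(X, y)
print(order, r2)
```

Tightening the distribution of the top-ranked ("sensitive") inputs is what reduces the output variance, exactly as the abstract describes.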
Advances in Mid-Infrared Spectroscopy for Chemical Analysis.
Haas, Julian; Mizaikoff, Boris
2016-06-12
Infrared spectroscopy in the 3-20 μm spectral window has evolved from a routine laboratory technique into a state-of-the-art spectroscopy and sensing tool by benefitting from recent progress in increasingly sophisticated spectra acquisition techniques and advanced materials for generating, guiding, and detecting mid-infrared (MIR) radiation. Today, MIR spectroscopy provides molecular information with trace to ultratrace sensitivity, fast data acquisition rates, and high spectral resolution catering to demanding applications in bioanalytics, for example, and to improved routine analysis. In addition to advances in miniaturized device technology without sacrificing analytical performance, selected innovative applications for MIR spectroscopy ranging from process analysis to biotechnology and medical diagnostics are highlighted in this review. PMID:27070183
Scalable analysis tools for sensitivity analysis and UQ (3160) results.
Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.
2009-09-01
The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
Recent advances in morphological cell image analysis.
Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei
2012-01-01
This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have contributed greatly to a doctor's diagnostic results. Morphological cell analysis addresses cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings on the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
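First-order Sobol indices of the kind used in rankings like the one above are commonly estimated with pick-and-freeze sampling. A minimal sketch, using a Saltelli-style estimator on a toy additive model (the model and sample sizes are assumptions for illustration):

```python
import numpy as np

def sobol_first_order(f, dim, n=50000, seed=0):
    """Estimate first-order Sobol indices with the pick-and-freeze estimator
    S_i = mean(f(B) * (f(AB_i) - f(A))) / Var(f), where AB_i is matrix A
    with column i replaced by column i of B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # "freeze" all columns of A except column i
        S[i] = np.mean(fB * (f(ABi) - fA)) / V
    return S

# Additive toy model with uniform(0,1) inputs: analytic indices are (1, 4, 9)/14
f = lambda X: X[:, 0] + 2.0 * X[:, 1] + 3.0 * X[:, 2]
print(sobol_first_order(f, dim=3))
```

For an additive model the first-order indices sum to one; interaction effects in a real transport model would show up as a shortfall from one.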
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary-layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
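Sensitivity coefficients of the kind LSENS reports are solutions of auxiliary ODEs integrated alongside the kinetics. A toy sketch for first-order decay dy/dt = -k*y, whose sensitivity s = dy/dk obeys ds/dt = -y - k*s (forward Euler here for self-containment, not LSENS's actual integrator):

```python
import numpy as np

def decay_with_sensitivity(y0, k, t_end, n=100000):
    """Forward-Euler integration of dy/dt = -k*y together with its
    rate-coefficient sensitivity s = dy/dk, which obeys ds/dt = -y - k*s."""
    dt = t_end / n
    y, s = y0, 0.0  # sensitivity starts at zero: y(0) does not depend on k
    for _ in range(n):
        dy = -k * y
        ds = -y - k * s
        y += dt * dy
        s += dt * ds
    return y, s

# Analytic solution: y = y0*exp(-k*t), s = -t*y0*exp(-k*t)
y, s = decay_with_sensitivity(y0=1.0, k=2.0, t_end=1.0)
print(y, s)
```

For stiff combustion mechanisms a production code uses implicit solvers, but the structure is the same: one extra linear ODE per (variable, parameter) pair.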
Multicomponent dynamical nucleation theory and sensitivity analysis.
Kathmann, Shawn M; Schenter, Gregory K; Garrett, Bruce C
2004-05-15
Vapor to liquid multicomponent nucleation is a dynamical process governed by a delicate interplay between condensation and evaporation. Since the population of the vapor phase is dominated by monomers at reasonable supersaturations, the formation of clusters is governed by monomer association and dissociation reactions. Although there is no intrinsic barrier in the interaction potential along the minimum energy path for the association process, the formation of a cluster is impeded by a free energy barrier. Dynamical nucleation theory provides a framework in which equilibrium evaporation rate constants can be calculated and the corresponding condensation rate constants determined from detailed balance. The nucleation rate can then be obtained by solving the kinetic equations. The rate constants governing the multistep kinetics of multicomponent nucleation including sensitivity analysis and the potential influence of contaminants will be presented and discussed. PMID:15267849
Sensitivity analysis of periodic matrix population models.
Caswell, Hal; Shyu, Esther
2012-12-01
Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments. PMID:23316494
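The perturbation analysis of a periodic matrix product can be approximated numerically. The sketch below uses finite differences on the dominant eigenvalue of a hypothetical two-season model, rather than the exact matrix-calculus formulas of the paper:

```python
import numpy as np

def annual_growth_rate(mats):
    """Dominant eigenvalue of the periodic product A_k ... A_2 A_1."""
    prod = np.eye(mats[0].shape[0])
    for M in mats:
        prod = M @ prod
    return np.max(np.abs(np.linalg.eigvals(prod)))

def entry_sensitivity(mats, phase, i, j, eps=1e-6):
    """Finite-difference estimate of d(lambda)/d(a_ij) for the matrix acting
    at one phase of the cycle, with the output evaluated over the whole cycle."""
    base = annual_growth_rate(mats)
    pert = [M.copy() for M in mats]
    pert[phase][i, j] += eps
    return (annual_growth_rate(pert) - base) / eps

# Hypothetical two-season stage-structured model; perturb summer fecundity a_01
summer = np.array([[0.0, 2.0], [0.5, 0.8]])
winter = np.array([[0.0, 0.5], [0.3, 0.9]])
print(entry_sensitivity([summer, winter], phase=0, i=0, j=1))
```

This captures the key point of the abstract: the perturbation is applied at one phase, but the sensitivity is measured on an output (annual growth rate) computed over the entire cycle.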
Sensitivity analysis of distributed volcanic source inversion
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation of analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
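As an illustration of the correlation-based measures named above, here is a minimal sketch of Pearson and Spearman sensitivity screening. The three-parameter toy model and sample size below are invented stand-ins for the 17 BISON inputs and 24 responses, not the benchmark's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical stand-in for the 300 Dakota samples: three input
# parameters and one response (e.g., a fuel centerline temperature).
n = 300
x = rng.uniform(size=(n, 3))
y = 5.0 * x[:, 0] + 0.5 * x[:, 1] ** 3 + rng.normal(0.0, 0.1, n)

# Correlation-based sensitivity measures for each input parameter:
# Pearson captures linear association, Spearman monotone association.
pearson = [stats.pearsonr(x[:, j], y)[0] for j in range(3)]
spearman = [stats.spearmanr(x[:, j], y)[0] for j in range(3)]
for j in range(3):
    print(f"param {j}: Pearson={pearson[j]:+.2f}  Spearman={spearman[j]:+.2f}")
```

An input that does not enter the model (the third column here) should show a correlation near zero, which is the screening signal such studies use to discard unimportant parameters.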
Design, analysis and test verification of advanced encapsulation systems
NASA Technical Reports Server (NTRS)
Garcia, A., III
1982-01-01
An analytical methodology for advanced encapsulation designs was developed. From these methods, design sensitivities were established for the development of photovoltaic module criteria and the definition of needed research tasks. Analytical models were developed to perform optical, thermal, and electrical analyses on candidate encapsulation systems. From these analyses, several candidate systems were selected for qualification testing. Additionally, test specimens of various types were constructed and tested to determine the validity of the analysis methodology developed. Identified deficiencies and/or discrepancies between the analytical models and relevant test data were corrected, improving the prediction capability of the analytical models. Encapsulation engineering generalities, principles, and design aids for photovoltaic module designers were generated.
Longitudinal Genetic Analysis of Anxiety Sensitivity
ERIC Educational Resources Information Center
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
Advanced Analysis Methods in High Energy Physics
Pushpalatha C. Bhat
2001-10-03
During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.
The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization
NASA Technical Reports Server (NTRS)
1990-01-01
The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the output variable of interest. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining these indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS), and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
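The first-order indices mentioned above can be estimated by a pick-and-freeze scheme. The sketch below uses a deliberately simple two-input linear stand-in for the lifetime-energy-production model (not the paper's actual model), so that the estimates can be checked against the analytic values:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for lifetime energy production as a function of two
    # uncertain inputs; the coefficients are invented for illustration.
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

n = 100_000
a = rng.uniform(size=(n, 2))
b = rng.uniform(size=(n, 2))
y_a = model(a)
var_y = y_a.var()

# Pick-and-freeze estimator of the first-order index S_i: keep
# column i from sample A and redraw the other column from sample B,
# then S_i = Cov(Y_A, Y_AB) / Var(Y_A).
s = []
for i in range(2):
    ab = b.copy()
    ab[:, i] = a[:, i]
    s.append(np.cov(y_a, model(ab))[0, 1] / var_y)
print(s)  # analytic values for this linear model: S_1 = 0.9, S_2 = 0.1
```

For a linear model with independent uniform inputs the indices are simply the variance shares a_i^2 / (a_1^2 + a_2^2), which is why 0.9 and 0.1 are the reference values here.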
Sensitivity Analysis of Wing Aeroelastic Responses
NASA Technical Reports Server (NTRS)
Issac, Jason Cherian
1995-01-01
Design for prevention of aeroelastic instability (that is, the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model that incorporates first-order shear deformation theory is then examined so that it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight
2014-01-01
Background: Germ cell tumors (GCTs) are the most common solid tumors in adolescent and young adult males (ages 15 to 35 years) and remain one of the most curable of all solid malignancies. However, a subset of patients will have tumors that are refractory to standard chemotherapy agents. The management of this refractory population remains challenging, and approximately 400 patients continue to die every year of this refractory disease in the United States. Methods: Given the preclinical evidence implicating vascular endothelial growth factor (VEGF) signaling in the biology of germ cell tumors, we hypothesized that the vascular endothelial growth factor receptor (VEGFR) inhibitor sunitinib (Sutent) may possess important clinical activity in the treatment of this refractory disease. We proposed a Phase II efficacy study of sunitinib in seminomatous and non-seminomatous metastatic GCTs refractory to first-line chemotherapy treatment (ClinicalTrials.gov identifier: NCT00912912). Next-generation targeted exome sequencing using the HiSeq 2000 (Illumina Inc., San Diego, CA, USA) was performed on the tumor sample of the unusual responder. Results: Five patients were enrolled in this Phase II study. Among them, we report here the clinical course of a patient (Patient #5) who had an exceptional response to sunitinib. Next-generation sequencing to understand this patient's response to sunitinib revealed RET amplification and EGFR and KRAS amplification as relevant aberrations. OncoScan MIP arrays were employed to validate the copy number analysis, which confirmed RET gene amplification. Conclusion: Sunitinib conferred clinical benefit to this heavily pre-treated patient. Next-generation sequencing of this 'exceptional responder' identified the first reported case of a RET amplification as a potential basis of sensitivity to sunitinib (a VEGFR2/PDGFRβ/c-kit/FLT3/RET/CSF1R inhibitor) in a patient with refractory germ cell tumor. Further characterization of GCT patients using
Multitarget global sensitivity analysis of n-butanol combustion.
Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T
2013-05-01
A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis. PMID:23530815
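A cheap surrogate for the multitarget idea described above is to compute standardized regression coefficients for each target and compare the resulting rankings. The four "rate constants", two targets, and all coefficients below are invented for illustration and are not taken from the butanol mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate: four perturbed "rate constants" and two targets
# (an ignition-delay-like and a species-peak-like response).
n, k = 2000, 4
x = rng.normal(size=(n, k))
targets = {
    "ignition_delay": 2.0 * x[:, 0] - 0.3 * x[:, 1] + rng.normal(0, 0.1, n),
    "species_peak":   0.2 * x[:, 0] + 1.5 * x[:, 2] + rng.normal(0, 0.1, n),
}

# Standardized regression coefficients per target: different targets
# can rank different reactions as most influential, which is exactly
# why a multitarget analysis validates more of the mechanism.
top = {}
for name, y in targets.items():
    beta, *_ = np.linalg.lstsq(x, y - y.mean(), rcond=None)
    src = beta * x.std(axis=0) / y.std()
    top[name] = int(np.argmax(np.abs(src)))
    print(name, "-> most influential input:", top[name])
```

Here the ignition-delay-like target is dominated by input 0 while the species target is dominated by input 2, mimicking how species targets expose reactions that ignition-delay targets alone would miss.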
Advanced Power Plant Development and Analysis Methodologies
A.D. Rao; G.S. Samuelsen; F.L. Robson; B. Washom; S.G. Berenyi
2006-06-30
Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into advanced power plant systems with goals of achieving high efficiency and minimized environmental impact while using fossil fuels. These power plant concepts include 'Zero Emission' power plants and the 'FutureGen' H2 co-production facilities. The study is broken down into three phases. Phase 1 of this study consisted of utilizing advanced technologies that are expected to be available in the 'Vision 21' time frame such as mega scale fuel cell based hybrids. Phase 2 includes current state-of-the-art technologies and those expected to be deployed in the nearer term such as advanced gas turbines and high temperature membranes for separating gas species and advanced gasifier concepts. Phase 3 includes identification of gas turbine based cycles and engine configurations suitable to coal-based gasification applications and the conceptualization of the balance of plant technology, heat integration, and the bottoming cycle for analysis in a future study. Also included in Phase 3 is the task of acquiring/providing turbo-machinery in order to gather turbo-charger performance data that may be used to verify simulation models as well as establishing system design constraints. The results of these various investigations will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.
Wear-Out Sensitivity Analysis Project Abstract
NASA Technical Reports Server (NTRS)
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
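The Weibull Monte Carlo workflow described above can be sketched in a few lines. All numbers here (scale, fleet size, spares, mission length) are made up for illustration and are not actual ISS ORU data; the shape parameter plays the role of the wear-out characteristic being varied:

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_of_sufficiency(shape, scale_hours, n_units, n_spares,
                        mission_hours, n_sims=20_000):
    """Monte Carlo estimate of the probability that the spares pool
    covers every failure over the mission (illustrative model only)."""
    # Draw a Weibull life for every unit in every simulated mission.
    lives = scale_hours * rng.weibull(shape, size=(n_sims, n_units))
    failures = (lives < mission_hours).sum(axis=1)
    return (failures <= n_spares).mean()

# Sweep the wear-out (shape) parameter with the scale held fixed;
# shape = 1 is the memoryless case, shape > 1 models wear-out.
p = {b: prob_of_sufficiency(b, scale_hours=50_000, n_units=10,
                            n_spares=2, mission_hours=20_000)
     for b in (1.0, 2.0, 4.0)}
print(p)
```

With the mission shorter than the characteristic life, raising the shape parameter concentrates failures later in life, so the probability of sufficiency rises; with a mission longer than the characteristic life the trend reverses, which is the kind of shift the internship study quantified.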
Sensitivity analysis of volume scattering phase functions.
Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael
2016-08-01
To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions from VSF data derived from measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m^{-3}. PMID:27505819
Tilt-Sensitivity Analysis for Space Telescopes
NASA Technical Reports Server (NTRS)
Papalexandris, Miltiadis; Waluschka, Eugene
2003-01-01
A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
Sensitivity analysis of hydrodynamic stability operators
NASA Technical Reports Server (NTRS)
Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.
1992-01-01
The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
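The link between non-normality and eigenvalue sensitivity can be made concrete with the resolvent norm, since the epsilon-pseudospectrum is exactly the region where ||(zI - A)^{-1}|| exceeds 1/epsilon. The 2x2 matrices below are minimal invented examples, not the flow operators studied in the paper:

```python
import numpy as np

# The resolvent norm ||(zI - A)^{-1}|| = 1 / sigma_min(zI - A) measures
# spectral sensitivity near z: for non-normal operators it can be huge
# even at points well separated from every eigenvalue.
def resolvent_norm(a, z):
    m = z * np.eye(a.shape[0]) - a
    return 1.0 / np.linalg.svd(m, compute_uv=False)[-1]

normal = np.diag([1.0, 2.0])
nonnormal = np.array([[1.0, 100.0],
                      [0.0, 2.0]])   # same eigenvalues, highly non-normal

z = 1.5                               # midway between the two eigenvalues
r_normal = resolvent_norm(normal, z)
r_nonnormal = resolvent_norm(nonnormal, z)
print(r_normal, r_nonnormal)
```

For the normal matrix the resolvent norm at z is just 1/dist(z, spectrum) = 2, while the non-normal matrix with identical eigenvalues gives a value two orders of magnitude larger: a tiny perturbation can move its eigenvalues far across the complex plane, mirroring the large transient growth noted in the abstract.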
Sensitivity analysis of textural parameters for vertebroplasty
NASA Astrophysics Data System (ADS)
Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.
2002-05-01
Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed from a patient's CT data. Gray-level run length analysis was used to extract textural information of the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean width of the trabeculae, area of the intertrabecular space, and area of the trabeculae. Finally, the area of the intertrabecular space was selected as the parameter for estimating an optimal bone cement volume, and a strong linear relationship was found between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2
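The linear-fit-plus-correlation step the abstract describes can be sketched directly. The synthetic X and Y data below only mimic the reported strong linear trend; the noise level and data range are invented, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "intertrabecular-space area" X versus injected cement
# volume Y, following a linear trend similar in slope to the one the
# study reports (coefficients invented for illustration).
x = rng.uniform(2000.0, 6000.0, size=40)
y = 0.0017 * x - 2.0 + rng.normal(0.0, 0.5, size=40)

slope, intercept = np.polyfit(x, y, 1)   # least-squares line Y = aX + b
r = np.corrcoef(x, y)[0, 1]              # Pearson correlation coefficient
print(f"fit: Y = {slope:.5f} X + {intercept:.2f},  r = {r:.3f}")
```

A sensitivity analysis like the paper's then amounts to repeating this fit while sweeping the preprocessing parameters (threshold, rolling-ball radius, ROI size) and watching how r moves.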
Topographic Avalanche Risk: DEM Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Nazarkulova, Ainura; Strobl, Josef
2015-04-01
GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges, and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches, parametrizations, and calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges and probabilities as well as fuzzy expressions and fractal metrics. As a specific approach, the work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study, which focuses on DEM characteristics, factors like land cover, meteorological recordings, and snowpack structure and transformation are kept constant, i.e., not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of
[Ecological sensitivity of Shanghai City based on GIS spatial analysis].
Cao, Jian-jun; Liu, Yong-juan
2010-07-01
In this paper, five sensitivity factors affecting the eco-environment of Shanghai City, i.e., rivers and lakes, historical relics and forest parks, geological disasters, soil pollution, and land use, were selected, and their weights were determined by analytic hierarchy process. Combining with GIS spatial analysis technique, the sensitivities of these factors were classified into four grades, i.e., highly sensitive, moderately sensitive, low sensitive, and insensitive, and the spatial distribution of the ecological sensitivity of Shanghai City was figured out. There existed a significant spatial differentiation in the ecological sensitivity of the City, and the insensitive, low sensitive, moderately sensitive, and highly sensitive areas occupied 37.07%, 5.94%, 38.16%, and 18.83%, respectively. Some suggestions on the City's zoning protection and construction were proposed. This study could provide scientific references for the City's environmental protection and economic development. PMID:20879541
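The analytic hierarchy process step mentioned above derives factor weights as the principal eigenvector of a reciprocal pairwise-comparison matrix. The 3x3 comparison values below are invented for illustration (the Shanghai study used five factors and its own judgements):

```python
import numpy as np

# AHP: weights = normalized principal eigenvector of a reciprocal
# pairwise-comparison matrix A, where A[i, j] says how much more
# important factor i is than factor j (values are illustrative).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized factor weights

# Consistency index CI = (lambda_max - n) / (n - 1); the consistency
# ratio CR = CI / RI should stay below 0.1 (RI = 0.58 for n = 3).
ci = (eigvals.real[k] - 3) / (3 - 1)
cr = ci / 0.58
print(np.round(w, 3), f"CR = {cr:.3f}")
```

The weights then feed the GIS overlay: each factor raster is scored, multiplied by its weight, and summed before the result is sliced into the four sensitivity grades.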
Recent advances in flow injection analysis.
Trojanowicz, Marek; Kołacińska, Kamila
2016-04-01
A dynamic development of the methodologies of analytical flow injection measurements during the four decades since their invention has reinforced the solid position of flow analysis in the arsenal of techniques and instrumentation of contemporary chemical analysis. With the number of published scientific papers exceeding 20,000, and advanced instrumentation available for environmental, food, and pharmaceutical analysis, flow analysis is well established as an extremely vital field of modern flow chemistry, developed simultaneously with methods of chemical synthesis carried out under flow conditions. This review is based on almost 300 original papers published mostly in the last decade, with special emphasis on novel achievements from the most recent 2-3 years in order to indicate current development trends of this methodology. Besides the evolution of the design of whole measuring systems, including especially new applications of various detection methods, several implications of progress in nanotechnology and the miniaturization of measuring systems for applications in different fields of modern chemical analysis are also discussed. PMID:26906258
Advancing Behavior Analysis in Zoos and Aquariums.
Maple, Terry L; Segura, Valerie D
2015-05-01
Zoos, aquariums, and other captive animal facilities offer promising opportunities to advance the science and practice of behavior analysis. Zoos and aquariums are necessarily concerned with the health and well-being of their charges and are held to a high standard by their supporters (visitors, members, and donors), organized critics, and the media. Zoos and aquariums offer unique venues for teaching and research and a locus for expanding the footprint of behavior analysis. In North America, Europe, and the UK, formal agreements between zoos, aquariums, and university graduate departments have been operating successfully for decades. To expand on this model, it will be necessary to help zoo and aquarium managers throughout the world to recognize the value of behavior analysis in the delivery of essential animal health and welfare services. Academic institutions, administrators, and invested faculty should consider the utility of training students to meet the growing needs of applied behavior analysis in zoos and aquariums and other animal facilities such as primate research centers, sanctuaries, and rescue centers. PMID:27540508
Extended forward sensitivity analysis of one-dimensional isothermal flow
Johnson, M.; Zhao, H.
2013-07-01
Sensitivity analysis and uncertainty quantification are important parts of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities for 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of the time step compared with other physical parameters, the simulation can run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another isothermal flow test problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
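The core mechanic of forward sensitivity analysis is to augment the state with its derivative with respect to a parameter and integrate both together. A deliberately minimal scalar sketch (explicit Euler on dy/dt = -k*y, standing in for the paper's 1-D flow equations):

```python
# Forward sensitivity for dy/dt = -k*y: the sensitivity s = dy/dk
# satisfies the companion ODE ds/dt = -y - k*s (differentiate the
# state equation with respect to k). Integrate both with explicit
# Euler; this is an illustrative sketch, not the paper's solver.
def integrate(k, y0, t_end, dt):
    y, s = y0, 0.0          # s(0) = 0 since y(0) does not depend on k
    n_steps = int(round(t_end / dt))
    for _ in range(n_steps):
        dy = -k * y
        ds = -y - k * s
        y += dt * dy
        s += dt * ds
    return y, s

y, s = integrate(k=0.5, y0=1.0, t_end=2.0, dt=1e-4)
# Analytic check: y(t) = exp(-k t), dy/dk = -t exp(-k t).
print(y, s)
```

Time step sensitivity works the same way with dt treated as the parameter, which is what lets the method replace separate time-step convergence studies.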
Advanced digital I&C systems in nuclear power plants: Risk- sensitivities to environmental stressors
Hassan, M.; Vesely, W.E.
1996-06-01
Microprocessor-based advanced digital systems are being used for upgrading analog instrumentation and control (I&C) systems in nuclear power plants (NPPs) in the United States. A concern with using such advanced systems for safety-related applications in NPPs is the limited experience with this equipment in these environments. In this study, we investigate the risk effects of environmental stressors by quantifying the plant's risk-sensitivities to them. The risk-sensitivities are changes in plant risk caused by the stressors, and are quantified by estimating their effects on I&C failure occurrences and the consequent increase in risk in terms of core damage frequency (CDF). We used available data, including military and NPP operating experience, on the effects of environmental stressors on the reliability of digital I&C equipment. The methods developed are applied to determine and compare risk-sensitivities to temperature, humidity, vibration, EMI (electromagnetic interference) from lightning, and smoke as stressors in an example plant using a PRA (Probabilistic Risk Assessment). Uncertainties in the estimates of the stressor effects on the equipment's reliability are expressed in terms of ranges for risk-sensitivities. The results show that environmental stressors can potentially cause a significant increase in I&C contributions to the CDF. Further, considerable variations can be expected in some stressor effects, depending on where the equipment is located.
Ju, Myeong Jin; Hong, Young-Joo; Makita, Shuichi; Lim, Yiheng; Kurokawa, Kazuhiro; Duan, Lian; Miura, Masahiro; Tang, Shuo; Yasuno, Yoshiaki
2013-08-12
An advanced version of Jones matrix optical coherence tomography (JMT) is demonstrated for Doppler and polarization sensitive imaging of the posterior eye. JMT is capable of providing localized flow tomography by Doppler detection and investigating the birefringence property of tissue through a three-dimensional (3-D) Jones matrix measurement. Owing to an incident polarization multiplexing scheme based on passive optical components, this system is stable, safe in a clinical environment, and cost effective. Since the properties of this version of JMT provide intrinsic compensation for system imperfection, the system is easy to calibrate. Compared with the previous version of JMT, this advanced JMT achieves a sufficiently long depth measurement range for clinical cases of posterior eye disease. Furthermore, a fine spectral shift compensation method based on the cross-correlation of calibration signals was devised for stabilizing the phase of OCT, which enables a high sensitivity Doppler OCT measurement. In addition, a new theory of JMT which integrates the Jones matrix measurement, Doppler measurement, and scattering measurement is presented. This theory enables a sensitivity-enhanced scattering OCT and high-sensitivity Doppler OCT. These new features enable the application of this system to clinical cases. A healthy subject and a geographic atrophy patient were measured in vivo, and simultaneous imaging of choroidal vasculature and birefringence structures are demonstrated. PMID:23938857
NASTRAN flutter analysis of advanced turbopropellers
NASA Technical Reports Server (NTRS)
Elchuri, V.; Smith, G. C. C.
1982-01-01
An existing capability developed to conduct modal flutter analysis of tuned bladed-shrouded discs in NASTRAN was modified and applied to investigate the subsonic unstalled flutter characteristics of advanced turbopropellers. The modifications pertain to the inclusion of oscillatory modal aerodynamic loads of blades with large (backward and forward) variable sweep. The two dimensional subsonic cascade unsteady aerodynamic theory was applied in a strip theory manner with appropriate modifications for the sweep effects. Each strip is associated with a chord selected normal to any spanwise reference curve such as the blade leading edge. The stability of three operating conditions of a 10-bladed propeller is analyzed. Each of these operating conditions is iterated once to determine the flutter boundary. A 5-bladed propeller is also analyzed at one operating condition to investigate stability. Analytical results obtained are in very good agreement with those from wind tunnel tests.
Advanced development in chemical analysis of Cordyceps.
Zhao, J; Xie, J; Wang, L Y; Li, S P
2014-01-01
Cordyceps sinensis, also called DongChongXiaCao (winter worm summer grass) in Chinese, is a well-known and valued traditional Chinese medicine. In 2006, we wrote a review discussing the markers and analytical methods used in quality control of Cordyceps (J. Pharm. Biomed. Anal. 41 (2006) 1571-1584). Since then this review has been cited more than 60 times, which suggests that scientists have great interest in this special herbal material. Indeed, the number of publications related to Cordyceps after 2006 is about twice that of the two decades before 2006, according to data from the Web of Science. Therefore, it is necessary to review and discuss the advances in chemical analysis of Cordyceps since then. PMID:23688494
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Automated sensitivity analysis using the GRESS language
Pin, F.G.; Oblow, E.M.; Wright, R.Q.
1986-04-01
An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies.
Sensitivity analysis of Stirling engine design parameters
Naso, V.; Dong, W.; Lucentini, M.; Capata, R.
1998-07-01
In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle, and dead volume ratio) have to be assumed; in practice, it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations of these parameters.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, with the simplifications remaining consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Towards More Efficient and Effective Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2014-05-01
Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches to sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of one or more model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response and may lead to different, sometimes even conflicting, conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach to global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
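As a concrete illustration of the variance-based end of this spectrum (a generic textbook pick-freeze estimator, not the new approach the abstract announces), a first-order Sobol index sketch on an invented linear test model might look like this:

```python
import numpy as np

def sobol_first_order(f, n_dim, n_samples=20000, seed=0):
    """First-order Sobol indices via a pick-freeze (Saltelli-type) estimator:
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), where AB_i is sample A
    with column i swapped in from sample B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_dim))
    B = rng.random((n_samples, n_dim))
    yA, yB = f(A), f(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(n_dim)
    for i in range(n_dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "freeze" input i, resample the rest
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

# Invented test model: x0 dominates, x2 is inert (exact S = 16/17, 1/17, 0).
model = lambda x: 4.0 * x[:, 0] + x[:, 1]
S = sobol_first_order(model, 3)
```

For this additive model the first-order indices sum to one; disagreements between such global indices and local derivatives are exactly the kind of conflicting conclusions the abstract discusses.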
Fuzzy sensitivity analysis for reliability assessment of building structures
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2016-06-01
The mathematical concept of fuzzy sensitivity analysis, which studies the effects of the fuzziness of input fuzzy numbers on the fuzziness of the output fuzzy number, is described in the article. The output fuzzy number is evaluated using Zadeh's general extension principle. The contribution of stochastic and fuzzy uncertainty in reliability analysis tasks of building structures is discussed. The algorithm of fuzzy sensitivity analysis is an alternative to stochastic sensitivity analysis in tasks in which input and output variables are considered as fuzzy numbers.
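A minimal sketch of how Zadeh's extension principle is evaluated in practice via alpha-cuts; the triangular fuzzy numbers and the bending-moment function below are hypothetical, and the endpoint search is exact only for monotone mappings (in general each cut requires a global optimisation over the cut box):

```python
from itertools import product

def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, m, b)."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def extension_principle(f, fuzzy_inputs, alphas):
    """Propagate fuzzy inputs through f level by level: at each alpha,
    take min/max of f over the endpoints of the input cuts."""
    out = []
    for alpha in alphas:
        cuts = [alpha_cut(t, alpha) for t in fuzzy_inputs]
        vals = [f(*xs) for xs in product(*cuts)]
        out.append((alpha, min(vals), max(vals)))
    return out

# Hypothetical midspan bending moment M = q * L**2 / 8 with fuzzy
# load q ~ (9, 10, 12) and fuzzy span L ~ (3.9, 4.0, 4.1).
M = lambda q, L: q * L**2 / 8
levels = extension_principle(M, [(9.0, 10.0, 12.0), (3.9, 4.0, 4.1)], [0.0, 0.5, 1.0])
```

The width of the output cut at each alpha level, relative to the widths of the input cuts, is the kind of fuzziness-to-fuzziness effect that fuzzy sensitivity analysis studies.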
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (the Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages of, and synergies between, two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over
Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics
Shi, Tujin; Su, Dian; Liu, Tao; Tang, Keqi; Camp, David G.; Qian, Weijun; Smith, Richard D.
2012-04-01
Selected reaction monitoring (SRM)—also known as multiple reaction monitoring (MRM)—has emerged as a promising high-throughput targeted protein quantification technology for candidate biomarker verification and systems biology applications. A major bottleneck for current SRM technology, however, is insufficient sensitivity for detecting, e.g., low-abundance biomarkers likely present in the pg/mL to low ng/mL range in human blood plasma or serum, or extremely low-abundance signaling proteins in cells or tissues. Herein we review recent advances in methods and technologies, including front-end immunoaffinity depletion, fractionation, selective enrichment of target proteins/peptides or their posttranslational modifications (PTMs), as well as advances in MS instrumentation, which have significantly enhanced the overall sensitivity of SRM assays and enabled the detection of low-abundance proteins at low to sub-ng/mL levels in human blood plasma or serum. General perspectives on the potential of achieving sufficient sensitivity for the detection of pg/mL level proteins in plasma are also discussed.
Advanced Coal Wind Hybrid: Economic Analysis
Phadke, Amol; Goldman, Charles; Larson, Doug; Carr, Tom; Rath, Larry; Balash, Peter; Wan, Yih-Huei
2008-11-28
Growing concern over climate change is prompting new thinking about the technologies used to generate electricity. In the future, it is possible that new government policies on greenhouse gas emissions may favor electric generation technology options that release zero or low levels of carbon emissions. The Western U.S. has abundant wind and coal resources. In a world with carbon constraints, the future of coal for new electrical generation is likely to depend on the development and successful application of new clean coal technologies with near zero carbon emissions. This scoping study explores the economic and technical feasibility of combining wind farms with advanced coal generation facilities and operating them as a single generation complex in the Western US. The key questions examined are whether an advanced coal-wind hybrid (ACWH) facility provides sufficient advantages through improvements to the utilization of transmission lines and the capability to firm up variable wind generation for delivery to load centers to compete effectively with other supply-side alternatives in terms of project economics and emissions footprint. The study was conducted by an Analysis Team that consists of staff from the Lawrence Berkeley National Laboratory (LBNL), National Energy Technology Laboratory (NETL), National Renewable Energy Laboratory (NREL), and Western Interstate Energy Board (WIEB). We conducted a screening level analysis of the economic competitiveness and technical feasibility of ACWH generation options located in Wyoming that would supply electricity to load centers in California, Arizona or Nevada. Figure ES-1 is a simple stylized representation of the configuration of the ACWH options. The ACWH consists of a 3,000 MW coal gasification combined cycle power plant equipped with carbon capture and sequestration (G+CC+CCS plant), a fuel production or syngas storage facility, and a 1,500 MW wind plant. The ACWH project is connected to load centers by a 3,000 MW
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
Sensitivity analysis and optimization of the nuclear fuel cycle
Passerini, S.; Kazimi, M. S.; Shwageraus, E.
2012-07-01
A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: capacity factors of the fuel cycle facilities, spent fuel cooling time, thermal reprocessing introduction date, and in-core and out-of-core TRU inventory requirements for the recycling technology. An optimization scheme for the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single- and multi-variable and single- and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)
Partial Differential Algebraic Sensitivity Analysis Code
Energy Science and Technology Software Center (ESTSC)
1995-05-15
PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical, or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed with members of order 0 or 1 in t, and 0, 1, or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.
Aero-Structural Interaction, Analysis, and Shape Sensitivity
NASA Technical Reports Server (NTRS)
Newman, James C., III
1999-01-01
A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
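The step-size independence comes from using complex variables: perturbing the input along the imaginary axis yields the derivative from the imaginary part alone, with no subtractive cancellation, so the step can be made arbitrarily small. A minimal sketch on an illustrative scalar function (not the ARW-2 aero-structural code):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Complex-step derivative: df/dx ~ Im(f(x + i*h)) / h.
    No difference of nearly equal numbers is formed, so the result is
    accurate to machine precision even for tiny h (analytic f only)."""
    return f(x + 1j * h).imag / h

f = lambda x: np.exp(x) * np.sin(x)
d = complex_step(f, 0.7)   # exact derivative is exp(x) * (sin(x) + cos(x))
```

A forward difference at the same h = 1e-30 would return exactly zero, since x + h rounds back to x in double precision; this is the failure mode the complex-step approach sidesteps.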
Automating sensitivity analysis of computer models using computer calculus
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
Automated procedure for sensitivity analysis using computer calculus
Oblow, E.M.
1983-05-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach was found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
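GRESS adds derivative-taking to existing FORTRAN at the source level; the same "computer calculus" idea can be sketched in a few lines of operator overloading (forward-mode automatic differentiation), where each value carries its derivative along with it. The model function below is invented:

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together,
    mimicking the derivative-augmented code a tool like GRESS generates."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule applied per operation
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(x):
    # ordinary-looking model code; the derivative rides along automatically
    return 3 * x * x + 2 * x + 1

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = model(x)         # y.val = f(2), y.der = f'(2) = 6x + 2
```

Because the chain rule is applied exactly per operation, there is no finite-difference truncation error, which is what removes the tedious hand-derivation of sensitivity equations.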
Advances in radiation biology: Relative radiation sensitivities of human organ systems. Volume 12
Lett, J.T.; Altman, K.I.; Ehmann, U.K.; Cox, A.B.
1987-01-01
This volume is a thematically focused issue of Advances in Radiation Biology. The topic surveyed is the relative radiosensitivity of human organ systems. Topics considered include the relative radiosensitivities of the thymus, spleen, and lymphohemopoietic systems; relative radiosensitivities of the small and large intestine; relative radiosensitivities of the oral cavity, larynx, pharynx, and esophagus; relative radiation sensitivity of the integumentary system; dose response of the epidermal, microvascular, and dermal populations; relative radiosensitivity of the human lung; relative radiosensitivity of fetal tissues; and tolerance of the central and peripheral nervous systems to therapeutic irradiation.
Parametric cost analysis for advanced energy concepts
Not Available
1983-10-01
This report presents results of an exploratory study to develop parametric cost estimating relationships for advanced fossil-fuel energy systems. The first of two tasks was to develop a standard Cost Chart of Accounts to serve as a basic organizing framework for energy systems cost analysis. The second task included development of selected parametric cost estimating relationships (CERs) for individual elements (or subsystems) of a fossil fuel plant, nominally for the Solvent-Refined Coal (SRC) process. Parametric CERs are presented for the following elements: coal preparation, coal slurry preparation, dissolver (reactor); gasification; oxygen production; acid gas/CO/sub 2/ removal; shift conversion; cryogenic hydrogen recovery; and sulfur removal. While the nominal focus of the study was on the SRC process, each of these elements is found in other fossil fuel processes. Thus, the results of this effort have broader potential application. However, it should also be noted that the CERs presented in this report are based upon a limited data base. Thus, they are applicable over a limited range of values (of the independent variables) and for a limited set of specific technologies (e.g., the gasifier CER is for the multi-train, Koppers-Totzek process). Additional work is required to extend the range of these CERs. 16 figures, 13 tables.
A topological approach to computer-aided sensitivity analysis
NASA Technical Reports Server (NTRS)
Chan, S. P.; Munoz, R. M.
1971-01-01
Sensitivities of an arbitrary system are calculated using a general-purpose digital computer with available software packages for transfer function analysis. Sensitivity shows how element variation within the system affects system performance. A signal flow graph illustrates topological system behavior and the relationships among parameters in the system.
Global and Local Sensitivity Analysis Methods for a Physical System
ERIC Educational Resources Information Center
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how different input variations of a mathematical model influence the variability of its output. In this paper, we review the principles of global and local sensitivity analyses of a complex black-box system. A simulated application case is given at the end of the paper to compare both approaches.
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Bencic, T.; Sullivan, J. P.
1999-01-01
This article reviews recent advances in and applications of pressure-sensitive paint in aerodynamic testing. Emphasis is placed on important technical aspects of pressure-sensitive paint, including instrumentation, data processing, and uncertainty analysis.
Recent advances in (soil moisture) triple collocation analysis
NASA Astrophysics Data System (ADS)
Gruber, A.; Su, C.-H.; Zwieback, S.; Crow, W.; Dorigo, W.; Wagner, W.
2016-03-01
To date, triple collocation (TC) analysis is one of the most important methods for the global-scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method. Different notations used to formulate the TC problem are shown to be mathematically identical. While many studies have investigated issues related to possible violations of the underlying assumptions, only a few TC modifications have been proposed to mitigate the impact of these violations. Moreover, assumptions that are often understood as limitations unique to TC analysis are shown to be common to other conventional performance metrics as well. Noteworthy advances in TC analysis have been made in the way error estimates are presented, by moving from absolute error variance estimates to signal-to-noise ratio (SNR) metrics. Here we review existing error presentations and propose the combined investigation of the SNR (expressed in logarithmic units), the unscaled error variances, and the soil moisture sensitivities of the data sets as an optimal strategy for the evaluation of remotely sensed soil moisture data sets.
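In the standard covariance notation reviewed there, the error variances of three collocated data sets with mutually independent errors follow directly from the pairwise covariances. A minimal sketch on synthetic data (the noise levels are invented, and the rescaling and SNR steps of a full TC analysis are omitted):

```python
import numpy as np

def triple_collocation(x, y, z):
    """Covariance-notation TC: error variances of three collocated,
    independently-errored measurements of the same signal.
    E.g. err_x = Var(x) - Cov(x,y)*Cov(x,z)/Cov(y,z)."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

# Synthetic "soil moisture": one truth, three noisy observations.
rng = np.random.default_rng(1)
truth = rng.standard_normal(200000)
x = truth + 0.3 * rng.standard_normal(truth.size)   # true err var 0.09
y = truth + 0.5 * rng.standard_normal(truth.size)   # true err var 0.25
z = truth + 0.7 * rng.standard_normal(truth.size)   # true err var 0.49
ex, ey, ez = triple_collocation(x, y, z)
```

The logarithmic SNR metric advocated in the abstract then follows as 10*log10(signal variance / error variance) for each data set.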
The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...
Sensitivity Analysis in Complex Plasma Chemistry Models
NASA Astrophysics Data System (ADS)
Turner, Miles
2015-09-01
The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 ``Biomedical Applications of Atmospheric Pressure Plasmas.''
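A radial one-at-a-time sketch of the elementary-effects (Morris) screen conveys the idea; the three-parameter toy model below is invented, standing in for a 400-reaction chemistry:

```python
import numpy as np

def morris_screening(f, n_dim, n_base=50, delta=0.5, seed=0):
    """Radial elementary-effects screening on the unit hypercube:
    mu_star (mean |EE|) ranks influence; sigma flags nonlinearity/interaction."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_dim)]
    for _ in range(n_base):
        x = rng.random(n_dim) * (1 - delta)   # leave room for a +delta step
        base = f(x)
        for i in range(n_dim):
            x2 = x.copy()
            x2[i] += delta
            effects[i].append((f(x2) - base) / delta)
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e) for e in effects])
    return mu_star, sigma

# Invented toy model: k0 dominant, k1 nearly inert, k2 interacts with k0.
f = lambda x: 10 * x[0] + 0.1 * x[1] + x[0] * x[2]
mu_star, sigma = morris_screening(f, 3)
```

Rate constants with small mu_star can be screened out cheaply; the handful with large mu_star are where attention (and measurement effort) pays off.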
Selecting step sizes in sensitivity analysis by finite differences
NASA Technical Reports Server (NTRS)
Iott, J.; Haftka, R. T.; Adelman, H. M.
1985-01-01
This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
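The step-size problem the FD algorithm addresses is the trade-off between truncation error (growing like h) and subtractive-cancellation roundoff (growing like eps/h) in a forward difference, which puts the near-optimum around sqrt(eps) for well-scaled functions. A minimal empirical sketch with an illustrative function (not the swept-wing model):

```python
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

# Total error ~ (h/2)|f''| + 2*eps*|f|/h, minimised near h* ~ sqrt(eps)
# for a well-scaled f; too-small steps are dominated by roundoff.
f = np.exp                      # f = f' = f'', exact derivative at x=1 is e
hs = 10.0 ** np.arange(-1.0, -15.0, -1.0)
errs = np.array([abs(forward_diff(f, 1.0, h) - np.e) for h in hs])
best_h = hs[np.argmin(errs)]
```

A trial sweep like this is the trial-and-error effort the abstract mentions; the FD algorithm replaces it with an automated estimate, extended in the paper to several derivatives at once.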
Parameter sensitivity analysis for pesticide impacts on honeybee colonies
We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...
SYSTEMATIC SENSITIVITY ANALYSIS OF AIR QUALITY SIMULATION MODELS
This report reviews and assesses systematic sensitivity and uncertainty analysis methods for applications to air quality simulation models. The discussion of the candidate methods presents their basic variables, mathematical foundations, user motivations and preferences, computer...
On the sensitivity analysis of porous material models
NASA Astrophysics Data System (ADS)
Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel
2012-11-01
Porous materials are used in many vibroacoustic applications. Different available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, the Champoux-Allard model employs five parameters. In this paper, the sensitivity of this model to its parameters as a function of frequency is investigated. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent parametric hierarchy is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with a rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam in order to illustrate the impact of reducing the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including the mechanical effects of the frame, and conclusions are drawn through numerical simulations.
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle
2014-10-01
This report summarizes the results of a NEAMS project focused on sensitivity analysis of the heat transfer model for the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the associated responses to the modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
Advanced Materials and Solids Analysis Research Core (AMSARC)
The Advanced Materials and Solids Analysis Research Core (AMSARC), centered at the U.S. Environmental Protection Agency's (EPA) Andrew W. Breidenbach Environmental Research Center in Cincinnati, Ohio, is the foundation for the Agency's solids and surfaces analysis capabilities. ...
Fixed point sensitivity analysis of interacting structured populations.
Barabás, György; Meszéna, Géza; Ostling, Annette
2014-03-01
Sensitivity analysis of structured populations is a useful tool in population ecology. Historically, methodological development of sensitivity analysis has focused on the sensitivity of eigenvalues in linear matrix models, and on single populations. More recently there have been extensions to the sensitivity of nonlinear models, and to communities of interacting populations. Here we derive a fully general mathematical expression for the sensitivity of equilibrium abundances in communities of interacting structured populations. Our method yields the response of an arbitrary function of the stage class abundances to perturbations of any model parameters. As a demonstration, we apply this sensitivity analysis to a two-species model of ontogenetic niche shift where each species has two stage classes, juveniles and adults. In the context of this model, we demonstrate that our theory is quite robust to violating two of its technical assumptions: the assumption that the community is at a point equilibrium and the assumption of infinitesimally small parameter perturbations. Our results on the sensitivity of a community are also interpreted in a niche theoretical context: we determine how the niche of a structured population is composed of the niches of the individual states, and how the sensitivity of the community depends on niche segregation. PMID:24368160
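The core of such an equilibrium sensitivity is the implicit function theorem: if the community dynamics satisfy f(x*, θ) = 0 at equilibrium, then dx*/dθ = -(∂f/∂x)⁻¹ ∂f/∂θ. A minimal numerical sketch on a scalar toy model (logistic growth with a mortality term; the model and parameter values are illustrative, not the paper's two-species example):

```python
import numpy as np

def equilibrium_sensitivity(J, P):
    """Implicit-function-theorem sensitivity of an equilibrium x* of
    f(x, theta) = 0:  dx*/dtheta = -(df/dx)^(-1) (df/dtheta)."""
    return -np.linalg.solve(J, P)

# Toy model: logistic growth with mortality,
# f(x; r, K, m) = r*x*(1 - x/K) - m*x, equilibrium x* = K*(1 - m/r).
r, K, m = 1.0, 100.0, 0.2
x_star = K * (1.0 - m / r)                      # equilibrium abundance, 80.0
J = np.array([[r - 2.0 * r * x_star / K - m]])  # df/dx evaluated at x*
P = np.array([[r * x_star**2 / K**2]])          # df/dK evaluated at x*
dx_dK = equilibrium_sensitivity(J, P)[0, 0]     # analytic value: 1 - m/r
```

For a community of interacting structured populations, J becomes the full Jacobian over all stage-class abundances and P stacks the partials with respect to each model parameter; the same linear solve then yields all sensitivities at once.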
Sensitivity of the Advanced LIGO detectors at the beginning of gravitational wave astronomy
NASA Astrophysics Data System (ADS)
Martynov, D. V.; Hall, E. D.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, R. A.; Anderson, S. B.; Arai, K.; Arain, M. A.; Aston, S. M.; Austin, L.; Ballmer, S. W.; Barbet, M.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Barton, M. A.; Bartos, I.; Batch, J. C.; Bell, A. S.; Belopolski, I.; Bergman, J.; Betzwieser, J.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Black, E.; Blair, C. D.; Bogan, C.; Bork, R.; Bridges, D. O.; Brooks, A. F.; Celerier, C.; Ciani, G.; Clara, F.; Cook, D.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Damjanic, M.; Dannenberg, R.; Danzmann, K.; Costa, C. F. Da Silva; Daw, E. J.; DeBra, D.; DeRosa, R. T.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fair, H.; Feldbaum, D.; Fisher, R. P.; Foley, S.; Frede, M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Galdi, V.; Giaime, J. A.; Giardina, K. D.; Gleason, J. R.; Goetz, R.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Grote, H.; Guido, C. J.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hammond, G.; Hanks, J.; Hanson, J.; Hardwick, T.; Harry, G. M.; Heefner, J.; Heintze, M. C.; Heptonstall, A. W.; Hoak, D.; Hough, J.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jones, R.; Kandhasamy, S.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kells, W.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kokeyama, K.; Korth, W. Z.; Kuehn, G.; Kwee, P.; Landry, M.; Lantz, B.; Le Roux, A.; Levine, B. M.; Lewis, J. B.; Lhuillier, V.; Lockerbie, N. A.; Lormand, M.; Lubinski, M. J.; Lundgren, A. P.; MacDonald, T.; MacInnis, M.; Macleod, D. M.; Mageswaran, M.; Mailand, K.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Massinger, T. J.; Matichard, F.; Mavalvala, N.; McCarthy, R.; McClelland, D. 
E.; McCormick, S.; McIntyre, G.; McIver, J.; Merilh, E. L.; Meyer, M. S.; Meyers, P. M.; Miller, J.; Mittleman, R.; Moreno, G.; Mueller, C. L.; Mueller, G.; Mullavey, A.; Munch, J.; Nuttall, L. K.; Oberling, J.; O'Dell, J.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Osthelder, C.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Patrick, Z.; Pele, A.; Penn, S.; Phelps, M.; Pickenpack, M.; Pierro, V.; Pinto, I.; Poeld, J.; Principe, M.; Prokhorov, L.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. J.; Radkins, H.; Raffai, P.; Ramet, C. R.; Reed, C. M.; Reid, S.; Reitze, D. H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Sannibale, V.; Savage, R. L.; Schofield, R. M. S.; Schultz, B.; Schwinberg, P.; Sellers, D.; Sevigny, A.; Shaddock, D. A.; Shao, Z.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, J. R.; Smith, M. R.; Smith-Lefebvre, N. D.; Sorazu, B.; Staley, A.; Stein, A. J.; Stochino, A.; Strain, K. A.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vargas, M.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Waldman, S. J.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Welborn, T.; Weßels, P.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Zhang, L.; Zucker, M. E.; Zweizig, J.
2016-06-01
The Laser Interferometer Gravitational Wave Observatory (LIGO) consists of two widely separated 4 km laser interferometers designed to detect gravitational waves from distant astrophysical sources in the frequency range from 10 Hz to 10 kHz. The first observation run of the Advanced LIGO detectors started in September 2015 and ended in January 2016. A strain sensitivity of better than 10⁻²³/√Hz was achieved around 100 Hz. Understanding both the fundamental and the technical noise sources was critical for increasing the astrophysical strain sensitivity. The average distance at which coalescing binary black hole systems with individual masses of 30 M⊙ could be detected above a signal-to-noise ratio (SNR) of 8 was 1.3 Gpc, and the range for binary neutron star inspirals was about 75 Mpc. With respect to the initial detectors, the observable volume of the Universe increased by factors of 69 and 43, respectively. These improvements helped Advanced LIGO to detect the gravitational wave signal from the binary black hole coalescence known as GW150914.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters, such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components, are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
On the sensitivity analysis of separated-loop MRS data
NASA Astrophysics Data System (ADS)
Behroozmand, A.; Auken, E.; Fiandaca, G.
2013-12-01
In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying a separate MRS receiver system from the transmitter system, compare the parameter determination of the separated-loop with the conventional coincident-loop MRS data. MRS has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content. The method works on the physical principle of NMR, during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop (typically 25 - 100 m in side length/diameter) deployed on the surface, which typically acts as both transmitter and receiver, the so-called coincident-loop configuration. An alternating current is passed through the deployed loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop as a decaying NMR signal called the free induction decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; the product of current amplitude and transmitting pulse length), during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record the SNMR signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one current research activity is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the sensitivity kernels of different separated-loop MRS soundings are studied and compared with
Monden, Masayo; Koyama, Hidenori; Otsuka, Yoshiko; Morioka, Tomoaki; Mori, Katsuhito; Shoji, Takuhito; Mima, Yohei; Motoyama, Koka; Fukumoto, Shinya; Shioi, Atsushi; Emoto, Masanori; Yamamoto, Yasuhiko; Yamamoto, Hiroshi; Nishizawa, Yoshiki; Kurajoh, Masafumi; Yamamoto, Tetsuya; Inaba, Masaaki
2013-01-01
Receptor for advanced glycation end products (RAGE) has been shown to be involved in adiposity as well as atherosclerosis, even in nondiabetic conditions. In this study, we examined the mechanisms underlying how RAGE regulates adiposity and insulin sensitivity. RAGE overexpression in 3T3-L1 preadipocytes using adenoviral gene transfer accelerated adipocyte hypertrophy, whereas inhibition of RAGE by small interfering RNA significantly decreased adipocyte hypertrophy. Furthermore, double knockdown of high mobility group box-1 and S100b, both of which are RAGE ligands endogenously expressed in 3T3-L1 cells, also canceled RAGE-mediated adipocyte hypertrophy, implicating a fundamental role of ligand-RAGE ligation. Adipocyte hypertrophy induced by RAGE overexpression is associated with suppression of glucose transporter type 4 and adiponectin mRNA expression, attenuated insulin-stimulated glucose uptake, and attenuated insulin-stimulated signaling. Toll-like receptor (Tlr)2 mRNA, but not Tlr4 mRNA, is rapidly upregulated by RAGE overexpression, and inhibition of Tlr2 almost completely abrogates RAGE-mediated adipocyte hypertrophy. Finally, RAGE−/− mice exhibited significantly lower body weight, epididymal fat weight, and epididymal adipocyte size, as well as higher serum adiponectin levels and higher insulin sensitivity, than wild-type mice. RAGE deficiency is associated with early suppression of Tlr2 mRNA expression in adipose tissues. Thus, RAGE appears to be involved in mouse adipocyte hypertrophy and insulin sensitivity, in which Tlr2 regulation may partly play a role. PMID:23011593
Nam, U W; Lee, S G; Bak, J G; Moon, M K; Cheon, J K; Lee, C H
2007-10-01
A versatile time-to-digital-converter-based data acquisition system for a segmented position-sensitive detector has been developed. The data acquisition system was successfully demonstrated with a two-segment position-sensitive detector. It will be developed further to support a multisegmented position-sensitive detector, to improve the photon count rate capability of the advanced x-ray imaging crystal spectrometer system. PMID:17979416
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational-method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index, or functional, entails the coupled solution of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
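The costate (adjoint) idea can be sketched in a discrete toy analogue: for a linear state equation A x = b and a scalar functional J = gᵀx, one adjoint solve Aᵀλ = g gives the sensitivity of J to any parameter entering the operator. This is not the continuous Euler-equation adjoint of the paper; the matrices and the parameter dependence below are assumed purely for illustration:

```python
import numpy as np

# Toy state equation A x = b with scalar objective J = g^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
g = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)        # state solve
lam = np.linalg.solve(A.T, g)    # single adjoint (costate) solve

# Parameter p enters through the operator: A(p) = A + p*dA (assumed).
dA = np.array([[1.0, 0.0], [0.0, 0.0]])
dJdp = -lam @ (dA @ x)           # adjoint formula: dJ/dp = -lam^T (dA/dp) x
```

The key efficiency property mirrors the continuous case: one extra (adjoint) solve yields dJ/dp for arbitrarily many parameters p, whereas a finite-difference approach would need one state solve per parameter.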
Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Hall, Kenneth C.
1994-01-01
During the first year of the project, we developed a theoretical analysis, and wrote a computer code based on it, to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparison with 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that, using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect the relative efficiencies to be even larger for the finer grids which will be used to compute high-frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderately sized design perturbations. (3) We found that the sensitivity analysis provided important
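The LU-reuse trick described above can be sketched for a generic linear system: once A = LU is factored for the nominal solve, each sensitivity right-hand side costs only a forward/back substitution. The matrices and the parameter derivatives dA/dp and db/dp below are assumed for illustration, not taken from the cascade solver:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

lu, piv = lu_factor(A)       # factor once, as the nominal solve already does
x = lu_solve((lu, piv), b)   # nominal solution

# Differentiating A(p) x(p) = b(p) gives  A (dx/dp) = db/dp - (dA/dp) x,
# which is solved by reusing the saved LU factors (assumed dA, db).
dA = np.array([[1.0, 0.0], [0.0, 0.0]])
db = np.array([0.0, 1.0])
dx = lu_solve((lu, piv), db - dA @ x)
```

For an n-by-n system the factorization costs O(n³) but each additional sensitivity costs only O(n²), which is why saving the LU factors makes the geometric sensitivities nearly free relative to the nominal solve.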
The GOES-R Advanced Baseline Imager: polarization sensitivity and potential impacts
NASA Astrophysics Data System (ADS)
Pearlman, Aaron J.; Cao, Changyong; Wu, Xiangqian
2015-09-01
In contrast to the National Oceanic and Atmospheric Administration's (NOAA's) current geostationary imagers for operational weather forecasting, the next generation imager, the Advanced Baseline Imager (ABI) aboard the Geostationary Operational Environmental Satellite R-Series (GOES-R), will have six reflective solar bands - five more than currently available. These bands will be used for applications such as aerosol retrievals, which are influenced by polarization effects. These effects are determined by two factors: instrument polarization sensitivity and the polarization states of the observations. The former is measured as part of the pre-launch testing program performed by the instrument vendor. We analyzed the results of the pre-launch polarization sensitivity measurements of the 0.47 μm and 0.64 μm channels and used them in conjunction with simulated scene polarization states to estimate potential on-orbit radiometric impacts. The pre-launch test setups involved illuminating the ABI with an integrating sphere through either one or two polarizers. The measurement with one (rotating) polarizer yields the degree of linear polarization of ABI, and the measurements using two polarizers (one rotating and one fixed) characterized the non-ideal properties of the polarizer. To estimate the radiometric performance impacts from the instrument polarization sensitivity, we simulated polarized scenes using a radiative transfer code and accounted for the instrument polarization sensitivity over its field of regard. The results show the variation in the polarization impacts over the day and by regions of the full disk can reach up to 3.2% for the 0.47μm channel and 4.8% for the 0.64μm channel. Geostationary orbiters like the ABI give the unique opportunity to show these impacts throughout the day compared to low earth orbiters, which are more limited to certain times of day. This work may enhance the ability to diagnose anomalies on-orbit.
Sensitivity analysis and scale issues in landslide susceptibility mapping
NASA Astrophysics Data System (ADS)
Catani, Filippo; Lagomarsino, Daniela; Segoni, Samuele; Tofani, Veronica
2013-04-01
Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM), there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors of paramount importance, such as the survey scale of the landslide conditioning variables (LCVs), the resolution of the mapping unit (MUR), and the optimal number and ranking of LCVs, has never been investigated analytically, especially on large datasets. In this paper we attempt this experimentation, concentrating on the impact of model tuning choices on the final result rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF) classification family to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types, and scales. RF classification and regression methods offer a very flexible environment for testing model parameters and mapping hypotheses, allowing for a direct quantification of variable importance. The model choice is, in itself, quite innovative, since it is the first time that such a technique, widely used in remote sensing for image classification, is used in this form for the production of an LSM. Random forest is a combination of (usually binary) tree predictors that relates a set of contributing factors to the actual landslide occurrence. Being nonparametric, the model can incorporate a range of numeric or categorical data layers, and there is no need to select unimodal training data. Many classical and widely acknowledged landslide predisposing factors have been taken into account, mainly related to: the lithology, the land use, the land surface geometry (derived from the DTM), and the structural and anthropogenic constraints. In addition, for each factor we also included in the parameter set the standard deviation (for numerical variables) or the variety (for categorical ones). The use of
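The RF-with-variable-importance workflow can be sketched on synthetic conditioning variables rather than real landslide data (the toy relationship making slope the dominant driver of occurrence is an assumption made for the example):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
slope = rng.uniform(0.0, 45.0, n)    # slope angle in degrees
litho = rng.integers(0, 4, n)        # lithology class (categorical, coded)
landuse = rng.integers(0, 3, n)      # land use class (categorical, coded)
noise = rng.normal(0.0, 1.0, n)      # irrelevant distractor variable

# Assumed toy relationship: occurrence probability driven mainly by slope.
p = 1.0 / (1.0 + np.exp(-0.15 * (slope - 25.0)))
y = rng.random(n) < p

X = np.column_stack([slope, litho, landuse, noise])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_  # built-in variable-importance ranking
```

In an actual LSM application, `predict_proba` applied to every mapping unit yields the susceptibility map, and `feature_importances_` supplies the direct LCV ranking that the study uses to prune the predictor set.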
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggests the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
Unique Systems Analysis Task 7, Advanced Subsonic Technologies Evaluation Analysis
NASA Technical Reports Server (NTRS)
Eisenberg, Joseph D. (Technical Monitor); Bettner, J. L.; Stratton, S.
2004-01-01
To retain a preeminent U.S. position in the aircraft industry, aircraft passenger-mile costs must be reduced while at the same time meeting anticipated more stringent environmental regulations. A significant portion of these improvements will come from the propulsion system. A technology evaluation and system analysis was accomplished under this task, covering areas such as aerodynamics and materials as well as improved methods for obtaining low noise and emissions. Previous subsonic evaluation analyses have identified key technologies in selected components for propulsion systems for the year 2015 and beyond. Based on the current economic and competitive environment, it is clear that studies with a nearer-term focus, with a direct impact on the propulsion industry's next-generation product, are required. This study emphasizes the year 2005 entry-into-service time period. The objective of this study was to determine which technologies and materials offer the greatest opportunities for improving propulsion systems. The goals are twofold. The first goal is to determine an acceptable compromise between the thermodynamic operating conditions for A) best performance, and B) acceptable noise and chemical emissions. The second goal is the evaluation of the performance, weight, and cost effects of advanced materials and concepts on the direct operating cost of an advanced regional transport of comparable technology level.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods for conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates sensitivity using the partial correlation of the ranks of the generated input values with each generated output value; it is termed partial because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of the inputs and outputs is used, rather than the values themselves, both methods accommodate the nonlinear relationships of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of losses of crew life (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
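A PRCC computation can be sketched directly from its definition: rank-transform the inputs and the output, residualize each against the remaining inputs, and correlate the residuals. The data below are a toy stand-in, not the IMM simulations:

```python
import numpy as np

def ranks(a):
    # simple rank transform (ties not handled; fine for continuous samples)
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a), dtype=float)
    return r

def prcc(X, y):
    """Partial rank correlation of each input column with the output,
    adjusting for the linear (rank) effects of the remaining inputs."""
    Xr = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
    yr = ranks(y)
    out = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        Z = np.column_stack([np.ones(len(yr)), np.delete(Xr, i, axis=1)])
        # residuals after removing the other inputs' linear rank effects
        rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out[i] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(1)
X = rng.random((500, 3))
y = 5.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 500)  # x3 irrelevant
sens = prcc(X, y)
```

The resulting coefficients lie in [-1, 1]; a tornado plot of their magnitudes, as mentioned above, then ranks the inputs by influence.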
Li, Peiyue; Qian, Hui; Wu, Jianhua; Chen, Jie
2013-03-01
Sensitivity analysis is becoming increasingly widespread in many fields of engineering and science and has become a necessary step for verifying the feasibility and reliability of a model or method. The sensitivity of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method in water quality assessment mainly comprises sensitivity to the parameter weights and sensitivity to the index input data. In the present study, the sensitivity of TOPSIS to the parameter weights was discussed in detail. The study assumed the original parameter weights to be equal to each other, and then each weight was changed separately to see how the assessment results would be affected. Fourteen schemes were designed to investigate the sensitivity to the variation of each weight. The variation ranges that leave the assessment results unchanged were also derived theoretically. The results show that the final assessment results change when the weights increase or decrease by ±20% to ±50%. The responses of different samples to the variation of a given weight differ, and the responses of a given sample to the variations of different weights also differ. The final assessment results remain relatively stable when a given weight is perturbed, as long as the initial variation ratios meet one of the eight derived requirements. PMID:22752962
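The weight-perturbation experiment can be sketched with a minimal TOPSIS implementation (benefit-type criteria only; the decision matrix and weights below are illustrative, not the paper's water quality data):

```python
import numpy as np

def topsis(M, w):
    """TOPSIS closeness coefficients for the alternatives (rows of M),
    assuming all criteria are benefit-type (larger is better)."""
    V = (M / np.linalg.norm(M, axis=0)) * w      # vector-normalize, weight
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness in [0, 1]

# Illustrative decision matrix: 4 samples scored on 3 indices (assumed data).
M = np.array([[9.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [7.0, 6.0, 8.0],
              [6.0, 7.0, 7.0]])
w_base = np.array([1/3, 1/3, 1/3])   # equal initial weights, as in the study
w_pert = np.array([0.4, 0.3, 0.3])   # first weight +20%, then renormalized

c_base = topsis(M, w_base)
c_pert = topsis(M, w_pert)
stable = np.array_equal(np.argsort(-c_base), np.argsort(-c_pert))
```

Repeating the perturbation for each weight and for a grid of variation ratios, and recording whether `stable` holds, reproduces the kind of scheme-by-scheme stability tabulation described in the abstract.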
Kelley, Shana O; Mirkin, Chad A; Walt, David R; Ismagilov, Rustem F; Toner, Mehmet; Sargent, Edward H
2014-12-01
Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541
A comprehensive sensitivity analysis of central-loop MRS data
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad; Auken, Esben; Dalgaard, Esben; Rejkjaer, Simon
2014-05-01
In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying an MRS receiver system separate from the transmitter system, compare the parameter determination of the central-loop configuration with that of conventional coincident-loop MRS data. MRS, also called surface NMR, has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content and, through empirical relations, is linked to hydraulic properties of the subsurface such as hydraulic conductivity. The method is based on the physical principle of NMR, during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop deployed on the surface which typically acts as both transmitter and receiver, the so-called coincident-loop configuration. An alternating current is passed through the deployed loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop as a decaying NMR signal called the free induction decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; the product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record the MRS signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the
The Tuition Advance Fund: An Analysis Prepared for Boston University.
ERIC Educational Resources Information Center
Botsford, Keith
Three models for analyzing the Tuition Advance Fund (TAF) are examined: projections by the Institute for Demographic and Economic Studies (IDES), projections by Data Resources, Inc. (DRI), and the Tuition Advance Fund Simulation (TAFSIM) models from Boston University. Analysis of the TAF is based on enrollment, price, and…
A Meta-Analysis of Advance-Organizer Studies.
ERIC Educational Resources Information Center
Stone, Carol Leth
Long-term studies of advance organizers (AOs) were analyzed with Glass's meta-analysis technique. AOs were defined as bridges from the reader's previous knowledge to what is to be learned. The results were compared with predictions from Ausubel's model of assimilative learning. The results of the study indicated that advance organizers were associated…
Sensitivity analysis technique for application to deterministic models
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, fitted using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSMs but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
NASA Astrophysics Data System (ADS)
Dasgupta, Sambarta
Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of advances in sensor technology in the form of phasor measurement units (PMUs). This advancement has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence the analysis tools for transient stability, are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with the maximum expansion and contraction rates are identified with the stability boundaries. These stability boundaries are used for computation of the stability margin. We have used this theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26567763
New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi
2012-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds some ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on some example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
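The ``Butterfly Effect'' that breaks traditional sensitivity methods is easy to reproduce. A crude forward-Euler integration of the Lorenz system (an assumed stand-in for illustration, not either method from the talk) shows two trajectories that start 10^-8 apart separating by many orders of magnitude, which is why tangent and adjoint sensitivities of long-time-averaged quantities diverge:

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, beta=8 / 3, rho=28.0):
    """One forward-Euler step of the Lorenz system (illustration only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

dt, n = 0.01, 2000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])      # tiny "butterfly" perturbation
gap = []
for _ in range(n):
    a, b = lorenz_step(a, dt), lorenz_step(b, dt)
    gap.append(float(np.linalg.norm(a - b)))

# exponential separation of nearby trajectories is what makes conventional
# tangent/adjoint sensitivities of long-time averages blow up
print(f"initial gap ~1e-8, gap after t={n * dt:.0f}: {gap[-1]:.3e}")
```

The shadowing idea in the abstract sidesteps exactly this growth by searching for a nearby trajectory whose perturbation stays bounded.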
Imaging system sensitivity analysis with NV-IPM
NASA Astrophysics Data System (ADS)
Fanning, Jonathan; Teaney, Brian
2014-05-01
This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually must contend with a large number of distributed parameters and with optimization problems whose multiple objectives naturally conflict. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, combining two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
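The Morris method used for screening here can be sketched with a minimal one-at-a-time variant of elementary-effects sampling. The toy model and the simplified sampling scheme below are illustrative assumptions (Morris' actual design uses randomized trajectories through a grid), not the MOBIDIC setup:

```python
import numpy as np

def morris_screening(f, dim, trajectories=50, delta=0.1, rng=None):
    """Simplified elementary-effects (Morris-style) screening on the unit
    hypercube. Returns mu_star, the mean absolute elementary effect per
    parameter: large values flag sensitive parameters, near-zero values
    flag parameters that can be fixed before optimization."""
    rng = np.random.default_rng(rng)
    effects = np.zeros((trajectories, dim))
    for t in range(trajectories):
        x = rng.uniform(0, 1 - delta, size=dim)   # keep x + delta inside [0, 1]
        for i in range(dim):
            x2 = x.copy()
            x2[i] += delta
            effects[t, i] = (f(x2) - f(x)) / delta
    return np.abs(effects).mean(axis=0)

# toy model: parameter 0 dominates, parameter 2 is inert
f = lambda p: 10 * p[0] + np.sin(3 * p[1]) + 0 * p[2]
mu_star = morris_screening(f, dim=3, rng=42)
print(mu_star)   # parameter 0 largest, parameter 2 exactly 0
```

In a calibration workflow the inert parameters identified this way would be excluded before handing the rest to the multiobjective optimizer, as the abstract describes for groundwater recession and soil hydraulic conductivity.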
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1974-01-01
Sensitivity data for advanced technology transports has been systematically collected. This data has been generated in two separate studies. In the first of these, three nominal, or base point, vehicles designed to cruise at Mach numbers .85, .93, and .98, respectively, were defined. The effects on performance and economics of perturbations to basic parameters in the areas of structures, aerodynamics, and propulsion were then determined. In all cases, aircraft were sized to meet the same payload and range as the nominals. This sensitivity data may be used to assess the relative effects of technology changes. The second study was an assessment of the effect of cruise Mach number. Three families of aircraft were investigated in the Mach number range 0.70 to 0.98: straight wing aircraft from 0.70 to 0.80; sweptwing, non-area ruled aircraft from 0.80 to 0.95; and area ruled aircraft from 0.90 to 0.98. At each Mach number, the values of wing loading, aspect ratio, and bypass ratio which resulted in minimum gross takeoff weight were used. As part of the Mach number study, an assessment of the effect of increased fuel costs was made.
Sensitivity analysis approach to multibody systems described by natural coordinates
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2014-03-01
The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation involves so many rules for choosing the generalized coordinates that it hinders the automation of modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, and the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first-order direct sensitivity analysis, and the related solving strategy are provided based on the modeling system above. Finally, to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful for reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization problems of multibody systems.
Advanced Fingerprint Analysis Project Fingerprint Constituents
GM Mong; CE Petersen; TRW Clauss
1999-10-29
The work described in this report was focused on generating fundamental data on fingerprint components which will be used to develop advanced forensic techniques to enhance fluorescent detection, and visualization of latent fingerprints. Chemical components of sweat gland secretions are well documented in the medical literature and many chemical techniques are available to develop latent prints, but there have been no systematic forensic studies of fingerprint sweat components or of the chemical and physical changes these substances undergo over time.
Sensitivity analysis of the fission gas behavior model in BISON.
Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard
2013-05-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which are best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
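The brute-force finite-difference baseline that the GSE/modal gradients are verified against costs two function evaluations per design parameter, which is what makes it expensive when each evaluation is a high-fidelity aero-structural analysis. A sketch follows, with a hypothetical smooth response standing in for a force coefficient:

```python
import numpy as np

def central_diff_grad(f, x, h=1e-6):
    """Brute-force central differences: 2*n evaluations of f for n design
    parameters -- the cost that coupled-sensitivity methods try to avoid."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# hypothetical response: a smooth stand-in for, e.g., a lift coefficient
f = lambda x: np.sin(x[0]) * np.exp(-x[1] ** 2)
x = np.array([0.3, 0.5])
print(central_diff_grad(f, x))
```

With only two parameters the finite-difference cost is tolerable; the abstract's point is that it grows linearly with the number of design variables while each evaluation stays expensive.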
Advanced nuclear rocket engine mission analysis
Ramsthaler, J.; Farbman, G.; Sulmeisters, T.; Buden, D.; Harris, P.
1987-12-01
The use of a derivative of the NERVA engine developed from 1955 to 1973 was evaluated for potential application to Air Force orbital transfer and maneuvering missions in the time period 1995 to 2020. The NERVA stage was found to have lower life cycle costs (LCC) than an advanced chemical stage for performing low earth orbit (LEO) to geosynchronous orbit (GEO) missions at any level of activity greater than three missions per year. It had lower life cycle costs than a high-performance nuclear electric engine at any level of LEO-to-GEO mission activity. An examination of all unmanned orbital transfer and maneuvering missions from the Space Transportation Architecture study (STAS 111-3) indicated a LCC advantage for the NERVA stage over the advanced chemical stage of fifteen million dollars. The cost advantage accrued from both the orbital transfer and maneuvering missions. Parametric analyses showed that the specific impulse of the NERVA stage and the cost of delivering material to low earth orbit were the most significant factors in the LCC advantage over the chemical stage. Lower development costs and a higher thrust gave the NERVA engine an LCC advantage over the nuclear electric stage. An examination of technical data from the Rover/NERVA program indicated that development of the NERVA stage has a low technical risk and the potential for high reliability and safe operation. The data indicated the NERVA engine had great flexibility, which would permit a single stage to perform all Air Force missions.
Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review
NASA Technical Reports Server (NTRS)
Antonsson, Erik; Gombosi, Tamas
2005-01-01
Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M and S environments and infrastructure.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Adjoint-based sensitivity analysis for reactor-safety applications
Parks, C.V.
1985-01-01
The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the needed general extensions for consideration of model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II, which is typically employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes differ both in complexity and in the facets of DST they can illustrate. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis.
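The economy that makes adjoint methods attractive shows up already in a toy linear model: one adjoint solve yields the derivative of a response with respect to every input component at once, where the direct approach needs one perturbed solve per component. This sketch illustrates the general adjoint identity on an assumed 5x5 system, not the ADJGAS or VENUS-ADJ implementations:

```python
import numpy as np

# Response R = c^T x with state equation A x = b.  Direct sensitivity:
# one perturbed solve per input parameter.  Adjoint: a single solve of
# A^T lam = c gives dR/db_j = lam_j for every component j at once.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 10 * np.eye(5)   # well-conditioned toy system
b = rng.normal(size=5)
c = rng.normal(size=5)

lam = np.linalg.solve(A.T, c)                  # one adjoint solve
adjoint_grad = lam                             # dR/db, all 5 components

# verification by direct recalculation with perturbed inputs (5 extra solves)
h = 1e-7
base = c @ np.linalg.solve(A, b)
fd_grad = np.array([(c @ np.linalg.solve(A, b + h * np.eye(5)[j]) - base) / h
                    for j in range(5)])
print(np.max(np.abs(fd_grad - adjoint_grad)))
```

The derivation is one line: R(b) = c^T A^{-1} b, so dR/db = (A^{-1})^T c, which is exactly the adjoint solution. The mismatch printed above is solver roundoff only, since R is linear in b.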
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
statistical postprocessing must be repeated anew. In particular, a 'fool-proof' statistical method for correctly analyzing models involving highly correlated parameters does not seem to exist currently, so that particular care must be used when interpreting regression results for such models. By addressing computational issues and particularly challenging open problems and knowledge gaps, this review paper aims at providing a comprehensive basis for further advancements and innovations in the field of sensitivity and uncertainty analysis.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach. PMID:24978258
Advanced surface design for logistics analysis
NASA Astrophysics Data System (ADS)
Brown, Tim R.; Hansen, Scott D.
The development of anthropometric arm/hand and tool models and their manipulation in a large system model for maintenance simulation are discussed. The use of Advanced Surface Design and s-fig technology in anthropometrics, together with three-dimensional graphics simulation tools, is found to achieve a good balance between model manipulation speed and model accuracy. The present second-generation models are shown to be twice as fast to manipulate as the first-generation b-surf models, to be easier to manipulate into various configurations, and to more closely approximate human contours.
Advanced tracking systems design and analysis
NASA Technical Reports Server (NTRS)
Potash, R.; Floyd, L.; Jacobsen, A.; Cunningham, K.; Kapoor, A.; Kwadrat, C.; Radel, J.; Mccarthy, J.
1989-01-01
The results of an assessment of several types of high-accuracy tracking systems proposed to track the spacecraft in the National Aeronautics and Space Administration (NASA) Advanced Tracking and Data Relay Satellite System (ATDRSS) are summarized. Tracking systems based on the use of interferometry and ranging are investigated. For each system, the top-level system design and operations concept are provided. A comparative system assessment is presented in terms of orbit determination performance, ATDRSS impacts, life-cycle cost, and technological risk.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
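Parameter rankings like the one reported are commonly obtained from the normalized forward sensitivity index Υ_p = (p / R0) · (∂R0/∂p). The R0 expression and parameter values below are hypothetical stand-ins chosen for illustration, not the paper's actual Lassa model:

```python
from math import isclose

# Hypothetical R0 for illustration: R0 = beta * Lam / (mu * (mu + gamma)),
# with transmission rate beta, recruitment/immigration Lam, natural death
# rate mu, and recovery rate gamma (assumed values, not from the paper).
def R0(beta, Lam, mu, gamma):
    return beta * Lam / (mu * (mu + gamma))

def sensitivity_index(name, p, h=1e-6, **base):
    """Normalized forward sensitivity index (p / R0) * dR0/dp,
    with the derivative taken by central differences."""
    up, down = dict(base), dict(base)
    up[name] = p * (1 + h)
    down[name] = p * (1 - h)
    dR = (R0(**up) - R0(**down)) / (2 * p * h)
    return p / R0(**base) * dR

base = dict(beta=0.2, Lam=10.0, mu=0.02, gamma=0.1)
for name in base:
    print(name, round(sensitivity_index(name, base[name], **base), 3))
```

An index of +1 means a 10 % rise in the parameter raises R0 by 10 %; negative indices (here mu and gamma) identify parameters, such as the recovery rate, whose increase reduces transmission, which is how the abstract's control recommendations are read off.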
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite differences among specific methods, existing Sensitivity Analysis (SA) techniques are all value-based; that is, the uncertainties in the model input and output are quantified as changes in values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA to a deeper level that lies in the topology of data. PMID:26368929
Uncertainty and sensitivity analysis and its applications in OCD measurements
NASA Astrophysics Data System (ADS)
Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio
2009-03-01
This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can substantially shorten the optimization of OCD models. By including real system noises in the model, an accurate method for predicting measurement uncertainties is shown. Assessing the uncertainties, sensitivities and correlations of the parameters to be measured at an early stage guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two cases: (1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and (2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can be easily selected to achieve the best OCD model performance.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
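A minimal sketch of the idea: a regular expression picks `value +/- tolerance` fields out of an otherwise opaque input file and replaces each with a random draw. The field names and the uniform sampling distribution are illustrative assumptions, not details of the LAURA/HARA/FIAT study:

```python
import random
import re

# "value +/- tol" fields, written as they would be on an engineering drawing.
TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def blur(text, rng):
    """Replace each toleranced value with a uniform draw from [v - t, v + t].
    (Uniform sampling is an assumption for this sketch; any distribution
    could be substituted without parsing the file's structure.)"""
    def draw(m):
        v, t = float(m.group(1)), float(m.group(2))
        return format(rng.uniform(v - t, v + t), ".6g")
    return TOL.sub(draw, text)

rng = random.Random(0)
print(blur("wall_temp = 5.25 +/- 0.01   emissivity = 0.80 +/- 0.05", rng))
```

Because only the tolerance syntax is parsed, the same function works on any code's input deck; repeated calls generate the Monte Carlo ensemble.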
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
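The importance-sampling estimator at the heart of such methods can be illustrated on a one-dimensional limit state. This sketch uses a fixed sampling density centred on the failure boundary rather than the adaptive, incrementally grown domain of the AIS method described above:

```python
import math
import random

def failure_prob_is(n=200_000, seed=0):
    """Estimate p_f = P(X > 3) for X ~ N(0, 1) by importance sampling.
    A fixed density N(3, 1) centred on the limit state stands in for the
    adaptive sampling domain of AIS."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(3.0, 1.0)            # sample near the failure region
        if x > 3.0:                         # failure indicator: g(x) = 3 - x < 0
            acc += math.exp(4.5 - 3.0 * x)  # likelihood ratio phi(x) / phi(x - 3)
    return acc / n

p = failure_prob_is()
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))
print(p, exact)
```

Crude Monte Carlo would need millions of samples to see this rare event; shifting the density onto the failure boundary concentrates the samples where they matter, which is the same motivation behind AIS.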
Sensitivity analysis of the critical speed in railway vehicle dynamics
NASA Astrophysics Data System (ADS)
Bigoni, D.; True, H.; Engsig-Karup, A. P.
2014-05-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
Parameter sensitivity analysis of IL-6 signalling pathways.
Chu, Y; Jayaraman, A; Hahn, J
2007-11-01
Signal transduction pathways generally consist of a large number of individual components and have an even greater number of parameters describing their reaction kinetics. Although the structure of some signalling pathways can be found in the literature, many of the parameters are not well known and would need to be re-estimated from experimental data for each specific case. However, it is not feasible to estimate hundreds of parameters, because of the cost of the experiments associated with generating data. Parameter sensitivity analysis can address this situation, as it investigates how the system behaviour is changed by variations of parameters and identifies which parameters play a key role in signal transduction. Only these important parameters then need to be re-estimated using data from further experiments. This article presents a detailed parameter sensitivity analysis of the JAK/STAT and MAPK signal transduction pathways used for signalling by the cytokine IL-6. As no parameter sensitivity analysis technique is known to work best for all situations, a comparison of the results returned by four techniques is presented: differential analysis, the Morris method, a sampling-based approach and the Fourier amplitude sensitivity test. The recruitment of the transcription factor STAT3 to the dimer of the phosphorylated receptor complex is determined to be the most important step by the sensitivity analysis. Additionally, the dephosphorylation of the nuclear STAT3 dimer by PP2 as well as feedback inhibition by SOCS3 are found to play an important role in signal transduction. PMID:18203580
Multicriteria Evaluation and Sensitivity Analysis on Information Security
NASA Astrophysics Data System (ADS)
Syamsuddin, Irfan
2013-05-01
Information security plays a significant role in today's information society. The increasing number and impact of cyber attacks on information assets have raised awareness among managers that an attack on information is effectively an attack on the organization itself. Unfortunately, a particular model for information security evaluation at the management level is still not well defined. In this study, decision analysis based on the Ternary Analytic Hierarchy Process (T-AHP) is proposed as a novel model to aid managers who are responsible for making strategic evaluations related to information security issues. In addition, sensitivity analysis is applied to extend the analysis through several "what-if" scenarios in order to measure the consistency of the final evaluation. Finally, we conclude that the final evaluation made by managers has significant consistency, as shown by the sensitivity analysis results.
Advanced research equipment for fast ultraweak luminescence analysis
NASA Astrophysics Data System (ADS)
Tudisco, S.; Musumeci, F.; Scordino, A.; Privitera, G.
2003-10-01
This article describes new advanced research equipment for fast ultraweak luminescence analysis, which can detect, at high sensitivity, photons emitted after ultraviolet-A laser irradiation in biological probes as well as plant, animal, and human cells. The design and construction of this equipment, developed at the Southern National Laboratory of the National Nuclear Physics Institute, are described together with the first experimental results and future developments. The setup, employing a photomultiplier tube working in single-photon counting mode, allows accurate and reliable photoluminescence measurements with excitation wavelengths in the range 337-700 nm and emission wavelengths in the range 400-800 nm. With respect to the traditional setup, this new equipment is able to perform measurements starting a few microseconds after the laser irradiation is switched off, and with a large detection efficiency (about 10% of the total solid angle). Moreover, the adopted design assures a low background noise level. A further optimization of the system is under study, with special care for the reliability needed by the delayed luminescence optical screening project, which aims to enhance the detection of low-level photoinduced luminescence from human cells for use as an optical biopsy technique.
Advances in the analysis of iminocyclitols: Methods, sources and bioavailability.
Amézqueta, Susana; Torres, Josep Lluís
2016-05-01
Iminocyclitols are chemically and metabolically stable, naturally occurring sugar mimetics. Their biological activities make them interesting and extremely promising as both drug leads and functional food ingredients. The first iminocyclitols were discovered using preparative isolation and purification methods followed by chemical characterization using nuclear magnetic resonance spectroscopy. In addition to this classical approach, gas and liquid chromatography coupled to mass spectrometry are increasingly used; they are highly sensitive techniques capable of detecting minute amounts of analytes in a broad spectrum of sources after only minimal sample preparation. These techniques have been applied to identify new iminocyclitols in plants, microorganisms and synthetic mixtures. The separation of iminocyclitol mixtures by chromatography is particularly difficult however, as the most commonly used matrices have very low selectivity for these highly hydrophilic structurally similar molecules. This review critically summarizes recent advances in the analysis of iminocyclitols from plant sources and findings regarding their quantification in dietary supplements and foodstuffs, as well as in biological fluids and organs, from bioavailability studies. PMID:26946023
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well-established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called 'law of propagation of uncertainties' have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
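The agreement between first-order Sobol indices and the GUM law of propagation for a linear model can be checked numerically. The pick-freeze estimator below is a standard Monte Carlo sketch, not code from the article:

```python
import random
import statistics

def first_order_sobol(f, dists, i, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i:
    freeze input i between two otherwise independent samples and take
    Cov(Y_A, Y_B) / Var(Y)."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        a = [d(rng) for d in dists]
        b = [d(rng) for d in dists]
        b[i] = a[i]                      # freeze input i, resample the others
        ya.append(f(a))
        yb.append(f(b))
    mu_a, mu_b = statistics.fmean(ya), statistics.fmean(yb)
    var = statistics.fmean(y * y for y in ya) - mu_a * mu_a
    cov = statistics.fmean(p * q for p, q in zip(ya, yb)) - mu_a * mu_b
    return cov / var

# Linear model Y = 2*X1 + X2 with u(X1) = 1, u(X2) = 2: the law of propagation
# gives contributions (2*1)^2 and (1*2)^2, i.e. 50 % each, and the Sobol
# indices must agree.
dists = [lambda r: r.gauss(0, 1), lambda r: r.gauss(0, 2)]
model = lambda x: 2 * x[0] + x[1]
indices = [first_order_sobol(model, dists, i) for i in (0, 1)]
print([round(s, 2) for s in indices])
```

For a non-linear model the two calculations would diverge, which is exactly the case the article argues sensitivity analysis is worthwhile for.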
Sensitivity analysis of the Ohio phosphorus risk index
Technology Transfer Automated Retrieval System (TEKTRAN)
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
ERIC Educational Resources Information Center
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Adjoint-based sensitivity analysis for reactor safety applications
Parks, C.V.
1986-08-01
The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the general extensions needed to handle model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II, which has been employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes differ both in complexity and in the facets of DST they can illustrate. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis. In addition, a useful sensitivity tool for the fast reactor safety area has been developed in VENUS-ADJ. Future work needs to concentrate on combining the accurate first-order derivatives/results from DST with existing methods (based solely on direct recalculations) for higher-order response surfaces.
Integrative "omic" analysis for tamoxifen sensitivity through cell based models.
Weng, Liming; Ziliak, Dana; Lacroix, Bonnie; Geeleher, Paul; Huang, R Stephanie
2014-01-01
It has long been observed that tamoxifen sensitivity varies among breast cancer patients. Further, ethnic differences in tamoxifen therapy between Caucasian and African American patients have also been reported. Since most studies have focused on Caucasian populations, we sought to comprehensively evaluate genetic variants related to tamoxifen therapy in African-derived samples. An integrative "omic" approach developed by our group was used to investigate relationships among endoxifen (an active metabolite of tamoxifen) sensitivity, SNP genotype, and mRNA and microRNA expression in 58 HapMap YRI lymphoblastoid cell lines. We identified 50 SNPs that associate with cellular sensitivity to endoxifen through their effects on the expression of 34 genes and 30 microRNAs. Some of these findings are shared in both Caucasian and African samples, while others are unique to the African samples. Among the genes/microRNAs identified in both ethnic groups, the expression of TRAF1 is also correlated with tamoxifen sensitivity in a collection of 44 breast cancer cell lines. Further, knock-down of TRAF1 and over-expression of hsa-let-7i confirmed the roles of hsa-let-7i and TRAF1 in increasing tamoxifen sensitivity in the ZR-75-1 breast cancer cell line. Our integrative omic analysis facilitated the discovery of pharmacogenomic biomarkers that potentially affect tamoxifen sensitivity. PMID:24699530
Recent Advances in Anthocyanin Analysis and Characterization
Welch, Cara R.; Wu, Qingli; Simon, James E.
2009-01-01
Anthocyanins are a class of polyphenols responsible for the orange, red, purple and blue colors of many fruits, vegetables, grains, flowers and other plants. Anthocyanins possess strong antioxidant properties, and their consumption has been linked to protection against many chronic diseases and a variety of health benefits. In this review, we examine advances in the chemical profiling of natural anthocyanins in plant and biological matrices using various chromatographic separations (HPLC and CE) coupled with different detection systems (UV, MS and NMR). An overview of anthocyanin chemistry, prevalence in plants, biosynthesis and metabolism, bioactivities and health properties, sample preparation and phytochemical investigations is given, while the major focus is a comparison of the advantages and disadvantages of each analytical technique. PMID:19946465
Li, Peiyue; Wu, Jianhua; Qian, Hui; Chen, Jie
2013-03-01
This is the second part of a study on the sensitivity analysis of the technique for order preference by similarity to ideal solution (TOPSIS) method in water quality assessment. In the present study, the sensitivity of the TOPSIS method to the index input data was investigated. The sensitivity was first analyzed theoretically under two major assumptions. One assumption was that one or more indices of the samples were perturbed by the same ratio while the other indices were kept unchanged. The other was that all indices of a given sample were changed simultaneously by the same ratio, while the indices of the other samples were unchanged. Furthermore, a case study under assumption 2 was also carried out. When the same indices of different water samples are changed simultaneously by the same variation ratio, the final water quality assessment results are not influenced at all. When the input data of all indices of a given sample are perturbed by the same variation ratio, the assessment values of all samples will, in theory, be influenced. However, the case study shows that only the perturbed sample is sensitive to the variation, and a simple linear equation representing the relation between the closeness coefficient (CC) values of the perturbed sample and the variation ratios can be derived under assumption 2. This linear equation can be used to determine the sample orders under various variation ratios. PMID:22832843
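A minimal TOPSIS implementation makes the assumption-1 invariance easy to verify: scaling the same index of every sample by a common ratio rescales one column of the normalised matrix identically, leaving the closeness coefficients unchanged. The water-sample data below are hypothetical:

```python
import math

def topsis_cc(matrix, weights, benefit):
    """Closeness coefficients for a decision matrix (rows = samples).
    benefit[j] is True for benefit-type indices, False for cost-type."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]   # vector normalisation
    weighted = [[w * v / n for w, v, n in zip(weights, row, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    best = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    return [math.dist(r, worst) / (math.dist(r, best) + math.dist(r, worst))
            for r in weighted]

# Hypothetical pollutant concentrations (cost-type indices: lower is better).
samples = [[1.0, 2.0, 1.0],   # cleanest sample
           [3.0, 4.0, 5.0],   # most polluted sample
           [2.0, 3.0, 3.0]]
cc = topsis_cc(samples, [1 / 3] * 3, [False] * 3)

# Assumption-1 check: scale index 0 of *every* sample by 1.5 -> CC unchanged.
scaled = [[v * (1.5 if j == 0 else 1.0) for j, v in enumerate(row)]
          for row in samples]
cc_scaled = topsis_cc(scaled, [1 / 3] * 3, [False] * 3)
print([round(c, 3) for c in cc], [round(c, 3) for c in cc_scaled])
```

The invariance holds because the column norm scales by the same ratio as the entries, so the normalised column is unchanged; perturbing all indices of a single sample (assumption 2) breaks this cancellation.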
Stochastic Simulations and Sensitivity Analysis of Plasma Flow
Lin, Guang; Karniadakis, George E.
2008-08-01
For complex physical systems with a large number of random inputs, it is very expensive to perform stochastic simulations for all of them. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, indicating which random inputs have more influence on the system outputs and revealing the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global. The local approach, which relies on a partial derivative of output with respect to parameters, measures the sensitivity around a local operating point. When the system has strong nonlinearities and parameters fluctuate within a wide range of their nominal values, local sensitivity does not provide full information to the system operators. The global approach, on the other hand, examines the sensitivity over the entire range of parameter variations. Global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the quasi-Monte Carlo method and a collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing dimensionality and hence cost in stochastic simulations.
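The OAT screening idea behind the Morris method can be sketched as follows. A full Morris design chains the one-at-a-time steps into trajectories to reuse evaluations; this simplified sketch restarts from a fresh base point each repetition:

```python
import random

def morris_mu_star(f, lo, hi, r=30, delta=0.1, seed=0):
    """Mean absolute elementary effect per input (Morris mu*): inputs are
    perturbed One-At-a-Time from r random base points. Larger mu* means a
    more influential input."""
    k = len(lo)
    rng = random.Random(seed)
    tot = [0.0] * k
    for _ in range(r):
        # keep the base point clear of the upper bound so +delta stays inside
        x = [rng.uniform(l, h - delta * (h - l)) for l, h in zip(lo, hi)]
        fx = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta * (hi[i] - lo[i])
            tot[i] += abs(f(xp) - fx) / delta   # elementary effect in scaled units
    return [t / r for t in tot]

# Toy model whose coefficients set the true importance ranking.
f = lambda x: 10 * x[0] + x[1] + 0.1 * x[2]
mu = morris_mu_star(f, lo=[0, 0, 0], hi=[1, 1, 1])
print([round(m, 3) for m in mu])  # -> [10.0, 1.0, 0.1]
```

For this linear toy model the elementary effects recover the coefficients exactly; on a nonlinear model the spread of effects across base points additionally flags interactions.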
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not sufficiently accurate and are of limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the closed-loop position control of the hydraulic drive unit, together with the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the sensitivity equations based on the nonlinear mathematical model are obtained. Using the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity-function time histories of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indices. These index values are calculated and shown in histograms under different working conditions, and the resulting trends are analyzed.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris
2015-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time-averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to initial conditions. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase-space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
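The breakdown of conventional sensitivity methods for chaotic systems is easy to reproduce on a toy problem. The logistic map below stands in for the flow solver; the tangent (forward-sensitivity) recursion is standard, and the exponential growth it exhibits is exactly what LSS is designed to avoid:

```python
def tangent_sensitivity(r=3.9, x0=0.3, steps=60):
    """|d(x_n)/dr| for the logistic map x -> r*x*(1-x), propagated with the
    tangent (forward sensitivity) equation. In the chaotic regime (r = 3.9)
    this grows exponentially, so gradients of long-time averages computed
    this way (or by the adjoint) are useless."""
    x, s, history = x0, 0.0, []
    for _ in range(steps):
        s = x * (1 - x) + r * (1 - 2 * x) * s   # chain rule through one map step
        x = r * x * (1 - x)
        history.append(abs(s))
    return history

sens = tangent_sensitivity()
print(max(sens[:10]), max(sens[50:]))  # many orders of magnitude apart
```

The growth rate is the map's Lyapunov exponent; LSS sidesteps it by solving for a nearby shadow trajectory whose perturbations stay bounded.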
Double Precision Differential/Algebraic Sensitivity Analysis Code
Energy Science and Technology Software Center (ESTSC)
1995-06-02
DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
Sensitivity Analysis Of Technological And Material Parameters In Roll Forming
NASA Astrophysics Data System (ADS)
Gehring, Albrecht; Saal, Helmut
2007-05-01
Roll forming has been used for several decades to manufacture thin-gauge profiles. However, knowledge of this technology is still based on empirical approaches. Due to the complexity of the forming process, the main effects on profile properties are difficult to identify. This is especially true for the interaction of technological and material parameters. General considerations for building a finite-element model of the roll forming process are given in this paper. A sensitivity analysis is performed on the basis of a statistical design approach in order to identify the effects and interactions of different parameters on profile properties. The parameters included in the analysis are the roll diameter, the rolling speed, the sheet thickness, the friction between the tools and the sheet, and the strain-hardening behavior of the sheet material. The analysis includes an isotropic hardening model and a nonlinear kinematic hardening model. All jobs are executed in parallel to reduce the overall time, as the sensitivity analysis requires substantial CPU time. The results of the sensitivity analysis demonstrate the opportunities to improve the properties of roll-formed profiles by adjusting technological and material parameters to their optimum interacting performance.
Analysis of an advanced technology subsonic turbofan incorporating revolutionary materials
NASA Technical Reports Server (NTRS)
Knip, Gerald, Jr.
1987-01-01
Successful implementation of revolutionary composite materials in an advanced turbofan offers the possibility of further improvements in engine performance and thrust-to-weight ratio relative to current metallic materials. The present analysis determines the approximate engine cycle and configuration for an early 21st century subsonic turbofan incorporating all-composite materials. The advanced engine is evaluated relative to a current technology baseline engine in terms of its potential fuel savings for an intercontinental quadjet having a design range of 5500 nmi and a payload of 500 passengers. The resultant near-optimum, uncooled, two-spool, advanced engine has an overall pressure ratio of 87, a bypass ratio of 18, a geared fan, and a turbine rotor inlet temperature of 3085 R. These improvements result in a 33-percent fuel saving for the specified mission. Various advanced composite materials are used throughout the engine. For example, advanced polymer composite materials are used for the fan and the low pressure compressor (LPC).
Advanced reliability method for fatigue analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Wirsching, P. H.
1984-01-01
When design factors are considered as random variables and the failure condition cannot be expressed by a closed form algebraic inequality, computations of risk (or probability of failure) may become extremely difficult or very inefficient. This study suggests using a simple and easily constructed second degree polynomial to approximate the complicated limit state in the neighborhood of the design point; a computer analysis relates the design variables at selected points. Then a fast probability integration technique (i.e., the Rackwitz-Fiessler algorithm) can be used to estimate risk. The capability of the proposed method is demonstrated in an example of a low cycle fatigue problem for which a computer analysis is required to perform local strain analysis to relate the design variables. A comparison of the performance of this method is made with a far more costly Monte Carlo solution. Agreement of the proposed method with Monte Carlo is considered to be good.
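The two ingredients of the proposed method, a cheap polynomial approximation of the limit state plus probability integration, can be sketched with a diagonal quadratic surrogate and plain Monte Carlo standing in for the Rackwitz-Fiessler algorithm. The limit state below is hypothetical:

```python
import math
import random

def surrogate_risk(g, center, h=0.05, n=200_000, seed=0):
    """Fit a diagonal quadratic surrogate of the limit state g around `center`
    from 2k+1 exact evaluations, then estimate P(g < 0) by Monte Carlo on the
    cheap surrogate. (Plain MC stands in for fast probability integration.)"""
    k = len(center)
    g0 = g(center)
    grad, curv = [], []
    for i in range(k):
        up, dn = list(center), list(center)
        up[i] += h
        dn[i] -= h
        gu, gd = g(up), g(dn)
        grad.append((gu - gd) / (2 * h))           # first derivative
        curv.append((gu - 2 * g0 + gd) / (h * h))  # pure second derivative
    def g_quad(x):
        d = [xi - ci for xi, ci in zip(x, center)]
        return g0 + sum(gi * di + 0.5 * ci * di * di
                        for gi, ci, di in zip(grad, curv, d))
    rng = random.Random(seed)
    fails = sum(g_quad([rng.gauss(0.0, 1.0) for _ in range(k)]) < 0.0
                for _ in range(n))
    return fails / n

# Hypothetical limit state: failure when X1 + X2 > 4, X1 and X2 standard normal.
p = surrogate_risk(lambda x: 4.0 - x[0] - x[1], [0.0, 0.0])
print(p)  # exact answer is 1 - Phi(4 / sqrt(2)), about 2.3e-3
```

Only 2k+1 expensive evaluations of the true limit state are needed; all of the sampling effort is spent on the polynomial, which is the economy the paper's second-degree approximation buys.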
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
Modeling and analysis of advanced binary cycles
Gawlik, K.
1997-12-31
A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows for incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model, followed by an economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied, spanning the range of 265°F to 375°F. A variety of isobutane- and propane-based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case value that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.
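As a toy version of the value-analysis step, the incremental LEC comparison between a baseline and a modified plant reduces to a levelized-cost ratio. The cost function and every number below are hypothetical placeholders; CAST's economic model is far more detailed.

```python
# Minimal levelized-electricity-cost comparison between a baseline and a
# modified binary plant. The constant fixed-charge-rate treatment and all
# cost figures are assumptions for illustration only.
def lec(capital_cost, om_per_year, net_kw, capacity_factor, fixed_charge_rate=0.10):
    """Levelized electricity cost in $/kWh."""
    annual_kwh = net_kw * capacity_factor * 8760.0
    return (fixed_charge_rate * capital_cost + om_per_year) / annual_kwh

# Baseline: pure working fluid; modified: a mixture yielding more net output
# from the same brine (illustrative numbers only).
base = lec(capital_cost=15.0e6, om_per_year=0.6e6, net_kw=5000, capacity_factor=0.90)
mod = lec(capital_cost=15.0e6, om_per_year=0.6e6, net_kw=5800, capacity_factor=0.90)
delta_pct = 100.0 * (mod - base) / base   # incremental LEC change, percent
```

Here the extra net output lowers the LEC by roughly 14%, the same kind of incremental comparison the abstract reports between working fluids.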
El Deeb, Sami; Wätzig, Hermann; Abd El-Hady, Deia; Sänger-van de Griend, Cari; Scriba, Gerhard K E
2016-07-01
This review updates and follows up a previous review by highlighting recent advancements regarding capillary electromigration methodologies and applications in pharmaceutical analysis. General approaches such as quality by design as well as sample injection methods and detection sensitivity are discussed. The separation and analysis of drug-related substances, chiral CE, and chiral CE-MS, in addition to the determination of physicochemical constants, are addressed. The advantages of applying affinity capillary electrophoresis in studying receptor-ligand interactions are highlighted. Finally, current aspects related to the analysis of biopharmaceuticals are reviewed. The present review covers the literature between January 2013 and December 2015. PMID:26988029
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
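The trade-off the abstract describes, a global Chebyshev fit versus local panel interpolation of a field with sharp local variation, can be reproduced on a hypothetical 1-D "pressure" profile. The profile, polynomial degree, and station counts below are all assumptions for illustration.

```python
import numpy as np

# Hypothetical chordwise pressure-like distribution with a sharp,
# suction-peak-like local feature at x = 0.3.
def pressure(x):
    return np.exp(-x) + 0.5 * np.exp(-1000.0 * (x - 0.3) ** 2)

x_s = np.linspace(0.0, 1.0, 101)      # sample stations (e.g., panel centers)
p_s = pressure(x_s)
x_f = np.linspace(0.0, 1.0, 2001)     # fine grid for error evaluation

# Global representation: degree-20 Chebyshev least-squares fit.
cheb = np.polynomial.Chebyshev.fit(x_s, p_s, deg=20)
err_global = float(np.max(np.abs(cheb(x_f) - pressure(x_f))))

# Local representation: piecewise-linear interpolation between stations.
err_local = float(np.max(np.abs(np.interp(x_f, x_s, p_s) - pressure(x_f))))
```

On this example the local interpolant resolves the sharp feature far better than the global polynomial, mirroring the difficulty with local variations that the abstract reports for the global Chebyshev fit.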
Shape sensitivity analysis of flutter response of a laminated wing
NASA Technical Reports Server (NTRS)
Bergen, Fred D.; Kapania, Rakesh K.
1988-01-01
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
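The second of the three sensitivity methods, an analytic eigenvalue-derivative expression fed with finite-differenced matrices, can be sketched for a generic eigenproblem. The 2x2 parameter-dependent matrix below is a hypothetical stand-in for the flutter matrix, chosen only so the result is easy to check.

```python
import numpy as np

# Hypothetical parameter-dependent system matrix (stand-in for the
# aerodynamic/mass/stiffness flutter matrix).
def A(p):
    return np.array([[1.0 + p, 2.0],
                     [0.5 * p, 3.0]])

p0, dp = 1.0, 1e-6
w, V = np.linalg.eig(A(p0))            # right eigenvectors (columns of V)
wl, U = np.linalg.eig(A(p0).T)         # left eigenvectors (columns of U)

k = int(np.argmax(w.real))             # track the dominant eigenvalue
m = int(np.argmin(np.abs(wl - w[k])))  # match the left eigenvector to it
x, y = V[:, k], U[:, m]

# Analytic sensitivity d(lambda)/dp = y* (dA/dp) x / (y* x), with dA/dp
# obtained by central finite differences as in the paper's second method.
dA = (A(p0 + dp) - A(p0 - dp)) / (2.0 * dp)
dlam_analytic = (y.conj() @ dA @ x) / (y.conj() @ x)

# Method one for comparison: finite difference of the eigenvalue itself.
w_p = np.linalg.eig(A(p0 + dp))[0]
w_m = np.linalg.eig(A(p0 - dp))[0]
dlam_fd = (np.max(w_p.real) - np.max(w_m.real)) / (2.0 * dp)
```

For this matrix the dominant eigenvalue is (5 + sqrt(5))/2, and both routes give d(lambda)/dp = (lambda - 2)/(2*lambda - 5), about 0.724, in agreement with the abstract's finding that the methods match.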
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow principles similar to the diagnostic measures used in linear regression, applied in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
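A minimal version of the case-deletion idea: fit a simple Gaussian discriminant, omit one observation at a time, and record how far any other point's posterior probability moves. The 1-D data and pooled-variance model below are illustrative assumptions, far simpler than a full discriminant analysis.

```python
import math

# Hypothetical 1-D two-class training data; the point at x = 5.0 is a
# deliberately mislabeled outlier in class 0.
data = [(x, 0) for x in [1.0, 1.2, 0.8, 1.1, 5.0]] + \
       [(x, 1) for x in [3.0, 3.2, 2.9, 3.1, 3.3]]

def fit(points):
    """Class means/counts and pooled variance for an LDA-style model."""
    stats = {}
    for c in (0, 1):
        xs = [x for x, y in points if y == c]
        stats[c] = (sum(xs) / len(xs), len(xs))
    var = sum((x - stats[y][0]) ** 2 for x, y in points) / (len(points) - 2)
    return stats, var

def posterior(x, model):
    """P(class 1 | x), with class counts as prior weights."""
    means, var = model
    like = {c: math.exp(-(x - means[c][0]) ** 2 / (2.0 * var)) * means[c][1]
            for c in (0, 1)}
    return like[1] / (like[0] + like[1])

full = fit(data)
# Influence of each point: the largest shift in any OTHER point's posterior
# probability when it is omitted, the quantity the proposed display plots.
influence = []
for i in range(len(data)):
    reduced = fit(data[:i] + data[i + 1:])
    shift = max(abs(posterior(x, full) - posterior(x, reduced))
                for j, (x, _) in enumerate(data) if j != i)
    influence.append(shift)

most = max(range(len(data)), key=lambda i: influence[i])
```

The mislabeled outlier (index 4) produces by far the largest posterior movement, which is exactly the pattern such an influence display is meant to reveal.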
Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2016-01-01
This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next-generation method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with control strategies and the chances of infectious humans, exposed humans, and the vector population becoming infected. Numerical simulations are carried out with the help of the fourth-order Runge-Kutta procedure. PMID:27505634
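The normalized forward sensitivity indices of R0 mentioned in the abstract can be computed generically by central differences. The closed-form R0 and parameter values below describe a generic vector-borne model and are illustrative assumptions, not the paper's exact Leishmania model.

```python
import math

# Illustrative vector-borne R0 (biting rate b, transmission probabilities
# beta_hv/beta_vh, vector-host ratio m, vector death rate mu_v, human
# recovery rate gamma, human death rate mu_h). Not the paper's expression.
def R0(p):
    return math.sqrt(p["b"] ** 2 * p["beta_hv"] * p["beta_vh"] * p["m"]
                     / (p["mu_v"] * (p["gamma"] + p["mu_h"])))

base = {"b": 0.3, "beta_hv": 0.4, "beta_vh": 0.5, "m": 10.0,
        "mu_v": 0.1, "gamma": 0.07, "mu_h": 0.00004}

def elasticity(name, h=1e-6):
    """Normalized forward sensitivity index (p / R0) * dR0/dp,
    via a central finite difference in the parameter `name`."""
    up, dn = dict(base), dict(base)
    up[name] *= 1.0 + h
    dn[name] *= 1.0 - h
    dR0dp = (R0(up) - R0(dn)) / (2.0 * h * base[name])
    return base[name] * dR0dp / R0(base)

indices = {k: elasticity(k) for k in base}
```

For this square-root form the indices land on the expected values (1 for the biting rate, +-0.5 for the transmission probabilities and vector death rate), so ranking them immediately highlights the parameters worth targeting with controls.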
Progress in Advanced Spectral Analysis of Radioxenon
Haas, Derek A.; Schrom, Brian T.; Cooper, Matthew W.; Ely, James H.; Flory, Adam E.; Hayes, James C.; Heimbigner, Tom R.; McIntyre, Justin I.; Saunders, Danielle L.; Suckow, Thomas J.
2010-09-21
Improvements to a Java based software package developed at Pacific Northwest National Laboratory (PNNL) for display and analysis of radioxenon spectra acquired by the International Monitoring System (IMS) are described here. The current version of the Radioxenon JavaViewer implements the region of interest (ROI) method for analysis of beta-gamma coincidence data. Upgrades to the Radioxenon JavaViewer will include routines to analyze high-purity germanium detector (HPGe) data, Standard Spectrum Method to analyze beta-gamma coincidence data and calibration routines to characterize beta-gamma coincidence detectors. These upgrades are currently under development; the status and initial results will be presented. Implementation of these routines into the JavaViewer and subsequent release is planned for FY 2011-2012.
Recent advances in statistical energy analysis
NASA Technical Reports Server (NTRS)
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a function of the modal formulation than a necessary part of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. Conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
Advancing Usability Evaluation through Human Reliability Analysis
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
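A sketch of the quantification step described above: multiply a nominal error probability by performance-shaping-factor multipliers derived from heuristic violations, with SPAR-H's bounded adjustment so the result remains a probability. The nominal value and the heuristic-to-multiplier mapping are hypothetical; only the adjustment formula follows SPAR-H's published form.

```python
# SPAR-H-style usability error probability (UEP). The mapping from
# usability heuristics to PSF multipliers is entirely hypothetical.
def usability_error_probability(nominal, psf_multipliers):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if composite <= 1.0:
        return nominal * composite
    # Bounded adjustment (applied here whenever the composite multiplier
    # exceeds 1) keeps the scaled probability below 1.
    return nominal * composite / (nominal * (composite - 1.0) + 1.0)

# Hypothetical violated heuristics mapped to multipliers, e.g. a severe,
# a moderate, and a minor violation.
psfs = [10.0, 5.0, 2.0]
uep = usability_error_probability(0.001, psfs)
```

With a nominal value of 0.001 and a composite multiplier of 100, the bounded form gives a UEP of about 0.091 rather than the naive 0.1, and the resulting number serves only to rank usability issues, not as a literal error probability.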
Advanced Techniques for Root Cause Analysis
Energy Science and Technology Software Center (ESTSC)
2000-09-19
Five items make up this package, or can be used individually. The Chronological Safety Management Template utilizes a linear adaptation of the Integrated Safety Management System, laid out in the form of a template, that greatly enhances the ability of the analyst to perform the first step of any investigation: to gather all pertinent facts and identify causal factors. The Problem Analysis Tree is a simple three-level problem analysis tree which is easier for organizations outside of WSRC to use. Another part is the Systemic Root Cause Tree. One of the most basic and unique features of Expanded Root Cause Analysis is the Systemic Root Cause portion of the Expanded Root Cause Pyramid. The Systemic Root Causes are even more basic than the Programmatic Root Causes and represent root causes that cut across multiple (if not all) programs in an organization. The Systemic Root Cause portion contains 51 causes embedded at the bottom level of a three-level Systemic Root Cause Tree that is divided into logical, organizationally based categories to assist the analyst. The Computer Aided Root Cause Analysis component allows the analyst at each level of the Pyramid to a) obtain a brief description of the cause being considered, b) record a decision that the item is applicable, c) proceed to the next level of the Pyramid to see only those items at the next level of the tree that are relevant to the particular cause that has been chosen, and d) at the end of the process automatically print out a summary report of the incident, the causal factors as they relate to the safety management system, and the probable causes, apparent causes, Programmatic Root Causes, and Systemic Root Causes for each causal factor with the associated corrective action.
Advanced CMOS Radiation Effects Testing Analysis
NASA Technical Reports Server (NTRS)
Pellish, Jonathan Allen; Marshall, Paul W.; Rodbell, Kenneth P.; Gordon, Michael S.; LaBel, Kenneth A.; Schwank, James R.; Dodds, Nathaniel A.; Castaneda, Carlos M.; Berg, Melanie D.; Kim, Hak S.; Phan, Anthony M.; Seidleck, Christina M.
2014-01-01
Presentation at the annual NASA Electronic Parts and Packaging (NEPP) Program Electronic Technology Workshop (ETW). The material includes an update of progress in this NEPP task area over the past year, which includes testing, evaluation, and analysis of radiation effects data on the IBM 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process. The testing was conducted using test vehicles supplied directly by IBM.
Advanced CMOS Radiation Effects Testing and Analysis
NASA Technical Reports Server (NTRS)
Pellish, J. A.; Marshall, P. W.; Rodbell, K. P.; Gordon, M. S.; LaBel, K. A.; Schwank, J. R.; Dodds, N. A.; Castaneda, C. M.; Berg, M. D.; Kim, H. S.; Phan, A. M.; Seidleck, C. M.
2014-01-01
Presentation at the annual NASA Electronic Parts and Packaging (NEPP) Program Electronic Technology Workshop (ETW). The material includes an update of progress in this NEPP task area over the past year, which includes testing, evaluation, and analysis of radiation effects data on the IBM 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process. The testing was conducted using test vehicles supplied directly by IBM.
Objective analysis of the ARM IOP data: method and sensitivity
Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H
1999-04-01
Motivated by the need to obtain accurate objective analysis of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular-grid method and the line-integral method, together with a variational constraining procedure for the analysis of field experimental data. In addition to the use of upper-air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper-air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement (ARM) Program July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.
Sensitivity analysis of transport modeling in a fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Merkel, Broder J.
2015-03-01
Modeling solute transport in fractured aquifers is still challenging for scientists and engineers. Tracer tests are a powerful tool to investigate fractured aquifers with complex geometry and variable heterogeneity. This research focuses on obtaining hydraulic and transport parameters from an experimental site with several wells. At the site, a tracer test with NaCl was performed under natural gradient conditions. Observed tracer concentrations were used to calibrate a conservative solute transport model by inverse modeling based on UCODE2013, MODFLOW, and MT3DMS. In addition, several statistics are employed for sensitivity analysis. Sensitivity analysis results indicate that hydraulic conductivity and immobile porosity play an important role in the late arrival of the breakthrough curve. The results show that the calibrated model fits the observed data set well.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Hu, Rui; Blonigan, Patrick
2014-06-01
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
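The ill-conditioning that motivates LSS can be demonstrated with the smallest possible chaotic system: integrating the conventional tangent (sensitivity) equation along a chaotic trajectory makes the sensitivity grow exponentially, swamping the finite derivative of the long-time average. The logistic map and parameter value below are illustrative stand-ins for the paper's dynamical systems.

```python
# Logistic map F(x, r) = r x (1 - x) in its chaotic regime, together with
# its tangent equation v' = (dF/dx) v + dF/dr. A naive sensitivity of a
# long-time-averaged quantity would use v, which blows up exponentially.
r, x, v = 3.9, 0.4, 0.0
for n in range(200):
    v = r * (1.0 - 2.0 * x) * v + x * (1.0 - x)   # tangent update (uses old x)
    x = r * x * (1.0 - x)                         # state update
# After 200 steps |v| has grown by dozens of orders of magnitude: it tracks
# the exponential divergence of nearby trajectories, not the (finite)
# parameter sensitivity of the time-averaged behavior.
```

LSS replaces this exploding initial-value tangent with a well-conditioned least-squares problem over the whole trajectory, which is what lets the computed derivative converge to that of the infinitely long time average.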
Control of a mechanical aeration process via topological sensitivity analysis
NASA Astrophysics Data System (ADS)
Abdelwahed, M.; Hassine, M.; Masmoudi, M.
2009-06-01
The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.