Science.gov

Sample records for advanced sensitivity analysis

  1. Advanced Fuel Cycle Economic Sensitivity Analysis

    SciTech Connect

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power related cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  2. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

    Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In the first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST), applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are analyzed separately for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominance changes seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological models.
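
    As a hedged sketch of such TEDPAS-style daily sensitivities (assuming SALib's FAST sampler/analyzer and a hypothetical run_model returning a daily discharge series; the parameter names and bounds are illustrative SWAT-like placeholders, not the paper's setup):

      # Minimal TEDPAS-style sketch: daily first-order FAST sensitivities of
      # modelled discharge. `run_model` and the parameter names/bounds are
      # hypothetical placeholders, and SALib is assumed to be installed.
      import numpy as np
      from SALib.sample import fast_sampler
      from SALib.analyze import fast

      problem = {
          "num_vars": 3,
          "names": ["gw_delay", "alpha_bf", "cn2"],
          "bounds": [[0.0, 500.0], [0.0, 1.0], [35.0, 98.0]],
      }

      X = fast_sampler.sample(problem, 1000)      # one parameter sample, reused daily
      Q = np.array([run_model(x) for x in X])     # shape (n_samples, n_days)

      # FAST applied to each day's modelled discharge gives a time series of
      # first-order sensitivity indices per parameter, shape (n_days, n_params).
      S1_daily = np.array([fast.analyze(problem, Q[:, d])["S1"]
                           for d in range(Q.shape[1])])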

  3. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    NASA Astrophysics Data System (ADS)

    Martynov, Denis

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime, where collection of data becomes possible. States of longitudinal and angular controls of interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in

  4. Recent advances in steady compressible aerodynamic sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene J.-W.; Jones, Henry E.

    1992-01-01

    Sensitivity analysis methods are classified as belonging to one of two broad categories: the discrete (quasi-analytical) approach and the continuous approach. The two approaches differ in the order in which discretization and differentiation of the governing equations and boundary conditions are undertaken. The discussion focuses on the discrete approach. Basic equations are presented, and the major difficulties are reviewed in some detail, as are the proposed solutions. Recent research activity concerned with the continuous approach is also discussed.
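
    To fix ideas, the discrete approach can be summarized in generic notation (mine, not the report's): differentiating the discretized residual equations R(Q(D), D) = 0, with state vector Q, design variable D, and output functional F,

      \frac{\partial R}{\partial Q}\,\frac{dQ}{dD} \;=\; -\,\frac{\partial R}{\partial D},
      \qquad
      \frac{dF}{dD} \;=\; \frac{\partial F}{\partial D} \;+\; \frac{\partial F}{\partial Q}\,\frac{dQ}{dD},

    so each sensitivity requires one linear solve with the flow Jacobian; the continuous approach instead differentiates the governing equations before discretizing.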

  5. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
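
    In generic notation (a common statement of the forward-plus-adjoint construction, not equations copied from the paper): with residual R(Q, D) = 0, forward solutions q_i = dQ/dD_i, and adjoint vector \lambda,

      \left(\frac{\partial R}{\partial Q}\right) q_i \;=\; -\,\frac{\partial R}{\partial D_i},
      \qquad
      \left(\frac{\partial R}{\partial Q}\right)^{\mathsf T}\!\lambda \;=\; -\left(\frac{\partial F}{\partial Q}\right)^{\mathsf T},
      \qquad
      \frac{d^{2}F}{dD_i\,dD_j} \;=\; \mathcal{D}^{2}_{ij}F \;+\; \lambda^{\mathsf T}\,\mathcal{D}^{2}_{ij}R,

    where \mathcal{D}^{2}_{ij}(\cdot) collects the second partial derivatives with respect to (D_i, D_j, Q) contracted with q_i and q_j; once the q_i and \lambda are available, every Hessian entry is a noniterative evaluation.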

  6. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  7. Demasking the integrated value of discharge - Advanced sensitivity analysis on the components of hydrological models

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Gafurov, Abror; Fohrer, Nicola; Gupta, Hoshin

    2016-04-01

    The hydrologic response variable most often used in sensitivity analysis is discharge, which provides an integrated value of all catchment processes. The typical sensitivity analysis evaluates how changes in the model parameters affect the model output. However, because discharge is the aggregated effect of all hydrological processes, the sensitivity signal of a certain model parameter can be strongly masked. A more advanced form of sensitivity analysis would be achieved if we could investigate how the sensitivity of a certain modelled process variable relates to changes in a parameter. On this basis, the controlling parameters for different hydrological components could be detected. Towards this end, we apply the approach of temporal dynamics of parameter sensitivity (TEDPAS) to calculate daily sensitivities for different model outputs with the FAST method. The temporal variations in parameter dominance are then analysed both for the modelled hydrological components themselves and for the rates of change (derivatives) in the modelled hydrological components. The daily parameter sensitivities are then compared with the modelled hydrological components using regime curves. Application of this approach shows that when the corresponding modelled process is investigated instead of discharge, we obtain both an increased indication of parameter sensitivity and a clear pattern showing how the seasonal patterns of parameter dominance change over time for each hydrological process. By relating these results to the model structure, we can see that the sensitivity of model parameters is influenced by the function of the parameter. While capacity parameters show more sensitivity to the modelled hydrological component, flux parameters tend to have a higher sensitivity to rates of change in the modelled hydrological component. By better disentangling the information hidden in the discharge values, we can use sensitivity analyses to obtain a clearer signal

  8. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  9. Sensitivity analysis of infectious disease models: methods, advances and their application.

    PubMed

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V

    2013-09-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that varied by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
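
    As a plain-numpy sketch of one of the five methods, LHS-PRCC (generic code, not the authors'; X would come from Latin hypercube sampling and y from the transmission model, with rank ties ignored for brevity):

      import numpy as np

      def prcc(X, y):
          """Partial rank correlation of each column of X with output y."""
          n, k = X.shape
          R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # column ranks
          ry = np.argsort(np.argsort(y)).astype(float)
          out = np.empty(k)
          for i in range(k):
              # Regress the remaining (ranked) parameters out of both variables.
              others = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
              beta_x = np.linalg.lstsq(others, R[:, i], rcond=None)[0]
              beta_y = np.linalg.lstsq(others, ry, rcond=None)[0]
              res_x, res_y = R[:, i] - others @ beta_x, ry - others @ beta_y
              out[i] = np.corrcoef(res_x, res_y)[0, 1]  # correlate the residuals
          return out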

  10. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that varied by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497

  11. Advanced Sensitivity Analysis of the Danish Eulerian Model in Parallel and Grid Environment

    NASA Astrophysics Data System (ADS)

    Ostromsky, Tz.; Dimov, I.; Marinov, P.; Georgieva, R.; Zlatev, Z.

    2011-11-01

    A 3-stage sensitivity analysis approach, based on the analysis-of-variances technique for calculating Sobol's global sensitivity indices and computationally efficient Monte Carlo integration techniques, is considered and applied to a large-scale air pollution model, the Danish Eulerian Model. In the first stage it is necessary to carry out a set of computationally expensive numerical experiments and to extract the necessary sensitivity analysis data. The output is used to construct mesh-functions of ozone concentration ratios to be used in the next stages for evaluating the necessary variances. Here we use a version of the model specially adapted for this purpose, called SA-DEM. It has been successfully implemented and run on the most powerful parallel supercomputer in Bulgaria—IBM Blue Gene/P. A more advanced version, capable of using efficiently the full capacity of this powerful supercomputer, is described in this paper, followed by some performance analysis of the numerical experiments. Another source of computational power for solving such a tough numerical problem is the computational grid, which is why another version of SA-DEM has been adapted to exploit efficiently the capacity of our Grid infrastructure. The numerical results from both the parallel and Grid implementations are presented, compared, and analysed.
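
    A compact illustration of the Monte Carlo machinery behind first-order Sobol' indices (a generic Saltelli-type estimator in numpy, not the SA-DEM implementation; `model` is a placeholder mapping an (n, k) parameter array to n outputs):

      import numpy as np

      def sobol_first_order(model, n_params, n_samples, seed=0):
          # Generic Saltelli-type estimator: S_i = E[y_B (y_ABi - y_A)] / Var(y).
          rng = np.random.default_rng(seed)
          A = rng.random((n_samples, n_params))   # two independent samples on [0,1]^k
          B = rng.random((n_samples, n_params))
          yA, yB = model(A), model(B)
          var = np.var(np.concatenate([yA, yB]))
          S = np.empty(n_params)
          for i in range(n_params):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                 # substitute column i from B
              S[i] = np.mean(yB * (model(ABi) - yA)) / var
          return S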

  12. Recent advances in the sensitivity analysis for the thermomechanical postbuckling of composite panels

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Three recent developments in the sensitivity analysis for thermomechanical postbuckling response of composite panels are reviewed. The three developments are: (1) effective computational procedure for evaluating hierarchical sensitivity coefficients of the various response quantities with respect to the different laminate, layer, and micromechanical characteristics; (2) application of reduction methods to the sensitivity analysis of the postbuckling response; and (3) accurate evaluation of the sensitivity coefficients to transverse shear stresses. Sample numerical results are presented to demonstrate the effectiveness of the computational procedures presented. Some of the future directions for research on sensitivity analysis for the thermomechanical postbuckling response of composite and smart structures are outlined.

  13. Sensitivity analysis and multidisciplinary optimization for aircraft design - Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  14. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  15. Advanced Simulation Capability for Environmental Management (ASCEM): Developments in Uncertainty Quantification and Sensitivity Analysis.

    NASA Astrophysics Data System (ADS)

    McKinney, S. W.

    2015-12-01

    The effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, forgoing the benefits of the many methods it did not include. Many UQ and SA methods are useful for analyzing models with specific characteristics; therefore, programming all of these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, the programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased for each unique model.

  16. Advances in Sensitivity Analysis Capabilities with SCALE 6.0 and 6.1

    SciTech Connect

    Rearden, Bradley T; Petrie Jr, Lester M; Williams, Mark L

    2010-01-01

    The sensitivity and uncertainty analysis sequences of SCALE compute the sensitivity of k_eff to each constituent multigroup cross section using perturbation theory based on forward and adjoint transport computations with several available codes. Versions 6.0 and 6.1 of SCALE, released in 2009 and 2010, respectively, include important additions to the TSUNAMI-3D sequence, which computes forward and adjoint solutions in multigroup with the KENO Monte Carlo codes. Previously, sensitivity calculations were performed with the simple and efficient geometry capabilities of KENO V.a, but now calculations can also be performed with the generalized geometry code KENO-VI. TSUNAMI-3D requires spatial refinement of the angular flux moment solutions for the forward and adjoint calculations. These refinements are most efficiently achieved with the use of a mesh accumulator. For SCALE 6.0, a more flexible mesh accumulator capability has been added to the KENO codes, enabling varying granularity of the spatial refinement to optimize the calculation for different regions of the system model. The new mesh capabilities allow the efficient calculation of larger models than were previously possible. Additional improvements in the TSUNAMI calculations were realized in the computation of implicit effects of resonance self-shielding on the final sensitivity coefficients. Multigroup resonance self-shielded cross sections are accurately computed with SCALE's robust deterministic continuous-energy treatment for the resolved and thermal energy range and with Bondarenko shielding factors elsewhere, including the unresolved resonance range. However, the sensitivities of the self-shielded cross sections to the parameters input to the calculation are quantified using only full-range Bondarenko factors.
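
    In outline (first-order perturbation theory in one common convention, not SCALE's exact notation): writing the transport eigenproblem as A\phi = (1/k) F\phi with adjoint flux \phi^{\dagger},

      \delta\!\left(\frac{1}{k}\right)
      \;=\; \frac{\bigl\langle \phi^{\dagger},\,\bigl(\delta A - \tfrac{1}{k}\,\delta F\bigr)\,\phi \bigr\rangle}
                 {\bigl\langle \phi^{\dagger},\,F\,\phi \bigr\rangle},
      \qquad
      S_{k,\sigma_x} \;=\; \frac{\sigma_x}{k}\,\frac{\partial k}{\partial \sigma_x},

    which is why the TSUNAMI sequences need both forward and adjoint transport solutions to assemble the sensitivity coefficients.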

  17. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.

  18. Advanced Nuclear Measurements - Sensitivity Analysis Emerging Safeguards, Problems and Proliferation Risk

    SciTech Connect

    Dreicer, J.S.

    1999-07-15

    During the past year this component of the Advanced Nuclear Measurements LDRD-DR has focused on emerging safeguards problems and proliferation risk by investigating problems in two domains. The first is related to the analysis, quantification, and characterization of existing inventories of fissile materials, in particular, the minor actinides (MA) formed in the commercial fuel cycle. Understanding material forms and quantities helps identify and define future measurement problems, instrument requirements, and assists in prioritizing safeguards technology development. The second problem (dissertation research) has focused on the development of a theoretical foundation for sensor array anomaly detection. Remote and unattended monitoring or verification of safeguards activities is becoming a necessity due to domestic and international budgetary constraints. However, the ability to assess the trustworthiness of a sensor array has not been investigated. This research is developing an anomaly detection methodology to assess the sensor array.

  19. Development of the High-Order Decoupled Direct Method in Three Dimensions for Particulate Matter: Enabling Advanced Sensitivity Analysis in Air Quality Models

    EPA Science Inventory

    The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...

  20. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    These symposium proceedings focus primarily on sensitivity analysis of structural response. However, the first session, entitled "General and Multidisciplinary Sensitivity," covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  1. Detection of acute nervous system injury with advanced diffusion-weighted MRI: a simulation and sensitivity analysis.

    PubMed

    Skinner, Nathan P; Kurpad, Shekar N; Schmit, Brian D; Budde, Matthew D

    2015-11-01

    Diffusion-weighted imaging (DWI) is a powerful tool to investigate the microscopic structure of the central nervous system (CNS). Diffusion tensor imaging (DTI), a common model of the DWI signal, has a demonstrated sensitivity to detect microscopic changes as a result of injury or disease. However, DTI and other similar models have inherent limitations that reduce their specificity for certain pathological features, particularly in tissues with complex fiber arrangements. Methods such as double pulsed field gradient (dPFG) and q-vector magic angle spinning (qMAS) have been proposed to specifically probe the underlying microscopic anisotropy without interference from the macroscopic tissue organization. This is particularly important for the study of acute injury, where abrupt changes in the microscopic morphology of axons and dendrites manifest as focal enlargements known as beading. The purpose of this work was to assess the relative sensitivity of DWI measures to beading in the context of macroscopic fiber organization and edema. Computational simulations of DWI experiments in normal and beaded axons demonstrated that, although DWI models can be highly specific for the simulated pathologies of beading and volume fraction changes in coherent fiber pathways, their sensitivity to a single idealized pathology is considerably reduced in crossing and dispersed fibers. However, dPFG and qMAS have a high sensitivity for beading, even in complex fiber tracts. Moreover, in tissues with coherent arrangements, such as the spinal cord or nerve fibers in which tract orientation is known a priori, a specific dPFG sequence variant decreases the effects of edema and improves specificity for beading. Collectively, the simulation results demonstrate that advanced DWI methods, particularly those which sample diffusion along multiple directions within a single acquisition, have improved sensitivity to acute axonal injury over conventional DTI metrics and hold promise for more

  2. A one- and two-dimensional cross-section sensitivity and uncertainty path of the AARE (Advanced Analysis for Reactor Engineering) modular code system

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.

    1988-01-01

    AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, the cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2-dimensional cross-section sensitivity and uncertainty code SENSIBL. In the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.

  3. Sensitivity Test Analysis

    1992-02-20

    SENSIT, MUSIG, and COMSEN are a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.
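
    For orientation, a minimal sketch of the kind of maximum likelihood fit MUSIG performs (a generic probit model in scipy/numpy, not the MUSIG source):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def fit_sensitivity_test(levels, responses):
          """ML fit of P(response at stress x) = Phi((x - mu) / sigma)."""
          levels, responses = np.asarray(levels), np.asarray(responses)

          def nll(params):
              mu, log_sigma = params
              p = norm.cdf((levels - mu) / np.exp(log_sigma))
              p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the log at 0 and 1
              return -np.sum(responses * np.log(p)
                             + (1 - responses) * np.log(1 - p))

          res = minimize(nll, x0=[levels.mean(), 0.0], method="Nelder-Mead")
          return res.x[0], np.exp(res.x[1])      # (mu_hat, sigma_hat)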

  4. Advanced protein crystal growth programmatic sensitivity study

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The purpose of this study is to define the costs of various APCG (Advanced Protein Crystal Growth) program options and to determine the parameters which, if changed, impact the costs and goals of the programs, and to what extent. This was accomplished by developing and evaluating several alternate programmatic scenarios for the microgravity Advanced Protein Crystal Growth program transitioning from the present shuttle activity to the man-tended Space Station to the permanently manned Space Station. These scenarios include selected variations in such sensitivity parameters as development and operational costs, schedules, technology issues, and crystal growth methods. This final report provides information that will aid in planning the Advanced Protein Crystal Growth Program.

  5. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, effects of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.

  6. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO2 with 235U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  7. Brief analysis of causes of sensitive skin and advances in evaluation of anti-allergic activity of cosmetic products.

    PubMed

    Fan, L; He, C; Jiang, L; Bi, Y; Dong, Y; Jia, Y

    2016-04-01

    This review focuses on the causes of sensitive skin and elaborates on the relationship between skin sensitivity and skin irritations and allergies, which has puzzled cosmetologists. Here, an overview is presented of the research on active ingredients in cosmetic products for sensitive skin (anti-sensitive ingredients), which is followed by a discussion of their experimental efficacy. Moreover, several evaluation methods for the efficacy of anti-sensitive ingredients are classified and summarized. Through this review, we aim to provide the cosmetic industry with a better understanding of sensitive skin, which could in turn provide some theoretical guidance to the research on targeted cosmetic products. PMID:26444676

  8. Sensitivity Analysis Without Assumptions

    PubMed Central

    VanderWeele, Tyler J.

    2016-01-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
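
    For reference, the bounding factor can be stated compactly (notation mine; RR_EU relates exposure to the unmeasured confounder, RR_UD the confounder to the outcome):

      B \;=\; \frac{RR_{EU}\,RR_{UD}}{RR_{EU} + RR_{UD} - 1},
      \qquad
      RR_{\text{true}} \;\ge\; \frac{RR_{\text{obs}}}{B},

    so an unmeasured confounder can fully explain away an observed relative risk RR_obs only if its two sensitivity parameters make B at least RR_obs.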

  9. Sensitivity analysis of SPURR

    SciTech Connect

    Witholder, R.E.

    1980-04-01

    The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.

  10. RESRAD parameter sensitivity analysis

    SciTech Connect

    Cheng, J.J.; Yu, C.; Zielen, A.J.

    1991-08-01

    Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.

  11. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F., Jr.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
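
    For context, the standard matrix-model sensitivity and elasticity formulas (textbook results, not specific to this paper) are

      s_{ij} \;=\; \frac{\partial \lambda}{\partial a_{ij}} \;=\; \frac{v_i\, w_j}{\langle \mathbf{v},\,\mathbf{w} \rangle},
      \qquad
      e_{ij} \;=\; \frac{a_{ij}}{\lambda}\,\frac{\partial \lambda}{\partial a_{ij}},

    where w and v are the right (stable stage distribution) and left (reproductive value) eigenvectors of the projection matrix A = (a_{ij}) associated with λ; the scaling question the authors raise is precisely the choice between s_{ij}, e_{ij}, or some other rescaling.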

  12. Advances in Identifying Beryllium Sensitization and Disease

    PubMed Central

    Middleton, Dan; Kowalski, Peter

    2010-01-01

    Beryllium is a lightweight metal with unique qualities related to stiffness, corrosion resistance, and conductivity. While there are many useful applications, researchers in the 1930s and 1940s linked beryllium exposure to a progressive occupational lung disease. Acute beryllium disease is a pulmonary irritant response to high exposure levels, whereas chronic beryllium disease (CBD) typically results from a hypersensitivity response to lower exposure levels. A blood test, the beryllium lymphocyte proliferation test (BeLPT), was an important advance in identifying individuals who are sensitized to beryllium (BeS) and thus at risk for developing CBD. While there is no true “gold standard” for BeS, basic epidemiologic concepts have been used to advance our understanding of the different screening algorithms. PMID:20195436

  13. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2002-01-01

    The Laser Interferometer Space Antenna (LISA) for the detection of gravitational waves is a very-long-baseline interferometer which will measure changes in the distance of a five-million-kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far-field phase varies when the telescope parameters change as a result of small temperature changes.

  14. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes maximum likelihood estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the likelihood ratio test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
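
    In outline (standard likelihood-ratio machinery rather than anything specific to the report): with log-likelihood \ell and maximum likelihood estimates (\hat\mu, \hat\sigma), a joint 100(1-\alpha)% confidence region is

      C_{1-\alpha} \;=\; \Bigl\{ (\mu,\sigma) \;:\; 2\bigl[\ell(\hat\mu,\hat\sigma) - \ell(\mu,\sigma)\bigr] \;\le\; \chi^{2}_{2,\,1-\alpha} \Bigr\},

    with the \chi^{2} quantile taken with one degree of freedom instead of two when profiling a single parameter or percentile.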

  15. Sensitivity analysis in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1984-01-01

    Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.

  16. Geothermal well cost sensitivity analysis: current status

    SciTech Connect

    Carson, C.C.; Lin, Y.T.

    1980-01-01

    The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed. The sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent but that in specific wells the advances considered can result in significant cost reductions.

  17. Advanced PFBC transient analysis

    SciTech Connect

    White, J.S.; Bonk, D.L.

    1997-05-01

    Transient modeling and analysis of advanced Pressurized Fluidized Bed Combustion (PFBC) systems is a research area that is currently under investigation by the US Department of Energy's Federal Energy Technology Center (FETC). The object of the effort is to identify key operating parameters that affect plant performance and then quantify the basic response of major sub-systems to changes in operating conditions. PC-TRAX(TM), a commercially available dynamic software program, was chosen and applied in this modeling and analysis effort. This paper describes the development of a series of TRAX-based transient models of advanced PFBC power plants. These power plants burn coal or other suitable fuel in a PFBC, and the high temperature flue gas supports low-Btu fuel gas or natural gas combustion in a gas turbine topping combustor. When it is utilized, the low-Btu fuel gas is produced in a bubbling bed carbonizer. High temperature, high pressure combustion products exiting the topping combustor are expanded in a modified gas turbine to generate electrical power. Waste heat from the system is used to raise and superheat steam for a reheat steam turbine bottoming cycle that generates additional electrical power. Basic control/instrumentation models were developed and modeled in PC-TRAX and used to investigate off-design plant performance. System performance for various transient conditions and control philosophies was studied.

  18. Estimation of natural history parameters of breast cancer based on non-randomized organized screening data: subsidiary analysis of effects of inter-screening interval, sensitivity, and attendance rate on reduction of advanced cancer.

    PubMed

    Wu, Jenny Chia-Yun; Hakama, Matti; Anttila, Ahti; Yen, Amy Ming-Fang; Malila, Nea; Sarkeala, Tytti; Auvinen, Anssi; Chiu, Sherry Yueh-Hsia; Chen, Hsiu-Hsi

    2010-07-01

    Estimating the natural history parameters of breast cancer not only elucidates the disease progression but also contributes to assessing the impact of inter-screening interval, sensitivity, and attendance rate on reducing advanced breast cancer. We applied three-state and five-state Markov models to data on a two-yearly routine mammography screening in Finland between 1988 and 2000. The mean sojourn time (MST) was computed from the estimated transition parameters. Computer simulation was implemented to examine the effect of inter-screening interval, sensitivity, and attendance rate on reducing advanced breast cancers. In the three-state model, the MST was 2.02 years, and the sensitivity for detecting preclinical breast cancer was 84.83%. In the five-state model, the MST was 2.21 years for localized tumors and 0.82 year for non-localized tumors. Annual, biennial, and triennial screening programs can reduce advanced cancers by 53%, 37%, and 28%, respectively. The effectiveness of intensive screening with poor attendance is the same as that of infrequent screening with a high attendance rate. We demonstrated how to estimate the natural history parameters using a service screening program and applied these parameters to assess the impact of inter-screening interval, sensitivity, and attendance rate on reducing advanced cancer. The proposed method contributes to further cost-effectiveness analysis. However, these findings should be validated using further long-term follow-up data. PMID:20054645
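
    As a worked note (a standard property of such Markov models when transition intensities are constant; the numbers reuse the estimate above): the sojourn time in the preclinical screen-detectable state is exponentially distributed with rate \lambda, so

      \mathrm{MST} \;=\; \frac{1}{\lambda},
      \qquad
      \lambda \;\approx\; \frac{1}{2.02\ \text{yr}} \;\approx\; 0.50\ \text{yr}^{-1}

    for the three-state model.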

  19. Probabilistic sensitivity analysis in health economics.

    PubMed

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. PMID:21930515

  20. Recent advances in sensitized mesoscopic solar cells.

    PubMed

    Grätzel, Michael

    2009-11-17

    -intensive high vacuum and materials purification steps that are currently employed in the fabrication of all other thin-film solar cells. Organic materials are abundantly available, so that the technology can be scaled up to the terawatt scale without running into feedstock supply problems. This gives organic-based solar cells an advantage over the two major competing thin-film photovoltaic devices, i.e., CdTe and CuIn(As)Se, which use highly toxic materials of low natural abundance. However, a drawback of the current embodiment of OPV cells is that their efficiency is significantly lower than that for single and multicrystalline silicon as well as CdTe and CuIn(As)Se cells. Also, polymer-based OPV cells are very sensitive to water and oxygen and, hence, need to be carefully sealed to avoid rapid degradation. The research discussed within the framework of this Account aims at identifying and providing solutions to the efficiency problems that the OPV field is still facing. The discussion focuses on mesoscopic solar cells, in particular, dye-sensitized solar cells (DSCs), which have been developed in our laboratory and remain the focus of our investigations. The efficiency problem is being tackled using molecular science and nanotechnology. The sensitizer constitutes the heart of the DSC, using sunlight to pump electrons from a lower to a higher energy level, generating in this fashion an electric potential difference, which can be exploited to produce electric work. Currently, there is a quest for sensitizers that achieve effective harnessing of the red and near-IR part of sunlight, converting these photons to electricity better than the currently used generation of dyes. Progress in this area has been significant over the past few years, resulting in a boost in the conversion efficiency of the DSC that will be reviewed. PMID:19715294

  1. Sensitive oil industry: users of advanced technology

    NASA Astrophysics Data System (ADS)

    Lindsey, Rhonda P.; Barnes, James L.

    1999-01-01

    The oil industry exemplifies mankind's search for resources in a harsh environment here on the earth. Traditionally, the oil industry has created technological solutions to increasingly difficult exploration, drilling, and production activities as the need has arisen. The depths to which a well must be drilled to produce the finite hydrocarbon resources are increasing, and so is the sensitivity of the surface environments. During oil and gas activities, information is the key to success: not information that is hours old or incomplete, but 'real-time' data that responds to the variable environment downhole and allows prediction and prevention. The difference that information makes can be the difference between a successfully drilled well and a blowout that causes permanent damage to the reservoir and may reduce the value of the reserves downhole. It can also be the difference between recovering 22 percent of the hydrocarbon reserves in a profitable field and recovering none of the reserves because of an uneconomic bottom line. Sensors of every type are essential in the new oil and gas industry, and they must be rugged, accurate, affordable, and long lived. This is true not just for the sophisticated majors exploring the very deep waters of the world but for the thousands of independent producers who provide the lion's share of the oil and gas produced in the US domestic market. The Department of Energy has been instrumental in keeping reserves from being lost by funding advancements in sensor technology. Due to sponsorship by the Federal Government, the combined efforts of researchers in the National Laboratories, academic institutions, and industry research centers are producing increasingly accurate tools capable of functioning in extreme conditions with economics acceptable to the accountants of the industry. Three examples of such sensors developed with Federal funding are given.

  2. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim: To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting.
    Background: The concept of ‘nursing-sensitive indicators’ is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice.
    Design: Concept analysis.
    Data sources: Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English-language journal articles published between 2000 and 2012. Only primary research articles were selected.
    Methods: A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research.
    Results: The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care.
    Conclusion: This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  3. An analysis of sensitivity tests

    SciTech Connect

    Neyer, B.T.

    1992-03-06

    A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, μ, and the standard deviation, σ) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.

  4. Pain sensitivity profiles in patients with advanced knee osteoarthritis.

    PubMed

    Frey-Law, Laura A; Bohr, Nicole L; Sluka, Kathleen A; Herr, Keela; Clark, Charles R; Noiseux, Nicolas O; Callaghan, John J; Zimmerman, M Bridget; Rakel, Barbara A

    2016-09-01

    The development of patient profiles to subgroup individuals on a variety of variables has gained attention as a potential means to better inform clinical decision making. Patterns of pain sensitivity response specific to quantitative sensory testing (QST) modality have been demonstrated in healthy subjects. It has not been determined whether these patterns persist in a knee osteoarthritis population. In a sample of 218 participants, 19 QST measures along with pain, psychological factors, self-reported function, and quality of life were assessed before total knee arthroplasty. Component analysis was used to identify commonalities across the 19 QST assessments to produce standardized pain sensitivity factors. Cluster analysis then grouped individuals who exhibited similar patterns of standardized pain sensitivity component scores. The QST resulted in 4 pain sensitivity components: heat, punctate, temporal summation, and pressure. Cluster analysis resulted in 5 pain sensitivity profiles: a "low pressure pain" group, an "average pain" group, and 3 "high pain" sensitivity groups who were sensitive to different modalities (punctate, heat, and temporal summation). Pain and function differed between pain sensitivity profiles, along with sex distribution; however, no differences in osteoarthritis grade, medication use, or psychological traits were found. Residualizing QST data by age and sex resulted in similar components and pain sensitivity profiles. Furthermore, these profiles are surprisingly similar to those reported in healthy populations, which suggests that individual differences in pain sensitivity are a robust finding even in an older population with significant disease. PMID:27152688
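
    A generic sketch of this two-stage profiling pipeline (scikit-learn PCA as a stand-in for the study's component analysis, followed by k-means clustering; the function and variable names are assumptions, not the study's code):

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def pain_profiles(qst, n_components=4, n_clusters=5, seed=0):
          """qst: (n_subjects, 19) matrix of QST measures."""
          z = StandardScaler().fit_transform(qst)        # standardize each measure
          scores = PCA(n_components=n_components).fit_transform(z)
          labels = KMeans(n_clusters=n_clusters, random_state=seed,
                          n_init=10).fit_predict(scores)  # group similar profiles
          return scores, labels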

  5. Advanced Economic Analysis

    NASA Technical Reports Server (NTRS)

    Greenberg, Marc W.; Laing, William

    2013-01-01

    An Economic Analysis (EA) is a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. An EA helps guide decisions on the "worth" of pursuing an action that departs from status quo ... an EA is the crux of decision-support.

  6. Advanced tests for skin and respiratory sensitization assessment.

    PubMed

    Rovida, Costanza; Martin, Stefan F; Vivier, Manon; Weltzien, Hans Ulrich; Roggen, Erwin

    2013-01-01

    Sens-it-iv is an FP6 Integrated Project that finished in March 2011 after 66 months of activity, thanks to 12 million € of funding. The ultimate goal of the Sens-it-iv project was the development of a set of in vitro methods for the assessment of the skin and respiratory sensitization potential of chemicals and proteins. The level of development was intended to reach the point of entering the pre-validation phase. At the end of the project it can be concluded that the goal has been largely accomplished. Several advanced methods were evaluated extensively, and for some of them a detailed Standard Operating Procedure (SOP) was established. Other, less advanced methods also contributed to our understanding of the mechanisms driving sensitization. The present contribution, which has been prepared with the support of CAAT-Europe, represents a short summary of what was discussed during the 3-day end congress of the Sens-it-iv project in Brussels. It presents a list of methods that are ready for skin sensitization hazard assessment. Potency evaluation and the possibility of distinguishing skin from respiratory sensitizers are also well advanced. PMID:23665811

  7. Sensitivity and Uncertainty Analysis Shell

    1999-04-20

    SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some process for which input is uncertain, and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which inputs to the process model are to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample, and the user-supplied process model analyses the sample. The SUNS post-processor displays statistical results from any existing file that contains sampled input and output values.
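
    The loose-coupling pattern SUNS implements can be illustrated in a few lines: draw a statistical sample of the uncertain inputs, hand it to a user-supplied process model, and post-process the outputs. The toy model, the input distributions, and the sample size below are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # 1. Sample the uncertain inputs (two parameters; distributions assumed).
    n = 1000
    samples = np.column_stack([
        rng.normal(10.0, 1.5, n),   # e.g., a rate constant
        rng.uniform(0.1, 0.9, n),   # e.g., an efficiency
    ])

    # 2. The loosely coupled "process model": any code mapping inputs to output.
    def process_model(k, eff):
        return eff * np.log1p(k)

    outputs = process_model(samples[:, 0], samples[:, 1])

    # 3. Post-process: summary statistics of the sampled output.
    print(outputs.mean(), outputs.std(), np.percentile(outputs, [5, 95]))
    ```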

  8. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
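
    A rough sketch of the idea, under the simplifying assumption that a "fault" can be simulated by hand-injecting a small mutation at one location and counting how often random inputs propagate it to a visible output difference; the toy program and fault are hypothetical.

    ```python
    import random

    def program(x, y):
        t = x * y              # "location" whose faults we probe
        return t > 0           # downstream: only the sign of t survives

    def program_with_fault(x, y):
        t = x * y + 0.5        # hypothetical injected fault at that location
        return t > 0

    # Estimate sensitivity: the fraction of random inputs on which the fault
    # propagates to a visible output difference (no oracle needed).
    random.seed(0)
    trials = 10_000
    hits = sum(
        program(x, y) != program_with_fault(x, y)
        for x, y in ((random.uniform(-5, 5), random.uniform(-5, 5))
                     for _ in range(trials))
    )
    print(hits / trials)  # a value near 0 flags an "insensitive" location
    ```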

  9. Stiff DAE integrator with sensitivity analysis capabilities

    2007-11-26

    IDAS is a general purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.

  10. Point Source Location Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Cox, J. Allen

    1986-11-01

    This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: Gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pillbox distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.

  11. Advances in total scattering analysis

    SciTech Connect

    Proffen, Thomas E; Kim, Hyunjeong

    2008-01-01

    In recent years the analysis of the total scattering pattern has become an invaluable tool to study disordered crystalline and nanocrystalline materials. Traditional crystallographic structure determination is based on Bragg intensities and yields the long-range average atomic structure. By including diffuse scattering in the analysis, the local and medium-range atomic structure can be unravelled. Here we give an overview of recent experimental advances using both X-ray and neutron scattering, as well as current trends in the modelling of total scattering data.

  12. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
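
    Two of the techniques named above, one-at-a-time perturbation and partial differentiation (approximated here by finite differences), reduce to a few lines of code; the model and perturbation sizes are illustrative assumptions.

    ```python
    import numpy as np

    def model(p):
        # Hypothetical model: y = p0^2 * sin(p1) + p2
        return p[0] ** 2 * np.sin(p[1]) + p[2]

    p0 = np.array([2.0, 0.5, 1.0])
    base = model(p0)

    # One-at-a-time: perturb each parameter by 1% and record the output change.
    for i in range(len(p0)):
        p = p0.copy()
        p[i] *= 1.01
        print(f"param {i}: dY = {model(p) - base:+.4f}")

    # Finite-difference partial derivatives (local sensitivity coefficients).
    h = 1e-6
    grad = [(model(p0 + h * np.eye(3)[i]) - base) / h for i in range(3)]
    print("dY/dp =", np.round(grad, 4))
    ```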

  13. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.

  14. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model are used.
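
    The core of the GSE approach is a small linear solve that turns each discipline's local partial sensitivities into total (global) derivatives of the coupled response. Below is a minimal sketch for a hypothetical two-discipline coupling; the coefficients stand in for the stiffness-matrix and aerodynamic-kernel sensitivities mentioned above.

    ```python
    import numpy as np

    # Hypothetical coupled system (e.g., aero load y1 and deflection y2):
    #   y1 = f1(x, y2) = 3x + 0.2*y2
    #   y2 = f2(x, y1) = x  + 0.5*y1
    # Local (partial) sensitivities, as each discipline code would report them:
    df1_dx, df1_dy2 = 3.0, 0.2
    df2_dx, df2_dy1 = 1.0, 0.5

    # Global Sensitivity Equations: solve for the total derivatives dy/dx.
    A = np.array([[1.0, -df1_dy2],
                  [-df2_dy1, 1.0]])
    b = np.array([df1_dx, df2_dx])
    dy_dx = np.linalg.solve(A, b)
    print(dy_dx)  # total sensitivities of the coupled response
    ```

    A design benefit of this structure is that each discipline code only ever supplies its own partial derivatives; the coupling is resolved once, in the linear solve.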

  15. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  16. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs a response function and the variables with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases observations have errors, and it is important to estimate their impact; therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such a sensitivity analysis, demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration; these equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • identification of unknown parameters, and • identification of sources of pollution and sensitivity with respect to those sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem, and the presentation includes a comparison of the results from these two methods.

  17. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2008-09-01

    This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivity is solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a couple of runs to cover a large uncertainty region. Because only small numbers of runs are required, those runs can be done with a high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by the coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
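
    A minimal "glass box" sketch of the forward sensitivity method for a single-parameter ODE: the model dy/dt = -p*y is differentiated with respect to p, and the resulting sensitivity equation is integrated alongside the original state. The model and parameter value are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    p = 0.7  # uncertain decay parameter

    # Augmented system: state y and its sensitivity s = dy/dp.
    #   dy/dt = -p*y
    #   ds/dt = d(-p*y)/dy * s + d(-p*y)/dp = -p*s - y
    def rhs(t, ys):
        y, s = ys
        return [-p * y, -p * s - y]

    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True)
    y, s = sol.y[:, -1]
    print(f"y(5) = {y:.4f},  dy/dp at t=5 = {s:.4f}")
    # Check against the analytic sensitivity d/dp exp(-p t) = -t exp(-p t):
    print(f"analytic: {-5 * np.exp(-p * 5):.4f}")
    ```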

  18. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2013-01-01

    This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other physical parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence studies with much less computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.

  19. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing more important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents the forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other physical parameters of interest, the simulation is allowed

  20. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates relative to those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  1. Sensitivity analysis for solar plates

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1986-01-01

    A review of economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 was prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  2. Sensitivity analysis for solar plates

    NASA Astrophysics Data System (ADS)

    Aster, R. W.

    1986-02-01

    A review of economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 was prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  3. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
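
    As a hedged illustration of the smoothing idea, the sketch below uses LOESS-style local regression (via the lowess function from statsmodels, assumed available) to estimate the trend of a sampled output against each input; the fraction of output variance captured by the smoothed trend then serves as a simple nonparametric sensitivity measure. The test function and sample sizes are invented.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(2)

    # Hypothetical sampled model: strongly nonlinear in x1, weakly linear in x2.
    x1, x2 = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
    y = np.sin(3 * x1) + 0.2 * x2 + rng.normal(0, 0.1, 500)

    # Smooth y against each input; the variance explained by the smooth trend
    # acts as a nonparametric (nonlinearity-friendly) sensitivity measure.
    for name, x in (("x1", x1), ("x2", x2)):
        trend = lowess(y, x, frac=0.4, return_sorted=False)
        print(name, "explained variance:", round(np.var(trend) / np.var(y), 3))
    ```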

  4. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.
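
    The adjoint method's economy is easiest to see on a small discretized linear model A(p)u = b with a scalar objective J = c·u: one forward solve plus one adjoint solve yields dJ/dp for any number of parameters. Everything below (matrices, objective, parameter) is a hypothetical stand-in for the finite element system, not the paper's antenna model.

    ```python
    import numpy as np

    # Hypothetical discretized model A(p) u = b with objective J = c @ u.
    p = 2.0
    A = np.array([[4.0 + p, 1.0], [1.0, 3.0]])
    dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
    b = np.array([1.0, 2.0])
    c = np.array([1.0, -1.0])

    u = np.linalg.solve(A, b)        # one forward solve
    lam = np.linalg.solve(A.T, c)    # one adjoint solve, shared by all parameters
    dJ_dp = -lam @ (dA_dp @ u)       # sensitivity for this parameter
    print(dJ_dp)

    # Finite-difference check.
    h = 1e-6
    print((c @ np.linalg.solve(A + h * dA_dp, b) - c @ u) / h)
    ```

    The adjoint solve does not depend on which parameter is perturbed, which is why the method scales so well when there are many design parameters but few objectives.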

  5. Sensitivity Analysis in the Model Web

    NASA Astrophysics Data System (ADS)

    Jones, R.; Cornford, D.; Boukouvalas, A.

    2012-04-01

    The Model Web, and in particular the Uncertainty enabled Model Web being developed in the UncertWeb project aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web it is likely that the users might not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In

  6. ADVANCED POWER SYSTEMS ANALYSIS TOOLS

    SciTech Connect

    Robert R. Jensen; Steven A. Benson; Jason D. Laumb

    2001-08-31

    The use of Energy and Environmental Research Center (EERC) modeling tools and improved analytical methods has provided key information in optimizing advanced power system design and operating conditions for efficiency, producing minimal air pollutant emissions and utilizing a wide range of fossil fuel properties. This project was divided into four tasks: the demonstration of the ash transformation model, upgrading spreadsheet tools, enhancements to analytical capabilities using scanning electron microscopy (SEM), and improvements to the slag viscosity model. The ash transformation model, Atran, was used to predict the size and composition of ash particles, which have a major impact on the fate of the combustion system. To optimize Atran, key factors such as mineral fragmentation and coalescence and the heterogeneous and homogeneous interaction of the organically associated elements must be considered as they are applied to the operating conditions. The resulting model's ash composition compares favorably to measured results. Enhancements to existing EERC spreadsheet applications included upgrading interactive spreadsheets to calculate the thermodynamic properties of fuels, reactants, products, and steam, with Newton-Raphson algorithms to perform calculations on mass, energy, and elemental balances, isentropic expansion of steam, and gasifier equilibrium conditions. Derivative calculations can be performed to estimate fuel heating values, adiabatic flame temperatures, emission factors, comparative fuel costs, and per-unit carbon taxes from fuel analyses. Using state-of-the-art computer-controlled scanning electron microscopes and associated microanalysis systems, a method to determine viscosity using the incorporation of grey-scale binning acquired by the SEM image was developed. The image analysis capabilities of a backscattered electron image can be subdivided into various grey-scale ranges that can be analyzed separately. Since the grey scale's intensity is

  7. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  8. Recent Advances in Multidisciplinary Analysis and Optimization, part 1

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  9. Recent Advances in Multidisciplinary Analysis and Optimization, part 2

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  10. Sensitivity analysis and application in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Tang, R.

    2013-12-01

    In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that are unavoidably contaminated by various noises and are sampled at a limited number of observation sites. Furthermore, owing to the inherent non-uniqueness of the inverse geophysical problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data. Interpretation of the result is therefore intractable. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also called the Jacobian matrix or the sensitivity matrix, comprises the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by application of the perturbation approach and reciprocity theory. We obtained a visualized sensitivity plot by calculating the sensitivity matrix; the solution is thereby placed under scrutiny, in that the less-resolved parts are flagged and should not be relied upon in interpretation, while the well-resolved parameters can be considered relatively convincing. Sensitivity analysis is hereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, i.e., gravity, magnetic, and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to devise. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of each survey's sensitivity with respect to the
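
    A toy version of building the sensitivity (Jacobian) matrix by perturbation, with a three-parameter stand-in for the MT forward operator; column norms then flag which parameters the data barely constrain. All functions and values are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical forward model: data d(m) for model parameters m
    # (stands in for an MT response; the real operator is far more complex).
    def forward(m):
        return np.array([m[0] + 0.5 * m[1],
                         np.exp(-m[1]),
                         0.01 * m[2]])   # this datum barely senses m[2]

    m0 = np.array([1.0, 0.5, 2.0])
    d0 = forward(m0)

    # Sensitivity (Jacobian) matrix by parameter perturbation.
    h = 1e-6
    J = np.column_stack([(forward(m0 + h * e) - d0) / h for e in np.eye(3)])

    # Column norms: how strongly the data constrain each parameter.
    print(np.linalg.norm(J, axis=0))  # a small norm -> poorly resolved parameter
    ```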

  11. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
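
    Automatic differentiation itself is code- and toolchain-specific, but a closely related trick, the complex-step derivative, illustrates how sensitivity derivatives can be obtained to near machine precision without the subtractive cancellation of finite differences; the response function here is a hypothetical stand-in for a CFD output functional.

    ```python
    import numpy as np

    # Hypothetical smooth response (stands in for a CFD output functional).
    def response(alpha):
        return np.sin(alpha) * np.exp(0.5 * alpha)

    a = 0.3
    h = 1e-20  # a step far smaller than finite differences could tolerate

    # Complex-step derivative: f'(a) ~= Im(f(a + i*h)) / h, no cancellation.
    deriv = np.imag(response(a + 1j * h)) / h
    analytic = np.cos(a) * np.exp(0.5 * a) + 0.5 * response(a)
    print(deriv, analytic)  # the two values agree to machine precision
    ```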

  12. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, James C., III; Taylor, Arthur C., III; Barnwell, Richard W.; Newman, Perry A.; Hou, Gene J.-W.

    1999-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics (CFD). The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid CFD algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid CFD in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  13. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of SEP thrust system performance for an Encke rendezvous mission. A detailed description of the effects of thrust subsystem hardware tolerances on mission performance is included, together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and the graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  14. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  15. Comparative Sensitivity Analysis of Muscle Activation Dynamics.

    PubMed

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  16. A numerical comparison of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.

  17. Bayesian sensitivity analysis of a nonlinear finite element model

    NASA Astrophysics Data System (ADS)

    Becker, W.; Oakley, J. E.; Surace, C.; Gili, P.; Rowson, J.; Worden, K.

    2012-10-01

    A major problem in uncertainty and sensitivity analysis is that the computational cost of propagating probabilistic uncertainty through large nonlinear models can be prohibitive when using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator, which is a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed for a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression. The GP is particularly suited to uncertainty analysis since it is able to emulate a wide class of models, and accounts for its own emulation uncertainty. Additionally, uncertainty and sensitivity measures can be estimated analytically, given certain assumptions. The GP approach is explained in detail here, and a case study of a finite element model of an airship is used to demonstrate the method. It is concluded that the GP is a very attractive way of performing uncertainty and sensitivity analysis on large models, provided that the dimensionality is not too high.
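
    Here is a minimal sketch of the emulator idea using scikit-learn's Gaussian process regressor (an assumed dependency): fit the GP to a handful of runs of an "expensive" stand-in model, then estimate first-order variance-based sensitivities from cheap Monte Carlo on the emulator. The analytic sensitivity formulas mentioned above are replaced here by a simple bin-averaging estimate, and all functions and sample sizes are invented.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)

    # Expensive-model stand-in, evaluated at only a few design points.
    def expensive_model(X):
        return np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2

    X_train = rng.uniform(0, 1, (30, 2))       # 30 runs of the "true" model
    gp = GaussianProcessRegressor(kernel=RBF([0.2, 0.2]), normalize_y=True)
    gp.fit(X_train, expensive_model(X_train))

    # Cheap Monte Carlo on the emulator: first-order variance-based indices.
    N = 20_000
    X = rng.uniform(0, 1, (N, 2))
    y = gp.predict(X)
    for i in range(2):
        # Bin-average y over x_i to estimate E[y | x_i], then take its variance.
        order = np.argsort(X[:, i])
        cond_mean = [c.mean() for c in np.array_split(y[order], 100)]
        print(f"S_{i + 1} ~ {np.var(cond_mean) / np.var(y):.2f}")
    ```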

  18. Advanced Technology Lifecycle Analysis System (ATLAS)

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is

  19. Pressure-Sensitive Paints Advance Rotorcraft Design Testing

    NASA Technical Reports Server (NTRS)

    2013-01-01

    The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center's Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. With PSP, each pixel recorded by the camera becomes an optical pressure

  20. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
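
    For comparison with the benchmark methods named above, variance-based Sobol indices can be computed in a few lines using the SALib package (an assumed dependency, not mentioned in the abstract), here on the standard Ishigami test function:

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Three-factor problem definition for the Ishigami test function.
    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "x3"],
        "bounds": [[-np.pi, np.pi]] * 3,
    }

    X = saltelli.sample(problem, 1024)   # Saltelli sampling for Sobol indices
    Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
         + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

    Si = sobol.analyze(problem, Y)
    print(Si["S1"])   # first-order indices
    print(Si["ST"])   # total-order indices
    ```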

  1. Pediatric Pain, Predictive Inference, and Sensitivity Analysis.

    ERIC Educational Resources Information Center

    Weiss, Robert

    1994-01-01

    Coping style and the effects of a counseling intervention on pain tolerance were studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…

  2. Advanced materials: Information and analysis needs

    SciTech Connect

    Curlee, T.R.; Das, S.; Lee, R.; Trumble, D.

    1990-09-01

    This report presents the findings of a study to identify the types of information and analysis that are needed for advanced materials. The project was sponsored by the US Bureau of Mines (BOM). It includes a conceptual description of information needs for advanced materials and the development and implementation of a questionnaire on the same subject. This report identifies twelve fundamental differences between advanced and traditional materials and discusses the implications of these differences for data and analysis needs. Advanced and traditional materials differ significantly in terms of physical and chemical properties. Advanced material properties can be customized more easily. The production of advanced materials may differ from traditional materials in terms of inputs, the importance of by-products, the importance of different processing steps (especially fabrication), and scale economies. The potential for change in advanced materials characteristics and markets is greater and is derived from the marriage of radically different materials and processes. In addition to the conceptual study, a questionnaire was developed and implemented to assess the opinions of people who are likely users of BOM information on advanced materials. The results of the questionnaire, which was sent to about 1000 people, generally confirm the propositions set forth in the conceptual part of the study. The results also provide data on the categories of advanced materials and the types of information that are of greatest interest to potential users. 32 refs., 1 fig., 12 tabs.

  3. Ultra-sensitive transducer advances micro-measurement range

    NASA Technical Reports Server (NTRS)

    Rogallo, V. L.

    1964-01-01

    An ultrasensitive piezoelectric transducer, that converts minute mechanical forces into electrical impulses, measures the impact of micrometeoroids against space vehicles. It has uniform sensitivity over the entire target area and a high degree of stability.

  4. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera used for the simulation was the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with environment and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  5. Sensitivity analysis for magnetic induction tomography.

    PubMed

    Soleimani, Manuchehr; Jersey-Willuhn, Karen

    2004-01-01

    This work focuses on the sensitivity analysis of magnetic induction tomography in terms of theoretical modelling and numerical implementation. We explain a new and efficient method to determine the Jacobian matrix directly from the results of the forward solution. The results presented are for the eddy current approximation and are given in terms of the magnetic vector potential, which is computationally convenient and may be extracted directly from the FE solution of the forward problem. Examples of sensitivity maps for an opposite sensor geometry are also shown. PMID:17271947

  6. Advanced nanoscale separations and mass spectrometry for sensitive high-throughput proteomics

    SciTech Connect

    Shen, Yufeng; Smith, Richard D.

    2005-06-01

    We review recent developments in separations and mass spectrometric instrumentation for sensitive and high-throughput proteomic analyses. These efforts have been primarily focused on the development of high-efficiency (separation peak capacity of ~10³) nanoscale liquid chromatography (nanoLC; e.g., flow rates extending down to ~20 nL/min at optimal separation linear velocities through narrow packed capillaries) in combination with advanced mass spectrometry (MS), including high-sensitivity and high-resolution Fourier transform ion cyclotron resonance (FTICR) MS. This technology enables MS analysis of low nanogram-level proteomic samples (i.e., nanoscale proteomics) with individual protein identification sensitivity at the low zeptomole level. The resultant protein measurement dynamic range can reach 10⁶ for nanogram-sized proteomic samples, while more abundant proteins can be detected from complex sub-picogram size proteome samples. The average proteome identification throughput using MS/MS is >200 proteins/h for a ~3 h analysis. These qualities provide the foundation for proteomics studies of single cells or small populations of cells. The instrumental robustness required for automation and for providing high-quality routine performance in nanoscale proteomic analyses is also discussed.

  7. Rotary absorption heat pump sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Bamberger, J. A.; Zalondek, F. R.

    1990-03-01

    Conserve Resources, Incorporated is currently developing an innovative, patented absorption heat pump. The heat pump uses rotation and thin-film technology to enhance the absorption process and to provide a more efficient, compact system. The results of a sensitivity analysis of rotary absorption heat pump (RAHP) performance, conducted to further the development of a 1-ton RAHP, are presented. The objective of the uncertainty analysis was to determine the sensitivity of RAHP steady-state performance to uncertainties in design parameters. Prior to conducting the uncertainty analysis, a computer model was developed to describe the performance of the RAHP thermodynamic cycle. The RAHP performance is based on many interrelated factors, not all of which could be investigated during the sensitivity analysis. Confirmatory measurements of LiBr/H2O properties during absorber/generator operation will provide experimental verification that the system is operating as it was designed to operate. Quantities to be measured include: flow rate in the absorber and generator, film thickness, recirculation rate, and the effects of rotational speed on these parameters.

  8. Advanced analysis methods in particle physics

    SciTech Connect

    Bhat, Pushpalatha C.

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.

  9. Advanced Power System Analysis Capabilities

    NASA Technical Reports Server (NTRS)

    1997-01-01

    As a continuing effort to assist in the design and characterization of space power systems, the NASA Lewis Research Center's Power and Propulsion Office developed a powerful computerized analysis tool called System Power Analysis for Capability Evaluation (SPACE). This year, SPACE was used extensively in analyzing detailed operational timelines for the International Space Station (ISS) program. SPACE was developed to analyze the performance of space-based photovoltaic power systems such as that being developed for the ISS. It is a highly integrated tool that combines numerous factors in a single analysis, providing a comprehensive assessment of the power system's capability. Factors particularly critical to the ISS include the orientation of the solar arrays toward the Sun and the shadowing of the arrays by other portions of the station.

  10. Diagnostic Analysis of Middle Atmosphere Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Cai, M.; Swartz, W. H.; Coy, L.; Yee, J.; Talaat, E. R.

    2013-12-01

    Both the middle atmosphere climate sensitivity associated with the cooling trend and its uncertainty due to a complex system of drivers increase with altitude. Furthermore, the combined effect of middle atmosphere cooling due to long-lived greenhouse gases and ozone is also associated with natural climate variations due to solar activity. To understand and predict climate change from a global perspective, we use the recently developed climate feedback-response analysis method (CFRAM) to identify and isolate the signals from the external forcing and from different feedback processes in the middle atmosphere climate system. By use of the JHU/APL middle atmosphere radiation algorithm, the CFRAM is applied to the model output fields of the high-altitude GEOS-5 climate model in the middle atmosphere to delineate the individual contributions of radiative forcing to middle atmosphere climate sensitivity.

  11. Sensitivity analysis of artificial neural network (ANN) brightness temperature predictions over snow-covered regions in North America using the Advanced Microwave Sounding Radiometer (AMSR-E) from 2002 to 2011

    NASA Astrophysics Data System (ADS)

    Xue, Y.; Forman, B. A.

    2013-12-01

    Snow is a significant contributor to the earth's hydrologic cycle, energy cycle, and climate system. Further, up to 80% of the freshwater supply in the western United States originates as snow (and ice). Characterization of the mass of snow, or snow water equivalent (SWE), across regional and continental scales has commonly been conducted using satellite-based passive microwave (PMW) brightness temperatures (Tb) within a SWE retrieval algorithm. However, SWE retrievals often suffer from deficiencies related to deep snow, wet snow, snow evolution, snow aging, overlying vegetation, surface and internal ice lenses, depth hoar, and sub-grid scale lakes. As an alternative to SWE retrievals, this study explores the potential for using PMW Tb and machine learning within a data assimilation framework. An artificial neural network (ANN) is presented for eventual use as an observation operator to map the land surface model states into Tb space. This study explores the sensitivity of an ANN as a computationally efficient measurement model operator for the prediction of PMW Tb across North America. The analysis employs normalized sensitivity coefficients and a one-at-a-time approach such that each of the 11 different inputs could be examined separately in order to quantify the impact of perturbations to each input on the multi-frequency, multi-polarization Tb output from the ANN. Spatiotemporal variability in the Tb predictions across regional spatial scales and seasonal timescales is investigated from 2002 to 2011. Preliminary results suggest ANN-based Tb predictions are sensitive to certain snow states, such as SWE, snow density, and snow temperature in non-vegetated or sparsely vegetated regions. Further, the sensitivity of the ANN prediction of ΔTb = Tb,18V* − Tb,36V* to changes in SWE suggests a likelihood of success when the ANN is eventually implemented into a data assimilation framework. Despite the promise of these initial results, challenges remain in enhancing ANN sensitivity
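
    A sketch of the normalized-sensitivity-coefficient, one-at-a-time procedure described above, with a trivial linear stand-in for the trained ANN; the inputs, values, and coefficients are invented for illustration.

    ```python
    import numpy as np

    # Stand-in for a trained Tb predictor (the real one is an ANN with 11 inputs).
    def predict_tb(swe, density, temp):
        return 250.0 - 0.05 * swe + 10.0 * density + 0.1 * temp

    x0 = {"swe": 150.0, "density": 0.3, "temp": 265.0}
    y0 = predict_tb(**x0)

    # One-at-a-time normalized sensitivity coefficients: (dy/dx) * (x0/y0).
    for name in x0:
        x = dict(x0)
        x[name] *= 1.01                      # +1% perturbation of one input
        dy = predict_tb(**x) - y0
        nsc = (dy / (0.01 * x0[name])) * (x0[name] / y0)
        print(f"{name}: NSC = {nsc:+.3f}")
    ```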

  12. Engaging Chinese American Adults in Advance Care Planning: A Community-Based, Culturally Sensitive Seminar.

    PubMed

    Lee, Mei Ching; Hinderer, Katherine A; Friedmann, Erika

    2015-08-01

    Ethnic minority groups are less engaged than Caucasian American adults in advance care planning (ACP). Knowledge deficits, language, and culture are barriers to ACP. Limited research exists on ACP and advance directives in the Chinese American adult population. Using a pre-posttest, repeated measures design, the current study explored the effectiveness of a nurse-led, culturally sensitive ACP seminar for Chinese American adults on (a) knowledge, completion, and discussion of advance directives; and (b) the relationship between demographic variables, advance directive completion, and ACP discussions. A convenience sample of 72 urban, community-dwelling Chinese American adults (mean age = 61 years) was included. Knowledge, advance directive completion, and ACP discussions increased significantly after attending the nurse-led seminar (p < 0.01). Increased age correlated with advance directive completion and ACP discussions; female gender correlated with ACP discussions. Nursing education in a community setting increased advance directive knowledge and ACP engagement in Chinese American adults. PMID:25912237

  13. Sensitivity analysis of coexistence in ecological communities: theory and application.

    PubMed

    Barabás, György; Pásztor, Liz; Meszéna, Géza; Ostling, Annette

    2014-12-01

    Sensitivity analysis, the study of how ecological variables of interest respond to changes in external conditions, is a theoretically well-developed and widely applied approach in population ecology. Though the application of sensitivity analysis to predicting the response of species-rich communities to disturbances also has a long history, derivation of a mathematical framework for understanding the factors leading to robust coexistence has only been a recent undertaking. Here we suggest that this development opens up a new perspective, providing advances ranging from the applied to the theoretical. First, it yields a framework to be applied in specific cases for assessing the extinction risk of community modules in the face of environmental change. Second, it can be used to determine trait combinations allowing for coexistence that is robust to environmental variation, and limits to diversity in the presence of environmental variation, for specific community types. Third, it offers general insights into the nature of communities that are robust to environmental variation. We apply recent community-level extensions of mathematical sensitivity analysis to example models for illustration. We discuss the advantages and limitations of the method, and some of the empirical questions the theoretical framework could help answer. PMID:25252135

  14. The Theoretical Foundation of Sensitivity Analysis for GPS

    NASA Astrophysics Data System (ADS)

    Shikoska, U.; Davchev, D.; Shikoski, J.

    2008-10-01

    In this paper the equations of sensitivity analysis are derived and the theoretical underpinnings for the analyses are established. The paper propounds land-vehicle navigation concepts and a definition for sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis to the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations of sensitivity analysis are re-derived for a linearized Kalman filter.
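
    The paper derives these sensitivities analytically; numerically, the same quantity can be approximated by perturbing a filter parameter and re-running the filter. The toy sketch below does this for a 1-D constant-velocity Kalman filter, whose setup and numbers are illustrative assumptions rather than the paper's land-vehicle navigation model (Python):

      import numpy as np

      def run_kf(q, zs, dt=1.0, r=1.0):
          """1-D constant-velocity Kalman filter; returns final position estimate."""
          F = np.array([[1.0, dt], [0.0, 1.0]])
          H = np.array([[1.0, 0.0]])
          Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
          x, P = np.zeros(2), np.eye(2)
          for z in zs:
              x, P = F @ x, F @ P @ F.T + Q          # predict
              S = H @ P @ H.T + r                    # innovation covariance
              K = P @ H.T / S                        # Kalman gain
              x = x + (K * (z - H @ x)).ravel()      # update
              P = (np.eye(2) - K @ H) @ P
          return x[0]

      rng = np.random.default_rng(1)
      zs = np.cumsum(rng.normal(1.0, 0.5, size=50))  # synthetic position data

      # Central-difference sensitivity of the estimate to process noise q.
      q0, dq = 0.1, 1e-4
      sens = (run_kf(q0 + dq, zs) - run_kf(q0 - dq, zs)) / (2 * dq)
      print(f"d(final position estimate)/dq ~= {sens:.4f}")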

  15. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094

  16. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…

  17. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
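
    One of the sensitivity measures mentioned above, estimating success probability as a function of a single dispersed input, can be sketched as follows; the input names, the requirement, and the toy data are hypothetical stand-ins, not Orion dispersions (Python):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 20000

      # Two dispersed Monte Carlo inputs standing in for, e.g., mass and wind.
      mass = rng.normal(9000.0, 200.0, n)
      wind = rng.uniform(0.0, 15.0, n)

      # Toy performance metric and requirement: |miss distance| < 300 m.
      miss = 0.04 * (mass - 9000.0) + 12.0 * wind + rng.normal(0.0, 40.0, n)
      success = np.abs(miss) < 300.0

      def success_by_bins(x, success, nbins=10):
          """Estimate P(success | x in bin) over quantile bins of one input."""
          edges = np.quantile(x, np.linspace(0, 1, nbins + 1))
          idx = np.clip(np.searchsorted(edges, x) - 1, 0, nbins - 1)
          return np.array([success[idx == b].mean() for b in range(nbins)])

      for name, x in [("mass", mass), ("wind", wind)]:
          p = success_by_bins(x, success)
          print(f"{name}: P(success) over bins = {p.min():.3f}..{p.max():.3f}")

    An input whose per-bin success probability varies widely is a candidate critical factor.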

  18. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
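
    The idea can be shown in miniature: split the input space at a bifurcation boundary and fit an independent Gaussian process on each side. The sketch below uses scikit-learn and a split location that is assumed known; an actual treed GP learns the partition from data via a decision tree, so this is a simplified stand-in (Python):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Toy bifurcating response: a jump in behavior at x = 0.5.
      rng = np.random.default_rng(3)
      X = rng.uniform(0, 1, (80, 1))
      y = np.where(X[:, 0] < 0.5, np.sin(6 * X[:, 0]), 2.0 + np.cos(6 * X[:, 0]))

      # One GP per region, split at the (here, known) bifurcation boundary.
      split = 0.5
      gps = {}
      for name, mask in [("left", X[:, 0] < split), ("right", X[:, 0] >= split)]:
          gps[name] = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X[mask], y[mask])

      def predict(x):
          gp = gps["left"] if x < split else gps["right"]
          m, s = gp.predict([[x]], return_std=True)
          return m[0], s[0]

      for x in (0.25, 0.75):
          m, s = predict(x)
          print(f"x = {x}: mean = {m:.3f}, std = {s:.3f}")

    Because each region gets its own covariance hyperparameters, the discontinuity never has to be smoothed over by a single global kernel.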

  19. A Post-Monte-Carlo Sensitivity Analysis Code

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in the order of importance, and also quantifies the relative importance among the sensitive variables.

  20. Advances in Mid-Infrared Spectroscopy for Chemical Analysis.

    PubMed

    Haas, Julian; Mizaikoff, Boris

    2016-06-12

    Infrared spectroscopy in the 3-20 μm spectral window has evolved from a routine laboratory technique into a state-of-the-art spectroscopy and sensing tool by benefitting from recent progress in increasingly sophisticated spectra acquisition techniques and advanced materials for generating, guiding, and detecting mid-infrared (MIR) radiation. Today, MIR spectroscopy provides molecular information with trace to ultratrace sensitivity, fast data acquisition rates, and high spectral resolution catering to demanding applications in bioanalytics, for example, and to improved routine analysis. In addition to advances in miniaturized device technology without sacrificing analytical performance, selected innovative applications for MIR spectroscopy ranging from process analysis to biotechnology and medical diagnostics are highlighted in this review. PMID:27070183

  2. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  3. Recent advances in morphological cell image analysis.

    PubMed

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have made great contributions to diagnostic results. Morphological cell analysis covers cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  4. Global sensitivity analysis of groundwater transport

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Soltani, S.; Vigouroux, G.

    2015-12-01

    In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings on the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can easily be adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
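
    Sobol index rankings of this kind can be reproduced with standard tooling. Below is a minimal sketch using the SALib library on a toy attenuation-style model; the parameter names, bounds, and model are illustrative stand-ins, not the LaSAR formulation (Python):

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Toy stand-in for the transport model: attenuation as a function of
      # macro-dispersion, mass transfer rate k0, and velocity (names illustrative).
      problem = {
          "num_vars": 3,
          "names": ["dispersion", "k0", "velocity"],
          "bounds": [[0.1, 1.0], [1e-4, 1e-1], [0.5, 2.0]],
      }

      X = saltelli.sample(problem, 1024)
      Y = np.exp(-X[:, 1] * 10.0 / X[:, 2]) * (1.0 + 0.3 * X[:, 0])

      Si = sobol.analyze(problem, Y)
      for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
          print(f"{name}: first-order = {s1:.3f}, total-effect = {st:.3f}")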

  5. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  6. Multicomponent dynamical nucleation theory and sensitivity analysis.

    PubMed

    Kathmann, Shawn M; Schenter, Gregory K; Garrett, Bruce C

    2004-05-15

    Vapor to liquid multicomponent nucleation is a dynamical process governed by a delicate interplay between condensation and evaporation. Since the population of the vapor phase is dominated by monomers at reasonable supersaturations, the formation of clusters is governed by monomer association and dissociation reactions. Although there is no intrinsic barrier in the interaction potential along the minimum energy path for the association process, the formation of a cluster is impeded by a free energy barrier. Dynamical nucleation theory provides a framework in which equilibrium evaporation rate constants can be calculated and the corresponding condensation rate constants determined from detailed balance. The nucleation rate can then be obtained by solving the kinetic equations. The rate constants governing the multistep kinetics of multicomponent nucleation including sensitivity analysis and the potential influence of contaminants will be presented and discussed. PMID:15267849

  7. Sensitivity analysis of periodic matrix population models.

    PubMed

    Caswell, Hal; Shyu, Esther

    2012-12-01

    Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments. PMID:23316494
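
    For the simplest case of a two-season periodic product, the eigenvalue sensitivity follows from the standard left/right-eigenvector formula plus the chain rule through the product. The sketch below is a minimal numerical illustration with made-up matrices, not the authors' general matrix-calculus machinery for vector- or matrix-valued outputs (Python):

      import numpy as np

      # Two seasonal projection matrices (illustrative values); the annual
      # projection matrix is the periodic product A = B2 @ B1.
      B1 = np.array([[0.2, 1.5], [0.4, 0.0]])   # e.g., breeding season
      B2 = np.array([[0.8, 0.1], [0.3, 0.6]])   # e.g., non-breeding season
      A = B2 @ B1

      # Dominant eigenvalue lambda with right (v) and left (w) eigenvectors.
      evals, V = np.linalg.eig(A)
      lam = evals.real.max()
      v = V[:, evals.real.argmax()].real
      evalsT, W = np.linalg.eig(A.T)
      w = W[:, evalsT.real.argmax()].real

      # Chain rule through the product:
      #   dA = B2 dB1  =>  dlam/dB1 = outer(B2^T w, v) / (w . v)
      #   dA = dB2 B1  =>  dlam/dB2 = outer(w, B1 v) / (w . v)
      S1 = np.outer(B2.T @ w, v) / (w @ v)
      S2 = np.outer(w, B1 @ v) / (w @ v)

      # Finite-difference check on one entry of the breeding-season matrix.
      eps = 1e-6
      B1p = B1.copy()
      B1p[0, 1] += eps
      lam_p = np.linalg.eigvals(B2 @ B1p).real.max()
      print(S1[0, 1], (lam_p - lam) / eps)    # the two numbers should agree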

  8. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation of analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm in reconstructing a magma pressure source (e.g., Camacho et al., 2011, Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  9. Sensitivity Analysis of OECD Benchmark Tests in BISON

    SciTech Connect

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.

  10. Design, analysis and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Garcia, A., III

    1982-01-01

    An analytical methodology for advanced encapsulation designs was developed. From these methods, design sensitivities are established for the development of photovoltaic module criteria and the definition of needed research tasks. Analytical models were developed to perform optical, thermal, and electrical analyses on candidate encapsulation systems. From these analyses, several candidate systems were selected for qualification testing. Additionally, test specimens of various types were constructed and tested to determine the validity of the analysis methodology developed. Identified deficiencies and/or discrepancies between analytical models and relevant test data were corrected, improving the prediction capability of the analytical models. Encapsulation engineering generalities, principles, and design aids for photovoltaic module designers were generated.

  11. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  12. Advanced Analysis Methods in High Energy Physics

    SciTech Connect

    Pushpalatha C. Bhat

    2001-10-03

    During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.

  13. The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

  14. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles on the way of employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest or the output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are a part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified with the ranking of the total-effect sensitivity indices. The results of the present
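
    The three sampling strategies compared in the study are all available in SciPy's quasi-Monte Carlo module. A toy sketch, with an invented two-parameter energy proxy standing in for the full assessment model, might look like this (Python):

      import numpy as np
      from scipy.stats import qmc
      from scipy.special import gamma

      def energy_proxy(u):
          """Toy lifetime-energy proxy from two inputs mapped from [0,1]^2:
          Weibull shape k in [1.5, 2.5] and scale c in [6, 10] m/s."""
          k = 1.5 + u[:, 0]
          c = 6.0 + 4.0 * u[:, 1]
          return c**3 * gamma(1.0 + 3.0 / k)   # ~ mean cube of wind speed

      n, d = 256, 2
      samples = {
          "Sobol' (SBSS)": qmc.Sobol(d, seed=0).random(n),
          "LHS": qmc.LatinHypercube(d, seed=0).random(n),
          "pseudo-random (PRS)": np.random.default_rng(0).random((n, d)),
      }
      for name, u in samples.items():
          print(f"{name:20s}: mean energy proxy = {energy_proxy(u).mean():9.1f}")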

  15. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for a gradient-based optimization with aeroelastic constraints. In this study, flutter characteristic of a typical section in subsonic compressible flow is examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins could be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  16. Next generation sequencing analysis of platinum refractory advanced germ cell tumor sensitive to Sunitinib (Sutent®) a VEGFR2/PDGFRβ/c-kit/ FLT3/RET/CSF1R inhibitor in a phase II trial

    PubMed Central

    2014-01-01

    Background Germ cell tumors (GCTs) are the most common solid tumors in adolescent and young adult males (ages 15 to 35 years) and remain among the most curable of all solid malignancies. However, a subset of patients will have tumors that are refractory to standard chemotherapy agents. The management of this refractory population remains challenging, and approximately 400 patients continue to die every year of this refractory disease in the United States. Methods Given the preclinical evidence implicating vascular endothelial growth factor (VEGF) signaling in the biology of germ cell tumors, we hypothesized that the vascular endothelial growth factor receptor (VEGFR) inhibitor sunitinib (Sutent) may possess important clinical activity in the treatment of this refractory disease. We proposed a Phase II efficacy study of sunitinib in seminomatous and non-seminomatous metastatic GCTs refractory to first-line chemotherapy treatment (ClinicalTrials.gov Identifier: NCT00912912). Next generation targeted exome sequencing using HiSeq 2000 (Illumina Inc., San Diego, CA, USA) was performed on the tumor sample of the unusual responder. Results Five patients were enrolled into this Phase II study. Among them we report here the clinical course of a patient (Patient #5) who had an exceptional response to sunitinib. Next generation sequencing to understand this patient's response to sunitinib revealed RET amplification, EGFR and KRAS amplification as relevant aberrations. Oncoscan MIP arrays were employed to validate the copy number analysis, which confirmed RET gene amplification. Conclusion Sunitinib conferred clinical benefit to this heavily pre-treated patient. Next generation sequencing of this 'exceptional responder' identified the first reported case of a RET amplification as a potential basis of sensitivity to sunitinib (VEGFR2/PDGFRβ/c-kit/FLT3/RET/CSF1R inhibitor) in a patient with refractory germ cell tumor. Further characterization of GCT patients using

  17. Multitarget global sensitivity analysis of n-butanol combustion.

    PubMed

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-01

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis. PMID:23530815

  18. Advanced Power Plant Development and Analysis Methodologies

    SciTech Connect

    A.D. Rao; G.S. Samuelsen; F.L. Robson; B. Washom; S.G. Berenyi

    2006-06-30

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into advanced power plant systems with goals of achieving high efficiency and minimized environmental impact while using fossil fuels. These power plant concepts include 'Zero Emission' power plants and the 'FutureGen' H2 co-production facilities. The study is broken down into three phases. Phase 1 of this study consisted of utilizing advanced technologies that are expected to be available in the 'Vision 21' time frame such as mega scale fuel cell based hybrids. Phase 2 includes current state-of-the-art technologies and those expected to be deployed in the nearer term such as advanced gas turbines and high temperature membranes for separating gas species and advanced gasifier concepts. Phase 3 includes identification of gas turbine based cycles and engine configurations suitable to coal-based gasification applications and the conceptualization of the balance of plant technology, heat integration, and the bottoming cycle for analysis in a future study. Also included in Phase 3 is the task of acquiring/providing turbo-machinery in order to gather turbo-charger performance data that may be used to verify simulation models as well as establishing system design constraints. The results of these various investigations will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  19. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
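
    The workflow described, build a Weibull model, simulate the population, then sweep the wear-out (shape) parameter and track the probability of spares sufficiency, can be sketched as below; the fleet size, characteristic life, spares count, and horizon are invented numbers, not ISS data (Python):

      import numpy as np

      rng = np.random.default_rng(4)

      def prob_sufficiency(beta, eta=50000.0, n_units=20, n_spares=4,
                           horizon=40000.0, n_trials=20000):
          """P(failures within horizon <= spares) for a Weibull(beta, eta) fleet.
          beta > 1 models wear-out; beta = 1 is the memoryless exponential case."""
          # Weibull failure times for each unit in each Monte Carlo trial.
          t = eta * rng.weibull(beta, size=(n_trials, n_units))
          failures = (t < horizon).sum(axis=1)
          return (failures <= n_spares).mean()

      for beta in (1.0, 2.0, 4.0):
          print(f"beta = {beta}: P(spares sufficient) = {prob_sufficiency(beta):.3f}")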

  20. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

    To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions from VSF data derived from measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819

  1. Tilt-Sensitivity Analysis for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Papalexandris, Miltiadis; Waluschka, Eugene

    2003-01-01

    A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.

  2. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
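
    The epsilon-pseudoeigenvalue concept can be made concrete by computing the smallest singular value of zI - A over a grid of complex z; the epsilon-pseudospectrum is the region where that value falls below epsilon. The sketch below uses a toy non-normal matrix, not the actual stability operators (Python):

      import numpy as np

      # A non-normal test matrix standing in for a discretized stability operator.
      n = 50
      A = -np.eye(n) + np.diag(np.full(n - 1, 2.0), k=1)

      # sigma_min(zI - A) over a grid; small values far from the spectrum signal
      # eigenvalues that move far under perturbations of small norm.
      xs = np.linspace(-3, 1, 81)
      ys = np.linspace(-2, 2, 81)
      smin = np.empty((len(ys), len(xs)))
      for i, y in enumerate(ys):
          for j, x in enumerate(xs):
              M = complex(x, y) * np.eye(n) - A
              smin[i, j] = np.linalg.svd(M, compute_uv=False)[-1]

      # All eigenvalues of A equal -1, yet sigma_min is tiny even at z = 0:
      print("sigma_min at z = 0:", smin[40, 60])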

  3. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

    Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications. It is believed that control of bone cement volume is one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed based on CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of intertrabecular space, mean width of trabeculae, area of intertrabecular space, and area of trabeculae. Finally, the area of intertrabecular space was selected as a parameter to estimate an optimal bone cement volume, and it was found that there was a strong linear relationship between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied among 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied among 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied among 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, and the linear equation Y = 0.001722 X - 2

  4. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEM) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question is derived from simply demonstrating the differences in release risk areas and intensities by applying identical models to DEMs with different properties, and then extending this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach, the work on DEM resolution-dependent 'slope spectra' is being considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study focusing on DEM characteristics, factors like land cover, meteorological recordings, and snowpack structure and transformation are kept constant, i.e., not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  5. [Ecological sensitivity of Shanghai City based on GIS spatial analysis].

    PubMed

    Cao, Jian-jun; Liu, Yong-juan

    2010-07-01

    In this paper, five sensitivity factors affecting the eco-environment of Shanghai City, i.e., rivers and lakes, historical relics and forest parks, geological disasters, soil pollution, and land use, were selected, and their weights were determined by analytic hierarchy process. Combining with GIS spatial analysis techniques, the sensitivities of these factors were classified into four grades, i.e., highly sensitive, moderately sensitive, slightly sensitive, and insensitive, and the spatial distribution of the ecological sensitivity of Shanghai City was figured out. There existed a significant spatial differentiation in the ecological sensitivity of the City, with the insensitive, slightly sensitive, moderately sensitive, and highly sensitive areas occupying 37.07%, 5.94%, 38.16%, and 18.83%, respectively. Some suggestions on the City's zoning protection and construction were proposed. This study could provide scientific references for the City's environmental protection and economic development. PMID:20879541

  6. Recent advances in flow injection analysis.

    PubMed

    Trojanowicz, Marek; Kołacińska, Kamila

    2016-04-01

    A dynamic development of methodologies of analytical flow injection measurements during the four decades since their invention has reinforced the solid position of flow analysis in the arsenal of techniques and instrumentation of contemporary chemical analysis. With the number of published scientific papers exceeding 20,000, and advanced instrumentation available for environmental, food, and pharmaceutical analysis, flow analysis is well established as an extremely vital field of modern flow chemistry, which is developed simultaneously with methods of chemical synthesis carried out under flow conditions. This review is based on almost 300 original papers published mostly in the last decade, with special emphasis put on presenting novel achievements from the most recent 2-3 years in order to indicate current development trends of this methodology. Besides the evolution of the design of whole measuring systems, and especially new applications of various detection methods, several implications of progress in nanotechnology and the miniaturization of measuring systems for applications in different fields of modern chemical analysis are also discussed. PMID:26906258

  7. Advancing Behavior Analysis in Zoos and Aquariums.

    PubMed

    Maple, Terry L; Segura, Valerie D

    2015-05-01

    Zoos, aquariums, and other captive animal facilities offer promising opportunities to advance the science and practice of behavior analysis. Zoos and aquariums are necessarily concerned with the health and well-being of their charges and are held to a high standard by their supporters (visitors, members, and donors), organized critics, and the media. Zoos and aquariums offer unique venues for teaching and research and a locus for expanding the footprint of behavior analysis. In North America, Europe, and the UK, formal agreements between zoos, aquariums, and university graduate departments have been operating successfully for decades. To expand on this model, it will be necessary to help zoo and aquarium managers throughout the world to recognize the value of behavior analysis in the delivery of essential animal health and welfare services. Academic institutions, administrators, and invested faculty should consider the utility of training students to meet the growing needs of applied behavior analysis in zoos and aquariums and other animal facilities such as primate research centers, sanctuaries, and rescue centers. PMID:27540508

  8. Extended forward sensitivity analysis of one-dimensional isothermal flow

    SciTech Connect

    Johnson, M.; Zhao, H.

    2013-07-01

    Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of the time step compared with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
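
    The core of the forward sensitivity method can be shown on a single scalar ODE instead of the 1-D flow equations: augment the state with S = dy/dp, which satisfies dS/dt = (df/dy) S + df/dp, and integrate both together. A minimal sketch with SciPy, under these simplifying assumptions (Python):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Model problem: dy/dt = -p * y, y(0) = 1.  The forward sensitivity
      # S = dy/dp obeys dS/dt = -p * S - y with S(0) = 0.
      p = 0.7

      def rhs(t, ys):
          y, S = ys
          return [-p * y, -p * S - y]

      sol = solve_ivp(rhs, [0.0, 2.0], [1.0, 0.0], rtol=1e-10, atol=1e-12)
      y_T, S_T = sol.y[:, -1]

      # Analytic check: y = exp(-p t), so dy/dp = -t exp(-p t).
      print(S_T, -2.0 * np.exp(-p * 2.0))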

  9. Advanced digital I&C systems in nuclear power plants: Risk-sensitivities to environmental stressors

    SciTech Connect

    Hassan, M.; Vesely, W.E.

    1996-06-01

    Microprocessor-based advanced digital systems are being used for upgrading analog instrumentation and control (I&C) systems in nuclear power plants (NPPs) in the United States. A concern with using such advanced systems for safety-related applications in NPPs is the limited experience with this equipment in these environments. In this study, we investigate the risk effects of environmental stressors by quantifying the plant's risk-sensitivities to them. The risk-sensitivities are changes in plant risk caused by the stressors, and are quantified by estimating their effects on I&C failure occurrences and the consequent increase in risk in terms of core damage frequency (CDF). We used available data, including military and NPP operating experience, on the effects of environmental stressors on the reliability of digital I&C equipment. The methods developed are applied to determine and compare risk-sensitivities to temperature, humidity, vibration, EMI (electromagnetic interference) from lightning, and smoke as stressors in an example plant using a PRA (Probabilistic Risk Assessment). Uncertainties in the estimates of the stressor effects on the equipment's reliability are expressed in terms of ranges for risk-sensitivities. The results show that environmental stressors potentially can cause a significant increase in I&C contributions to the CDF. Further, considerable variations can be expected in some stressor effects, depending on where the equipment is located.

  10. Advanced multi-contrast Jones matrix optical coherence tomography for Doppler and polarization sensitive imaging.

    PubMed

    Ju, Myeong Jin; Hong, Young-Joo; Makita, Shuichi; Lim, Yiheng; Kurokawa, Kazuhiro; Duan, Lian; Miura, Masahiro; Tang, Shuo; Yasuno, Yoshiaki

    2013-08-12

    An advanced version of Jones matrix optical coherence tomography (JMT) is demonstrated for Doppler and polarization sensitive imaging of the posterior eye. JMT is capable of providing localized flow tomography by Doppler detection and investigating the birefringence property of tissue through a three-dimensional (3-D) Jones matrix measurement. Owing to an incident polarization multiplexing scheme based on passive optical components, this system is stable, safe in a clinical environment, and cost effective. Since the properties of this version of JMT provide intrinsic compensation for system imperfection, the system is easy to calibrate. Compared with the previous version of JMT, this advanced JMT achieves a sufficiently long depth measurement range for clinical cases of posterior eye disease. Furthermore, a fine spectral shift compensation method based on the cross-correlation of calibration signals was devised for stabilizing the phase of OCT, which enables a high sensitivity Doppler OCT measurement. In addition, a new theory of JMT which integrates the Jones matrix measurement, Doppler measurement, and scattering measurement is presented. This theory enables a sensitivity-enhanced scattering OCT and high-sensitivity Doppler OCT. These new features enable the application of this system to clinical cases. A healthy subject and a geographic atrophy patient were measured in vivo, and simultaneous imaging of choroidal vasculature and birefringence structures are demonstrated. PMID:23938857

  11. NASTRAN flutter analysis of advanced turbopropellers

    NASA Technical Reports Server (NTRS)

    Elchuri, V.; Smith, G. C. C.

    1982-01-01

    An existing capability developed to conduct modal flutter analysis of tuned bladed-shrouded discs in NASTRAN was modified and applied to investigate the subsonic unstalled flutter characteristics of advanced turbopropellers. The modifications pertain to the inclusion of oscillatory modal aerodynamic loads of blades with large (backward and forward) variable sweep. The two dimensional subsonic cascade unsteady aerodynamic theory was applied in a strip theory manner with appropriate modifications for the sweep effects. Each strip is associated with a chord selected normal to any spanwise reference curve such as the blade leading edge. The stability of three operating conditions of a 10-bladed propeller is analyzed. Each of these operating conditions is iterated once to determine the flutter boundary. A 5-bladed propeller is also analyzed at one operating condition to investigate stability. Analytical results obtained are in very good agreement with those from wind tunnel tests.

  12. Advanced development in chemical analysis of Cordyceps.

    PubMed

    Zhao, J; Xie, J; Wang, L Y; Li, S P

    2014-01-01

    Cordyceps sinensis, also called DongChongXiaCao (winter worm summer grass) in Chinese, is a well-known and valued traditional Chinese medicine. In 2006, we wrote a review for discussing the markers and analytical methods in quality control of Cordyceps (J. Pharm. Biomed. Anal. 41 (2006) 1571-1584). Since then this review has been cited by others for more than 60 times, which suggested that scientists have great interest in this special herbal material. Actually, the number of publications related to Cordyceps after 2006 is about 2-fold of that in two decades before 2006 according to the data from Web of Science. Therefore, it is necessary to review and discuss the advanced development in chemical analysis of Cordyceps since then. PMID:23688494

  13. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  14. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
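
    As a minimal illustration of the breakdown described above (and not of the LSS method itself, which is considerably more involved), the following sketch differentiates the long-time average of z in the Lorenz-63 system by conventional finite differences. Studies of this system report a true sensitivity d⟨z⟩/dρ near 1, yet the naive estimates scatter wildly; the integrator, time horizon, and step sizes are illustrative choices.

```python
# Conventional finite-difference sensitivity of a long-time average of a
# chaotic system (Lorenz-63): the estimates do not converge as h -> 0.
import numpy as np

def lorenz_avg_z(rho, T=2000.0, dt=0.01, x0=(1.0, 1.0, 28.0)):
    """Long-time average of z for Lorenz-63 (sigma=10, beta=8/3), Euler steps."""
    sigma, beta = 10.0, 8.0 / 3.0
    x, y, z = x0
    n = int(T / dt)
    acc = 0.0
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        acc += z
    return acc / n

base = lorenz_avg_z(28.0)
for h in (1.0, 0.1, 0.01):
    grad = (lorenz_avg_z(28.0 + h) - base) / h
    print(f"h={h:5.2f}  finite-difference d<z>/drho ~ {grad:8.3f}")
```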

  15. Automated sensitivity analysis using the GRESS language

    SciTech Connect

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies.
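
    GRESS itself is a FORTRAN precompiler, but the idea it automates (propagating derivatives through a calculation alongside the values) can be sketched in a few lines with dual numbers; the class below is a hypothetical illustration of forward-mode computer calculus, not GRESS's actual interface.

```python
# Forward-mode "computer calculus" in miniature: every arithmetic operation
# propagates a derivative along with the value (dual numbers).
import math

class Dual:
    """A value paired with its derivative with respect to one input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def exp(x):
    # chain rule: d/dt exp(x(t)) = exp(x) * x'
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

x = Dual(1.5, 1.0)            # seed dx/dx = 1
y = x * exp(x) + 3 * x        # the "model"
print(y.val, y.der)           # derivative equals exp(x)*(1 + x) + 3
```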

  16. Sensitivity analysis of Stirling engine design parameters

    SciTech Connect

    Naso, V.; Dong, W.; Lucentini, M.; Capata, R.

    1998-07-01

    In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle and dead volume ratio) have to be assumed; as a matter of fact, it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations of these parameters.

  17. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  18. Towards More Efficient and Effective Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2014-05-01

    Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of a single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response and may lead to different, sometimes even conflicting conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
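
    For a concrete reference point on the rigorous Sobol-type end of the spectrum mentioned above (a standard estimator, not the new approach the abstract proposes), first-order variance-based indices can be computed with the Saltelli pick-freeze scheme; the Ishigami test function and sample size below are illustrative choices.

```python
# First-order Sobol indices via the pick-freeze (Saltelli 2010) estimator
# on the Ishigami function (analytic values: ~0.31, ~0.44, 0.0).
import numpy as np

rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

N, k = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, k))     # two independent input samples
B = rng.uniform(-np.pi, np.pi, (N, k))
fA, fB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([fA, fB]))       # total output variance

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # "freeze" column i from B
    Si = np.mean(fB * (ishigami(ABi) - fA)) / V
    print(f"S{i+1} = {Si:.3f}")
```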

  19. Fuzzy sensitivity analysis for reliability assessment of building structures

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2016-06-01

    The mathematical concept of fuzzy sensitivity analysis, which studies the effects of the fuzziness of input fuzzy numbers on the fuzziness of the output fuzzy number, is described in the article. The output fuzzy number is evaluated using Zadeh's general extension principle. The contribution of stochastic and fuzzy uncertainty in reliability analysis tasks of building structures is discussed. The algorithm of fuzzy sensitivity analysis is an alternative to stochastic sensitivity analysis in tasks in which input and output variables are considered as fuzzy numbers.
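
    A minimal sketch of the alpha-cut mechanics behind such an analysis follows; the response function and triangular memberships are invented for illustration, and a real structural model would replace the one-line margin function.

```python
# Zadeh's extension principle via alpha-cuts: cut each triangular fuzzy
# input at level alpha, bound the output over the resulting box (here by
# brute-force grid search), and reassemble the output fuzzy number.
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (left, mode, right) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_output(f, tris, alphas=np.linspace(0.0, 1.0, 6), n_grid=25):
    out = []
    for alpha in alphas:
        boxes = [np.linspace(*alpha_cut(t, alpha), n_grid) for t in tris]
        vals = [f(*pt) for pt in itertools.product(*boxes)]
        out.append((alpha, min(vals), max(vals)))
    return out

# toy safety margin: fuzzy resistance R minus factored fuzzy load S
margin = lambda R, S: R - 1.2 * S
for alpha, lo, hi in fuzzy_output(margin, [(8, 10, 12), (4, 5, 7)]):
    print(f"alpha={alpha:.1f}  margin in [{lo:6.2f}, {hi:6.2f}]")
```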

  20. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  1. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
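
    The bootstrap idea is easy to prototype: re-estimate an index from resampled model runs and watch the confidence interval shrink with sample size. The sketch below uses a squared standardized regression coefficient as the index, which is a simple stand-in rather than one of the three methods tested in the study.

```python
# Bootstrap convergence check for a sensitivity index at growing sample sizes.
import numpy as np

rng = np.random.default_rng(1)
model = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

def src2(X, y, i):
    """Squared standardized regression coefficient of factor i."""
    beta = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)[0]
    return (beta[i] * X[:, i].std() / y.std()) ** 2

for n in (100, 1_000, 10_000):
    X = rng.uniform(-1, 1, (n, 3))
    y = model(X)
    boot = [src2(X[idx], y[idx], 0)
            for idx in (rng.integers(0, n, n) for _ in range(200))]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"n={n:6d}  S1 in [{lo:.3f}, {hi:.3f}]  CI width = {hi - lo:.3f}")
```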

  2. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over
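
    The variance-reduction ingredient can be demonstrated on a toy stochastic model: driving the nominal and perturbed runs with common random numbers couples the two estimators, so their difference (and hence the finite-difference gradient) has far lower variance than with independent runs. This is a simplified stand-in for the stochastic coupling used in the paper.

```python
# Coupled vs. independent finite differences for a stochastic expectation.
import numpy as np

rng = np.random.default_rng(2)

def mean_response(theta, Z):
    """Toy stochastic model output: Monte Carlo mean of exp(theta + 0.5 Z)."""
    return np.mean(np.exp(theta + 0.5 * Z))

theta, h, n = 1.0, 1e-3, 20_000
coupled, independent = [], []
for _ in range(100):
    Z = rng.standard_normal(n)
    coupled.append((mean_response(theta + h, Z) - mean_response(theta, Z)) / h)
    Z2 = rng.standard_normal(n)   # fresh noise for the perturbed run
    independent.append((mean_response(theta + h, Z2) - mean_response(theta, Z)) / h)

print("std of gradient estimate, common random numbers:", np.std(coupled))
print("std of gradient estimate, independent sampling :", np.std(independent))
```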

  3. Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics

    SciTech Connect

    Shi, Tujin; Su, Dian; Liu, Tao; Tang, Keqi; Camp, David G.; Qian, Weijun; Smith, Richard D.

    2012-04-01

    Selected reaction monitoring (SRM)—also known as multiple reaction monitoring (MRM)—has emerged as a promising high-throughput targeted protein quantification technology for candidate biomarker verification and systems biology applications. A major bottleneck for current SRM technology, however, is insufficient sensitivity, e.g., for detecting low-abundance biomarkers likely present at the pg/mL to low ng/mL range in human blood plasma or serum, or extremely low-abundance signaling proteins in cells or tissues. Herein we review recent advances in methods and technologies, including front-end immunoaffinity depletion, fractionation, selective enrichment of target proteins/peptides or their posttranslational modifications (PTMs), as well as advances in MS instrumentation, which have significantly enhanced the overall sensitivity of SRM assays and enabled the detection of low-abundance proteins at low to sub-ng/mL levels in human blood plasma or serum. General perspectives on the potential of achieving sufficient sensitivity for detection of pg/mL level proteins in plasma are also discussed.

  4. Advanced Coal Wind Hybrid: Economic Analysis

    SciTech Connect

    Phadke, Amol; Goldman, Charles; Larson, Doug; Carr, Tom; Rath, Larry; Balash, Peter; Wan, Yih-Huei

    2008-11-28

    Growing concern over climate change is prompting new thinking about the technologies used to generate electricity. In the future, it is possible that new government policies on greenhouse gas emissions may favor electric generation technology options that release zero or low levels of carbon emissions. The Western U.S. has abundant wind and coal resources. In a world with carbon constraints, the future of coal for new electrical generation is likely to depend on the development and successful application of new clean coal technologies with near zero carbon emissions. This scoping study explores the economic and technical feasibility of combining wind farms with advanced coal generation facilities and operating them as a single generation complex in the Western US. The key questions examined are whether an advanced coal-wind hybrid (ACWH) facility provides sufficient advantages through improvements to the utilization of transmission lines and the capability to firm up variable wind generation for delivery to load centers to compete effectively with other supply-side alternatives in terms of project economics and emissions footprint. The study was conducted by an Analysis Team that consists of staff from the Lawrence Berkeley National Laboratory (LBNL), National Energy Technology Laboratory (NETL), National Renewable Energy Laboratory (NREL), and Western Interstate Energy Board (WIEB). We conducted a screening level analysis of the economic competitiveness and technical feasibility of ACWH generation options located in Wyoming that would supply electricity to load centers in California, Arizona or Nevada. Figure ES-1 is a simple stylized representation of the configuration of the ACWH options. The ACWH consists of a 3,000 MW coal gasification combined cycle power plant equipped with carbon capture and sequestration (G+CC+CCS plant), a fuel production or syngas storage facility, and a 1,500 MW wind plant. The ACWH project is connected to load centers by a 3,000 MW

  5. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.

  6. Sensitivity analysis and optimization of the nuclear fuel cycle

    SciTech Connect

    Passerini, S.; Kazimi, M. S.; Shwageraus, E.

    2012-07-01

    A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once Through Cycle (OTC) is considered the baseline case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: Capacity Factors of the fuel cycle facilities, Spent Fuel Cooling Time, Thermal Reprocessing Introduction Date, and In-core and Out-of-core TRU Inventory Requirements for recycling technology. An optimization scheme of the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single and multi-variable and single and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)

  7. Partial Differential Algebraic Sensitivity Analysis Code

    1995-05-15

    PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical, or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed with members of order 0 or 1 in t, and 0, 1, or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.
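
    The "sensitivities via the Jacobian" idea generalizes beyond PDASAC; for a plain ODE it reduces to integrating the forward sensitivity equation ds/dt = (∂f/∂y)s + ∂f/∂p alongside the state. A hedged sketch follows (logistic growth with a SciPy integrator, not PDASAC's FORTRAN machinery).

```python
# Direct (forward) parametric sensitivity for dy/dt = f(y, p): augment the
# state with s = dy/dp and integrate ds/dt = (df/dy) s + df/dp.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 10.0                      # growth rate (the parameter) and capacity

def rhs(t, u):
    y, s = u
    f = r * y * (1.0 - y / K)         # state equation (logistic growth)
    dfdy = r * (1.0 - 2.0 * y / K)    # Jacobian of f w.r.t. the state
    dfdr = y * (1.0 - y / K)          # partial of f w.r.t. the parameter r
    return [f, dfdy * s + dfdr]

sol = solve_ivp(rhs, (0.0, 10.0), [0.5, 0.0], rtol=1e-8, atol=1e-10)
print("y(10)     =", sol.y[0, -1])
print("dy(10)/dr =", sol.y[1, -1])

# cross-check against the closed-form logistic solution
y_exact = lambda rr, t: K / (1.0 + (K / 0.5 - 1.0) * np.exp(-rr * t))
h = 1e-6
print("FD check  =", (y_exact(r + h, 10.0) - y_exact(r, 10.0)) / h)
```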

  8. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
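
    The complex-variable trick the abstract relies on is compact enough to show directly: because Im[f(x + ih)] ≈ h f'(x) involves no subtractive cancellation, the step can be made absurdly small with no loss of accuracy. (The test function is the classic Squire-Trapp example, not taken from this report.)

```python
# Complex-step differentiation: f'(x) ~ Im[f(x + i h)] / h, step-size independent.
import numpy as np

def f(x):
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x = 1.5
for h in (1e-1, 1e-8, 1e-200):
    cs = np.imag(f(x + 1j * h)) / h
    print(f"h = {h:8.0e}   complex-step f'(x) = {cs:.15f}")

# a forward difference, by contrast, is destroyed by round-off at small h
h = 1e-12
print("forward difference:", (f(x + h) - f(x)) / h)
```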

  9. Automating sensitivity analysis of computer models using computer calculus

    SciTech Connect

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.

  10. Automated procedure for sensitivity analysis using computer calculus

    SciTech Connect

    Oblow, E.M.

    1983-05-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach was found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.

  11. Advances in radiation biology: Relative radiation sensitivities of human organ systems. Volume 12

    SciTech Connect

    Lett, J.T.; Altman, K.I.; Ehmann, U.K.; Cox, A.B.

    1987-01-01

    This volume is a thematically focused issue of Advances in Radiation Biology. The topic surveyed is the relative radiosensitivity of human organ systems. Topics considered include relative radiosensitivities of the thymus, spleen, and lymphohemopoietic systems; relative radiosensitivities of the small and large intestine; relative radiosensitivities of the oral cavity, larynx, pharynx, and esophagus; relative radiation sensitivity of the integumentary system; dose response of the epidermal, microvascular, and dermal populations; relative radiosensitivity of the human lung; relative radiosensitivity of fetal tissues; and tolerance of the central and peripheral nervous system to therapeutic irradiation.

  12. Parametric cost analysis for advanced energy concepts

    SciTech Connect

    Not Available

    1983-10-01

    This report presents results of an exploratory study to develop parametric cost estimating relationships for advanced fossil-fuel energy systems. The first of two tasks was to develop a standard Cost Chart of Accounts to serve as a basic organizing framework for energy systems cost analysis. The second task included development of selected parametric cost estimating relationships (CERs) for individual elements (or subsystems) of a fossil fuel plant, nominally for the Solvent-Refined Coal (SRC) process. Parametric CERs are presented for the following elements: coal preparation; coal slurry preparation; dissolver (reactor); gasification; oxygen production; acid gas/CO2 removal; shift conversion; cryogenic hydrogen recovery; and sulfur removal. While the nominal focus of the study was on the SRC process, each of these elements is found in other fossil fuel processes. Thus, the results of this effort have broader potential application. However, it should also be noted that the CERs presented in this report are based upon a limited data base. Thus, they are applicable over a limited range of values (of the independent variables) and for a limited set of specific technologies (e.g., the gasifier CER is for the multi-train, Koppers-Totzek process). Additional work is required to extend the range of these CERs. 16 figures, 13 tables.

  13. A topological approach to computer-aided sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Chan, S. P.; Munoz, R. M.

    1971-01-01

    Sensitivities of an arbitrary system are calculated using a general-purpose digital computer with available software packages for transfer function analysis. Sensitivity shows how element variation within the system affects system performance. A signal flow graph illustrates the topological behavior of the system and the relationships among its parameters.
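
    The same computer-aided calculation is nowadays a one-liner in a symbolic package. A sketch for the classical relative sensitivity function S_R^H = (R/H)(∂H/∂R) of an RC low-pass transfer function follows; the circuit is an illustrative choice, not one from the paper.

```python
# Relative sensitivity of a transfer function to an element value, computed
# symbolically: S_R^H = (R/H) dH/dR for H(s) = 1/(1 + sRC).
import sympy as sp

s, R, C = sp.symbols("s R C", positive=True)
H = 1 / (1 + s * R * C)

S_R = sp.simplify((R / H) * sp.diff(H, R))
print(S_R)   # -> -C*R*s/(C*R*s + 1)
```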

  14. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  15. Pressure Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Bencic, T.; Sullivan, J. P.

    1999-01-01

    This article reviews new advances and applications of pressure sensitive paints in aerodynamic testing. Emphasis is placed on important technical aspects of pressure sensitive paint including instrumentation, data processing, and uncertainty analysis.
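
    The data-processing core of a PSP measurement is a Stern-Volmer calibration of intensity ratio against pressure; the sketch below fits the standard two-coefficient form and inverts it. All numbers are synthetic illustrations.

```python
# Stern-Volmer calibration for pressure sensitive paint:
#   I_ref / I = A + B * (P / P_ref), then invert to recover P from images.
import numpy as np

P_ref = 101.3                                     # kPa, wind-off reference
P_cal = np.array([50.0, 75.0, 101.3, 130.0, 160.0])
ratio = 0.15 + 0.85 * (P_cal / P_ref)             # synthetic I_ref/I data
ratio += np.random.default_rng(3).normal(0.0, 0.004, ratio.size)

B, A = np.polyfit(P_cal / P_ref, ratio, 1)        # slope B, intercept A
print(f"A = {A:.3f}, B = {B:.3f}")

measured_ratio = 0.95                             # from a wind-on image pixel
print("recovered P =", P_ref * (measured_ratio - A) / B, "kPa")
```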

  16. Recent advances in (soil moisture) triple collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Zwieback, S.; Crow, W.; Dorigo, W.; Wagner, W.

    2016-03-01

    To date, triple collocation (TC) analysis is one of the most important methods for the global-scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method. Different notations that are used to formulate the TC problem are shown to be mathematically identical. While many studies have investigated issues related to possible violations of the underlying assumptions, only a few TC modifications have been proposed to mitigate the impact of these violations. Moreover, assumptions that are often understood as limitations unique to TC analysis are shown to be common to other conventional performance metrics as well. Noteworthy advances in TC analysis have been made in the way error estimates are presented, by moving from the investigation of absolute error variance estimates to the investigation of signal-to-noise ratio (SNR) metrics. Here we review existing error presentations and propose the combined investigation of the SNR (expressed in logarithmic units), the unscaled error variances, and the soil moisture sensitivities of the data sets as an optimal strategy for the evaluation of remotely sensed soil moisture data sets.
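
    The covariance form of the TC estimator, including the logarithmic SNR presentation advocated above, fits in a few lines; the error magnitudes below are synthetic illustrations.

```python
# Triple collocation: error variances (and SNR) of three collocated products
# with independent errors, from pairwise covariances alone.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
truth = rng.standard_normal(n)             # unobserved soil moisture signal
x = truth + rng.normal(0, 0.30, n)         # product 1 (e.g., radiometer)
y = truth + rng.normal(0, 0.50, n)         # product 2 (e.g., scatterometer)
z = truth + rng.normal(0, 0.20, n)         # product 3 (e.g., model)

C = np.cov(np.vstack([x, y, z]))
err2 = [C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],
        C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],
        C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]]
print("estimated error std devs:", np.sqrt(err2))   # ~ [0.30, 0.50, 0.20]

snr_x_db = 10 * np.log10((C[0, 1] * C[0, 2] / C[1, 2]) / err2[0])
print("SNR of product 1 [dB]:", snr_x_db)
```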

  17. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  18. Sensitivity Analysis in Complex Plasma Chemistry Models

    NASA Astrophysics Data System (ADS)

    Turner, Miles

    2015-09-01

    The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: Which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 "Biomedical Applications of Atmospheric Pressure Plasmas."
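
    The Morris screening procedure named above amounts to averaging randomized one-at-a-time "elementary effects"; a toy version, with an invented three-factor model standing in for a four-hundred-reaction chemistry, looks like this:

```python
# Morris elementary effects: mu* ranks factor influence, sigma flags
# nonlinearity or interactions.
import numpy as np

rng = np.random.default_rng(5)

def model(x):                        # toy stand-in: factor 3 is inert
    return np.sin(x[0]) + 5.0 * x[1] ** 2 + 0.0 * x[2]

k, r, delta = 3, 50, 0.25
effects = [[] for _ in range(k)]
for _ in range(r):                   # r randomized one-at-a-time trajectories
    x = rng.uniform(0.0, 1.0 - delta, k)
    y0 = model(x)
    for i in rng.permutation(k):     # perturb each factor once, random order
        x[i] += delta
        y1 = model(x)
        effects[i].append((y1 - y0) / delta)
        y0 = y1

for i, ee in enumerate(effects):
    ee = np.asarray(ee)
    print(f"x{i+1}: mu* = {np.abs(ee).mean():6.3f}   sigma = {ee.std():6.3f}")
```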

  19. Selecting step sizes in sensitivity analysis by finite differences

    NASA Technical Reports Server (NTRS)

    Iott, J.; Haftka, R. T.; Adelman, H. M.

    1985-01-01

    This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
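
    The trade-off that motivates such algorithms is easy to reproduce: forward-difference error falls like O(h) until round-off takes over near h ≈ sqrt(machine epsilon), then grows again. (The test function is chosen for its known derivative; it is not from the paper.)

```python
# Truncation vs. round-off in forward differences: error is minimized near
# h ~ sqrt(machine epsilon), which is what step-size algorithms search for.
import numpy as np

x = 1.0
for h in np.logspace(-1, -15, 8):
    err = abs((np.exp(x + h) - np.exp(x)) / h - np.exp(x))
    print(f"h = {h:8.1e}   |error in f'(x)| = {err:.2e}")
```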

  20. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  1. SYSTEMATIC SENSITIVITY ANALYSIS OF AIR QUALITY SIMULATION MODELS

    EPA Science Inventory

    This report reviews and assesses systematic sensitivity and uncertainty analysis methods for applications to air quality simulation models. The discussion of the candidate methods presents their basic variables, mathematical foundations, user motivations and preferences, computer...

  2. On the sensitivity analysis of porous material models

    NASA Astrophysics Data System (ADS)

    Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel

    2012-11-01

    Porous materials are used in many vibroacoustic applications. Different available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent parameter hierarchy is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with rigid frames are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of the reduction of the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including mechanical effects of the frame, and conclusions are drawn through numerical simulations.

  3. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  4. Advanced Materials and Solids Analysis Research Core (AMSARC)

    EPA Science Inventory

    The Advanced Materials and Solids Analysis Research Core (AMSARC), centered at the U.S. Environmental Protection Agency's (EPA) Andrew W. Breidenbach Environmental Research Center in Cincinnati, Ohio, is the foundation for the Agency's solids and surfaces analysis capabilities. ...

  5. Fixed point sensitivity analysis of interacting structured populations.

    PubMed

    Barabás, György; Meszéna, Géza; Ostling, Annette

    2014-03-01

    Sensitivity analysis of structured populations is a useful tool in population ecology. Historically, methodological development of sensitivity analysis has focused on the sensitivity of eigenvalues in linear matrix models, and on single populations. More recently there have been extensions to the sensitivity of nonlinear models, and to communities of interacting populations. Here we derive a fully general mathematical expression for the sensitivity of equilibrium abundances in communities of interacting structured populations. Our method yields the response of an arbitrary function of the stage class abundances to perturbations of any model parameters. As a demonstration, we apply this sensitivity analysis to a two-species model of ontogenetic niche shift where each species has two stage classes, juveniles and adults. In the context of this model, we demonstrate that our theory is quite robust to violating two of its technical assumptions: the assumption that the community is at a point equilibrium and the assumption of infinitesimally small parameter perturbations. Our results on the sensitivity of a community are also interpreted in a niche theoretical context: we determine how the niche of a structured population is composed of the niches of the individual states, and how the sensitivity of the community depends on niche segregation. PMID:24368160
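
    At its core, such an equilibrium sensitivity comes from the implicit function theorem: if f(x*, p) = 0, then dx*/dp = -J^(-1) ∂f/∂p with J the Jacobian at the equilibrium. A hedged two-stage toy model follows (invented rates, not the paper's ontogenetic niche shift model).

```python
# Equilibrium sensitivity of a juvenile/adult model to a mortality parameter p,
# via Newton's method plus the implicit function theorem.
import numpy as np

b, m, d, p = 2.0, 0.3, 0.4, 0.1    # birth, maturation, death, extra adult loss

def f(x):
    J, A = x                        # juvenile and adult abundances
    return np.array([b * A - (m + d) * J,
                     m * J - (d + p) * A - 0.05 * A ** 2])

def jac(x):
    J, A = x
    return np.array([[-(m + d), b],
                     [m, -(d + p) - 0.10 * A]])

x = np.array([5.0, 5.0])            # Newton iteration to the equilibrium
for _ in range(50):
    x = x - np.linalg.solve(jac(x), f(x))

dfdp = np.array([0.0, -x[1]])       # direct dependence of f on p
sens = -np.linalg.solve(jac(x), dfdp)
print("equilibrium (J, A):", x)
print("d(equilibrium)/dp :", sens)
```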

  6. Sensitivity of the Advanced LIGO detectors at the beginning of gravitational wave astronomy

    NASA Astrophysics Data System (ADS)

    Martynov, D. V.; Hall, E. D.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, R. A.; Anderson, S. B.; Arai, K.; Arain, M. A.; Aston, S. M.; Austin, L.; Ballmer, S. W.; Barbet, M.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Barton, M. A.; Bartos, I.; Batch, J. C.; Bell, A. S.; Belopolski, I.; Bergman, J.; Betzwieser, J.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Black, E.; Blair, C. D.; Bogan, C.; Bork, R.; Bridges, D. O.; Brooks, A. F.; Celerier, C.; Ciani, G.; Clara, F.; Cook, D.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Damjanic, M.; Dannenberg, R.; Danzmann, K.; Costa, C. F. Da Silva; Daw, E. J.; DeBra, D.; DeRosa, R. T.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fair, H.; Feldbaum, D.; Fisher, R. P.; Foley, S.; Frede, M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Galdi, V.; Giaime, J. A.; Giardina, K. D.; Gleason, J. R.; Goetz, R.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Grote, H.; Guido, C. J.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hammond, G.; Hanks, J.; Hanson, J.; Hardwick, T.; Harry, G. M.; Heefner, J.; Heintze, M. C.; Heptonstall, A. W.; Hoak, D.; Hough, J.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jones, R.; Kandhasamy, S.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kells, W.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kokeyama, K.; Korth, W. Z.; Kuehn, G.; Kwee, P.; Landry, M.; Lantz, B.; Le Roux, A.; Levine, B. M.; Lewis, J. B.; Lhuillier, V.; Lockerbie, N. A.; Lormand, M.; Lubinski, M. J.; Lundgren, A. P.; MacDonald, T.; MacInnis, M.; Macleod, D. M.; Mageswaran, M.; Mailand, K.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Massinger, T. J.; Matichard, F.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McIntyre, G.; McIver, J.; Merilh, E. L.; Meyer, M. S.; Meyers, P. M.; Miller, J.; Mittleman, R.; Moreno, G.; Mueller, C. L.; Mueller, G.; Mullavey, A.; Munch, J.; Nuttall, L. K.; Oberling, J.; O'Dell, J.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Osthelder, C.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Patrick, Z.; Pele, A.; Penn, S.; Phelps, M.; Pickenpack, M.; Pierro, V.; Pinto, I.; Poeld, J.; Principe, M.; Prokhorov, L.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. J.; Radkins, H.; Raffai, P.; Ramet, C. R.; Reed, C. M.; Reid, S.; Reitze, D. H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Sannibale, V.; Savage, R. L.; Schofield, R. M. S.; Schultz, B.; Schwinberg, P.; Sellers, D.; Sevigny, A.; Shaddock, D. A.; Shao, Z.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, J. R.; Smith, M. R.; Smith-Lefebvre, N. D.; Sorazu, B.; Staley, A.; Stein, A. J.; Stochino, A.; Strain, K. A.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vargas, M.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Waldman, S. J.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Welborn, T.; Weßels, P.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Zhang, L.; Zucker, M. E.; Zweizig, J.

    2016-06-01

    The Laser Interferometer Gravitational Wave Observatory (LIGO) consists of two widely separated 4 km laser interferometers designed to detect gravitational waves from distant astrophysical sources in the frequency range from 10 Hz to 10 kHz. The first observation run of the Advanced LIGO detectors started in September 2015 and ended in January 2016. A strain sensitivity of better than 10⁻²³/√Hz was achieved around 100 Hz. Understanding both the fundamental and the technical noise sources was critical for increasing the astrophysical strain sensitivity. The average distance at which coalescing binary black hole systems with individual masses of 30 M⊙ could be detected above a signal-to-noise ratio (SNR) of 8 was 1.3 Gpc, and the range for binary neutron star inspirals was about 75 Mpc. With respect to the initial detectors, the observable volume of the Universe increased by factors of 69 and 43, respectively. These improvements helped Advanced LIGO to detect the gravitational wave signal from the binary black hole coalescence, known as GW150914.

  7. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  8. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  9. On the sensitivity analysis of separated-loop MRS data

    NASA Astrophysics Data System (ADS)

    Behroozmand, A.; Auken, E.; Fiandaca, G.

    2013-12-01

    In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying a separate MRS receiver system from the transmitter system, compare the parameter determination of the separated-loop with the conventional coincident-loop MRS data. MRS has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content. The method works based on the physical principle of NMR during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop (typically 25 - 100 m in side length/diameter) deployed on the surface which typically acts as both a transmitter and a receiver, the so-called coincident-loop configuration. An alternating current is passed through the loop deployed and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop; a decaying NMR signal called Free Induction Decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record SNMR signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the sensitivity kernels of different separated-loop MRS soundings are studied and compared with

  10. Receptor for Advanced Glycation End Products Regulates Adipocyte Hypertrophy and Insulin Sensitivity in Mice

    PubMed Central

    Monden, Masayo; Koyama, Hidenori; Otsuka, Yoshiko; Morioka, Tomoaki; Mori, Katsuhito; Shoji, Takuhito; Mima, Yohei; Motoyama, Koka; Fukumoto, Shinya; Shioi, Atsushi; Emoto, Masanori; Yamamoto, Yasuhiko; Yamamoto, Hiroshi; Nishizawa, Yoshiki; Kurajoh, Masafumi; Yamamoto, Tetsuya; Inaba, Masaaki

    2013-01-01

    Receptor for advanced glycation end products (RAGE) has been shown to be involved in adiposity as well as atherosclerosis even in nondiabetic conditions. In this study, we examined mechanisms underlying how RAGE regulates adiposity and insulin sensitivity. RAGE overexpression in 3T3-L1 preadipocytes using adenoviral gene transfer accelerated adipocyte hypertrophy, whereas inhibition of RAGE by small interfering RNA significantly decreased adipocyte hypertrophy. Furthermore, double knockdown of high mobility group box-1 and S100b, both of which are RAGE ligands endogenously expressed in 3T3-L1 cells, also canceled RAGE-mediated adipocyte hypertrophy, implicating a fundamental role of ligand–RAGE ligation. Adipocyte hypertrophy induced by RAGE overexpression is associated with suppression of glucose transporter type 4 and adiponectin mRNA expression, attenuated insulin-stimulated glucose uptake, and attenuated insulin-stimulated signaling. Toll-like receptor (Tlr)2 mRNA, but not Tlr4 mRNA, is rapidly upregulated by RAGE overexpression, and inhibition of Tlr2 almost completely abrogates RAGE-mediated adipocyte hypertrophy. Finally, RAGE−/− mice exhibited significantly lower body weight, epididymal fat weight, and epididymal adipocyte size, higher serum adiponectin levels, and higher insulin sensitivity than wild-type mice. RAGE deficiency is associated with early suppression of Tlr2 mRNA expression in adipose tissues. Thus, RAGE appears to be involved in mouse adipocyte hypertrophy and insulin sensitivity, whereas Tlr2 regulation may partly play a role. PMID:23011593

  11. Data acquisition system for an advanced x-ray imaging crystal spectrometer using a segmented position-sensitive detector.

    PubMed

    Nam, U W; Lee, S G; Bak, J G; Moon, M K; Cheon, J K; Lee, C H

    2007-10-01

    A versatile time-to-digital converter based data acquisition system for a segmented position-sensitive detector has been developed. This data acquisition system was successfully demonstrated with a two-segment position-sensitive detector. The data acquisition system will be developed further to support a multisegmented position-sensitive detector to improve the photon count rate capability of the advanced x-ray imaging crystal spectrometer system. PMID:17979416

  12. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems at supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within the reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
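
    A discrete analogue conveys why the costate route pays off: for a state equation A u = b(p) and a functional J = c'u, a single adjoint solve A'λ = c yields the gradient with respect to every parameter at once. (Generic random matrices stand in for the supersonic flow problem of the report.)

```python
# Discrete adjoint: one extra linear solve gives dJ/dp for all parameters.
import numpy as np

rng = np.random.default_rng(6)
n, m = 50, 8                                   # state size, parameter count
A = 4.0 * np.eye(n) + rng.normal(0, 0.1, (n, n))
Bp = rng.normal(0, 1.0, (n, m))                # b(p) = Bp @ p for the demo
c = rng.normal(0, 1.0, n)
p = rng.normal(0, 1.0, m)

u = np.linalg.solve(A, Bp @ p)                 # state solve
lam = np.linalg.solve(A.T, c)                  # adjoint (costate) solve
dJdp = Bp.T @ lam                              # full gradient, no extra solves

# spot check one component by finite differences
i, h = 3, 1e-6
pp = p.copy(); pp[i] += h
fd = (c @ np.linalg.solve(A, Bp @ pp) - c @ u) / h
print(dJdp[i], "vs finite difference:", fd)
```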

  13. Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.

    1994-01-01

    During the first year of the project, we developed a theoretical analysis - and wrote a computer code based on this analysis - to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparison with 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
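
    The cost argument in point (2) rests on reusing the nominal LU factors, which in a generic linear-algebra setting looks like the following sketch (random matrices stand in for the potential-flow operators):

```python
# Reuse the nominal LU factorization: each sensitivity costs only O(n^2)
# triangular solves instead of a fresh O(n^3) factorization.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(7)
n = 200
A = 5.0 * np.eye(n) + rng.normal(0, 0.1, (n, n))   # nominal system matrix
b = rng.normal(0, 1.0, n)

lu, piv = lu_factor(A)                  # expensive step, done once
u = lu_solve((lu, piv), b)              # nominal solution

dA = rng.normal(0, 1e-3, (n, n))        # dA/dp of a geometry parameter (toy)
db = rng.normal(0, 1e-3, n)             # db/dp
du = lu_solve((lu, piv), db - dA @ u)   # sensitivity du/dp, cheap solve only
print("||du/dp|| =", np.linalg.norm(du))
```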

  14. The GOES-R Advanced Baseline Imager: polarization sensitivity and potential impacts

    NASA Astrophysics Data System (ADS)

    Pearlman, Aaron J.; Cao, Changyong; Wu, Xiangqian

    2015-09-01

    In contrast to the National Oceanic and Atmospheric Administration's (NOAA's) current geostationary imagers for operational weather forecasting, the next generation imager, the Advanced Baseline Imager (ABI) aboard the Geostationary Operational Environmental Satellite R-Series (GOES-R), will have six reflective solar bands - five more than currently available. These bands will be used for applications such as aerosol retrievals, which are influenced by polarization effects. These effects are determined by two factors: instrument polarization sensitivity and the polarization states of the observations. The former is measured as part of the pre-launch testing program performed by the instrument vendor. We analyzed the results of the pre-launch polarization sensitivity measurements of the 0.47 μm and 0.64 μm channels and used them in conjunction with simulated scene polarization states to estimate potential on-orbit radiometric impacts. The pre-launch test setups involved illuminating the ABI with an integrating sphere through either one or two polarizers. The measurement with one (rotating) polarizer yields the degree of linear polarization of ABI, and the measurements using two polarizers (one rotating and one fixed) characterized the non-ideal properties of the polarizer. To estimate the radiometric performance impacts from the instrument polarization sensitivity, we simulated polarized scenes using a radiative transfer code and accounted for the instrument polarization sensitivity over its field of regard. The results show that the variation in the polarization impacts over the day and by region of the full disk can reach up to 3.2% for the 0.47 μm channel and 4.8% for the 0.64 μm channel. Geostationary orbiters like the ABI offer the unique opportunity to show these impacts throughout the day, compared to low earth orbiters, which are more limited to certain times of day. This work may enhance the ability to diagnose anomalies on-orbit.

  15. Sensitivity analysis and scale issues in landslide susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Catani, Filippo; Lagomarsino, Daniela; Segoni, Samuele; Tofani, Veronica

    2013-04-01

    Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM), there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors of paramount importance such as the survey scale of the landslide conditioning variables (LCVs), the resolution of the mapping unit (MUR) and the optimal number and ranking of LCVs has never been investigated analytically, especially on large datasets. In this paper we attempt this experimentation, concentrating on the impact of model tuning choices on the final result rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF) classification family to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types and scales. RF classification and regression methods offer a very flexible environment for testing model parameters and mapping hypotheses, allowing for a direct quantification of variable importance. The model choice is, in itself, quite innovative since it is the first time that such a technique, widely used in remote sensing for image classification, is used in this form for the production of a LSM. A random forest is a combination of (usually binary) Bayesian tree predictors that makes it possible to relate a set of contributing factors to the actual occurrence of landslides. Since it is a nonparametric model, it is possible to incorporate a range of numeric or categorical data layers and there is no need to select unimodal training data. Many classical and widely acknowledged landslide predisposing factors have been taken into account, mainly related to: the lithology, the land use, the land surface geometry (derived from the DTM), and the structural and anthropogenic constraints. In addition, for each factor we also included in the parameter set the standard deviation (for numerical variables) or the variety (for categorical ones). The use of
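
    The "direct quantification of variable importance" that motivates the RF choice is built into common implementations; a toy sketch with synthetic slope/rainfall/lithology predictors follows (invented data, not the paper's landslide inventory).

```python
# Random-forest variable importance on synthetic landslide-occurrence data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
n = 5_000
slope = rng.uniform(0, 45, n)                  # degrees
rainfall = rng.uniform(500, 2500, n)           # mm/yr
lithology = rng.integers(0, 4, n)              # categorical class
X = np.column_stack([slope, rainfall, lithology])

# occurrence driven mostly by slope, weakly by rainfall, not by lithology
logit = 0.15 * (slope - 25) + 0.001 * (rainfall - 1500)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["slope", "rainfall", "lithology"], rf.feature_importances_):
    print(f"{name:10s} importance = {imp:.3f}")
```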

  16. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    PubMed Central

    Curtis, Janelle M.R.

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggests the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along

  17. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along

  18. Unique Systems Analysis Task 7, Advanced Subsonic Technologies Evaluation Analysis

    NASA Technical Reports Server (NTRS)

    Eisenberg, Joseph D. (Technical Monitor); Bettner, J. L.; Stratton, S.

    2004-01-01

    To retain a preeminent U.S. position in the aircraft industry, aircraft passenger-mile costs must be reduced while at the same time meeting anticipated, more stringent environmental regulations. A significant portion of these improvements will come from the propulsion system. A technology evaluation and system analysis was accomplished under this task, including areas such as aerodynamics and materials and improved methods for obtaining low noise and emissions. Previous subsonic evaluation analyses have identified key technologies in selected components for propulsion systems for the year 2015 and beyond. Based on the current economic and competitive environment, it is clear that studies with a nearer-term focus that have a direct impact on the propulsion industry's next-generation product are required. This study emphasizes the year 2005 entry-into-service time period. The objective of this study was to determine which technologies and materials offer the greatest opportunities for improving propulsion systems. The goals are twofold. The first goal is to determine an acceptable compromise between the thermodynamic operating conditions for (A) best performance and (B) acceptable noise and chemical emissions. The second goal is the evaluation of the effects of the performance, weight and cost of advanced materials and concepts on the direct operating cost of an advanced regional transport of comparable technology level.

  19. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The "partial" part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
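    An illustrative PRCC computation, assuming a matrix of Monte Carlo input samples and one output vector; this is a generic textbook construction, not IMM code, and the example data are synthetic. SRRC would instead regress the standardized ranks directly.

```python
# Sketch: partial rank correlation coefficient (PRCC) of each input
# with a model output. For each input j, remove the linear effect of
# the other (ranked) inputs from both input j and the output, then
# correlate the residuals.
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """X: (n_samples, n_inputs) inputs; y: (n_samples,) output."""
    Xr = np.column_stack([rankdata(col) for col in X.T])
    yr = rankdata(y)
    coeffs = []
    for j in range(Xr.shape[1]):
        others = np.delete(Xr, j, axis=1)
        A = np.column_stack([np.ones(len(yr)), others])
        rx = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        ry = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        coeffs.append(pearsonr(rx, ry)[0])
    return np.array(coeffs)

rng = np.random.default_rng(1)
X = rng.random((200, 3))        # e.g., sampled condition incidence rates
y = 2 * X[:, 0] + X[:, 1] ** 3 + 0.1 * rng.standard_normal(200)
print(prcc(X, y))               # inputs 0 and 1 dominate; input 2 ~ 0
```

    Because everything is rank-transformed, the monotone but nonlinear input (the cubic term) is detected just as reliably as the linear one.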

  20. Sensitivity analysis of TOPSIS method in water quality assessment: I. Sensitivity to the parameter weights.

    PubMed

    Li, Peiyue; Qian, Hui; Wu, Jianhua; Chen, Jie

    2013-03-01

    Sensitivity analysis is becoming increasingly widespread in many fields of engineering and science and has become a necessary step to verify the feasibility and reliability of a model or a method. The sensitivity of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method in water quality assessment mainly includes sensitivity to the parameter weights and sensitivity to the index input data. In the present study, the sensitivity of TOPSIS to the parameter weights was discussed in detail. The present study assumed the original parameter weights to be equal to each other, and then each weight was changed separately to see how the assessment results would be affected. Fourteen schemes were designed to investigate the sensitivity to the variation of each weight. The variation ranges that leave the assessment results unchanged were also derived theoretically. The results show that the final assessment results will change when the weights increase or decrease by 20 to 50 %. The response of different samples to the variation of a given weight differs, and the response of a given sample to the variation of different weights also differs. The final assessment results remain relatively stable when a given weight is disturbed as long as the initial variation ratios meet one of the eight derived requirements. PMID:22752962
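    A minimal sketch of the perturbation experiment this record describes: compute TOPSIS closeness coefficients for a set of water samples, perturb one weight, and check whether the ranking changes. The sample matrix, indices, and perturbation size are invented for illustration.

```python
# Sketch (synthetic data): TOPSIS ranking and its sensitivity to a
# single weight perturbation.
import numpy as np

def topsis(M, w, benefit):
    """M: (samples, indices) matrix; w: weights; benefit: True where
    larger values are better (False for pollutant concentrations)."""
    N = M / np.linalg.norm(M, axis=0)          # vector normalization
    V = N * w
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti  = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficient (CC)

M = np.array([[0.3, 45.0, 0.02],
              [0.8, 30.0, 0.05],
              [0.5, 60.0, 0.01]])              # 3 samples x 3 indices
benefit = np.array([False, False, False])      # all indices are contaminants
w = np.full(3, 1 / 3)                          # equal initial weights

base_rank = np.argsort(-topsis(M, w, benefit))
w2 = w.copy(); w2[0] *= 1.3; w2 /= w2.sum()    # perturb weight 1 by +30%
print(base_rank, np.argsort(-topsis(M, w2, benefit)))
```

    Repeating the perturbation over a grid of ratios for each weight reproduces the kind of stability thresholds the study derives.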

  1. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering.

    PubMed

    Kelley, Shana O; Mirkin, Chad A; Walt, David R; Ismagilov, Rustem F; Toner, Mehmet; Sargent, Edward H

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  2. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    PubMed Central

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2015-01-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  3. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    NASA Astrophysics Data System (ADS)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  4. A comprehensive sensitivity analysis of central-loop MRS data

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad; Auken, Esben; Dalgaard, Esben; Rejkjaer, Simon

    2014-05-01

    In this study we investigate the sensitivity of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying an MRS receiver system separate from the transmitter system, compare the parameter determination of central-loop and conventional coincident-loop MRS data. MRS, also called surface NMR, has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content and, through empirical relations, is linked to hydraulic properties of the subsurface such as hydraulic conductivity. The method works on the physical principle of NMR, during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop deployed on the surface which typically acts as both a transmitter and a receiver, the so-called coincident-loop configuration. An alternating current is passed through the loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop: a decaying NMR signal called the free induction decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; the product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record MRS signals from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the

  5. The Tuition Advance Fund: An Analysis Prepared for Boston University.

    ERIC Educational Resources Information Center

    Botsford, Keith

    Three models for analyzing the Tuition Advance Fund (TAF) are examined: projections by the Institute for Demographic and Economic Studies (IDES), projections by Data Resources, Inc. (DRI), and the Tuition Advance Fund Simulation (TAFSIM) models from Boston University. Analysis of the TAF is based on enrollment, price, and…

  6. A Meta-Analysis of Advance-Organizer Studies.

    ERIC Educational Resources Information Center

    Stone, Carol Leth

    Long-term studies of advance organizers (AOs) were analyzed with Glass's meta-analysis technique. AOs were defined as bridges from the reader's previous knowledge to what is to be learned. The results were compared with predictions from Ausubel's model of assimilative learning. The results of the study indicated that advance organizers were associated…

  7. Sensitivity analysis technique for application to deterministic models

    SciTech Connect

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  8. Theoretical foundations for finite-time transient stability and sensitivity analysis of power systems

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sambarta

    Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of advances in sensor technology in the form of phasor measurement units (PMUs). This advancement in sensor technology has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence the analysis tools for transient stability, are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rates of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rates are identified with the stability boundaries. These stability boundaries are used for computation of the stability margin. We have used this theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of
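    A toy illustration of a finite-time Lyapunov exponent (FTLE), the quantity underlying the finite-time certificates mentioned above: the average exponential separation rate of two nearby trajectories over a short horizon. The single-machine swing equation and all parameter values below are illustrative stand-ins, not taken from the thesis.

```python
# Sketch: FTLE estimate via trajectory divergence for a classical
# swing equation M*ddelta'' = Pm - Pmax*sin(delta) - D*omega.
import numpy as np
from scipy.integrate import solve_ivp

def swing(t, x, Pm=0.9, Pmax=1.5, D=0.1, M=0.2):
    delta, omega = x
    return [omega, (Pm - Pmax * np.sin(delta) - D * omega) / M]

def ftle(x0, T=1.0, eps=1e-8):
    x1 = np.asarray(x0, float)
    x2 = x1 + [eps, 0.0]                      # nearby initial condition
    f1 = solve_ivp(swing, (0, T), x1, rtol=1e-10).y[:, -1]
    f2 = solve_ivp(swing, (0, T), x2, rtol=1e-10).y[:, -1]
    return np.log(np.linalg.norm(f2 - f1) / eps) / T

# Post-fault states: a mild disturbance vs. one near the stability boundary.
print(ftle([0.7, 0.0]), ftle([2.5, 3.0]))
```

    A large positive FTLE over the decision horizon flags rapid divergence, which is the behavior the thesis's expansion-rate certificates are designed to detect without waiting for asymptotic convergence.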

  9. Sensitivity analysis for missing data in regulatory submissions.

    PubMed

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines from what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26567763

  10. New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi

    2012-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time-averaged quantities in chaotic fluid flow fields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk outlines two chaotic sensitivity analysis methods. The first, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second, the Least Squares Sensitivity method, finds a ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
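    The sketch below is not an implementation of either method from the talk; it illustrates the failure mode they are designed to fix, on the Lorenz system the talk cites: naive finite-difference estimates of the sensitivity of a long-time average do not settle down as the averaging window grows, because nearby trajectories diverge exponentially.

```python
# Sketch: finite differences of d<z>/d(rho) on the Lorenz attractor
# give erratic estimates regardless of averaging time T, motivating
# shadowing-type sensitivity methods.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, rho, sigma=10.0, beta=8.0 / 3.0):
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

def mean_z(rho, T):
    sol = solve_ivp(lorenz, (0, T), [1.0, 1.0, 28.0], args=(rho,),
                    max_step=0.01)
    z = sol.y[2]
    return z[len(z) // 2:].mean()              # discard the transient half

drho = 1e-3
for T in (50.0, 200.0, 800.0):
    grad = (mean_z(28.0 + drho, T) - mean_z(28.0 - drho, T)) / (2 * drho)
    print(T, grad)    # estimates jump around instead of converging (~1)
```

    The sampling error of the time average shrinks only like 1/sqrt(T), so dividing two such noisy averages by a small parameter step amplifies the noise; shadowing methods sidestep this by comparing trajectories that stay close.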

  11. Imaging system sensitivity analysis with NV-IPM

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan; Teaney, Brian

    2014-05-01

    This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.

  12. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems that have multiple, often conflicting objectives arising in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single-objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; and (4) compared to SOO, which was dependent on the initial starting location, MOO provides more
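    A minimal Morris-screening sketch in the spirit of the study's first step, using the SALib package; the parameter names, bounds, and the stand-in model below are invented (the real analysis screened MOBIDIC parameters against the three objective functions listed above).

```python
# Sketch: Morris elementary-effects screening with SALib on a toy
# model standing in for a hydrologic simulation.
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["surface_runoff_coef", "soil_K", "gw_recession"],
    "bounds": [[0.1, 0.9], [1e-6, 1e-4], [0.01, 0.2]],
}

X = morris_sample(problem, N=100, num_levels=4)

def model(x):                            # toy stand-in for the simulator
    return 2.0 * x[0] + 0.1 * np.log(x[1]) + 0.01 * x[2]

Y = np.apply_along_axis(model, 1, X)
Si = morris.analyze(problem, X, Y, num_levels=4, print_to_console=True)
# mu_star ranks overall parameter influence; a large sigma relative to
# mu_star flags nonlinearity or interactions.
```

    Parameters with small mu_star across all objectives are the ones a study like this can safely freeze before running the expensive multiobjective optimization.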

  13. Sensitivity of transport aircraft performance and economics to advanced technology and cruise Mach number

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1974-01-01

    Sensitivity data for advanced technology transports have been systematically collected. These data were generated in two separate studies. In the first of these, three nominal, or base-point, vehicles designed to cruise at Mach numbers 0.85, 0.93, and 0.98, respectively, were defined. The effects on performance and economics of perturbations to basic parameters in the areas of structures, aerodynamics, and propulsion were then determined. In all cases, aircraft were sized to meet the same payload and range as the nominals. These sensitivity data may be used to assess the relative effects of technology changes. The second study was an assessment of the effect of cruise Mach number. Three families of aircraft were investigated in the Mach number range 0.70 to 0.98: straight-wing aircraft from 0.70 to 0.80; swept-wing, non-area-ruled aircraft from 0.80 to 0.95; and area-ruled aircraft from 0.90 to 0.98. At each Mach number, the values of wing loading, aspect ratio, and bypass ratio which resulted in minimum gross takeoff weight were used. As part of the Mach number study, an assessment of the effect of increased fuel costs was made.

  14. Sensitivity analysis approach to multibody systems described by natural coordinates

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2014-03-01

    The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation has so many principles governing the choice of generalized coordinates that it hinders the implementation of modeling automation. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, and the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first-order direct sensitivity analysis, and the related solving strategy are provided based on the preceding modeling system. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling but also helpful in reducing the complexity of sensitivity analysis, providing a practical and effective way to obtain sensitivities for the optimization problems of multibody systems.

  15. Advanced Fingerprint Analysis Project Fingerprint Constituents

    SciTech Connect

    GM Mong; CE Petersen; TRW Clauss

    1999-10-29

    The work described in this report was focused on generating fundamental data on fingerprint components which will be used to develop advanced forensic techniques to enhance fluorescent detection, and visualization of latent fingerprints. Chemical components of sweat gland secretions are well documented in the medical literature and many chemical techniques are available to develop latent prints, but there have been no systematic forensic studies of fingerprint sweat components or of the chemical and physical changes these substances undergo over time.

  16. Sensitivity analysis of the fission gas behavior model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the results of a NEAMS project focused on sensitivity analysis of a new model for fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the parameters involved and their associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  17. Sensitivity analysis for handling uncertainty in an economic evaluation.

    PubMed

    Limwattananon, Supon

    2014-05-01

    To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which are best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
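    A sketch of the deterministic one-way analysis such guidelines recommend presenting graphically (as a tornado diagram): swing each parameter across its plausible range while holding the others at base case, and record the span of the model output. The cost-effectiveness model, parameter names, and ranges below are toy stand-ins.

```python
# Sketch: one-way deterministic sensitivity analysis producing
# tornado-diagram data for a toy cost-effectiveness model.
base = {"effect_gain": 0.12, "unit_price": 40.0, "admin_cost": 350.0}
ranges = {"effect_gain": (0.08, 0.18),
          "unit_price": (25.0, 60.0),
          "admin_cost": (200.0, 500.0)}

def icer(p):      # incremental cost per QALY gained (toy model)
    return (p["unit_price"] * 365 + p["admin_cost"]) / p["effect_gain"]

spans = {}
for name, (lo, hi) in ranges.items():
    vals = []
    for v in (lo, hi):
        p = dict(base); p[name] = v
        vals.append(icer(p))
    spans[name] = (min(vals), max(vals))

# Widest bar first, as on a tornado diagram.
for name, (lo, hi) in sorted(spans.items(),
                             key=lambda kv: kv[1][0] - kv[1][1]):
    print(f"{name}: ICER from {lo:,.0f} to {hi:,.0f}")
```

    A probabilistic analysis would instead sample all parameters jointly from distributions and summarize the resulting ICER cloud, e.g. as a cost-effectiveness acceptability curve.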

  18. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  19. Advanced nuclear rocket engine mission analysis

    SciTech Connect

    Ramsthaler, J.; Farbman, G.; Sulmeisters, T.; Buden, D.; Harris, P.

    1987-12-01

    The use of a derivative of the NERVA engine developed from 1955 to 1973 was evaluated for potential application to Air Force orbital transfer and maneuvering missions in the time period 1995 to 2020. The NERVA stage was found to have lower life cycle costs (LCC) than an advanced chemical stage for performing low earth orbit (LEO) to geosynchronous orbit (GEO) missions at any level of activity greater than three missions per year. It had lower life cycle costs than a high-performance nuclear electric engine at any level of LEO-to-GEO mission activity. An examination of all unmanned orbital transfer and maneuvering missions from the Space Transportation Architecture study (STAS 111-3) indicated an LCC advantage for the NERVA stage over the advanced chemical stage of fifteen million dollars. The cost advantage accrued from both the orbital transfer and the maneuvering missions. Parametric analyses showed that the specific impulse of the NERVA stage and the cost of delivering material to low earth orbit were the most significant factors in the LCC advantage over the chemical stage. Lower development costs and a higher thrust gave the NERVA engine an LCC advantage over the nuclear electric stage. An examination of technical data from the Rover/NERVA program indicated that development of the NERVA stage has low technical risk and the potential for high reliability and safe operation. The data indicated the NERVA engine had great flexibility which would permit a single stage to perform all Air Force missions.

  20. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M&S environments and infrastructure.

  1. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance with respect to convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  2. Adjoint-based sensitivity analysis for reactor-safety applications

    SciTech Connect

    Parks, C.V.

    1985-01-01

    The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the needed general extensions for consideration of model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II, which is typically employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes differ both in complexity and in the facets of DST which can be illustrated. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis.
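    An illustrative first-order adjoint sensitivity for a scalar ODE, in the spirit of the first-order derivatives DST provides; this is a generic textbook construction, not the ADJGAS/VENUS-ADJ implementation, and the model and parameter values are invented.

```python
# Sketch: for dx/dt = f(x, p) and response J = x(T), solve the adjoint
# d(lambda)/dt = -(df/dx) * lambda backward from lambda(T) = 1 and
# accumulate dJ/dp = integral of lambda * (df/dp) over [0, T].
import numpy as np
from scipy.integrate import solve_ivp

p, x0, T = 0.7, 2.0, 3.0
f    = lambda x: -p * x           # model
dfdx = lambda x: -p               # Jacobian w.r.t. state
dfdp = lambda x: -x               # Jacobian w.r.t. parameter

fwd = solve_ivp(lambda t, x: [f(x[0])], (0, T), [x0],
                dense_output=True, rtol=1e-10)

def backward(t, y):               # y = [lambda, accumulated integral]
    x = fwd.sol(t)[0]
    lam = y[0]
    return [-dfdx(x) * lam, lam * dfdp(x)]

adj = solve_ivp(backward, (T, 0), [1.0, 0.0], rtol=1e-10)
dJdp = -adj.y[1, -1]              # sign flip: integration ran backward
print(dJdp, -T * x0 * np.exp(-p * T))   # matches the analytic value
```

    The appeal DST exploits is visible even here: one adjoint solve yields the derivative with respect to every parameter, whereas direct recalculation needs one perturbed run per parameter.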

  3. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments and computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
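    The report implements its methodology in PSUADE; as a language-neutral illustration of what step (3) computes, the sketch below estimates variance-based (Sobol) indices with the SALib package on the Ishigami test function, a common stand-in for a physics module. The choice of SALib and of this test function is ours, not the report's.

```python
# Sketch: quantitative variance-based sensitivity analysis (step 3)
# via Saltelli sampling and Sobol index estimation.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from SALib.test_functions import Ishigami

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,       # step (1): credible ranges
}
X = saltelli.sample(problem, 1024)
Y = Ishigami.evaluate(X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])    # first-order indices: direct variance contributions
print(Si["ST"])    # total-order indices: include parameter interactions
```

    The gap between ST and S1 for a parameter quantifies its interactions, which is the quantity the report's dedicated interaction technique targets.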

  4. A Comparative Review of Sensitivity and Uncertainty Analysis of Large-Scale Systems - II: Statistical Methods

    SciTech Connect

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2004-07-15

    statistical postprocessing must be repeated anew. In particular, a 'fool-proof' statistical method for correctly analyzing models involving highly correlated parameters does not seem to exist currently, so that particular care must be used when interpreting regression results for such models. By addressing computational issues and particularly challenging open problems and knowledge gaps, this review paper aims at providing a comprehensive basis for further advancements and innovations in the field of sensitivity and uncertainty analysis.

  5. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach. PMID:24978258

  6. Advanced surface design for logistics analysis

    NASA Astrophysics Data System (ADS)

    Brown, Tim R.; Hansen, Scott D.

    The development of anthropometric arm/hand and tool models and their manipulation in a large system model for maintenance simulation are discussed. The use of Advanced Surface Design and s-fig technology in anthropometrics, and three-dimensional graphics simulation tools, are found to achieve a good balance between model manipulation speed and model accuracy. The present second generation models are shown to be twice as fast to manipulate as the first generation b-surf models, to be easier to manipulate into various configurations, and to more closely approximate human contours.

  7. Advanced tracking systems design and analysis

    NASA Technical Reports Server (NTRS)

    Potash, R.; Floyd, L.; Jacobsen, A.; Cunningham, K.; Kapoor, A.; Kwadrat, C.; Radel, J.; Mccarthy, J.

    1989-01-01

    The results of an assessment of several types of high-accuracy tracking systems proposed to track the spacecraft in the National Aeronautics and Space Administration (NASA) Advanced Tracking and Data Relay Satellite System (ATDRSS) are summarized. Tracking systems based on the use of interferometry and ranging are investigated. For each system, the top-level system design and operations concept are provided. A comparative system assessment is presented in terms of orbit determination performance, ATDRSS impacts, life-cycle cost, and technological risk.

  8. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number is analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
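    A sketch of the standard calculation behind this kind of parameter ranking: the normalized forward sensitivity index of the basic reproduction number, Upsilon_p = (dR0/dp)(p/R0). The R0 expression and parameter values below are a generic stand-in, not the paper's five-compartment Lassa model.

```python
# Sketch: normalized forward sensitivity indices of a toy R0.
def R0(beta, pi, mu, gamma):
    # toy form: transmission * recruitment / (mortality * recovery)
    return beta * pi / (mu * (gamma + mu))

params = {"beta": 0.3, "pi": 0.05, "mu": 0.02, "gamma": 0.1}

def sensitivity_index(name, h=1e-6):
    p = dict(params)
    base = R0(**p)
    p[name] += h * params[name]                # small relative perturbation
    deriv = (R0(**p) - base) / (h * params[name])
    return deriv * params[name] / base

for name in params:
    print(name, round(sensitivity_index(name), 3))
# beta and pi have index +1 (a 10% increase raises R0 by 10%);
# mu and gamma carry negative indices, i.e. they reduce R0.
```

    Ranking parameters by the absolute value of this index is what identifies the levers (here, the transmission-side parameters) that control strategies should target first.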

  9. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  10. Uncertainty and sensitivity analysis and its applications in OCD measurements

    NASA Astrophysics Data System (ADS)

    Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio

    2009-03-01

    This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurement uncertainties is shown. Assessing, at an early stage, the uncertainties, sensitivities and correlations of the parameters to be measured guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls like hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two different cases: 1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD); 2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can be easily selected to achieve the best OCD model performance.

  11. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
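    A guess at the flavor of the approach, to make the mechanism concrete: scan an existing input file for values annotated with a tolerance (e.g. "5.25 +/- 0.01") and replace each with a random draw per Monte Carlo realization. The file contents, field names, and the choice of a normal distribution with the tolerance as three standard deviations are all invented for illustration, not taken from the paper.

```python
# Sketch: parse "value +/- tol" annotations in a plain-text input deck
# and emit randomized realizations for a Monte Carlo study.
import re
import numpy as np

deck = """\
wall_temperature = 300.0 +/- 5.0
emissivity       = 0.85 +/- 0.02
chord_length     = 5.25
"""

TOL = re.compile(r"([-+]?\d*\.?\d+)\s*\+/-\s*([-+]?\d*\.?\d+)")
rng = np.random.default_rng(42)

def realize(text):
    """Replace every 'value +/- tol' with one random draw."""
    def draw(m):
        nominal, tol = float(m.group(1)), float(m.group(2))
        # Assumption: treat the stated tolerance as 3 sigma.
        return f"{rng.normal(nominal, tol / 3.0):.6g}"
    return TOL.sub(draw, text)

for _ in range(3):        # each realization would feed one simulation run
    print(realize(deck))
```

    Because the tolerance syntax is appended to existing fields rather than parsed out of each code's input grammar, the same scanner works unchanged across different simulation codes, which is the portability the paper demonstrates across its three solvers.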

  12. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.

  13. Sensitivity analysis of the critical speed in railway vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Bigoni, D.; True, H.; Engsig-Karup, A. P.

    2014-05-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.

  14. Parameter sensitivity analysis of IL-6 signalling pathways.

    PubMed

    Chu, Y; Jayaraman, A; Hahn, J

    2007-11-01

    Signal transduction pathways generally consist of a large number of individual components and have an even greater number of parameters describing their reaction kinetics. Although the structure of some signalling pathways can be found in the literature, many of the parameters are not well known and would need to be re-estimated from experimental data for each specific case. However, it is not feasible to estimate hundreds of parameters because of the cost of the experiments associated with generating data. Parameter sensitivity analysis can address this situation, as it investigates how the system behaviour is changed by variations of parameters, and the analysis identifies which parameters play a key role in signal transduction. Only these important parameters then need to be re-estimated using data from further experiments. This article presents a detailed parameter sensitivity analysis of the JAK/STAT and MAPK signal transduction pathway that is used for signalling by the cytokine IL-6. As no parameter sensitivity analysis technique is known to work best for all situations, a comparison of the results returned by four techniques is presented: differential analysis, the Morris method, a sampling-based approach, and the Fourier amplitude sensitivity test. The recruitment of the transcription factor STAT3 to the dimer of the phosphorylated receptor complex is determined to be the most important step by the sensitivity analysis. Additionally, the dephosphorylation of the nuclear STAT3 dimer by PP2, as well as feedback inhibition by SOCS3, are found to play an important role in signal transduction. PMID:18203580

  15. Multicriteria Evaluation and Sensitivity Analysis on Information Security

    NASA Astrophysics Data System (ADS)

    Syamsuddin, Irfan

    2013-05-01

    Information security plays a significant role in the contemporary information society. The increasing number and impact of cyber attacks on information assets have raised awareness among managers that an attack on information is actually an attack on the organization itself. Unfortunately, a suitable model for information security evaluation at management levels is still not well defined. In this study, decision analysis based on the Ternary Analytic Hierarchy Process (T-AHP) is proposed as a novel model to aid managers who are responsible for making strategic evaluations related to information security issues. In addition, sensitivity analysis is applied to extend the analysis through several "what-if" scenarios in order to measure the consistency of the final evaluation. Finally, we conclude that the final evaluation made by managers has significant consistency, as shown by the sensitivity analysis results.

  16. Advanced research equipment for fast ultraweak luminescence analysis

    NASA Astrophysics Data System (ADS)

    Tudisco, S.; Musumeci, F.; Scordino, A.; Privitera, G.

    2003-10-01

    This article describes new advanced research equipment for fast ultraweak luminescence analysis, which can detect photons at high sensitivity after ultraviolet-A laser irradiation in biological probes as well as plant, animal, and human cells. The design and construction of this equipment, developed at the Southern National Laboratory of the National Nuclear Physics Institute, are described together with the first experimental results and future developments. The setup, employing a photomultiplier tube working in single photon counting mode, allows accurate and reliable photoluminescence measurements with excitation wavelengths in the range 337-700 nm and emission wavelengths in the range 400-800 nm. With respect to the traditional setup, this new equipment is able to perform measurements starting a few microseconds after the laser irradiation is switched off, and with a large detection efficiency (about 10% of the total solid angle). Moreover, the adopted design assures a low background noise level. A further optimization of the system is under study, with special care for the reliability needed for the delayed luminescence optical screening project, aimed at enhancing the detection of the low-level photoinduced luminescence from human cells to be used as an optical biopsy technique.

  17. Advances in the analysis of iminocyclitols: Methods, sources and bioavailability.

    PubMed

    Amézqueta, Susana; Torres, Josep Lluís

    2016-05-01

    Iminocyclitols are chemically and metabolically stable, naturally occurring sugar mimetics. Their biological activities make them interesting and extremely promising as both drug leads and functional food ingredients. The first iminocyclitols were discovered using preparative isolation and purification methods followed by chemical characterization using nuclear magnetic resonance spectroscopy. In addition to this classical approach, gas and liquid chromatography coupled to mass spectrometry are increasingly used; they are highly sensitive techniques capable of detecting minute amounts of analytes in a broad spectrum of sources after only minimal sample preparation. These techniques have been applied to identify new iminocyclitols in plants, microorganisms and synthetic mixtures. The separation of iminocyclitol mixtures by chromatography is particularly difficult however, as the most commonly used matrices have very low selectivity for these highly hydrophilic structurally similar molecules. This review critically summarizes recent advances in the analysis of iminocyclitols from plant sources and findings regarding their quantification in dietary supplements and foodstuffs, as well as in biological fluids and organs, from bioavailability studies. PMID:26946023

  18. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
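    A worked check of the article's point about linear models, using an invented two-input measurement model (not one of the article's examples): for y = a*x1 + b*x2 with independent inputs, the first-order Sobol indices reproduce exactly the squared law-of-propagation terms c_i^2 u_i^2 divided by u_y^2, so variance-based analysis adds nothing new in the linear case.

```python
# Sketch: LPU uncertainty shares vs. Monte Carlo first-order Sobol
# indices for a linear measurement model.
import numpy as np

rng = np.random.default_rng(7)
a, b = 2.0, -1.0
u1, u2 = 0.3, 0.5               # standard uncertainties of the inputs

# Law of propagation of uncertainties (independent inputs):
uy2 = (a * u1) ** 2 + (b * u2) ** 2
lpu_share = np.array([(a * u1) ** 2, (b * u2) ** 2]) / uy2

# First-order Sobol indices, S_i = V[E(y|x_i)] / V(y), estimated by
# correlating runs that share x_i but resample the other input:
n = 200_000
x1, x2 = rng.normal(0, u1, n), rng.normal(0, u2, n)
x1b, x2b = rng.normal(0, u1, n), rng.normal(0, u2, n)
y = lambda p, q: a * p + b * q
vy = np.var(y(x1, x2))
S1 = np.cov(y(x1, x2), y(x1, x2b))[0, 1] / vy   # x1 fixed, x2 resampled
S2 = np.cov(y(x1, x2), y(x1b, x2))[0, 1] / vy
print(lpu_share, [S1, S2])      # the two decompositions agree
```

    Replacing the model with a nonlinear one (e.g. y = a*x1 + b*x2**2 + x1*x2) breaks this equality, which is precisely the regime in which the article argues sensitivity analysis becomes worthwhile.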

  19. Sensitivity analysis of the Ohio phosphorus risk index

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  20. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    ERIC Educational Resources Information Center

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  1. Adjoint-based sensitivity analysis for reactor safety applications

    SciTech Connect

    Parks, C.V.

    1986-08-01

    The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the needed general extensions for consideration of model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II, which has been employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes differ both in complexity and in the facets of DST which can be illustrated. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis. In addition, a useful sensitivity tool for use in the fast reactor safety area has been developed in VENUS-ADJ. Future work needs to concentrate on combining the accurate first-order derivatives/results from DST with existing methods (based solely on direct recalculations) for higher-order response surfaces.

  2. Integrative "omic" analysis for tamoxifen sensitivity through cell based models.

    PubMed

    Weng, Liming; Ziliak, Dana; Lacroix, Bonnie; Geeleher, Paul; Huang, R Stephanie

    2014-01-01

    It has long been observed that tamoxifen sensitivity varies among breast cancer patients. Further, ethnic differences in tamoxifen therapy between Caucasian and African American patients have also been reported. Since most studies have focused on Caucasian populations, we sought to comprehensively evaluate genetic variants related to tamoxifen therapy in African-derived samples. An integrative "omic" approach developed by our group was used to investigate relationships among endoxifen (an active metabolite of tamoxifen) sensitivity, SNP genotype, mRNA and microRNA expression in 58 HapMap YRI lymphoblastoid cell lines. We identified 50 SNPs that associate with cellular sensitivity to endoxifen through their effects on the expression of 34 genes and 30 microRNAs. Some of these findings are shared in both Caucasian and African samples, while others are unique to the African samples. Among the genes/microRNAs identified in both ethnic groups, the expression of TRAF1 is also correlated with tamoxifen sensitivity in a collection of 44 breast cancer cell lines. Further, knock-down of TRAF1 and over-expression of hsa-let-7i confirmed the roles of hsa-let-7i and TRAF1 in increasing tamoxifen sensitivity in the ZR-75-1 breast cancer cell line. Our integrative omic analysis facilitated the discovery of pharmacogenomic biomarkers that potentially affect tamoxifen sensitivity. PMID:24699530

  3. Recent Advances in Anthocyanin Analysis and Characterization

    PubMed Central

    Welch, Cara R.; Wu, Qingli; Simon, James E.

    2009-01-01

    Anthocyanins are a class of polyphenols responsible for the orange, red, purple and blue colors of many fruits, vegetables, grains, flowers and other plants. Anthocyanin consumption has been linked to protection against many chronic diseases, and anthocyanins possess strong antioxidant properties leading to a variety of health benefits. In this review, we examine the advances in the chemical profiling of natural anthocyanins in plant and biological matrices using various chromatographic separations (HPLC and CE) coupled with different detection systems (UV, MS and NMR). An overview of anthocyanin chemistry, prevalence in plants, biosynthesis and metabolism, bioactivities and health properties, sample preparation and phytochemical investigations is given, while the major focus is a comparison of the advantages and disadvantages of each analytical technique. PMID:19946465

  4. Sensitivity analysis of TOPSIS method in water quality assessment II: sensitivity to the index input data.

    PubMed

    Li, Peiyue; Wu, Jianhua; Qian, Hui; Chen, Jie

    2013-03-01

    This is the second part of the study on sensitivity analysis of the technique for order preference by similarity to ideal solution (TOPSIS) method in water quality assessment. In the present study, the sensitivity of the TOPSIS method to the index input data was investigated. The sensitivity was first theoretically analyzed under two major assumptions. One assumption was that one or more indices of the samples were perturbed with the same ratio while the other indices were kept unchanged. The other was that all indices of a given sample were changed simultaneously with the same ratio, while the indices of the other samples were unchanged. Furthermore, a case study under assumption 2 was also carried out in this paper. When the same indices of different water samples are changed simultaneously with the same variation ratio, the final water quality assessment results are not influenced at all. When the input data of all indices of a given sample are perturbed with the same variation ratio, the assessment values of all samples will be influenced theoretically. However, the case study shows that only the perturbed sample is sensitive to the variation, and a simple linear equation representing the relation between the closeness coefficient (CC) values of the perturbed sample and the variation ratio can be derived under assumption 2. This linear equation can be used for determining the sample orders under various variation ratios. PMID:22832843
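
    A minimal sketch of the perturbation experiment under assumption 2, with a hypothetical decision matrix and weights (not the paper's data): the closeness coefficient of the perturbed sample varies nearly linearly with the variation ratio while the other samples are barely affected.

      # TOPSIS closeness coefficients under 'assumption 2': all indices of
      # one sample perturbed by the same ratio. Matrix and weights are
      # hypothetical water-quality data (lower values = better).
      import numpy as np

      def topsis_cc(X, w, benefit):
          Z = X / np.linalg.norm(X, axis=0)       # vector normalization
          V = Z * w                               # weighted normalized matrix
          best = np.where(benefit, V.max(0), V.min(0))
          worst = np.where(benefit, V.min(0), V.max(0))
          d_pos = np.linalg.norm(V - best, axis=1)
          d_neg = np.linalg.norm(V - worst, axis=1)
          return d_neg / (d_pos + d_neg)          # closeness coefficient CC

      X = np.array([[0.31, 45.0, 7.8],
                    [0.55, 60.0, 7.2],
                    [0.20, 30.0, 8.1]])           # samples x quality indices
      w = np.array([0.5, 0.3, 0.2])
      benefit = np.array([False, False, False])

      for r in (0.9, 1.0, 1.1, 1.2):              # variation ratios
          Xp = X.copy()
          Xp[0, :] *= r                           # perturb every index of sample 0
          print(r, np.round(topsis_cc(Xp, w, benefit), 4))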

  5. Stochastic Simulations and Sensitivity Analysis of Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2008-08-01

    For complex physical systems with a large number of random inputs, it is very expensive to perform stochastic simulations for all of the random inputs. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, providing information on which random inputs have more influence on the system outputs and on the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global methods. The local approach, which relies on a partial derivative of the output with respect to the parameters, is used to measure the sensitivity around a local operating point. When the system has strong nonlinearities and the parameters fluctuate within a wide range from their nominal values, the local sensitivity does not provide full information to the system operators. On the other hand, the global approach examines the sensitivity over the entire range of the parameter variations. Global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the Quasi-Monte Carlo method and a collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing the dimensionality and hence the cost of stochastic simulations.
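
    As an illustration of the OAT screening idea, the following sketch computes Morris-style elementary effects for a standard test function; it is not the authors' MHD code, and the function and settings are placeholders:

      # Morris-style One-At-a-Time elementary effects on the Ishigami test
      # function (a placeholder, not the paper's MHD problem).
      import numpy as np

      def model(x):
          return np.sin(x[0]) + 7.0 * np.sin(x[1])**2 + 0.1 * x[2]**4 * np.sin(x[0])

      rng = np.random.default_rng(1)
      k, r, delta = 3, 50, 0.1                    # inputs, trajectories, step size
      effects = np.zeros((r, k))
      for t in range(r):
          x = rng.uniform(-np.pi, np.pi - delta, k)   # random base point
          for i in rng.permutation(k):                # OAT walk through the inputs
              x2 = x.copy(); x2[i] += delta
              effects[t, i] = (model(x2) - model(x)) / delta
              x = x2

      mu_star = np.abs(effects).mean(axis=0)      # mu*: overall importance
      sigma = effects.std(axis=0)                 # nonlinearity/interaction
      print(np.round(mu_star, 2), np.round(sigma, 2))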

  6. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis studies of hydraulic drive units are not accurate enough and have limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response characteristic curves under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute values of the displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity indexes are calculated and shown visually in histograms under different working conditions, and their patterns are analyzed. Then the sensitivity ...

  7. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  8. Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris

    2015-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the "Butterfly Effect", i.e., the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the "shadow trajectory", a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
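
    The "Butterfly Effect" failure mode is easy to reproduce on a toy problem. The sketch below (not the authors' LSS code) applies a finite-difference derivative to a long-time average of the chaotic logistic map; the estimates come out erratic and orders of magnitude too large, however long the averaging window:

      # Finite-difference sensitivity of a long-time average of the chaotic
      # logistic map x <- r*x*(1-x) with respect to r. The estimates do not
      # converge as the averaging window grows -- the failure LSS addresses.
      import numpy as np

      def time_avg(r, n, x0=0.3, burn=1000):
          x = x0
          for _ in range(burn):                  # discard the transient
              x = r * x * (1.0 - x)
          s = 0.0
          for _ in range(n):
              x = r * x * (1.0 - x)
              s += x
          return s / n

      r, eps = 3.9, 1e-7                         # chaotic regime
      for n in (10**3, 10**4, 10**5):
          fd = (time_avg(r + eps, n) - time_avg(r - eps, n)) / (2 * eps)
          print(n, fd)                           # erratic, far too large in magnitude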

  9. Double Precision Differential/Algebraic Sensitivity Analysis Code

    1995-06-02

    DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
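
    DDASAC itself is a compiled legacy code, but the idea of direct concurrent parametric sensitivity analysis can be sketched with an augmented ODE system; the example model below is hypothetical and solved with scipy:

      # Direct concurrent parametric sensitivity: augment dy/dt = -k*y with
      # s = dy/dk, whose evolution is s' = -y - k*s.
      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, z, k):
          y, s = z
          return [-k * y, -y - k * s]

      k, y0 = 0.7, 2.0
      sol = solve_ivp(rhs, (0.0, 5.0), [y0, 0.0], args=(k,), rtol=1e-9, atol=1e-11)
      exact = -sol.t * y0 * np.exp(-k * sol.t)   # analytic dy/dk for comparison
      print(np.max(np.abs(sol.y[1] - exact)))    # ~1e-8: sensitivities track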

  10. Sensitivity Analysis Of Technological And Material Parameters In Roll Forming

    NASA Astrophysics Data System (ADS)

    Gehring, Albrecht; Saal, Helmut

    2007-05-01

    Roll forming has been applied for several decades to manufacture thin-gauged profiles. However, knowledge about this technology is still based on empirical approaches. Due to the complexity of the forming process, the main effects on profile properties are difficult to identify. This is especially true for the interaction of technological parameters and material parameters. General considerations for building a finite-element model of the roll forming process are given in this paper. A sensitivity analysis is performed on the basis of a statistical design approach in order to identify the effects and interactions of different parameters on profile properties. The parameters included in the analysis are the roll diameter, the rolling speed, the sheet thickness, friction between the tools and the sheet, and the strain hardening behavior of the sheet material. The analysis includes an isotropic hardening model and a nonlinear kinematic hardening model. All jobs are executed in parallel to reduce the overall time, since the sensitivity analysis requires considerable CPU time. The results of the sensitivity analysis demonstrate the opportunities to improve the properties of roll-formed profiles by adjusting technological and material parameters to their optimum interacting performance.

  11. Analysis of an advanced technology subsonic turbofan incorporating revolutionary materials

    NASA Technical Reports Server (NTRS)

    Knip, Gerald, Jr.

    1987-01-01

    Successful implementation of revolutionary composite materials in an advanced turbofan offers the possibility of further improvements in engine performance and thrust-to-weight ratio relative to current metallic materials. The present analysis determines the approximate engine cycle and configuration for an early 21st century subsonic turbofan incorporating all-composite materials. The advanced engine is evaluated relative to a current technology baseline engine in terms of its potential fuel savings for an intercontinental quadjet having a design range of 5500 nmi and a payload of 500 passengers. The resultant near-optimum, uncooled, two-spool, advanced engine has an overall pressure ratio of 87, a bypass ratio of 18, a geared fan, and a turbine rotor inlet temperature of 3085 R. The improvements result in a 33-percent fuel saving for the specified mission. Various advanced composite materials are used throughout the engine. For example, advanced polymer composite materials are used for the fan and the low pressure compressor (LPC).

  12. Advanced reliability method for fatigue analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Wirsching, P. H.

    1984-01-01

    When design factors are considered as random variables and the failure condition cannot be expressed by a closed form algebraic inequality, computations of risk (or probability of failure) may become extremely difficult or very inefficient. This study suggests using a simple and easily constructed second degree polynomial to approximate the complicated limit state in the neighborhood of the design point; a computer analysis relates the design variables at selected points. Then a fast probability integration technique (i.e., the Rackwitz-Fiessler algorithm) can be used to estimate risk. The capability of the proposed method is demonstrated in an example of a low cycle fatigue problem for which a computer analysis is required to perform local strain analysis to relate the design variables. A comparison of the performance of this method is made with a far more costly Monte Carlo solution. Agreement of the proposed method with Monte Carlo is considered to be good.

  13. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of the steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, the design sensitivity analysis and the constrained optimization code CONMIN.

  14. Modeling and analysis of advanced binary cycles

    SciTech Connect

    Gawlik, K.

    1997-12-31

    A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows for incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model, followed by an economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied, spanning the range of 265°F to 375°F. A variety of isobutane- and propane-based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case value that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.

  15. Recent advances in capillary electrophoretic migration techniques for pharmaceutical analysis (2013-2015).

    PubMed

    El Deeb, Sami; Wätzig, Hermann; Abd El-Hady, Deia; Sänger-van de Griend, Cari; Scriba, Gerhard K E

    2016-07-01

    This review updates and follows up on a previous review by highlighting recent advancements regarding capillary electromigration methodologies and applications in pharmaceutical analysis. General approaches such as quality by design as well as sample injection methods and detection sensitivity are discussed. The separation and analysis of drug-related substances, chiral CE, and chiral CE-MS in addition to the determination of physicochemical constants are addressed. The advantages of applying affinity capillary electrophoresis in studying receptor-ligand interactions are highlighted. Finally, current aspects related to the analysis of biopharmaceuticals are reviewed. The present review covers the literature between January 2013 and December 2015. PMID:26988029

  16. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.

  17. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.

  18. Graphical methods for the sensitivity analysis in discriminant analysis

    SciTech Connect

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed, many of which follow principles similar to the diagnostic measures used in linear regression. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probabilities of the other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
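
    A minimal sketch of the case-deletion idea (using scikit-learn's LDA on synthetic data; this is not the authors' implementation or data):

      # Case-deletion sensitivity in discriminant analysis: refit with one
      # point omitted and track the movement of every posterior probability.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
      y = np.repeat([0, 1], 30)

      p_full = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)[:, 1]

      i = 7                                        # omit this data point
      mask = np.arange(len(y)) != i
      p_loo = LinearDiscriminantAnalysis().fit(X[mask], y[mask]).predict_proba(X)[:, 1]

      shift = p_loo - p_full                       # per-point posterior movement
      print(shift[i], np.abs(np.delete(shift, i)).max())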

  20. Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania

    PubMed Central

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2016-01-01

    This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next-generation method. The biological sense of the threshold condition is investigated and discussed in detail. A sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with the control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with the control strategies and the chances of exposed humans, infectious humans and the vector population becoming infected. Numerical simulations are carried out with the help of a fourth-order Runge-Kutta procedure. PMID:27505634
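
    Sensitivity indices of a reproduction number are commonly computed as normalized forward indices S_p = (p/R0)(dR0/dp). The sketch below demonstrates this on a hypothetical next-generation R0 expression, not the paper's leishmaniasis model:

      # Normalized forward sensitivity indices S_p = (p/R0) * dR0/dp via
      # central differences, for a hypothetical R0 expression.
      import numpy as np

      def R0(p):
          beta, b, mu, gamma = p                 # contact, bite, death, recovery
          return beta * b / (mu * (mu + gamma))

      p0 = np.array([0.3, 0.5, 0.05, 0.1])
      names = ["beta", "b", "mu", "gamma"]
      base = R0(p0)
      for i, name in enumerate(names):
          dp = np.zeros_like(p0); dp[i] = 1e-6 * p0[i]
          dRdp = (R0(p0 + dp) - R0(p0 - dp)) / (2 * dp[i])
          print(name, round(p0[i] / base * dRdp, 3))   # beta: +1, mu: -1.333, ...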

  1. Progress in Advanced Spectral Analysis of Radioxenon

    SciTech Connect

    Haas, Derek A.; Schrom, Brian T.; Cooper, Matthew W.; Ely, James H.; Flory, Adam E.; Hayes, James C.; Heimbigner, Tom R.; McIntyre, Justin I.; Saunders, Danielle L.; Suckow, Thomas J.

    2010-09-21

    Improvements to a Java based software package developed at Pacific Northwest National Laboratory (PNNL) for display and analysis of radioxenon spectra acquired by the International Monitoring System (IMS) are described here. The current version of the Radioxenon JavaViewer implements the region of interest (ROI) method for analysis of beta-gamma coincidence data. Upgrades to the Radioxenon JavaViewer will include routines to analyze high-purity germanium detector (HPGe) data, Standard Spectrum Method to analyze beta-gamma coincidence data and calibration routines to characterize beta-gamma coincidence detectors. These upgrades are currently under development; the status and initial results will be presented. Implementation of these routines into the JavaViewer and subsequent release is planned for FY 2011-2012.

  2. Recent advances in statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Heron, K. H.

    1992-01-01

    Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary ingredient of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.

  3. Advancing Usability Evaluation through Human Reliability Analysis

    SciTech Connect

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.

  4. Advanced Techniques for Root Cause Analysis

    2000-09-19

    Five items make up this package; they can also be used individually. The Chronological Safety Management Template utilizes a linear adaptation of the Integrated Safety Management System laid out in the form of a template that greatly enhances the ability of the analyst to perform the first step of any investigation, which is to gather all pertinent facts and identify causal factors. The Problem Analysis Tree is a simple three (3) level problem analysis tree which is easier for organizations outside of WSRC to use. Another part is the Systemic Root Cause Tree. One of the most basic and unique features of Expanded Root Cause Analysis is the Systemic Root Cause portion of the Expanded Root Cause Pyramid. The Systemic Root Causes are even more basic than the Programmatic Root Causes and represent root causes that cut across multiple (if not all) programs in an organization. The Systemic Root Cause portion contains 51 causes embedded at the bottom level of a three-level Systemic Root Cause Tree that is divided into logical, organizationally based categories to assist the analyst. The Computer Aided Root Cause Analysis allows the analyst at each level of the Pyramid to a) obtain a brief description of the cause that is being considered, b) record a decision that the item is applicable, c) proceed to the next level of the Pyramid to see only those items at the next level of the tree that are relevant to the particular cause that has been chosen, and d) at the end of the process automatically print out a summary report of the incident, the causal factors as they relate to the safety management system, the probable causes, apparent causes, Programmatic Root Causes and Systemic Root Causes for each causal factor, and the associated corrective action.

  5. Advanced CMOS Radiation Effects Testing Analysis

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan Allen; Marshall, Paul W.; Rodbell, Kenneth P.; Gordon, Michael S.; LaBel, Kenneth A.; Schwank, James R.; Dodds, Nathaniel A.; Castaneda, Carlos M.; Berg, Melanie D.; Kim, Hak S.; Phan, Anthony M.; Seidleck, Christina M.

    2014-01-01

    Presentation at the annual NASA Electronic Parts and Packaging (NEPP) Program Electronic Technology Workshop (ETW). The material includes an update of progress in this NEPP task area over the past year, which includes testing, evaluation, and analysis of radiation effects data on the IBM 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process. The testing was conducted using test vehicles supplied directly by IBM.

  6. Advanced CMOS Radiation Effects Testing and Analysis

    NASA Technical Reports Server (NTRS)

    Pellish, J. A.; Marshall, P. W.; Rodbell, K. P.; Gordon, M. S.; LaBel, K. A.; Schwank, J. R.; Dodds, N. A.; Castaneda, C. M.; Berg, M. D.; Kim, H. S.; Phan, A. M.; Seidleck, C. M.

    2014-01-01

    Presentation at the annual NASA Electronic Parts and Packaging (NEPP) Program Electronic Technology Workshop (ETW). The material includes an update of progress in this NEPP task area over the past year, which includes testing, evaluation, and analysis of radiation effects data on the IBM 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process. The testing was conducted using test vehicles supplied directly by IBM.

  7. Objective analysis of the ARM IOP data: method and sensitivity

    SciTech Connect

    Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H

    1999-04-01

    Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure, for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement (ARM) Program July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.

  8. Sensitivity analysis of transport modeling in a fractured gneiss aquifer

    NASA Astrophysics Data System (ADS)

    Abdelaziz, Ramadan; Merkel, Broder J.

    2015-03-01

    Modeling solute transport in fractured aquifers is still challenging for scientists and engineers. Tracer tests are a powerful tool to investigate fractured aquifers with complex geometry and variable heterogeneity. This research focuses on obtaining hydraulic and transport parameters from an experimental site with several wells, where a tracer test with NaCl was performed under natural gradient conditions. The observed tracer concentrations were used to calibrate a conservative solute transport model by inverse modeling based on UCODE2013, MODFLOW, and MT3DMS. In addition, several statistics are employed for sensitivity analysis. The sensitivity analysis results indicate that hydraulic conductivity and immobile porosity play an important role in the late arrival of the breakthrough curve. The results showed that the calibrated model fits the observed data set well.

  9. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-01

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  10. Control of a mechanical aeration process via topological sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Abdelwahed, M.; Hassine, M.; Masmoudi, M.

    2009-06-01

    The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.

  11. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  12. Global sensitivity analysis for DSMC simulations of hypersonic shocks

    NASA Astrophysics Data System (ADS)

    Strand, James S.; Goldstein, David B.

    2013-08-01

    Two global, Monte Carlo based sensitivity analyses were performed to determine which reaction rates most affect the results of Direct Simulation Monte Carlo (DSMC) simulations for a hypersonic shock in five-species air. The DSMC code was written and optimized with shock tube simulations in mind, and includes modifications to allow for the efficient simulation of a 1D hypersonic shock. The TCE model is used to convert Arrhenius-form reaction rate constants into reaction cross-sections, after modification to allow accurate modeling of reactions with arbitrarily large rates relative to the VHS collision rate. The square of the Pearson correlation coefficient was used as the measure for sensitivity in the first of the analyses, and the mutual information was used as the measure in the second. The quantity of interest (QoI) for these analyses was the NO density profile across a 1D shock at ~8000 m/s (M∞ ≈ 23). This vector QoI was broken into a set of scalar QoIs, each representing the density of NO at a specific point downstream of the shock, and sensitivities were calculated for each scalar QoI based on both measures of sensitivity. Profiles of sensitivity vs. location downstream of the shock were then integrated to determine an overall sensitivity for each reaction. A weighting function was used in the integration in order to emphasize sensitivities in the region of greatest thermal and chemical non-equilibrium. Both sensitivity analysis methods agree on the six reactions which most strongly affect the density of NO. These six reactions are the N2 dissociation reaction N2 + N ⇄ 3N, the O2 dissociation reaction O2 + O ⇄ 3O, the NO dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions N2 + O ⇄ NO + N and NO + O ⇄ O2 + N. This analysis lays the groundwork for the application of Bayesian statistical methods for the calibration of parameters relevant to modeling a hypersonic shock layer with the DSMC method.
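
    The two sensitivity measures used here are straightforward to compute from sampled input-output pairs; the sketch below uses synthetic stand-ins for the reaction rates and the NO-density quantity of interest, not DSMC output:

      # Squared Pearson correlation and mutual information as global
      # sensitivity measures over sampled inputs.
      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(3)
      n = 5000
      rates = rng.uniform(0.5, 2.0, size=(n, 3))   # stand-ins for reaction rates
      qoi = rates[:, 0] ** 2 + 0.1 * rates[:, 1] + rng.normal(0, 0.05, n)

      r2 = [np.corrcoef(rates[:, i], qoi)[0, 1] ** 2 for i in range(3)]
      mi = mutual_info_regression(rates, qoi, random_state=0)
      print(np.round(r2, 3), np.round(mi, 3))      # both rank input 0 first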

  13. Sensitivity Analysis of Inverse Methods in Eddy Current Pit Characterization

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Knopp, Jeremy S.

    2010-02-01

    A sensitivity analysis was performed for a pit characterization problem to quantify the impact of potential sources of variation on the performance of inverse methods. Certain data processing steps, including careful feature extraction, background clutter removal and compensation for variation in the scan step size through the tubing, were found to be critical to achieving good estimates of the pit depth and diameter. Variation in the modeled probe dimensions did not adversely affect performance.

  14. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, the sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing levels of complexity. We tested four sensitivity analysis methods: Regional Sensitivity Analysis, the Method of Morris, a density-based method (PAWN) and a variance-based method (Sobol). The convergence and variability of the sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of the sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistic.
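
    The bootstrapping step can be sketched on a toy problem: resample the input-output sample with replacement, recompute a sensitivity index, and inspect the spread. The model and the correlation-based index below are illustrative placeholders, not the study's methods:

      # Bootstrap confidence intervals for a correlation-based sensitivity
      # index; wide intervals signal that the sample size is too small for
      # stable rankings.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 300
      x = rng.uniform(0, 1, size=(n, 2))
      y = 3 * x[:, 0] + np.sin(6 * x[:, 1]) + rng.normal(0, 0.3, n)

      def index(xb, yb):
          return np.array([np.corrcoef(xb[:, i], yb)[0, 1] ** 2 for i in range(2)])

      boot = np.array([index(x[idx], y[idx])
                       for idx in rng.integers(0, n, size=(1000, n))])
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      print(np.round(lo, 2), np.round(hi, 2))   # wide => increase the sample size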

  15. Advances in Analysis of Longitudinal Data

    PubMed Central

    Gibbons, Robert D.; Hedeker, Donald; DuToit, Stephen

    2010-01-01

    In this review, we explore recent developments in the area of linear and nonlinear generalized mixed-effects regression models and various alternatives, including generalized estimating equations for analysis of longitudinal data. Methods are described for continuous and normally distributed as well as categorical (binary, ordinal, nominal) and count (Poisson) variables. Extensions of the model to three and four levels of clustering, multivariate outcomes, and incorporation of design weights are also described. Linear and nonlinear models are illustrated using an example involving a study of the relationship between mood and smoking. PMID:20192796

  16. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  17. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of the appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest and the number of design points at which the approximation is sought.
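
    For the non-hermitian algebraic eigenvalue problem, the standard first-order eigenvalue sensitivity uses the left and right eigenvectors, dλ/dp = yᴴ(∂A/∂p)x / (yᴴx). A generic numerical check on random matrices (an illustration, not the paper's application):

      # First-order eigenvalue sensitivity of a non-hermitian matrix,
      # d(lambda) = y^H (dA) x / (y^H x), checked against finite differences.
      import numpy as np
      from scipy.linalg import eig

      rng = np.random.default_rng(5)
      A = rng.normal(size=(5, 5))
      dA = rng.normal(size=(5, 5))               # perturbation direction dA/dp

      w, vl, vr = eig(A, left=True, right=True)
      i = np.argmax(w.real)                      # follow one eigenvalue
      y, x = vl[:, i], vr[:, i]                  # left and right eigenvectors
      dlam = (y.conj() @ dA @ x) / (y.conj() @ x)

      eps = 1e-7
      w2 = eig(A + eps * dA, right=False)        # eigenvalues only
      j = np.argmin(np.abs(w2 - w[i]))           # match the perturbed eigenvalue
      print(dlam, (w2[j] - w[i]) / eps)          # the two should agree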

  18. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy

    PubMed Central

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data that are missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989

  19. Advanced Orion Optimized Laser System Analysis

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Contractor shall perform a complete analysis of the potential of the solid state laser in the very long pulse mode (100 ns pulse width, 10-30 Hz rep-rate) and in the very short pulse mode (100 ps pulse width, 10-30 Hz rep-rate), concentrating on the operation of the device in the 'hot-rod' mode, where no active cooling of the laser is attempted. Contractor's calculations shall be made of the phase aberrations which develop during the repped-pulse train, and the results shall feed into the adaptive optics analyses. The contractor shall devise solutions to work around ORION track issues. A final report shall be furnished to the MSFC COTR including all calculations and analysis of estimates of bulk phase and intensity aberration distribution in the laser output beam as a function of time during the repped-pulse train for both wave forms (high-energy/long-pulse, as well as low-energy/short-pulse). Recommendations shall be made for mitigating the aberrations by laser re-design and/or changes in operating parameters of optical pump sources and/or designs.

  20. Probabilistic constrained load flow based on sensitivity analysis

    SciTech Connect

    Karakatsanis, T.S.; Hatziargyriou, N.D. )

    1994-11-01

    This paper presents a method for network constrained setting of control variables based on probabilistic load flow analysis. The method determines operating constraint violations for a whole planning period together with the probability of each violation. An iterative algorithm is subsequently employed providing adjustments of the control variables based on sensitivity analysis of the constrained variables with respect to the control variables. The method is applied to the IEEE 14 busbar system and to a realistic model of the Hellenic Interconnected system indicating its suitability for short-term operational planning applications.

  1. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  2. Value analysis for advanced technology products

    NASA Astrophysics Data System (ADS)

    Soulliere, Mark

    2011-03-01

    Technology by itself can be wondrous, but buyers of technology factor in the price they have to pay along with performance in their decisions. As a result, the "best" technology may not always win in the marketplace when "good enough" can be had at a lower price. Technology vendors often set pricing by "cost plus margin," or by competitors' offerings. What if the product is new (or has yet to be invented)? Value pricing is a methodology to price products based on the value generated (e.g. money saved) by using one product vs. the next best technical alternative. Value analysis can often clarify what product attributes generate the most value. It can also assist in identifying market forces outside of the control of the technology vendor that also influence pricing. These principles are illustrated with examples.

  3. Advanced stability analysis for laminar flow control

    NASA Technical Reports Server (NTRS)

    Orszag, S. A.

    1981-01-01

    Five classes of problems are addressed: (1) the extension of the SALLY stability analysis code to the full eighth order compressible stability equations for three dimensional boundary layer; (2) a comparison of methods for prediction of transition using SALLY for incompressible flows; (3) a study of instability and transition in rotating disk flows in which the effects of Coriolis forces and streamline curvature are included; (4) a new linear three dimensional instability mechanism that predicts Reynolds numbers for transition to turbulence in planar shear flows in good agreement with experiment; and (5) a study of the stability of finite amplitude disturbances in axisymmetric pipe flow showing the stability of this flow to all nonlinear axisymmetric disturbances.

  4. Performance analysis of advanced spacecraft TPS

    NASA Technical Reports Server (NTRS)

    Pitts, William C.

    1987-01-01

    The analysis of the feasibility of using metal hydrides in the thermal protection system of cryogenic tanks in space was based on the heat capacity of ice as the phase change material (PCM). It was found that with ice the thermal protection system weight could be reduced by, at most, about 20 percent relative to an all LI-900 insulation. For this concept to be viable, a metal hydride with considerably more capacity than water would be required; none was found. Special metal hydrides have been developed for hydrogen fuel storage applications, and it may be possible to do so for the current application. Until this appears promising, further effort on this feasibility study does not seem warranted.

  5. Sensitivity analysis of fine sediment models using heterogeneous data

    NASA Astrophysics Data System (ADS)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transportation and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important for drawing up interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through construction, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important in improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and in determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast, built using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of the model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the ...

  6. Advanced analysis techniques for uranium assay

    SciTech Connect

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.; Beard, C. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. Active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based both on an analytical derivation and on empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  7. Multispectral laser imaging for advanced food analysis

    NASA Astrophysics Data System (ADS)

    Senni, L.; Burrascano, P.; Ricci, M.

    2016-07-01

    A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.
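
    The modified lock-in technique itself is not spelled out in the abstract, but the plain digital lock-in demodulation step it builds on can be sketched in a few lines. All signal parameters below are synthetic and for illustration only.

```python
import numpy as np

# Minimal digital lock-in demodulation for one laser wavelength, assuming the
# source intensity is modulated at f_ref and the transmitted signal reaching
# the detector is strongly attenuated and buried in noise.

fs, f_ref, T = 50_000.0, 1_000.0, 0.5                     # sample rate, modulation, duration
t = np.arange(0, T, 1 / fs)
transmitted = 1e-3 * np.sin(2 * np.pi * f_ref * t + 0.3)  # weak transmitted signal
signal = transmitted + 0.01 * np.random.randn(t.size)     # detector noise added

# Multiply by quadrature references, then low-pass by simple averaging.
i_comp = np.mean(signal * np.sin(2 * np.pi * f_ref * t))
q_comp = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
amplitude = 2 * np.hypot(i_comp, q_comp)                  # recovered modulation depth
print(f"recovered amplitude ~ {amplitude:.2e} (true 1e-3)")
```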

  8. Advances in carbonate exploration and reservoir analysis

    USGS Publications Warehouse

    Garland, J.; Neilson, J.; Laubach, S.E.; Whidden, Katherine J.

    2012-01-01

    The development of innovative techniques and concepts, and the emergence of new plays in carbonate rocks are creating a resurgence of oil and gas discoveries worldwide. The maturity of a basin and the application of exploration concepts have a fundamental influence on exploration strategies. Exploration success often occurs in underexplored basins by applying existing established geological concepts. This approach is commonly undertaken when new basins ‘open up’ owing to previous political upheavals. The strategy of using new techniques in a proven mature area is particularly appropriate when dealing with unconventional resources (heavy oil, bitumen, stranded gas), while the application of new play concepts (such as lacustrine carbonates) to new areas (i.e. ultra-deep South Atlantic basins) epitomizes frontier exploration. Many low-matrix-porosity hydrocarbon reservoirs are productive because permeability is controlled by fractures and faults. Understanding basic fracture properties is critical in reducing geological risk and therefore reducing well costs and increasing well recovery. The advent of resource plays in carbonate rocks, and the long-standing recognition of naturally fractured carbonate reservoirs means that new fracture and fault analysis and prediction techniques and concepts are essential.

  9. Ultra sensitive magnetic sensors integrating the giant magnetoelectric effect with advanced microelectronics

    NASA Astrophysics Data System (ADS)

    Fang, Zhao

    consisting of magnetostrictive and piezoelectric components shows promise for making novel ultra-sensitive magnetic sensors capable of operating at room temperature. To achieve such a high sensitivity (~pT level), the sensors are realized through ME composite laminates, since piezo-sensors are among the most sensitive transducers while also being passive devices. To further improve the sensitivity and reduce the 1/f noise level, several approaches are used, such as the magnetic flux concentration effect, which is a function of the Metglas sheet aspect ratio, and resonance enhancement. Taking advantage of this effect, ME voltage coefficients of αME = 21.46 V/cm·Oe for Metglas 2605SA1/PVDF laminates and αME = 46.7 V/cm·Oe for Metglas 2605CO/PVDF laminates are obtained. The resonance response of Metglas/PZT laminates in FF (Free-Free), FC (Free-Clamped), and CC (Clamped-Clamped) modes is also investigated. αME = 301.6 V/cm·Oe and the corresponding SNR = 4×10^7 √Hz/Oe are achieved for the FC mode at resonance frequencies. In addition, testing setups were built to characterize the magnetic sensors, and LabVIEW code was developed to automate the measurements and consequently obtain accurate results. Two commonly used integration methods, the hybrid method and system-in-package (SIP), are then discussed, followed by an analysis of the intrinsic noise sources, including dielectric loss noise, which dominates, and magnetostrictive noise. A charge-mode readout circuit is made for the hybrid method and a voltage-mode readout circuit is made for the SIP method. Since the SNR determines the minimum signal a sensor can detect, the SNR of each configuration is discussed in detail. For the charge-mode circuit, by taking advantage of the multilayer PVDF configuration, SNR = 7.2×10^5 √Hz/Oe is achieved at non-resonance frequencies and SNR = 2×10^7 √Hz/Oe is achieved at resonance frequencies. For the voltage-mode circuit, a constant SNR = 3×10^3 √Hz/Oe

  10. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based on different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol

  11. 6D phase space electron beam analysis and machine sensitivity studies for ELI-NP GBS

    NASA Astrophysics Data System (ADS)

    Giribono, A.; Bacci, A.; Curatolo, C.; Drebot, I.; Palumbo, L.; Petrillo, V.; Rossi, A. R.; Serafini, L.; Vaccarezza, C.; Vannozzi, A.; Variola, A.

    2016-09-01

    The ELI-NP Gamma Beam Source (GBS) is now under construction in Magurele-Bucharest (RO). Here an advanced source of gamma photons with unprecedented specifications of brilliance (>10^21), monochromaticity (0.5%) and energy tunability (0.2-19.5 MeV) is being built, based on Inverse Compton Scattering in the head-on configuration between an electron beam of maximum energy 750 MeV and a high-quality, high-power ps laser beam. These requirements make the ELI-NP GBS an advanced and challenging gamma-ray source. The analysis and control of the electron beam dynamics, addressing the machine's sensitivity to possible jitters and misalignments, are presented. The effects on the beam quality are illustrated, providing the basis for the alignment procedure and jitter tolerances.

  12. Analysis of frequency characteristics and sensitivity of compliant mechanisms

    NASA Astrophysics Data System (ADS)

    Liu, Shanzeng; Dai, Jiansheng; Li, Aimin; Sun, Zhaopeng; Feng, Shizhe; Cao, Guohua

    2016-03-01

    Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of large-deformation compliant mechanisms are studied. Firstly, the pseudo-rigid-body model under static and kinetic conditions is modified to make it more suitable for the dynamic analysis of compliant mechanisms. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of the ordinary compliant four-bar mechanism are established using analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of the compliant mechanism are analyzed, taking the compliant parallel-guiding mechanism and the compliant bistable mechanism as examples. The simulation results show that the dynamic characteristics of compliant mechanisms are rather sensitive to the structural size, cross-section parameters, and material properties of the mechanisms. The results are of theoretical significance and practical value for the structural optimization of compliant mechanisms, the improvement of their dynamic properties and the expansion of their application range.

  13. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
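
    For readers unfamiliar with the SampEn statistic being tuned here, a plain-NumPy sketch of its computation (not the authors' DASim code) may help; the tolerance choice r = 0.2·std is a common convention, assumed here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy (SampEn) of a 1-D series: the negative log of the
    conditional probability that sequences matching for m points (within
    tolerance r, Chebyshev distance) also match for m + 1 points.
    Self-matches are excluded. Illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # assumed conventional tolerance
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))      # irregular: higher SampEn
print(sample_entropy(np.sin(np.arange(500) * 0.1)))  # regular: lower SampEn
```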

  14. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  15. Sensitivity-analysis techniques: self-teaching curriculum

    SciTech Connect

    Iman, R.L.; Conover, W.J.

    1982-06-01

    This self-teaching curriculum on sensitivity analysis techniques consists of three parts: (1) Use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) Use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide) SAND79-1472, January 1980); and (3) Application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis; SAND80-0644, NUREG/CR-1377, June 1980: Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC funded High Level Waste Methodology Development Program.

  16. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
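
    A hedged sketch of what a stiff-kinetics-plus-sensitivity run involves, using SciPy's LSODA (the same Livermore solver family LSENS builds on) rather than LSENS itself; Robertson's classic stiff problem stands in for a reaction model, and the sensitivity to the rate coefficient k1 is taken by central differences instead of the decoupled direct method.

```python
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y, k1, k2, k3):
    """Robertson's stiff chemical kinetics test problem (three species)."""
    y1, y2, y3 = y
    return [-k1 * y1 + k3 * y2 * y3,
            k1 * y1 - k2 * y2**2 - k3 * y2 * y3,
            k2 * y2**2]

def final_state(k1):
    sol = solve_ivp(robertson, (0.0, 1e4), [1.0, 0.0, 0.0],
                    method="LSODA", rtol=1e-8, atol=1e-10,
                    args=(k1, 3e7, 1e4))
    return sol.y[:, -1]

# Central-difference sensitivity of the final state to the rate coefficient k1.
k1, h = 0.04, 1e-6
dy_dk1 = (final_state(k1 + h) - final_state(k1 - h)) / (2 * h)
print("d y(t_final) / d k1 ~", dy_dk1)
```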

  17. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGESBeta

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  18. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is taking a large leap forward, promising the delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set, illustrating this effect.

  19. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 1993-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  20. Analysis of interior noise ground and flight test data for advanced turboprop aircraft applications

    NASA Technical Reports Server (NTRS)

    Simpson, M. A.; Tran, B. N.

    1991-01-01

    Interior noise ground tests conducted on a DC-9 aircraft test section are described. The objectives were to study ground test and analysis techniques for evaluating the effectiveness of interior noise control treatments for advanced turboprop aircraft, and to study the sensitivity of the ground test results to changes in various test conditions. Noise and vibration measurements were conducted under simulated advanced turboprop excitation, for two interior noise control treatment configurations. These ground measurement results were compared with results of earlier UHB (Ultra High Bypass) Demonstrator flight tests with comparable interior treatment configurations. The Demonstrator is an MD-80 test aircraft with the left JT8D engine replaced with a prototype UHB advanced turboprop engine.

  1. Treatment of body forces in boundary element design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Saigal, Sunil; Kane, James H.; Aithal, R.; Cheng, Jizu

    1989-01-01

    The inclusion of body forces has received a good deal of attention in boundary element research. The consideration of such forces is essential in the design of high-performance components such as fan and turbine disks in a gas turbine engine. Due to their critical performance requirements, optimal shapes are often desired for these components. The boundary element method (BEM) offers the possibility of being an efficient method for iterative analyses such as shape optimization. The implicit differentiation of the boundary integral equations is performed to obtain the sensitivity equations. The body forces are accounted for either by particular integrals for uniform body forces or by a surface integration for non-uniform body forces. The corresponding sensitivity equations for both cases are presented. The validity of the present formulations is established through close agreement with exact analytical results.

  2. Sensitivity analysis for nonrandom dropout: a local influence approach.

    PubMed

    Verbeke, G; Molenberghs, G; Thijs, H; Lesaffre, E; Kenward, M G

    2001-03-01

    Diggle and Kenward (1994, Applied Statistics 43, 49-93) proposed a selection model for continuous longitudinal data subject to nonrandom dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions on which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. This paper presents a formal and flexible approach to such a sensitivity assessment based on local influence (Cook, 1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The influence of perturbing a missing-at-random dropout model in the direction of nonrandom dropout is explored. The method is applied to data from a randomized experiment on the inhibition of testosterone production in rats. PMID:11252620

  3. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C; Tommy Edwards, T; Bill Wilmarth, B

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model's predictions over intervals of values for these influential factors was conducted. These intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. Thus, these factor settings are considered to be those that yield the 'worst-case' scenario for the TPB degradation rate for Tank 50 aggregation, and as such they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.

  4. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.

  5. Multiplexed analysis of chromosome conformation at vastly improved sensitivity

    PubMed Central

    Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.

    2015-01-01

    Since methods for analysing chromosome conformation in mammalian cells are either low resolution or low throughput, and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method, producing a new approach called next-generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced on as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209

  6. Sensitive LC MS quantitative analysis of carbohydrates by Cs+ attachment.

    PubMed

    Rogatsky, Eduard; Jayatillake, Harsha; Goswami, Gayotri; Tomuta, Vlad; Stein, Daniel

    2005-11-01

    The development of a sensitive assay for the quantitative analysis of carbohydrates from human plasma using LC/MS/MS is described in this paper. After sample preparation, carbohydrates were cationized by Cs(+) after their separation by normal phase liquid chromatography on an amino based column. Cesium is capable of forming a quasi-molecular ion [M + Cs](+) with neutral carbohydrate molecules in the positive ion mode of electrospray ionization mass spectrometry. The mass spectrometer was operated in multiple reaction monitoring mode, and transitions [M + 133] --> 133 were monitored (M, carbohydrate molecular weight). The new method is robust, highly sensitive, rapid, and does not require postcolumn addition or derivatization. It is useful in clinical research for measurement of carbohydrate molecules by isotope dilution assay. PMID:16182559
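
    The quasi-molecular-ion arithmetic behind the monitored transitions is easy to verify; the snippet below assumes the monoisotopic mass of 133Cs (about 132.905 u) and uses illustrative carbohydrate masses.

```python
# Quick check of the [M + Cs]+ quasi-molecular ions and the MRM transitions
# [M + 133] --> 133 described above. Neutral monoisotopic masses are
# illustrative examples, not values from the paper.

CS = 132.905  # monoisotopic mass of 133Cs, u

for name, M in [("glucose", 180.063), ("sucrose", 342.116)]:
    precursor = M + CS
    print(f"{name}: [M+Cs]+ = {precursor:.3f} -> product m/z {CS:.1f}")
```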

  7. Sensitivity Analysis of Hardwired Parameters in GALE Codes

    SciTech Connect

    Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.

    2008-12-01

    The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.

  8. Sensitivity analysis for dynamic systems with time-lags

    NASA Astrophysics Data System (ADS)

    Rihan, Fathalla A.

    2003-02-01

    Many problems in bioscience for which observations are reported in the literature can be modelled by suitable functional differential equations incorporating time-lags (other terminology: delays) or memory effects, parameterized by scientifically meaningful constant parameters p or/and variable parameters (for example, control functions) u(t). It is often desirable to have information about the effect on the solution of the dynamic system of perturbing the initial data, control functions, time-lags and other parameters appearing in the model. The main purpose of this paper is to derive a general theory for sensitivity analysis of mathematical models that contain time-lags. In this paper, we use adjoint equations and direct methods to estimate the sensitivity functions when the parameters appearing in the model are not only constants but also variables of time. To illustrate the results, the methodology is applied numerically to an example of a delay differential model.
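
    As a concrete illustration of estimating a sensitivity function for a time-lag model by the direct (finite-difference) route mentioned above, the sketch below integrates the delayed logistic equation y'(t) = r·y(t)·(1 - y(t - tau)) with fixed-step Euler and perturbs the time-lag; the model, step sizes and parameter values are illustrative assumptions, and the adjoint route is not shown.

```python
import numpy as np

def solve_delay_logistic(r=1.4, tau=1.0, y0=0.5, T=20.0, dt=1e-3):
    """Fixed-step Euler for y'(t) = r*y(t)*(1 - y(t - tau)), with constant
    pre-history y(t) = y0 for t <= 0, kept in a simple history buffer."""
    n_lag = int(round(tau / dt))
    n = int(round(T / dt))
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        y_lag = y0 if k < n_lag else y[k - n_lag]   # delayed value
        y[k + 1] = y[k] + dt * r * y[k] * (1.0 - y_lag)
    return y

# Central-difference sensitivity of the trajectory to the time-lag tau.
tau, h = 1.0, 0.05
s_tau = (solve_delay_logistic(tau=tau + h) - solve_delay_logistic(tau=tau - h)) / (2 * h)
print("sensitivity of y(T) to the time-lag:", s_tau[-1])
```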

  9. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
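
    A toy illustration of the incremental (delta or correction) form: rather than solving A·x = b directly, one repeatedly solves M·dx = b - A·x with an easily factored approximate operator M and updates x. The matrices below are random stand-ins, not a flow solver, and M is simply the tridiagonal part of A, loosely mimicking a spatially-split approximate factorization.

```python
import numpy as np

# Delta (correction) form: M dx = b - A x, x <- x + dx, iterate to convergence.
rng = np.random.default_rng(1)
n = 50
A = (np.diag(np.full(n, 4.0)) +
     np.diag(rng.uniform(-1, 1, n - 1), 1) +
     np.diag(rng.uniform(-1, 1, n - 1), -1) +
     0.05 * rng.standard_normal((n, n)))      # diagonally dominant stand-in
b = rng.standard_normal(n)

M = np.triu(np.tril(A, 1), -1)                # tridiagonal approximation to A
x = np.zeros(n)
for it in range(50):
    residual = b - A @ x                      # driving term of the delta form
    if np.linalg.norm(residual) < 1e-12:
        break
    x += np.linalg.solve(M, residual)         # solve M dx = residual, update
print(f"iterations: {it}, final residual: {np.linalg.norm(b - A @ x):.2e}")
```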

  10. METHODS ADVANCEMENT FOR MILK ANALYSIS: THE MAMA STUDY

    EPA Science Inventory

    The Methods Advancement for Milk Analysis (MAMA) study was designed by US EPA and CDC investigators to provide data to support the technological and study design needs of the proposed National Children's Study (NCS). The NCS is a multi-Agency-sponsored study, authorized under the…

  11. Polybrominated Diphenyl Ethers in Dryer Lint: An Advanced Analysis Laboratory

    ERIC Educational Resources Information Center

    Thompson, Robert Q.

    2008-01-01

    An advanced analytical chemistry laboratory experiment is described that involves environmental analysis and gas chromatography-mass spectrometry. Students analyze lint from clothes dryers for traces of flame retardant chemicals, polybrominated diphenylethers (PBDEs), compounds receiving much attention recently. In a typical experiment, ng/g…

  12. A Meta-Analysis of Advanced Organizer Studies.

    ERIC Educational Resources Information Center

    Stone, Carol Leth

    1983-01-01

    Twenty-nine reports yielding 112 studies were analyzed with Glass's meta-analysis technique, and results were compared with predictions from Ausubel's model of assimilative learning. Overall, advance organizers were shown to be associated with increased learning and retention of material to be learned. (Author)

  13. Advanced GIS Exercise: Predicting Rainfall Erosivity Index Using Regression Analysis

    ERIC Educational Resources Information Center

    Post, Christopher J.; Goddard, Megan A.; Mikhailova, Elena A.; Hall, Steven T.

    2006-01-01

    Graduate students from a variety of agricultural and natural resource fields are incorporating geographic information systems (GIS) analysis into their graduate research, creating a need for teaching methodologies that help students understand advanced GIS topics for use in their own research. Graduate-level GIS exercises help students understand…

  14. NASTRAN documentation for flutter analysis of advanced turbopropellers

    NASA Technical Reports Server (NTRS)

    Elchuri, V.; Gallo, A. M.; Skalski, S. C.

    1982-01-01

    An existing capability developed to conduct modal flutter analysis of tuned bladed-shrouded discs was modified to facilitate investigation of the subsonic unstalled flutter characteristics of advanced turbopropellers. The modifications pertain to the inclusion of oscillatory modal aerodynamic loads of blades with large (backward and forward) varying sweep.

  15. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    SciTech Connect

    M. Wasiolek

    2004-10-15

    This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.

  16. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Advanced stress analysis methods applicable to turbine engine structures are investigated. The construction of special elements containing traction-free circular boundaries is investigated. New versions of the mixed variational principle and of hybrid stress elements are formulated. A method is established for the suppression of kinematic deformation modes. SemiLoof plate and shell elements are constructed by the assumed-stress hybrid method. An elastic-plastic analysis is conducted by viscoplasticity theory using the mechanical subelement model.

  17. Immunoassay Methods and their Applications in Pharmaceutical Analysis: Basic Methodology and Recent Advances

    PubMed Central

    Darwish, Ibrahim A.

    2006-01-01

    Immunoassays are bioanalytical methods in which the quantitation of the analyte depends on the reaction of an antigen (analyte) and an antibody. Immunoassays have been widely used in many important areas of pharmaceutical analysis such as diagnosis of diseases, therapeutic drug monitoring, and clinical pharmacokinetic and bioequivalence studies in drug discovery and the pharmaceutical industry. The importance and widespread use of immunoassay methods in pharmaceutical analysis are attributed to their inherent specificity, high throughput, and high sensitivity for the analysis of a wide range of analytes in biological samples. Recently, marked improvements were achieved in the field of immunoassay development for the purposes of pharmaceutical analysis. These improvements involved the preparation of unique immunoanalytical reagents, the analysis of new categories of compounds, methodology, and instrumentation. The basic methodologies and recent advances in immunoassay methods applied in different fields of pharmaceutical analysis are reviewed. PMID:23674985

  18. Critical analysis on degradation mechanism of dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Mohamad Shahimin, Mukhzeer; Suhaimi, Suriati; Abd Wahid, Mohd Halim; Retnasamy, Vithyacharan; Ahmad Hambali, Nor Azura Malini; Reshak, Ali Hussain

    2015-09-01

    This paper presents a précis of degradation mechanisms in dye-sensitized solar cells (DSSCs). The review indicates progress in the understanding of degradation mechanisms, in particular the large improvement in the analysis of the materials used in DSSCs. The paper discusses the stability issues of the dye, advances in photoelectrode film lifetime, changes in the electrolyte components, and degradation analysis of the counter electrode. The photoelectrochemical parameters were evaluated in view of the possible degradation routes via open-circuit voltage (Voc), short-circuit current (Isc), fill factor (FF) and overall conversion efficiency (η) from the current-voltage curve. This analysis covers several types of materials that have paved the way for better-performing solar cells and directly influenced the stability and reliability of DSSCs. New research trends, together with previous research, are highlighted to examine the key challenges faced in developing the ultimate DSSCs.

  19. Advanced assessment of the physicochemical characteristics of Remicade® and Inflectra® by sensitive LC/MS techniques

    PubMed Central

    Fang, Jing; Doneanu, Catalin; Alley, William R.; Yu, Ying Qing; Beck, Alain; Chen, Weibin

    2016-01-01

    In this study, we demonstrate the utility of ultra-performance liquid chromatography coupled to mass spectrometry (MS) and ion-mobility spectrometry (IMS) to characterize and compare reference and biosimilar monoclonal antibodies (mAbs) at an advanced level. Specifically, we focus on infliximab and compared the glycan profiles, higher order structures, and their host cell proteins (HCPs) of the reference and biosimilar products, which have the brand names Remicade® and Inflectra®, respectively. Overall, the biosimilar attributes mirrored those of the reference product to a very high degree. The glycan profiling analysis demonstrated a high degree of similarity, especially among the higher abundance glycans. Some differences were observed for the lower abundance glycans. Glycans terminated with N-glycolylneuraminic acid were generally observed to be at higher normalized abundance levels on the biosimilar mAb, while those possessing α-linked galactose pairs were more often expressed at higher levels on the reference molecule. Hydrogen deuterium exchange (HDX) analyses further confirmed the higher-order similarity of the 2 molecules. These results demonstrated only very slight differences between the 2 products, which, interestingly, seemed to be in the area where the N-linked glycans reside. The HCP analysis by a 2D-UPLC IMS-MS approach revealed that the same 2 HCPs were present in both mAb samples. Our ability to perform these types of analyses and acquire insightful data for biosimilarity assessment is based upon our highly sensitive UPLC MS and IMS methods. PMID:27260215

  20. Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej

    2010-06-01

    Modeling of blood flow with respect to rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. A sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs that control the blood flow to a significant extent. The paper is part of a broader effort on the identification of parameters controlling the clotting process.
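
    A minimal sketch of the Morris design (elementary effects) named above; the Casson model and ADINA-CFD outputs are not reproduced, so a toy three-input function stands in to make the one-at-a-time trajectory bookkeeping visible.

```python
import numpy as np

def f(x):
    """Toy stand-in for the flow-simulation output, three inputs in [0, 1]."""
    return x[0] + 2.0 * x[1] ** 2 + 0.1 * x[0] * x[2]

def morris_effects(f, k=3, r=20, levels=4, seed=0):
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))              # standard Morris step
    base_grid = np.linspace(0.0, 1.0 - delta, levels // 2)
    effects = np.empty((r, k))
    for t in range(r):
        x = rng.choice(base_grid, size=k)              # random grid base point
        y = f(x)
        for i in rng.permutation(k):                   # move one factor at a time
            x[i] += delta
            y_new = f(x)
            effects[t, i] = (y_new - y) / delta        # elementary effect
            y = y_new
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

mu_star, sigma = morris_effects(f)
print("mu* (importance):", mu_star)
print("sigma (nonlinearity/interaction):", sigma)
```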

  1. Sensitivity analysis of discrete structural systems: A survey

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.

    1984-01-01

    Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
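
    The direct and adjoint routes surveyed here can be shown side by side on a static system K(p)·u = f with scalar response g = c^T u: the direct method solves for du/dp, while the adjoint method solves one extra system for the adjoint vector. The matrices below are small random stand-ins for a stiffness matrix.

```python
import numpy as np

# Direct:  du/dp = -K^{-1} (dK/dp) u,  then dg/dp = c^T du/dp.
# Adjoint: solve K^T lam = c once,     then dg/dp = -lam^T (dK/dp) u.

rng = np.random.default_rng(2)
n = 6
K0 = np.diag(rng.uniform(2, 4, n))        # baseline "stiffness"
K1 = 0.1 * rng.standard_normal((n, n))    # dK/dp (K depends linearly on p)
f = rng.standard_normal(n)
c = rng.standard_normal(n)

p = 0.7
K = K0 + p * K1
u = np.linalg.solve(K, f)                 # static response

du_dp = np.linalg.solve(K, -(K1 @ u))     # direct method
lam = np.linalg.solve(K.T, c)             # adjoint method
print(c @ du_dp, -lam @ (K1 @ u))         # the two routes agree
```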

  2. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    SciTech Connect

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  3. Path-sensitive analysis for reducing rollback overheads

    SciTech Connect

    O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong

    2014-07-22

    A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in that set.

  4. A comparison of two sampling methods for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tarantola, Stefano; Becker, William; Zeitz, Dirk

    2012-05-01

    We compare the convergence properties of two different quasi-random sampling designs - Sobol' quasi-Monte Carlo and Latin supercube sampling - in variance-based global sensitivity analysis. We use the non-monotonic V-function of Sobol' as the base case study, and compare the performance of both sampling strategies at increasing sample size and dimensionality against analytical values. The results indicate that in almost all cases investigated here, the Sobol' design performs better. This, coupled with the fact that effective Latin supercube sampling requires a priori knowledge of the interaction properties of the function, leads us to recommend Sobol' sampling in most practical cases.
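
    For orientation, the sketch below estimates first-order Sobol' indices on the Sobol' g-function (the family the V-function belongs to) with a Saltelli-style pick-freeze estimator; plain pseudo-random sampling is used here for brevity, whereas the paper compares quasi-random Sobol' and Latin supercube designs.

```python
import numpy as np

a = np.array([0.0, 1.0, 9.0])                      # low a_i => important factor

def g(X):
    """Sobol' g-function: prod_i (|4 x_i - 2| + a_i) / (1 + a_i)."""
    return np.prod((np.abs(4 * X - 2) + a) / (1 + a), axis=1)

rng = np.random.default_rng(3)
N, k = 100_000, len(a)
A, B = rng.random((N, k)), rng.random((N, k))      # two independent matrices
fA, fB = g(A), g(B)
V = np.var(np.concatenate([fA, fB]))               # total variance
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # freeze all but factor i
    Si = np.mean(fB * (g(ABi) - fA)) / V           # pick-freeze estimator
    Vi = (1 / 3) / (1 + a[i]) ** 2                 # analytic partial variance
    exact = Vi / (np.prod(1 + (1 / 3) / (1 + a) ** 2) - 1)
    print(f"S{i+1}: MC ~ {Si:.3f}, exact {exact:.3f}")
```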

  5. Isolation and analysis of ginseng: advances and challenges

    PubMed Central

    Wang, Chong-Zhi

    2011-01-01

    Ginseng occupies a prominent position in the list of best-selling natural products in the world. Because of its complex constituents, multidisciplinary techniques are needed to validate the analytical methods that support ginseng’s use worldwide. In the past decade, rapid development of technology has advanced many aspects of ginseng research. The aim of this review is to illustrate the recent advances in the isolation and analysis of ginseng, and to highlight their new applications and challenges. Emphasis is placed on recent trends and emerging techniques. The current article reviews the literature between January 2000 and September 2010. PMID:21258738

  6. Sensitivity analysis of state-specific multireference perturbation theory

    NASA Astrophysics Data System (ADS)

    Szabados, Ágnes

    2011-05-01

    State-specific multireference perturbation theory (SS-MRPT) developed by Mukherjee et al. [Int. J. Mol. Sci. 3, 733 (2002)] is examined focusing on the dependence of the perturbed energy on the initial model space coefficients. It has been observed earlier, that non-physical kinks may appear on the potential energy surface obtained by SS-MRPT while related coupled-cluster methods may face convergence difficulties. Though exclusion or damping of the division by small coefficients may alleviate the problem, it is demonstrated here that the effect does not originate in an ill-defined division. It is shown that non-negligible model space coefficients may also be linked with the problem. Sensitivity analysis is suggested as a tool for detecting the coefficient responsible. By monitoring the singular values of sensitivity matrices, orders of magnitude increase is found in the largest value, in the vicinity of the problematic geometry point on the potential energy surface. The drastic increase of coefficient sensitivities is found to be linked with a degeneracy of the target root of the effective Hamiltonian. The nature of the one-electron orbitals has a profound influence on the picture: a rotation among active orbitals may screen or worsen the effect.

  7. Issues affecting advanced passive light-water reactor safety analysis

    SciTech Connect

    Beelman, R.J.; Fletcher, C.D.; Modro, S.M.

    1992-08-01

    Next generation commercial reactor designs emphasize enhanced safety through improved safety system reliability and performance by means of system simplification and reliance on immutable natural forces for system operation. Simulating the performance of these safety systems will be central to analytical safety evaluation of advanced passive reactor designs. Yet the characteristically small driving forces of these safety systems pose challenging computational problems to current thermal-hydraulic systems analysis codes. Additionally, the safety systems generally interact closely with one another, requiring accurate, integrated simulation of the nuclear steam supply system, engineered safeguards and containment. Furthermore, numerical safety analysis of these advanced passive reactor designs will necessitate simulation of long-duration, slowly-developing transients compared with current reactor designs. The composite effects of small computational inaccuracies on induced system interactions and perturbations over long periods may well lead to predicted results which are significantly different than would otherwise be expected or might actually occur. Comparisons between the engineered safety features of competing US advanced light water reactor designs and analogous present day reactor designs are examined relative to the adequacy of existing thermal-hydraulic safety codes in predicting the mechanisms of passive safety. Areas where existing codes might require modification, extension or assessment relative to passive safety designs are identified. Conclusions concerning the applicability of these codes to advanced passive light water reactor safety analysis are presented.

  8. Issues affecting advanced passive light-water reactor safety analysis

    SciTech Connect

    Beelman, R.J.; Fletcher, C.D.; Modro, S.M.

    1992-01-01

    Next generation commercial reactor designs emphasize enhanced safety through improved safety system reliability and performance by means of system simplification and reliance on immutable natural forces for system operation. Simulating the performance of these safety systems will be central to analytical safety evaluation of advanced passive reactor designs. Yet the characteristically small driving forces of these safety systems pose challenging computational problems to current thermal-hydraulic systems analysis codes. Additionally, the safety systems generally interact closely with one another, requiring accurate, integrated simulation of the nuclear steam supply system, engineered safeguards and containment. Furthermore, numerical safety analysis of these advanced passive reactor designs will necessitate simulation of long-duration, slowly-developing transients compared with current reactor designs. The composite effects of small computational inaccuracies on induced system interactions and perturbations over long periods may well lead to predicted results which are significantly different than would otherwise be expected or might actually occur. Comparisons between the engineered safety features of competing US advanced light water reactor designs and analogous present day reactor designs are examined relative to the adequacy of existing thermal-hydraulic safety codes in predicting the mechanisms of passive safety. Areas where existing codes might require modification, extension or assessment relative to passive safety designs are identified. Conclusions concerning the applicability of these codes to advanced passive light water reactor safety analysis are presented.

  9. Sensitivity Analysis of a process based erosion model using FAST

    NASA Astrophysics Data System (ADS)

    Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin

    2015-04-01

    deposition are related to overland flow velocity using the equation of Engelund and Hansen and the sinking velocity of grain sizes, respectively. The sensitivity analysis was performed based on virtual hillslopes similar to those in the Weiherbach catchment. We applied the FAST-method (Fourier Amplitude Sensitivity Test), which provides a global sensitivity analysis with comparably few model runs. We varied model parameters in predefined and, for the Weiherbach catchment, physically meaningful parameter ranges. Those parameters included rainfall intensity, surface roughness, hillslope geometry, land use, erosion resistance, and soil hydraulic parameters. The results of this study allow guiding further modelling efforts in the Weiherbach catchment with respect to data collection and model modification.
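
    A compact sketch of classical FAST, the method named above: each input is driven along a search curve with its own frequency, and the variance contributed by a factor is read off the Fourier spectrum at that frequency's harmonics. The three-input model is a toy stand-in for the erosion model, and the frequency set is an illustrative choice.

```python
import numpy as np

def model(X):
    """Toy stand-in for the erosion model, three inputs in [0, 1]."""
    return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]

omega = np.array([11, 35, 73])          # interference-free frequency choice
M = 4                                   # harmonics summed per factor
N = 2 * M * omega.max() + 1             # Nyquist-type sample-size rule
s = np.pi * (2 * np.arange(N) + 1 - N) / N                 # s in (-pi, pi)
X = 0.5 + np.arcsin(np.sin(np.outer(s, omega))) / np.pi    # search curve
y = model(X)

A = lambda j: 2 * np.mean(y * np.cos(j * s))               # Fourier cosine coeff.
B = lambda j: 2 * np.mean(y * np.sin(j * s))               # Fourier sine coeff.
D_total = np.var(y)
for i, w in enumerate(omega):
    D_i = 0.5 * sum(A(p * w) ** 2 + B(p * w) ** 2 for p in range(1, M + 1))
    print(f"factor {i+1}: first-order FAST index ~ {D_i / D_total:.3f}")
```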

  10. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5×5 arc-minute resolution, and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
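
    The sensitivity index defined above (relative change of VWC over relative change of the input) is a one-at-a-time elasticity, sketched below with a hypothetical stand-in for the grid-cell soil-water-balance model; vwc() and its parameters are invented for illustration.

```python
# One-at-a-time sensitivity index: (dVWC / VWC) / (dp / p).
# vwc() is a hypothetical toy proxy, NOT the paper's soil-water-balance model.

def vwc(planting_day, growing_len, yield_t_ha):
    # toy proxy: evapotranspiration grows with season length; VWC = ET / yield
    et = 3.5 * growing_len * (1 + 0.001 * (planting_day - 120))
    return et / yield_t_ha

def sensitivity_index(f, params, name, rel_step=0.01):
    base = f(**params)
    bumped = dict(params, **{name: params[name] * (1 + rel_step)})
    return ((f(**bumped) - base) / base) / rel_step

params = dict(planting_day=120, growing_len=150, yield_t_ha=3.0)
for p in params:
    print(p, round(sensitivity_index(vwc, params, p), 3))
```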

  11. Advanced Post-Irradiation Examination Capabilities Alternatives Analysis Report

    SciTech Connect

    Jeff Bryan; Bill Landman; Porter Hill

    2012-12-01

    An alternatives analysis was performed for the Advanced Post-Irradiation Examination Capabilities (APIEC) project in accordance with U.S. Department of Energy (DOE) Order DOE O 413.3B, “Program and Project Management for the Acquisition of Capital Assets”. The Alternatives Analysis considered six major alternatives: (1) No Action; (2) Modify Existing DOE Facilities, with capabilities distributed among multiple locations; (3) Modify Existing DOE Facilities, with capabilities consolidated at a few locations; (4) Construct New Facility; (5) Commercial Partnership; (6) International Partnerships. Based on the alternatives analysis documented herein, it is recommended to DOE that the advanced post-irradiation examination capabilities be provided by a new facility constructed at the Materials and Fuels Complex at the Idaho National Laboratory.

  12. Analysis of life cycle costs for electric vans with advanced battery systems

    SciTech Connect

    Marr, W.W.; Walsh, W.J.; Miller, J.F.

    1988-11-01

    The performance of advanced Zn/Br2, LiAl/FeS, Na/S, Ni/Fe, and Fe/Air batteries in electric vans was compared to that of tubular lead-acid technology. The MARVEL computer analysis system evaluated these batteries for the G-Van and IDSEP vehicles over two driving schedules. Each of the advanced batteries exhibited the potential for major improvements in both range and life cycle cost compared with tubular lead-acid. A sensitivity analysis revealed specific energy, battery initial cost, and cycle life to be the dominant factors in reducing life cycle cost for the case of vans powered by tubular lead-acid batteries. 5 refs., 8 figs., 2 tabs.

  13. Analysis of life cycle costs for electric vans with advanced battery systems

    SciTech Connect

    Marr, W.W.; Walsh, W.J.; Miller, J.F.

    1989-01-01

    The performance of advanced Zn/Br2, LiAl/FeS, Na/S, Ni/Fe, and Fe/Air batteries in electric vans was compared to that of tubular lead-acid technology. The MARVEL computer analysis system evaluated these batteries for the G-Van and IDSEP vehicles over two driving schedules. Each of the advanced batteries exhibited the potential for major improvements in both range and life cycle cost compared with tubular lead-acid. A sensitivity analysis reveals specific energy, battery initial cost, and cycle life to be the dominant factors in reducing life cycle cost for the case of vans powered by tubular lead-acid batteries.

  14. "ATLAS" Advanced Technology Life-cycle Analysis System

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.; Mankins, John C.; ONeil, Daniel A.

    2004-01-01

    Making good decisions concerning research and development portfolios - and concerning the best systems concepts to pursue - as early as possible in the life cycle of advanced technologies is a key goal of R&D management. This goal depends upon the effective integration of information from a wide variety of sources, as well as focused, high-level analyses intended to inform such decisions. The presentation provides a summary of the Advanced Technology Life-cycle Analysis System (ATLAS) methodology and tool kit. ATLAS encompasses a wide range of methods and tools. A key foundation for ATLAS is the NASA-created Technology Readiness Level (TRL) system. The toolkit is largely spreadsheet based (as of August 2003). This product is being funded by the Human and Robotics Technology Program Office, Office of Exploration Systems, NASA Headquarters, Washington D.C., and is being integrated by Dan O'Neil of the Advanced Projects Office, NASA/MSFC, Huntsville, AL.

  14. Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1997-01-01

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis has been developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify intelligent information hidden in a measured signal that is often unidentifiable using conventional signal analysis methods. Currently, due to the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is being performed manually, which requires immense man-hours and extensive human interaction. To overcome this manual process, NASA implemented this program to develop an Advanced Nonlinear Signal Analysis Topographical Mapping System (ATMS) to provide automatic/unsupervised engine diagnostic capabilities. The ATMS will utilize a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL will perform automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract. In summary, the program objectives to design, develop, test and conduct performance evaluation for an automated engine diagnostic system have been successfully achieved. Software implementation of the entire ATMS system on MSFC's OISPS computer has been completed. The significance of the ATMS developed under this program is attributed to the fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can significantly save time and man-hours in performing engine test/flight data analysis and performance evaluation of large volumes of dynamic test data.
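
    The coherence analysis underpinning the automated anomaly detection can be pictured with a small stand-alone sketch: flag frequency bands where two measured channels are strongly coherent. The signals, sampling rate, and the 0.8 threshold below are made-up stand-ins, not the actual ATMS/ASAL pipeline.

        import numpy as np
        from scipy.signal import coherence

        fs = 2000.0                                       # sample rate, Hz
        t = np.arange(0, 4.0, 1 / fs)
        shared = np.sin(2 * np.pi * 180 * t)              # common 180 Hz component
        x = shared + 0.5 * np.random.randn(t.size)        # sensor channel 1
        y = 0.8 * shared + 0.5 * np.random.randn(t.size)  # sensor channel 2

        f, cxy = coherence(x, y, fs=fs, nperseg=512)
        for fi, ci in zip(f, cxy):
            if ci > 0.8:                                  # simple relational flag
                print(f"coherent band near {fi:.1f} Hz (coherence {ci:.2f})")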

  15. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  16. Comparative Analysis of State Fish Consumption Advisories Targeting Sensitive Populations

    PubMed Central

    Scherer, Alison C.; Tsuchiya, Ami; Younglove, Lisa R.; Burbacher, Thomas M.; Faustman, Elaine M.

    2008-01-01

    Objective: Fish consumption advisories are issued to warn the public of possible toxicological threats from consuming certain fish species. Although developing fetuses and children are particularly susceptible to toxicants in fish, fish also contain valuable nutrients. Hence, formulating advice for sensitive populations poses challenges. We conducted a comparative analysis of advisory Web sites issued by states to assess health messages that sensitive populations might access. Data sources: We evaluated state advisories accessed via the National Listing of Fish Advisories issued by the U.S. Environmental Protection Agency. Data extraction: We created criteria to evaluate advisory attributes such as risk and benefit message clarity. Data synthesis: All 48 state advisories issued at the time of this analysis targeted children, 90% (43) targeted pregnant women, and 58% (28) targeted women of childbearing age. Only six advisories addressed single contaminants, while the remainder based advice on 2–12 contaminants. Results revealed that advisories associated a dozen contaminants with specific adverse health effects. Beneficial health effects of any kind were specifically associated only with omega-3 fatty acids found in fish. Conclusions: These findings highlight the complexity of assessing and communicating information about multiple contaminant exposure from fish consumption. Communication regarding potential health benefits conferred by specific fish nutrients was minimal and focused primarily on omega-3 fatty acids. This overview suggests some lessons learned and highlights a lack of both clarity and consistency in providing the breadth of information that sensitive populations such as pregnant women need to make public health decisions about fish consumption during pregnancy. PMID:19079708

  17. Global sensitivity analysis of the XUV-ABLATOR code

    NASA Astrophysics Data System (ADS)

    Nevrlý, Václav; Janku, Jaroslav; Dlabka, Jakub; Vašinek, Michal; Juha, Libor; Vyšín, Luděk; Burian, Tomáš; Lančok, Ján; Skřínský, Jan; Zelinger, Zdeněk; Pira, Petr; Wild, Jan

    2013-05-01

    The availability of a numerical model providing reliable estimation of the parameters of ablation processes induced by extreme ultraviolet laser pulses on nanosecond and sub-picosecond timescales is highly desirable for recent experimental research as well as for practical purposes. The performance of the one-dimensional thermodynamic code (XUV-ABLATOR) in predicting the relationship between ablation rate and laser fluence is investigated for three reference materials: (i) silicon, (ii) fused silica and (iii) polymethyl methacrylate. The effect of pulse duration and different material properties on the model predictions is studied in the frame of this contribution for conditions typical of two compact laser systems operating at 46.9 nm. A software implementation of the XUV-ABLATOR code, including a graphical user interface and a set of tools for sensitivity analysis, was developed. Global sensitivity analysis using high dimensional model representation in combination with quasi-random sampling was applied in order to identify the most critical input data as well as to explore the uncertainty range of model results.
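
    A variance-based global sensitivity analysis with quasi-random sampling of the kind described can be sketched with the SALib library; the input names, bounds, and the toy response below are illustrative assumptions, not the XUV-ABLATOR model itself.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["fluence", "attenuation_len", "heat_capacity"],  # hypothetical inputs
            "bounds": [[0.1, 2.0], [10.0, 100.0], [1.0, 3.0]],
        }

        X = saltelli.sample(problem, 1024)            # quasi-random design
        Y = np.log1p(X[:, 0]) * X[:, 1] / X[:, 2]     # toy stand-in for ablation rate

        Si = sobol.analyze(problem, Y)                # first-order Sobol indices
        print(dict(zip(problem["names"], np.round(Si["S1"], 2))))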

  18. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.
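
    The flavor of such an error model can be probed numerically: project a ToF point into a high-resolution camera and observe how a small depth error or extrinsic calibration error displaces the projected pixel. The intrinsics, baseline, and perturbation sizes below are invented for illustration, not taken from the paper.

        import numpy as np

        K = np.array([[1000.0, 0.0, 640.0],           # RGB camera intrinsics (assumed)
                      [0.0, 1000.0, 360.0],
                      [0.0, 0.0, 1.0]])
        t = np.array([0.05, 0.0, 0.0])                # 5 cm ToF-to-RGB baseline

        def project(p_tof, R=np.eye(3)):
            p = R @ p_tof + t                         # transform into RGB frame
            u = K @ (p / p[2])                        # pinhole projection
            return u[:2]

        p = np.array([0.3, 0.1, 2.0])                 # scene point at 2 m depth
        base = project(p)
        depth_err = project(p * np.array([1.0, 1.0, 1.02]))   # +2% depth error
        a = np.deg2rad(0.5)                                   # 0.5 deg rotation error
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
        print("px shift, 2% depth error:", np.linalg.norm(depth_err - base))
        print("px shift, 0.5 deg rotation:", np.linalg.norm(project(p, R=Rz) - base))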

  19. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
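
    One of the simpler measures described - how the probability of meeting a requirement shifts across an input's dispersed range - can be sketched in a few lines; the synthetic inputs, the miss-distance surrogate, and the 1.2 threshold are placeholders, not Orion data.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        X = rng.uniform(size=(n, 3))          # dispersed inputs (e.g. mass, thrust, wind)
        miss = 2.0 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0.0, 0.1, n)
        success = miss < 1.2                  # requirement satisfied?

        for j in range(X.shape[1]):
            split = np.median(X[:, j])
            p_lo = success[X[:, j] < split].mean()
            p_hi = success[X[:, j] >= split].mean()
            print(f"input {j}: P(success | low half)={p_lo:.2f}, "
                  f"P(success | high half)={p_hi:.2f}")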

  1. Trends in sensitivity analysis practice in the last decade.

    PubMed

    Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano

    2016-10-15

    The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices in the last decade. The review has been conducted both on some top ranking journals (Nature and Science) and through an extended analysis in Elsevier's Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Even if OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with preference for regression- and variance-based techniques, respectively. Even after adjusting for the growth of publications in the sole modelling field, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on the sole articles published in chemical modelling, a field historically proficient in the use of SA methods. PMID:26934843

  2. Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)

    SciTech Connect

    Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiani, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.

    2012-10-01

    No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.

  3. Global sensitivity analysis of the Indian monsoon during the Pleistocene

    NASA Astrophysics Data System (ADS)

    Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.

    2015-01-01

    The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature in phase with June to July insolation, high glaciation favouring a late-phase response, sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation
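
    The emulator idea itself is compact: fit a cheap statistical surrogate to a small design of expensive model runs, then do the sensitivity sweeps on the surrogate. Below is a minimal sketch with a scikit-learn Gaussian process standing in for the Oakley-O'Hagan emulator and a two-input toy function standing in for HadCM3.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(40, 2))                  # experiment design over 2 inputs
        y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1]        # stand-in "climate model" output

        gp = GaussianProcessRegressor().fit(X, y)      # the emulator

        # crude main effect of input 0: vary it on a grid while averaging
        # Monte Carlo draws of the other input through the emulator
        grid = np.linspace(0.0, 1.0, 21)
        other = rng.uniform(size=200)
        main = [gp.predict(np.column_stack([np.full(200, g), other])).mean()
                for g in grid]
        print("main-effect range of input 0:", max(main) - min(main))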

  4. GPU-based Integration with Application in Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar

    2010-05-01

    The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modeling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of a model. One of the important classes of methods for sensitivity analysis is the class of Monte Carlo based methods, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time-step, and the number of time-steps can be more than a million), which motivates its grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, and (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but for parallel implementations. Good scrambling is
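
    Although the GPU generator itself is the paper's contribution, the effect of scrambled quasi-random points is easy to demonstrate on a CPU with SciPy's QMC module; the integrand below is a toy example.

        import numpy as np
        from scipy.stats import qmc

        sobol = qmc.Sobol(d=2, scramble=True, seed=7)
        pts = sobol.random(4096)                  # scrambled quasi-random points

        # integrate f(x, y) = x * y over the unit square; exact value is 0.25
        print("QMC estimate:", np.prod(pts, axis=1).mean())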

  5. Aerodynamic Shape Sensitivity Analysis and Design Optimization of Complex Configurations Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.

    1997-01-01

    A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.

  6. Adjoint sensitivity analysis of hydrodynamic stability in cyclonic flows

    NASA Astrophysics Data System (ADS)

    Guzman Inigo, Juan; Juniper, Matthew

    2015-11-01

    Cyclonic separators are used in a variety of industries to efficiently separate mixtures of fluid and solid phases by means of centrifugal forces and gravity. In certain circumstances, the vortex core of cyclonic flows is known to precess due to the instability of the flow, which leads to performance reductions. We aim to characterize the unsteadiness using linear stability analysis of the Reynolds Averaged Navier-Stokes (RANS) equations in a global framework. The system of equations, including the turbulence model, is linearised to obtain an eigenvalue problem. Unstable modes corresponding to the dynamics of the large structures of the turbulent flow are extracted. The analysis shows that the most unstable mode is a helical motion which develops around the axis of the flow. This result is in good agreement with LES and experimental analysis, suggesting the validity of the approach. Finally, an adjoint-based sensitivity analysis is performed to determine the regions of the flow that, when altered, have most influence on the frequency and growth-rate of the unstable eigenvalues.
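
    The adjoint sensitivity of an eigenvalue has a compact first-order form, dλ = (y*·dA·x)/(y*·x), with x and y the right and left eigenvectors; the sketch below checks it on a small random matrix standing in for the linearised RANS operator, which is only a toy stand-in for the paper's setup.

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.normal(size=(5, 5))                # stand-in linearised operator
        dA = 1e-6 * rng.normal(size=(5, 5))        # small operator perturbation

        w, V = np.linalg.eig(A)                    # right eigenvectors
        wl, U = np.linalg.eig(A.conj().T)          # eigenvectors of A^H = left eigenvectors
        k = np.argmax(w.real)                      # least stable mode
        m = np.argmin(np.abs(wl.conj() - w[k]))    # match the corresponding left eigenvector
        x, y = V[:, k], U[:, m]

        dlam = (y.conj() @ dA @ x) / (y.conj() @ x)
        w2 = np.linalg.eigvals(A + dA)
        print(dlam, w2[np.argmax(w2.real)] - w[k]) # prediction vs. recomputation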

  7. Global sensitivity analysis of a 3D street canyon model—Part II: Application and physical insight using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Benson, James; Ziehn, Tilo; Dixon, Nick S.; Tomlin, Alison S.

    In this work global sensitivity studies using Monte Carlo sampling and high dimensional model representations (HDMR) have been carried out on the k-ε closure computational fluid dynamic (CFD) model MISKAM, allowing detailed representation of the effects of changing input parameters on the model outputs. The scenario studied is that of a complex street canyon in the city of York, UK. The sensitivity of the turbulence and mean flow fields to the input parameters is detailed both at specific measurement points and in the associated canyon cross-section to aid comparison with field data. This analysis gives insight into how model parameters can influence the predicted outputs. It also shows the relative strength of each parameter in its influence. Four main input parameters are addressed. Three parameters are surface roughness lengths, determining the flow over a surface, and the fourth is the background wind direction. In order to determine the relative importance of each parameter, sensitivity indices are calculated for the canyon cross-section. The sensitivity of the flow structures in and above the canyon to each parameter is found to be very location dependent. In general, at a particular measurement point, it is the closest wall surface that is most influential on the model output. However, due to the complexity of the flow at different wind angles this is not always the case, for example when a re-circulating canyon flow pattern is present. The background wind direction is shown to be an important parameter as it determines the surface features encountered by the flow. The accuracy with which this is specified when modelling a full-scale situation is therefore an important consideration when assessing model uncertainty. Overall, the uncertainty due to roughness lengths is small in comparison to the mean outputs, indicating that the model is well defined even with large ranges of input parameter uncertainty.

  8. Task parallel sensitivity analysis and parameter estimation of groundwater simulations through the SALSSA framework

    SciTech Connect

    Schuchardt, Karen L.; Agarwal, Khushbu; Chase, Jared M.; Rockhold, Mark L.; Freedman, Vicky L.; Elsethagen, Todd O.; Scheibe, Timothy D.; Chin, George; Sivaramakrishnan, Chandrika

    2010-07-15

    The Support Architecture for Large-Scale Subsurface Analysis (SALSSA) provides an extensible framework, sophisticated graphical user interface, and underlying data management system that simplifies the process of running subsurface models, tracking provenance information, and analyzing the model results. Initially, SALSSA supported two styles of job control: user directed execution and monitoring of individual jobs, and load balancing of jobs across multiple machines taking advantage of many available workstations. Recent efforts in subsurface modelling have been directed at advancing simulators to take advantage of leadership class supercomputers. We describe two approaches, current progress, and plans toward enabling efficient application of the subsurface simulator codes via the SALSSA framework: automating sensitivity analysis problems through task parallelism, and task parallel parameter estimation using the PEST framework.
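
    The task-parallel pattern is simple to sketch with the Python standard library; run_model below is a placeholder for configuring and launching one simulator job, not the SALSSA/PEST machinery.

        from concurrent.futures import ProcessPoolExecutor

        def run_model(params):
            # stand-in for one subsurface simulation run
            permeability, porosity = params
            return permeability ** 0.5 * porosity    # fake output metric

        if __name__ == "__main__":
            cases = [(k, p) for k in (1e-12, 1e-11) for p in (0.1, 0.2, 0.3)]
            with ProcessPoolExecutor(max_workers=4) as pool:
                for params, result in zip(cases, pool.map(run_model, cases)):
                    print(params, "->", result)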

  9. Analysis and design of advanced composite bonded joints

    NASA Technical Reports Server (NTRS)

    Hart-Smith, L. J.

    1974-01-01

    Advances in the analysis of adhesive-bonded joints are presented with particular emphasis on advanced composite structures. The joints analyzed are of double-lap, single-lap, scarf, stepped-lap and tapered-lap configurations. Tensile, compressive, and in-plane shear loads are covered. In addition to the usual geometric variables, the theory accounts for the strength increases attributable to adhesive plasticity (in terms of the elastic-plastic adhesive model) and the joint strength reductions imposed by imbalances between the adherends. The solutions are largely closed-form analytical results, employing iterative solutions on a digital computer for the more complicated joint configurations. In assessing the joint efficiency, three potential failure modes are considered. These are adherend failure outside the joint, adhesive failure in shear, and adherend interlaminar tension failure (or adhesive failure in peel). Each mode is governed by a distinct mathematical analysis and each prevails throughout different ranges of geometric sizes and proportions.
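
    To give a feel for the closed-form character of such solutions, here is the textbook elastic shear-lag (Volkersen-type) distribution for a balanced joint; this is a simpler model than Hart-Smith's elastic-plastic analysis, and all property values are illustrative.

        import numpy as np

        P = 300e3                   # load per unit width, N/m
        E, th = 70e9, 2e-3          # adherend modulus (Pa) and thickness (m)
        Ga, ta = 1.1e9, 0.15e-3     # adhesive shear modulus (Pa) and thickness (m)
        c = 0.0125                  # half overlap length (m)

        omega = np.sqrt(2 * Ga / (ta * E * th))   # shear-lag parameter, balanced case
        x = np.linspace(-c, c, 11)
        tau = 0.5 * P * omega * np.cosh(omega * x) / np.sinh(omega * c)
        print("peak / average adhesive shear:", tau.max() / (P / (2 * c)))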

  10. Advanced gamma ray balloon experiment ground checkout and data analysis

    NASA Technical Reports Server (NTRS)

    Blackstone, M.

    1976-01-01

    A software programming package to be used in the ground checkout and handling of data from the advanced gamma ray balloon experiment is described. The Operator's Manual permits someone unfamiliar with the inner workings of the software system (called LEO) to operate on the experimental data as it comes from the Pulse Code Modulation interface, converting it to a form for later analysis, and monitoring the progress of an experiment. A Programmer's Manual is included.

  11. [Advances in independent component analysis and its application].

    PubMed

    Chen, Huafu; Yao, Dezhong

    2003-06-01

    Independent component analysis (ICA) is a new technique in statistical signal processing which decomposes mixed signals into statistically independent components. Reported applications to biomedical and radar signals have demonstrated its promise for various blind signal separation problems. In this paper, progress in ICA - its principles, algorithms and applications - is reviewed, along with future research directions. The aim is to promote research on both its theory and its application. PMID:12856621
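
    A minimal blind-source-separation example of the kind ICA enables, using FastICA from scikit-learn (the review itself is method-agnostic); the signals and mixing matrix are synthetic.

        import numpy as np
        from sklearn.decomposition import FastICA

        t = np.linspace(0, 1, 2000)
        S = np.column_stack([np.sin(2 * np.pi * 7 * t),            # source 1
                             np.sign(np.sin(2 * np.pi * 3 * t))])  # source 2
        A = np.array([[1.0, 0.6], [0.4, 1.0]])                     # unknown mixing
        X = S @ A.T                                                # observed mixtures

        S_est = FastICA(n_components=2, random_state=0).fit_transform(X)
        # each source should match one recovered component up to sign/order/scale
        print(np.abs(np.corrcoef(S.T, S_est.T))[:2, 2:].round(2))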

  12. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide
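
    The factor-once, solve-many idea that makes the method efficient is easy to demonstrate: factor the nominal Jacobian a single time, then each design variable costs only a pair of triangular solves. The matrix and right-hand sides below are random stand-ins for the flow Jacobian and geometry derivatives.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(3)
        n = 200
        J = rng.normal(size=(n, n)) + n * np.eye(n)   # stand-in flow Jacobian
        lu, piv = lu_factor(J)                        # factored once

        for k in range(5):                            # e.g. 5 design variables
            rhs = rng.normal(size=n)                  # stand-in for -dR/dD_k
            dq = lu_solve((lu, piv), rhs)             # cheap back-substitution
            print(k, np.linalg.norm(dq))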

  13. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.

  14. Advanced image analysis for the preservation of cultural heritage

    NASA Astrophysics Data System (ADS)

    France, Fenella G.; Christens-Barry, William; Toth, Michael B.; Boydston, Kenneth

    2010-02-01

    The Library of Congress' Preservation Research and Testing Division has established an advanced preservation studies scientific program for research and analysis of the diverse range of cultural heritage objects in its collection. Using this system, the Library is currently developing specialized integrated research methodologies for extending preservation analytical capacities through non-destructive hyperspectral imaging of cultural objects. The research program has revealed key information to support preservation specialists, scholars and other institutions. The approach requires close and ongoing collaboration between a range of scientific and cultural heritage personnel - imaging and preservation scientists, art historians, curators, conservators and technology analysts. A research project on the Pierre L'Enfant Plan of Washington DC (1791) was undertaken to implement and advance the image analysis capabilities of the imaging system. Innovative imaging options and analysis techniques allow greater processing and analysis capacities to establish the imaging technique as the first non-invasive analysis and documentation step in all cultural heritage analyses. Mapping spectral responses, organic and inorganic data, and topographic semi-microscopic imaging, as well as creating full-spectrum images, have greatly extended this capacity beyond a simple image capture technique. Linking hyperspectral data with other non-destructive analyses has further enhanced the research potential of this image analysis technique.

  15. Sensitivity analysis of channel-bend hydraulics influenced by vegetation

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.

    2015-12-01

    Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.

  16. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The radiative transfer model (RTM) behaves more non-linearly in SMEX02 and linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in soil moisture. Overall, considering the SMAPVEX12 fields to be a water rich environment (due to higher observed SM) and the SMEX02 fields to be an energy rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first order effects as well as interactions between the parameters change with water and energy rich environments.

  17. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations

    SciTech Connect

    Petzold, L; Cao, Y; Li, S; Serban, R

    2005-08-09

    Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
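
    For a single-parameter ODE, the forward method mentioned above amounts to augmenting the state with s = dy/dp and integrating s' = (∂f/∂y)s + ∂f/∂p alongside y; a minimal sketch with an analytic check (the example equation is mine, not the paper's):

        import numpy as np
        from scipy.integrate import solve_ivp

        p = 0.7

        def aug(t, z):                 # y' = -p*y  and  s' = -p*s - y
            y, s = z
            return [-p * y, -p * s - y]

        sol = solve_ivp(aug, (0.0, 2.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
        y_T, s_T = sol.y[:, -1]
        # analytic: y = exp(-p t), so dy/dp at t=2 is -2 exp(-2p)
        print(s_T, -2.0 * np.exp(-p * 2.0))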

  18. Sensitivity analysis of water quality for Delhi stretch of the River Yamuna, India.

    PubMed

    Parmar, D L; Keshari, Ashok K

    2012-03-01

    Simulation models are used to aid decision makers about water pollution control and management in river systems. However, uncertainty of model parameters affects the model predictions and hence the pollution control decision. Therefore, it often is necessary to identify the model parameters that significantly affect the model output uncertainty prior to, or as a supplement to, model application to water pollution control and planning problems. In this study, sensitivity analysis, as a tool for uncertainty analysis, was carried out to assess the sensitivity of water quality to (a) model parameters and (b) pollution abatement measures such as wastewater treatment, waste discharge and flow augmentation from an upstream reservoir. In addition, sensitivity analysis for the "best practical solution" was carried out to help the decision makers in choosing an appropriate option. The Delhi stretch of the river Yamuna was considered as a case study. The QUAL2E model is used for water quality simulation. The results obtained indicate that the parameters K(1) (deoxygenation constant) and K(3) (settling oxygen demand), which are the rates of biochemical decomposition of organic matter and of BOD removal by settling, respectively, are the most sensitive parameters for the considered river stretch. Different combinations of variations in K(1) and K(2) also revealed similar results, giving a better understanding of the interdependence of K(1) and K(2). Also, among the pollution abatement methods, a change (perturbation) in wastewater treatment level (primary, secondary, tertiary, or advanced) has the greatest effect on the uncertainty of the simulated dissolved oxygen and biochemical oxygen demand concentrations. PMID:21544505
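
    In the same spirit, the BOD-DO core of QUAL2E-type models can be caricatured with the classical Streeter-Phelps deficit curve, making it easy to see how perturbing the deoxygenation and reaeration coefficients moves the critical deficit; the coefficient values below are illustrative only, not the study's calibrated values.

        import numpy as np

        def deficit(t, K1=0.35, K2=0.6, L0=20.0, D0=1.0):
            # Streeter-Phelps: K1 deoxygenation, K2 reaeration (1/day),
            # L0 initial BOD, D0 initial deficit (mg/L)
            return (K1 * L0 / (K2 - K1)) * (np.exp(-K1 * t) - np.exp(-K2 * t)) \
                   + D0 * np.exp(-K2 * t)

        t = np.linspace(0.0, 10.0, 101)
        base = deficit(t).max()                        # critical (peak) deficit
        for name, kw in [("K1", dict(K1=0.385)), ("K2", dict(K2=0.66))]:
            rel = deficit(t, **kw).max() / base - 1.0
            print(f"+10% {name}: {rel:+.1%} in peak DO deficit")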

  19. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
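
    For reference, the NRMSE used above reduces to a one-liner once a normalization is fixed; the denominator below (the plant's rated power) and the numbers are assumptions for illustration, since normalization conventions vary.

        import numpy as np

        def nrmse(forecast, actual, norm):
            return np.sqrt(np.mean((forecast - actual) ** 2)) / norm

        actual = np.array([0.0, 12.0, 30.0, 41.0, 25.0])    # kW, made-up profile
        forecast = np.array([0.0, 15.0, 27.0, 45.0, 20.0])
        print(f"NRMSE = {nrmse(forecast, actual, norm=51.0):.1%}")   # 51-kW plant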

  20. [Advanced data analysis and visualization for clinical laboratory].

    PubMed

    Inada, Masanori; Yoneyama, Akiko

    2011-01-01

    This paper describes visualization techniques that help identify hidden structures in clinical laboratory data. The visualization of data is helpful for a rapid and better understanding of the characteristics of data sets. Various charts help the user identify trends in data. Scatter plots help prevent misinterpretations due to invalid data by identifying outliers. The representation of experimental data in figures is always useful for communicating results to others. Currently, flexible methods such as smoothing methods and latent structure analysis are available owing to the presence of advanced hardware and software. Principal component analysis, which is a well-known technique used to reduce multidimensional data sets, can be carried out on a personal computer. These methods could lead to advanced visualization with regard to exploratory data analysis. In this paper, we present 3 examples in order to introduce advanced data analysis. In the first example, a smoothing spline was fitted to a time-series from the control chart which is not in a state of statistical control. The trend line was clearly extracted from the daily measurements of the control samples. In the second example, principal component analysis was used to identify a new diagnostic indicator for Graves' disease. The multi-dimensional data obtained from patients were reduced to lower dimensions, and the principal components thus obtained summarized the variation in the data set. In the final example, a latent structure analysis for a Gaussian mixture model was used to draw complex density functions suitable for actual laboratory data. As a result, 5 clusters were extracted. The mixed density function of these clusters represented the data distribution graphically. The methods used in the above examples make the creation of complicated models for clinical laboratories more simple and flexible. PMID:21404582
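
    The Gaussian-mixture step in the final example can be reproduced in miniature with scikit-learn; the synthetic values and the hand-picked two components below merely illustrate how a mixture yields a usable density estimate.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        values = np.concatenate([rng.normal(5.0, 0.4, 300),     # synthetic lab values
                                 rng.normal(7.0, 0.8, 120)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(values)
        grid = np.linspace(3.0, 10.0, 8).reshape(-1, 1)
        density = np.exp(gmm.score_samples(grid))   # mixture density on a grid
        print(np.round(density, 3))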

  1. Design and implementation of a context-sensitive, flow-sensitive activity analysis algorithm for automatic differentiation.

    SciTech Connect

    Shin, J.; Malusare, P.; Hovland, P. D.; Mathematics and Computer Science

    2008-01-01

    Automatic differentiation (AD) has been expanding its role in scientific computing. While several AD tools have been actively developed and used, a wide range of problems remain to be solved. Activity analysis allows AD tools to generate derivative code for fewer variables, leading to a faster run time of the output code. This paper describes a new context-sensitive, flow-sensitive (CSFS) activity analysis, which is developed by extending an existing context-sensitive, flow-insensitive (CSFI) activity analysis. Our experiments with eight benchmarks show that the new CSFS activity analysis is more than 27 times slower, but eliminates 8 overestimations for the MIT General Circulation Model (MITgcm) and 1 for an ODE solver (c2), compared with the existing CSFI activity analysis implementation. Although the number of reduced overestimations looks small, the additionally identified passive variables may significantly reduce tedious human effort in maintaining a large code base such as MITgcm.
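
    The activity notion itself (independent of the context- and flow-sensitivity machinery in the paper) can be illustrated on a toy def-use graph: a variable is active iff it is both varied (reachable from an independent input) and useful (it reaches a dependent output). The graph below is invented for illustration.

        def reachable(graph, seeds):
            seen, stack = set(seeds), list(seeds)
            while stack:
                for nxt in graph.get(stack.pop(), ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return seen

        deps = {"x": ["t1"], "t1": ["t2", "dead"], "t2": ["f"], "c": ["t2"]}
        rev = {}
        for src, dsts in deps.items():
            for d in dsts:
                rev.setdefault(d, []).append(src)

        varied = reachable(deps, ["x"])   # influenced by independent x
        useful = reachable(rev, ["f"])    # influences dependent f
        print(sorted(varied & useful))    # active: ['f', 't1', 't2', 'x']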

  2. Cost/benefit analysis of advanced materials technology candidates for the 1980's, part 2

    NASA Technical Reports Server (NTRS)

    Dennis, R. E.; Maertins, H. F.

    1980-01-01

    Cost/benefit analyses to evaluate advanced material technologies projects considered for general aviation and turboprop commuter aircraft through estimated life-cycle costs, direct operating costs, and development costs are discussed. Specifically addressed is the selection of technologies to be evaluated; development of property goals; assessment of candidate technologies on typical engines and aircraft; sensitivity analysis of the changes in property goals on performance and economics, cost, and risk analysis for each technology; and ranking of each technology by relative value. The cost/benefit analysis was applied to a domestic, nonrevenue producing, business-type jet aircraft configured with two TFE731-3 turbofan engines, and to a domestic, nonrevenue producing, business type turboprop aircraft configured with two TPE331-10 turboprop engines. In addition, a cost/benefit analysis was applied to a commercial turboprop aircraft configured with a growth version of the TPE331-10.

  3. Seismically induced relay chatter risk analysis for the Advanced Test Reactor

    SciTech Connect

    Khericha, S.T.; Calley, M.B.; Farmer, F.G.; Eide, S.A.; Ravindra, M.K.; Campbell, R.D.

    1992-12-31

    A seismic probabilistic risk assessment (PRA) was performed as part of the Level I PRA for the Department of Energy (DOE) Advanced Test Reactor (ATR) located at the Idaho National Engineering Laboratory (INEL). This seismic PRA included a comprehensive and efficient seismically-induced relay chatter risk analysis. The key elements to this comprehensive and efficient seismically-induced relay chatter analysis included (1) screening procedures to identify the critical relays to be evaluated, (2) streamlined seismic fragility evaluation, and (3) comprehensive seismic risk evaluation using detailed event trees and fault trees. These key elements were performed to provide a core fuel damage frequency evaluation due to seismically induced relay chatter. A sensitivity analysis was performed to evaluate the impact of including seismically-induced relay chatter events in the seismic PRA. The systems analysis was performed by EG&G Idaho, Inc. and the fragilities for the relays were developed by EQE Engineering Consultants.

  4. Neutron activation analysis: A sensitive test for trace elements

    SciTech Connect

    Hossain, T.Z. (Ward Lab.)

    1992-01-01

    This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10{sup 12} neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10{sup {minus}8}) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.
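
    The numbers quoted above feed a standard back-of-envelope activation estimate, A = φσN(1 − e^(−λt)); the example below uses approximate Na-23 data and an assumed 1 mg sample, chosen by me for illustration rather than taken from the paper.

        import numpy as np

        phi = 1e12                       # neutron flux, n/cm^2/s
        sigma = 0.53e-24                 # Na-23 capture cross-section, cm^2 (~0.53 b)
        lam = np.log(2) / (15.0 * 3600)  # Na-24 decay constant (15 h half-life), 1/s
        N = 2.6e19                       # Na-23 atoms in roughly 1 mg of sodium

        t_irr = 3600.0                   # 1 h irradiation
        activity = phi * sigma * N * (1 - np.exp(-lam * t_irr))
        print(f"induced activity ~ {activity:.1e} Bq")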

  5. Apparatus and Method for Ultra-Sensitive Trace Analysis

    SciTech Connect

    Lu, Zhengtian; Bailey, Kevin G.; Chen, Chun Yen; Li, Yimin; O'Connor, Thomas P.; Young, Linda

    2000-01-03

    An apparatus and method for conducting ultra-sensitive trace element and isotope analysis. The apparatus injects a sample through a fine nozzle to form an atomic beam. A DC discharge is used to elevate select atoms to a metastable energy level. These atoms are then acted on by a laser oriented orthogonally to the beam path to reduce the transverse velocity and to decrease the divergence angle of the beam. The beam then enters a Zeeman slower where a counter-propagating laser beam acts to slow the atoms down. Then select atoms are captured in a magneto-optical trap where they undergo fluorescence. A portion of the scattered photons are imaged onto a photo-detector, and the results analyzed to detect the presence of single atoms of the specific trace elements.

  6. Sensitivity analysis and optimization of thin-film thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Harsha Choday, Sri; Roy, Kaushik

    2013-06-01

    The cooling performance of a thermoelectric (TE) material is dependent on the figure-of-merit (ZT = S2σT/κ), where S is the Seebeck coefficient, σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to power factor (S2σ) and thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter on the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in best cooling is lower than that of the highest ZT. We also establish the level of contact parasitics that are needed such that their impact on TE cooling is negligible.
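
    Because ZT is a simple monomial in its parameters, its logarithmic sensitivities are fixed: d ln ZT = 2 d ln S + d ln σ − d ln κ, so the Seebeck coefficient carries twice the leverage of either conductivity. A quick numerical confirmation, with Bi2Te3-like ballpark values assumed for illustration:

        def zt(S=200e-6, sigma=1e5, kappa=1.5, T=300.0):
            return S**2 * sigma * T / kappa           # dimensionless figure of merit

        base = zt()
        for name, kw in [("S", dict(S=202e-6)),
                         ("sigma", dict(sigma=1.01e5)),
                         ("kappa", dict(kappa=1.515))]:
            print(f"+1% {name}: {zt(**kw) / base - 1.0:+.2%} in ZT")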

  7. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, Theodore H. H.

    1991-01-01

    The following tasks on the study of advanced stress analysis methods applicable to turbine engine structures are described: (1) construction of special elements which contain traction-free circular boundaries; (2) formulation of a new version of mixed variational principles and a new version of hybrid stress elements; (3) establishment of methods for suppression of kinematic deformation modes; (4) construction of semi-Loof plate and shell elements by the assumed-stress hybrid method; and (5) elastic-plastic analysis by viscoplasticity theory using the mechanical subelement model.

  8. Advanced Signal Analysis for Forensic Applications of Ground Penetrating Radar

    SciTech Connect

    Steven Koppenjan; Matthew Streeton; Hua Lee; Michael Lee; Sashi Ono

    2004-06-01

    Ground penetrating radar (GPR) systems have traditionally been used to image subsurface objects. The main focus of this paper is to evaluate an advanced signal analysis technique. Instead of compiling spatial data for the analysis, this technique conducts object recognition procedures based on spectral statistics. The identification feature of an object type is formed from the training vectors by a singular-value decomposition procedure. To illustrate its capability, this procedure is applied to experimental data and compared to the performance of the neural-network approach.
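
    The SVD-based recognition idea can be sketched generically: build a low-rank subspace from each class's training vectors and classify a new return by its projection residual. The synthetic "spectra" below are stand-ins for the paper's GPR training data, and the whole sketch is a generic subspace classifier rather than the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(5)

        def subspace(train, rank=2):
            U, _, _ = np.linalg.svd(train, full_matrices=False)
            return U[:, :rank]                   # identification feature

        def residual(U, v):
            return np.linalg.norm(v - U @ (U.T @ v))

        n = 64
        ramp = np.linspace(0.0, 1.0, n)[:, None]
        classA = rng.normal(size=(n, 3)) + ramp  # columns = training vectors
        classB = rng.normal(size=(n, 3)) - ramp
        Ua, Ub = subspace(classA), subspace(classB)

        probe = classA[:, 0] + 0.1 * rng.normal(size=n)   # unknown return
        print("A" if residual(Ua, probe) < residual(Ub, probe) else "B")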

  9. Advances in Computational Stability Analysis of Composite Aerospace Structures

    SciTech Connect

    Degenhardt, R.; Araujo, F. C. de

    2010-09-30

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances from the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, main results of the finished EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  10. Advanced Models for Aeroelastic Analysis of Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Mahajan, Aparajit

    1996-01-01

    This report describes an integrated, multidisciplinary simulation capability for aeroelastic analysis and optimization of advanced propulsion systems. This research is intended to reduce engine development, acquisition, and maintenance costs. One of the proposed simulations is aeroelasticity of blades, cowls, and struts in an ultra-high bypass fan. These ducted fans are expected to have significant performance, fuel, and noise improvements over existing engines. An interface program was written to use modal information from COBSTAN and NASTRAN blade models in aeroelastic analysis with a single rotation ducted fan aerodynamic code.

  11. Sensitivity analysis of a Vision 21 coal based zero emission power plant

    NASA Astrophysics Data System (ADS)

    Verma, A.; Rao, A. D.; Samuelsen, G. S.

    The goal of the U.S. Department of Energy's (DOE's) FutureGen project initiative is to develop and demonstrate technology for ultra clean 21st century energy plants that effectively remove environmental concerns associated with the use of fossil fuels for producing electricity, and simultaneously develop highly efficient and cost-effective power plants. The design optimization of an advanced FutureGen plant consisting of an advanced transport reactor (ATR) for coal gasification to generate syngas to fuel an integrated solid oxide fuel cell (SOFC) combined cycle is presented. The overall plant analysis of a baseline system design is performed by identifying the major factors affecting plant performance, these major factors being identified by a strategy consisting of the application of design of experiments (DOEx). A steady state simulation tool is used to perform sensitivity analysis to verify the factors identified through DOEx, and then to perform parametric analysis to identify optimum values for maximum system efficiency. Modifications to the baseline system design are made to attain higher system efficiency and to lower the negative impact of reducing the SOFC operating pressure on system efficiency.

  12. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been fomented. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean region basin (the Llobregat basin, Catalonia, Spain). Of all InVEST modules and sub-modules, only the behaviour of the water provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each one of the 154 sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each one of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and between the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project. PMID
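
    The Morris screening itself takes only a few lines with the SALib library; the bounds and the stand-in water-yield function below are illustrative assumptions, not InVEST's internals.

        import numpy as np
        from SALib.sample.morris import sample
        from SALib.analyze import morris

        problem = {
            "num_vars": 3,
            "names": ["Z", "prec", "eto"],
            "bounds": [[1.0, 30.0], [300.0, 1500.0], [500.0, 1200.0]],
        }

        X = sample(problem, N=100, num_levels=4)           # OAT trajectories
        Y = X[:, 1] - X[:, 2] * (1.0 + 0.01 * X[:, 0])     # toy water yield, mm
        Si = morris.analyze(problem, X, Y, num_levels=4)
        for name, mu in zip(problem["names"], Si["mu_star"]):
            print(name, round(float(mu), 1))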

  14. Structural Configuration Systems Analysis for Advanced Aircraft Fuselage Concepts

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Welstead, Jason R.; Quinlan, Jesse R.; Guynn, Mark D.

    2016-01-01

    Structural configuration analysis of an advanced aircraft fuselage concept is investigated. This concept is characterized by a double-bubble section fuselage with rear-mounted engines. Based on lessons learned from structural systems analysis of unconventional aircraft, high-fidelity finite-element models (FEM) are developed for evaluating the structural performance of three double-bubble section configurations. Structural sizing and stress analysis are applied for design improvement and weight reduction. Among the three double-bubble configurations, the double-D cross-section fuselage design was found to have a relatively low structural weight. The structural FEM weights of these three double-bubble fuselage section concepts are also compared with several cylindrical fuselage models. Since these fuselage concepts differ in size, shape and material, the fuselage structural FEM weights are normalized by the corresponding passenger floor area for a relative comparison. This structural systems analysis indicates that an advanced composite double-D section fuselage may have a relative structural weight ratio advantage over a conventional aluminum fuselage. Ten commercial and conceptual aircraft fuselage structural weight estimates, which are empirically derived from the corresponding maximum takeoff gross weight, are also presented and compared with the FEM-based estimates for possible correlation. A conceptual full-vehicle FEM model with a double-D fuselage is also developed for preliminary structural analysis and weight estimation.

  15. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide initial insight into the potential performance of these models, but verification and validation are necessary steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS-module-enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, conductivity, and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  16. Toward Sensitive and Accurate Analysis of Antibody Biotherapeutics by Liquid Chromatography Coupled with Mass Spectrometry

    PubMed Central

    An, Bo; Zhang, Ming

    2014-01-01

    Remarkable methodological advances in the past decade have expanded the application of liquid chromatography coupled with mass spectrometry (LC/MS) analysis of biotherapeutics. Currently, LC/MS represents a promising alternative or supplement to the traditional ligand binding assay (LBA) in the pharmacokinetic, pharmacodynamic, and toxicokinetic studies of protein drugs, owing to the rapid and cost-effective method development, high specificity and reproducibility, low sample consumption, the capacity of analyzing multiple targets in one analysis, and the fact that a validated method can be readily adapted across various matrices and species. While promising, technical challenges associated with sensitivity, sample preparation, method development, and quantitative accuracy need to be addressed to enable full utilization of LC/MS. This article introduces the rationale and technical challenges of LC/MS techniques in biotherapeutics analysis and summarizes recently developed strategies to alleviate these challenges. Applications of LC/MS techniques on quantification and characterization of antibody biotherapeutics are also discussed. We speculate that despite the highly attractive features of LC/MS, it will not fully replace traditional assays such as LBA in the foreseeable future; instead, the forthcoming trend is likely the conjunction of biochemical techniques with versatile LC/MS approaches to achieve accurate, sensitive, and unbiased characterization of biotherapeutics in highly complex pharmaceutical/biologic matrices. Such combinations will constitute powerful tools to tackle the challenges posed by the rapidly growing needs for biotherapeutics development. PMID:25185260

  17. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  18. Recent advances in enhancing the sensitivity and resolution of capillary electrophoresis.

    PubMed

    Zhang, Zhaoxiang; Zhang, Fei; Liu, Ying

    2013-08-01

    An extensive search of published research and review articles indicates that enhancing the sensitivity and resolution of capillary electrophoresis (CE) is a very active area of interest. This review focuses on developments and applications in this field over recent years, especially from 2009 to the present. It first reviews developments in the fields of online sample preconcentration and highly sensitive detection for sensitivity enhancement. The online sample preconcentration techniques cover all methods, from electrophoretic preconcentration, extraction-based preconcentration, micelle-based preconcentration and hybrid preconcentration to nanoparticle-based preconcentration. Attention is also given to multiple-dimension separations, additives in the buffer solution and capillary inner-surface modifications that have been used to enhance the resolution and separation efficiency of complex samples in CE. The additives include nanoparticles, chiral selectors and other buffer additives. PMID:23515192

  19. Advanced hydrogen/oxygen thrust chamber design analysis

    NASA Technical Reports Server (NTRS)

    Shoji, J. M.

    1973-01-01

    The results of the advanced hydrogen/oxygen thrust chamber design analysis program are reported. The primary objectives of this program were to: (1) provide an in-depth analytical investigation to develop thrust chamber cooling and fatigue life limitations of an advanced, high pressure, high performance H2/O2 engine design of 20,000-pounds (88960.0 N) thrust; and (2) integrate the existing heat transfer analysis, thermal fatigue and stress aspects for advanced chambers into a comprehensive computer program. Thrust chamber designs and analyses were performed to evaluate various combustor materials, coolant passage configurations (tubes and channels), and cooling circuits to define the nominal 1900 psia (1.31 x 10 to the 7th power N/sq m) chamber pressure, 300-cycle life thrust chamber. The cycle life capability of the selected configuration was then determined for three duty cycles. Also, the influence of cycle life and chamber pressure on thrust chamber design was investigated by varying the cycle life requirements at the nominal chamber pressure and by varying the chamber pressure at the nominal cycle life requirement.

  20. Advanced Main Combustion Chamber structural jacket strength analysis

    NASA Technical Reports Server (NTRS)

    Johnston, L. M.; Perkins, L. A.; Denniston, C. L.; Price, J. M.

    1993-01-01

    The structural analysis of the Advanced Main Combustion Chamber (AMCC) is presented. The AMCC is an advanced fabrication concept for the Space Shuttle Main Engine main combustion chamber (MCC). Reductions in cost and fabrication time of up to 75 percent were the goals of the AMCC, which pairs a cast jacket with a vacuum-plasma-sprayed or platelet liner. Since the cast material of the AMCC is much weaker than the wrought material of the MCC, the AMCC is heavier and its strength margins are much lower in some areas. Proven hand solutions were used to size the manifold cutout tee areas for combined pressure and applied loads. Detailed finite element strength analyses were used to size the manifolds, longitudinal ribs, and jacket for combined pressure and applied local loads. The design of the gimbal actuator strut attachment lugs was determined by finite element analyses and hand solutions.

  1. Whole-genome CNV analysis: advances in computational approaches

    PubMed Central

    Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.

    2015-01-01

    Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519

  2. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised by errors introduced at each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained by modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e., models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced by choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  3. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. The reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°C/75°C, -55°C/100°C, and -55°C/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA designs has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in the CCGA assembly under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.

  4. SENSITIVITY ANALYSIS FOR I-129 WASTES: EFFECT OF HYDRAULIC CONDUCTIVITY

    SciTech Connect

    Ades, M; Leonard Collard, L

    2007-01-12

    Solid low-level radioactive wastes at the Savannah River Site (SRS) are disposed in trenches. In order to determine the permissible radioactive inventory limits for such disposal facilities, it is required to assess the behavior of radioactive waste material over long periods of time. The sensitivity of flow and I-129 (and similar radionuclides) transport in groundwater in the vadose zone to the hydraulic conductivities of the vadose zone subregions and the low-level waste is identified and quantified. A trench configuration and simulation model have been developed to analyze the flow and transport of the radionuclide in the vadose zone as it migrates to the groundwater table. The analysis identifies and quantifies the major dependencies of the flow and radionuclide fractional flux on the subregion hydraulic conductivities. Analysis results indicate the importance of the hydraulic conductivity assigned to the materials modeled, thereby providing the modeler and decision makers with valuable insights on the potential impact of the hydraulic conductivity on flow and radionuclide transport.

  5. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known. Decision-making tools are therefore needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented for several structural configurations and is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
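
    For readers who want to reproduce the flavor of this analysis, the sketch below runs FAST with the SALib package on a crude mass-law-style surrogate. The parameter names, bounds, and the transmission_loss() function are illustrative assumptions, not the paper's models.

        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        problem = {
            "num_vars": 3,
            "names": ["youngs_modulus", "density", "thickness"],
            "bounds": [[6e10, 8e10], [2500.0, 2900.0], [0.001, 0.004]],
        }

        def transmission_loss(x):
            e, rho, h = x
            return 20.0 * np.log10(rho * h) + 0.5 * np.log10(e)  # crude surrogate

        X = fast_sampler.sample(problem, 1000)   # FAST search-curve samples
        Y = np.apply_along_axis(transmission_loss, 1, X)
        Si = fast.analyze(problem, Y)            # first-order (S1) and total (ST) indices
        print(Si["S1"], Si["ST"])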

  6. Fault sensitivity and wear-out analysis of VLSI systems

    NASA Astrophysics Data System (ADS)

    Choi, Gwan Seung

    1994-07-01

    This thesis describes simulation approaches to conducting fault sensitivity and wear-out failure analysis of VLSI systems. A fault-injection approach to studying transient impact in VLSI systems is developed. Through simulated fault injection at the device level and subsequent fault propagation at the gate, functional and software levels, it is possible to identify critical bottlenecks in dependability. Techniques to speed up the fault simulation and to perform statistical analysis of fault impact are developed. A wear-out simulation environment is also developed to closely mimic dynamic sequences of wear-out events in a device through time, to localize weak locations/aspects of the target chip, and to allow generation of the time-to-failure (TTF) distribution of the VLSI chip as a whole. First, an accurate simulation of a target chip and its application code is performed to acquire trace data (real workload) on switch activity. Then, using this switch activity information, wear-out of each component in the entire chip is simulated using Monte Carlo techniques.
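
    The chip-level TTF distribution described above can be mimicked with a compact Monte Carlo loop: sample a wear-out time per component class and take the minimum per trial. The component classes, Weibull parameters, and activity scaling below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials = 100_000

        # (Weibull shape, scale in hours, relative switching activity) per class
        components = [
            (2.0, 8.0e5, 1.0),   # e.g. a gate-oxide wear-out class
            (1.5, 1.2e6, 0.6),   # e.g. an electromigration class
            (3.0, 9.0e5, 0.3),
        ]

        ttf = np.full(n_trials, np.inf)
        for shape, scale, activity in components:
            # higher switching activity accelerates wear-out (shorter effective scale)
            t = (scale / max(activity, 1e-9)) * rng.weibull(shape, size=n_trials)
            ttf = np.minimum(ttf, t)   # the chip fails at its first component failure

        print(f"median TTF   = {np.median(ttf):.3e} h")
        print(f"10th pct TTF = {np.percentile(ttf, 10):.3e} h")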

  7. Design-oriented thermoelastic analysis, sensitivities, and approximations for shape optimization of aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Bhatia, Manav

    Aerospace structures operate under extreme thermal environments. The hot external aerothermal environment of high-Mach-number flight leads to high structural temperatures. At the same time, cold internal cryogenic fuel tanks and thermal management concepts such as the Thermal Protection System (TPS) and active cooling result in a high temperature gradient through the structure. Multidisciplinary Design Optimization (MDO) of such structures requires a design-oriented approach to this problem. The broad goal of this research effort is to advance the existing state of the art towards MDO of large-scale aerospace structures. The components required for this work are a sensitivity analysis formulation encompassing the scope of the physical phenomena being addressed, a set of efficient approximations to cut down the required CPU cost, and a general-purpose design-oriented numerical analysis tool capable of handling problems of this scope. In this work, finite element discretization has been used to solve the conduction partial differential equations, and the Poljak method has been used to discretize the integral equations for internal cavity radiation. A methodology has been established to couple the conduction finite element analysis to the internal radiation analysis. This formulation is then extended for sensitivity analysis of heat transfer and coupled thermal-structural problems. The most CPU-intensive operations in the overall analysis have been identified, and approximation methods have been proposed to reduce the associated CPU cost. Results establish the effectiveness of these approximation methods, which lead to very high savings in CPU cost without any deterioration in the results. The results presented in this dissertation include two cases: a hexahedral cavity with internal and external radiation with conducting walls, and a wing box which is geometrically similar to the orbiter wing.

  8. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based second-moment method, which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computation than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
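
    The AMV idea can be sketched in a few lines for independent normal inputs: linearize at the mean to get the most-probable-point (MPP) direction, then re-evaluate the true function along that locus to build a corrected CDF estimate. The response g() and the input statistics below are illustrative, not from the paper.

        import numpy as np
        from scipy.stats import norm

        mu = np.array([10.0, 2.0])      # input means
        sigma = np.array([1.0, 0.2])    # input standard deviations

        def g(x):
            return x[0] ** 2 / (1.0 + x[1])   # made-up nonlinear response

        # numerical gradient of g at the mean
        eps = 1e-6
        grad = np.array([(g(mu + eps * np.eye(2)[i]) - g(mu - eps * np.eye(2)[i])) / (2 * eps)
                         for i in range(2)])

        a = grad * sigma
        alpha = a / np.linalg.norm(a)   # unit MPP direction in standard-normal space

        for beta in (-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0):
            x_mpp = mu + beta * sigma * alpha   # MPP locus mapped back to X-space
            print(f"P(Z <= {g(x_mpp):8.3f}) ~= {norm.cdf(beta):.4f}")

    The mean-value second-moment method would stop at the linearization; the AMV correction is the re-evaluation of the true g at each MPP, which is what captures the non-normality of the output distribution.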

  9. The analysis of protein pharmaceuticals: near future advances.

    PubMed

    Middaugh, C R

    1994-01-01

    The analysis of protein pharmaceuticals currently involves a complex series of chromatographic, electrophoretic, spectroscopic, immunological and biological measurements to unequivocally establish their identity, purity and integrity. In this review, I briefly consider the possibility that at least the functional identity and integrity of a protein drug might be established either by a single analysis involving X-ray diffraction, NMR or mass spectrometry, or by a chromatographically based multi-detector system in which a number of critical parameters are essentially simultaneously determined. The use of a protein standard to obtain comparative measurements and new advances in the technology of each of these methods are emphasized. A current major obstacle to the implementation of these approaches is the frequent microheterogeneity of protein preparations. The evolution of biological assays into measurements examining more defined intracellular signal transduction events or based on novel biosensors, as well as the analysis of vaccines, is also briefly discussed. PMID:7765931

  10. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and, in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis.
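
    The sketch below sets up the kind of problem LSENS targets, at toy scale: a stiff A -> B -> C mechanism integrated with an implicit solver, plus a divided-difference sensitivity of a species concentration to one rate coefficient. The mechanism and rate constants are illustrative, not LSENS data.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, k1, k2):
            a, b, c = y
            return [-k1 * a, k1 * a - k2 * b, k2 * b]

        def final_b(k1, k2=50.0, t_end=0.1):
            sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0], args=(k1, k2),
                            method="Radau", rtol=1e-8, atol=1e-10)  # stiff solver
            return sol.y[1, -1]

        k1 = 100.0
        dk = 1e-4 * k1
        sens = (final_b(k1 + dk) - final_b(k1 - dk)) / (2 * dk)  # d[B]/dk1 at t_end
        print(f"[B](t_end) = {final_b(k1):.6f}, d[B]/dk1 = {sens:.3e}")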

  11. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, the need for many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and the use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the differential of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes which are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: numerical errors are treated as special sensitivities alongside other physical uncertainties, and only parameters having large uncertainty effects on design criteria are considered; (3) greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
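
    Forward sensitivity analysis as described above can be demonstrated on a scalar ODE by integrating the sensitivity equation alongside the state. For u' = -k*u the exact sensitivity du/dk = -t*u0*exp(-k*t) is known, so the sketch (with made-up values) can check itself.

        import numpy as np
        from scipy.integrate import solve_ivp

        k, u0 = 2.0, 1.0

        def augmented(t, y):
            u, s = y
            du = -k * u        # f(u, k)
            ds = -k * s - u    # (df/du) * s + df/dk
            return [du, ds]

        sol = solve_ivp(augmented, (0.0, 2.0), [u0, 0.0],
                        rtol=1e-10, atol=1e-12, dense_output=True)
        t = 2.0
        u_num, s_num = sol.sol(t)
        print(f"numerical du/dk = {s_num:.8f}, exact = {-t * u0 * np.exp(-k * t):.8f}")

    The same augmentation applied to a discretized PDE is what yields the sensitivities that FSA uses to separate numerical error from physical uncertainty.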

  12. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
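
    A minimal illustration of the AD-versus-DD comparison, using the jax library as a stand-in for the source-transformation AD tool applied in the paper; the lift() function and its two "damping coefficient" inputs are purely illustrative.

        import jax
        import jax.numpy as jnp

        def lift(params):
            k2, k4 = params   # hypothetical 2nd- and 4th-order damping coefficients
            return jnp.sin(k2) * jnp.exp(-k4) + 0.1 * k2 * k4

        p = jnp.array([0.5, 0.02])

        ad = jax.grad(lift)(p)   # AD: exact derivatives to machine precision
        h = 1e-5
        dd = jnp.array([(lift(p + h * jnp.eye(2)[i]) - lift(p - h * jnp.eye(2)[i])) / (2 * h)
                        for i in range(2)])
        print("AD:", ad, " DD:", dd)   # DD carries truncation/step-size error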

  13. Limited sensitivity analysis of ARAIM availability for LPV-200 over Australia using real data

    NASA Astrophysics Data System (ADS)

    El-Mowafy, A.; Yang, C.

    2016-01-01

    The current availability of Advanced Receiver Autonomous Integrity Monitoring (ARAIM) for LPV-200 in aviation is experimentally investigated using real navigation data and GPS measurements collected at 60 stations across Australia. The ARAIM algorithm and fault probabilities are first discussed. A sensitivity analysis of availability to changes in the elevation mask angle and the error model parameters URA, URE, and nominal biases for integrity and accuracy used for computation of the protection level is presented. It is shown that incorporation of other GNSS constellations with GPS in ARAIM is needed to achieve LPV-200 Australia-wide. The inclusion of BeiDou with GPS at two test sites in Western and Eastern Australia demonstrates the promising potential of achieving this goal.

  14. Composite Structure Modeling and Analysis of Advanced Aircraft Fuselage Concepts

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Sorokach, Michael R.

    2015-01-01

    The NASA Environmentally Responsible Aviation (ERA) project and the Boeing Company are collaborating to advance the unitized damage-arresting composite airframe technology with application to the Hybrid-Wing-Body (HWB) aircraft. The testing of a HWB fuselage section with Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) construction is presently being conducted at NASA Langley. Based on lessons learned from previous HWB structural design studies, improved finite-element models (FEM) of the HWB multi-bay and bulkhead assembly are developed to evaluate the performance of the PRSEUS construction. In order to assess the comparative weight-reduction benefits of the PRSEUS technology, conventional skin-stringer-frame models of a cylindrical and a double-bubble section fuselage concept are developed. Stress analysis with the design cabin-pressure load and scenario-based case studies are conducted for design improvement in each case. Alternate analyses with stitched composite hat-stringers and C-frames are also presented, in addition to the foam-core sandwich frame and pultruded rod-stringer construction. The FEM structural stresses, strains and weights are computed and compared for relative weight/strength benefit assessment. The structural analysis and specific weight comparison of these stitched composite advanced aircraft fuselage concepts demonstrated that the pressurized HWB fuselage section assembly can be structurally as efficient as the conventional cylindrical fuselage section with composite stringer-frame and PRSEUS construction, and significantly better than the conventional aluminum construction and the double-bubble section concept.

  15. Oxidative Lipidomics Coming of Age: Advances in Analysis of Oxidized Phospholipids in Physiology and Pathology

    PubMed Central

    Pitt, Andrew R.

    2015-01-01

    Significance: Oxidized phospholipids are now well recognized as markers of biological oxidative stress and bioactive molecules with both pro-inflammatory and anti-inflammatory effects. While analytical methods continue to be developed for studies of generic lipid oxidation, mass spectrometry (MS) has underpinned the advances in knowledge of specific oxidized phospholipids by allowing their identification and characterization, and it is responsible for the expansion of oxidative lipidomics. Recent Advances: Studies of oxidized phospholipids in biological samples, from both animal models and clinical samples, have been facilitated by the recent improvements in MS, especially targeted routines that depend on the fragmentation pattern of the parent molecular ion and improved resolution and mass accuracy. MS can be used to identify selectively individual compounds or groups of compounds with common features, which greatly improves the sensitivity and specificity of detection. Application of these methods has enabled important advances in understanding the mechanisms of inflammatory diseases such as atherosclerosis, steatohepatitis, leprosy, and cystic fibrosis, and it offers potential for developing biomarkers of molecular aspects of the diseases. Critical Issues and Future Directions: The future in this field will depend on the development of improved MS technologies, such as ion mobility, novel enrichment methods and databases, and software for data analysis, owing to the very large amount of data generated in these experiments. MS imaging of oxidized phospholipids in tissue is an additional exciting emerging direction that can be expected to advance understanding of physiology and disease. Antioxid. Redox Signal. 22, 1646–1666. PMID:25694038

  16. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1st start), multiple altitude restarts, component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
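
    A closed-form toy version of the engine-out portion of such a reliability model is shown below: a stage with n engines succeeds if no failure is catastrophic and at most k benign shutdowns occur. All parameter values are invented, and real models also account for effects such as the longer burn imposed on the surviving engines.

        from math import comb

        def stage_reliability(n, r, c, k):
            """n engines, per-engine reliability r, catastrophic fraction c of
            failures, engine-out capability k (tolerated benign shutdowns)."""
            q = 1.0 - r                 # per-engine failure probability
            p_benign = q * (1.0 - c)    # benign, contained shutdown
            return sum(comb(n, j) * p_benign ** j * r ** (n - j) for j in range(k + 1))

        for k in (0, 1, 2):
            print(f"engine-out capability {k}: R = {stage_reliability(9, 0.995, 0.2, k):.5f}")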

  17. Advanced water window x-ray microscope design and analysis

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Wang, C.; Jiang, W.; Lin, J.

    1992-01-01

    The project was focused on the design and analysis of an advanced water window soft-x-ray microscope. The activities were accomplished by completing three tasks contained in the statement of work of this contract. The new results confirm that in order to achieve resolutions greater than three times the wavelength of the incident radiation, it will be necessary to use aspherical mirror surfaces and to use graded multilayer coatings on the secondary (to accommodate the large variations of the angle of incidence over the secondary when operating the microscope at numerical apertures of 0.35 or greater). The results are included in a manuscript which is enclosed in the Appendix.

  18. Advanced Wireless Power Transfer Vehicle and Infrastructure Analysis (Presentation)

    SciTech Connect

    Gonder, J.; Brooker, A.; Burton, E.; Wang, J.; Konan, A.

    2014-06-01

    This presentation discusses current research at NREL on advanced wireless power transfer vehicle and infrastructure analysis. The potential benefits of E-roadway include more electrified driving miles from battery electric vehicles, plug-in hybrid electric vehicles, or even properly equipped hybrid electric vehicles (i.e., more electrified miles could be obtained from a given battery size, or electrified driving miles could be maintained while using smaller and less expensive batteries, thereby increasing cost competitiveness and potential market penetration). The system optimization aspect is key given the potential impact of this technology on the vehicles, the power grid and the road infrastructure.

  19. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.

  20. Creep analysis of fuel plates for the Advanced Neutron Source

    SciTech Connect

    Swinson, W.F.; Yahr, G.T.

    1994-11-01

    The reactor for the planned Advanced Neutron Source will use closely spaced arrays of fuel plates. The plates are thin and will have a core containing enriched uranium silicide fuel clad in aluminum. The heat load caused by the nuclear reactions within the fuel plates will be removed by flowing high-velocity heavy water through narrow channels between the plates. However, the plates will still be at elevated temperatures while in service, and the potential for excessive plate deformation because of creep must be considered. An analysis that includes creep in computing the deformation and stresses caused by temperature over a given time span has been performed and is reported herein.

  1. Life-cycle cost analysis of advanced design mixer pump

    SciTech Connect

    Hall, M.N., Westinghouse Hanford

    1996-07-23

    This analysis provides cost justification for the Advanced Design Mixer Pump program based on the cost benefit to the Hanford Site of four mixer pump systems, defined in terms of life-cycle cost. A computer model is used to estimate the total number of service hours necessary for each mixer pump to operate over the 20-year retrieval sequence period for single-shell tank waste. The study also considered the double-shell tank waste retrieved prior to the single-shell tank waste, which is considered the initial retrieval.

  2. Computer modeling for advanced life support system analysis.

    PubMed

    Drysdale, A

    1997-01-01

    This article discusses the equivalent mass approach to advanced life support system analysis, describes a computer model developed to use this approach, and presents early results from modeling the NASA JSC BioPlex. The model is built using an object-oriented approach and G2, a commercially available modeling package. Cost factor equivalencies are given for the Volosin scenarios. Plant data from NASA KSC and Utah State University (USU) are used, together with configuration data from the BioPlex design effort. Initial results focus on the importance of obtaining high plant productivity with a flight-like configuration. PMID:11540448

  3. ADVISOR: a systems analysis tool for advanced vehicle modeling

    NASA Astrophysics Data System (ADS)

    Markel, T.; Brooker, A.; Hendricks, T.; Johnson, V.; Kelly, K.; Kramer, B.; O'Keefe, M.; Sprik, S.; Wipke, K.

    This paper provides an overview of the Advanced Vehicle Simulator (ADVISOR), the US Department of Energy's (DOE's) vehicle simulation tool written in the MATLAB/Simulink environment and developed by the National Renewable Energy Laboratory. ADVISOR provides the vehicle engineering community with an easy-to-use, flexible, yet robust and supported analysis package for advanced vehicle modeling. It is primarily used to quantify the fuel economy, the performance, and the emissions of vehicles that use alternative technologies including fuel cells, batteries, electric motors, and internal combustion engines in hybrid (i.e. multiple power source) configurations. It excels at quantifying the relative change that can be expected due to the implementation of a technology compared to a baseline scenario. ADVISOR's capabilities and limitations are presented and the power source models that are included in ADVISOR are discussed. Finally, several applications of the tool are presented to highlight ADVISOR's functionality. The content of this paper is based on a presentation made at the 'Development of Advanced Battery Engineering Models' workshop held in Crystal City, Virginia in August 2001.

  4. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
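
    As a concrete instance of the variance-based (Sobol-type) family mentioned above, the sketch below computes first-order and total-order indices with the SALib package on its built-in Ishigami test function; the sample size and the test function are illustrative choices, not taken from the presentation.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol
        from SALib.test_functions import Ishigami

        problem = {
            "num_vars": 3,
            "names": ["x1", "x2", "x3"],
            "bounds": [[-np.pi, np.pi]] * 3,
        }

        X = saltelli.sample(problem, 1024)   # (2*D + 2) * 1024 model runs
        Y = Ishigami.evaluate(X)
        Si = sobol.analyze(problem, Y)
        print("first-order:", Si["S1"])      # per-factor variance contribution
        print("total-order:", Si["ST"])      # includes interaction effects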

  5. Parameter estimation using a complete signal and inspiral templates for low-mass binary black holes with Advanced LIGO sensitivity

    NASA Astrophysics Data System (ADS)

    Cho, Hee-Suk

    2015-12-01

    We study the validity of inspiral templates in gravitational wave data analysis with Advanced LIGO sensitivity for low-mass binary black holes with total masses of M ≤ 30 M⊙. We mainly focus on the nonspinning system. As our complete inspiral-merger-ringdown waveform model (IMR), we assume the phenomenological model 'PhenomA', and define our inspiral template model (I_merg) by taking the inspiral part of IMR up to the merger frequency (f_merg). We first calculate the true statistical uncertainties using IMR signals and IMR templates. Next, using IMR signals and I_merg templates, we calculate fitting factors and systematic biases, and compare the biases with the true statistical uncertainties. We find that the valid criteria of the bank of I_merg templates are M_crit ~ 24 M⊙ for detection (if M > M_crit, the fitting factor is smaller than 0.97) and M_crit ~ 26 M⊙ for parameter estimation (if M > M_crit, the systematic bias is larger than the true statistical uncertainty at a signal-to-noise ratio of 20). In order to see the dependence on the cutoff frequency of the inspiral waveforms, we define another inspiral model, I_isco, which is terminated at the innermost-stable-circular-orbit frequency (f_isco < f_merg). We find that the valid criteria of the bank of I_isco templates are M_crit ~ 15 M⊙ and ~ 17 M⊙ for detection and parameter estimation, respectively. We investigate the statistical uncertainties for the inspiral template models considering various signal-to-noise ratios, and compare those to the true statistical uncertainties. We also consider the aligned-spin system with fixed mass ratio (m1/m2 = 3) and spin (χ = 0.5) by employing the recent phenomenological model, 'PhenomC'. In this case, we find that the true statistical uncertainties can be much larger
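
    A toy version of the fitting-factor computation used in this record: the best match of a signal over a template bank, maximized over time shifts via the FFT. A flat (white) noise spectrum and chirp-like stand-in waveforms are assumed; real analyses weight the inner product by the detector noise PSD.

        import numpy as np

        def match(signal, template):
            s = signal / np.linalg.norm(signal)
            h = template / np.linalg.norm(template)
            corr = np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(h)))
            return np.max(np.abs(corr))   # maximize over circular time shifts

        t = np.linspace(0.0, 1.0, 4096)
        signal = np.sin(2 * np.pi * (30 * t + 40 * t ** 2))   # "full waveform" stand-in

        bank = [np.sin(2 * np.pi * (30 * t + a * t ** 2)) for a in np.arange(30, 50, 2)]
        ff = max(match(signal, h) for h in bank)
        print(f"fitting factor = {ff:.4f}")   # below 0.97 would flag an inadequate bank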

  6. Sensitivity analysis of surface runoff generation in urban flood forecasting.

    PubMed

    Simões, N E; Leitão, J P; Maksimović, C; Sá Marques, A; Pina, R

    2010-01-01

    Reliable flood forecasting requires hydraulic models capable of estimating pluvial flooding fast enough to enable successful operational responses. Increased computational speed can be achieved by using a 1D/1D model, since 2D models are too computationally demanding. Further speed-ups can be achieved by simplifying the 1D network models, removing or changing some secondary elements. The Urban Water Research Group (UWRG) of Imperial College London developed a tool that automatically analyses, quantifies and generates the 1D overland flow network. The overland flow network features (ponds and flow pathways) generated by this methodology depend on the number of sewer network manholes and sewer inlets, as some of the overland flow pathways start at manhole (or sewer inlet) locations. Thus, if a simplified version of the sewer network has fewer manholes (or sewer inlets) than the original one, the overland flow network will consequently be different. This paper compares different overland flow networks generated with different levels of sewer network skeletonisation. A sensitivity analysis is carried out in one catchment area in Coimbra, Portugal, in order to evaluate overland flow network characteristics. PMID:20453333

  7. Potassium Buffering in the Neurovascular Unit: Models and Sensitivity Analysis

    PubMed Central

    Witthoft, Alexandra; Filosa, Jessica A.; Karniadakis, George Em

    2013-01-01

    Astrocytes are critical regulators of neural and neurovascular network communication. Potassium transport is a central mechanism behind their many functions. Astrocytes encircle synapses with their distal processes, which express two potassium pumps (Na-K and NKCC) and an inward rectifying potassium channel (Kir), whereas the vessel-adjacent endfeet express Kir and BK potassium channels. We provide a detailed model of potassium flow throughout the neurovascular unit (synaptic region, astrocytes, and arteriole) for the cortex of the young brain. Our model reproduces several phenomena observed experimentally: functional hyperemia, in which neural activity triggers astrocytic potassium release at the perivascular endfoot, inducing arteriole dilation; K+ undershoot in the synaptic space after periods of neural activity; neurally induced astrocyte hyperpolarization during Kir blockade. Our results suggest that the dynamics of the vascular response during functional hyperemia are governed by astrocytic Kir for the fast onset and astrocytic BK for maintaining dilation. The model supports the hypothesis that K+ undershoot is caused by excessive astrocytic uptake through Na-K and NKCC pumps, whereas the effect is balanced by Kir. We address parametric uncertainty using high-dimensional stochastic sensitivity analysis and identify possible model limitations. PMID:24209849

  8. Decentred comparative research: Context sensitive analysis of maternal health care.

    PubMed

    Wrede, Sirpa; Benoit, Cecilia; Bourgeault, Ivy Lynn; van Teijlingen, Edwin R; Sandall, Jane; De Vries, Raymond G

    2006-12-01

    Cross-national comparison is an important tool for health care research, but too often those who use this method fail to consider important international differences in the social organisation of health care and in the relationship between health care practices and social experience. In this article we make the case for a context-sensitive and reflexive analysis of health care that allows researchers to understand the important ways that health care systems and practices are situated in time and place. Our approach, decentred comparative research, addresses the often unacknowledged ethnocentrism of traditional comparative research. Decentred cross-national research is a method that draws on the socially situated and distributed expertise of an international research team to develop key concepts and research questions. We used the decentred method to fashion a multilevel framework that used the meso level of organisation (i.e., health care organisations, professional groups and other concrete organisations) as an analytical starting point in our international study of maternity care in eight countries. Our method departs from traditional comparative health systems research, which is most often conducted at the macro level. Our approach will help researchers develop new and socially robust knowledge about health care. PMID:16962695

  9. Sensitivity analysis of near-infrared functional lymphatic imaging

    NASA Astrophysics Data System (ADS)

    Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon

    2012-06-01

    Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time.

  10. Sensitivity analysis of imaging geometries for prostate diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaodong; Zhu, Timothy C.

    2008-02-01

    Endoscopic and interstitial diffuse optical tomography have been studied in clinical investigations for imaging prostate tissues, yet there is no comprehensive comparison of how these two imaging geometries affect the quality of the reconstructed images. In this study, the effect of imaging geometry is investigated by comparing the cross-sections of the Jacobian sensitivity matrix and reconstructed images for three-dimensional mathematical phantoms. Next, the effect of source-detector configurations and the number of measurements in both geometries is evaluated using singular value analysis. The amount of information contained in each source-detector configuration and for different numbers of measurements is compared. Further, the effect of different measurement strategies for 3D endoscopic and interstitial tomography is examined. The pros and cons of using in-plane and off-plane measurements are discussed. Results showed that reconstruction in the interstitial geometry outperforms the endoscopic geometry when deeper anomalies are present. Eight sources with 8 detectors and 6 sources with 12 detectors are sufficient for 2D reconstruction with the endoscopic and interstitial geometries, respectively. For a 3D problem, the quantitative accuracy in the interstitial geometry is significantly improved by using off-plane measurements, but only slightly in the endoscopic geometry.
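
    The singular value analysis mentioned above can be sketched by counting the singular values of a sensitivity (Jacobian) matrix that rise above a noise-related threshold; more retained singular values means more independent information in the measurements. The random matrices below are stand-ins for the true photon-sensitivity Jacobians.

        import numpy as np

        rng = np.random.default_rng(1)

        def useful_sv_count(n_meas, n_nodes, rel_threshold=1e-2):
            J = rng.standard_normal((n_meas, n_nodes))   # stand-in Jacobian
            sv = np.linalg.svd(J, compute_uv=False)
            return int(np.sum(sv > rel_threshold * sv[0]))

        print("8 sources x 8 detectors:  ", useful_sv_count(8 * 8, 500))
        print("6 sources x 12 detectors: ", useful_sv_count(6 * 12, 500))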

  11. A sensitive transcriptome analysis method that can detect unknown transcripts

    PubMed Central

    Fukumura, Ryutaro; Takahashi, Hirokazu; Saito, Toshiyuki; Tsutsumi, Yoko; Fujimori, Akira; Sato, Shinji; Tatsumi, Kouichi; Araki, Ryoko; Abe, Masumi

    2003-01-01

    We have developed an AFLP-based gene expression profiling method called 'high coverage expression profiling' (HiCEP) analysis. By making improvements to the selective PCR technique we have reduced the rate of false positive peaks to ∼4%, and consequently the number of peaks, including overlapping peaks, has been markedly decreased. As a result we can determine the relationship between peaks and original transcripts unequivocally. This will make it practical to prepare a database of all peaks, allowing gene assignment without having to isolate individual peaks. This precise selection also enables us to easily clone peaks of interest and predict the corresponding gene for each peak in some species. The procedure is highly reproducible and sensitive enough to detect even a 1.2-fold difference in gene expression. Most importantly, the low false positive rate enables us to analyze gene expression with wide coverage by means of restriction enzymes with four- instead of six-nucleotide recognition sites for fingerprinting mRNAs. Therefore, the method detects 70–80% of all transcripts, including non-coding transcripts, unknown and known genes. Moreover, the method requires no sequence information and so is applicable even to eukaryotes for which no genome information is available. PMID:12907746

  12. Nonparametric Bounds and Sensitivity Analysis of Treatment Effects

    PubMed Central

    Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.

    2015-01-01

    This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
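
    A minimal sketch of approach (i): Manski-style worst-case bounds on the average treatment effect for an outcome bounded in [0, 1], assuming nothing about the unobserved counterfactuals. The simulated data are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 10_000
        T = rng.integers(0, 2, n)                                   # observed treatment
        Y = np.clip(0.3 + 0.2 * T + rng.normal(0, 0.2, n), 0, 1)    # bounded outcome

        p = T.mean()
        m1, m0 = Y[T == 1].mean(), Y[T == 0].mean()

        # The unobserved counterfactual means can be anywhere in [0, 1],
        # so E[Y(1)] and E[Y(0)] are only partially identified.
        lo1, hi1 = m1 * p, m1 * p + (1 - p)
        lo0, hi0 = m0 * (1 - p), m0 * (1 - p) + p

        print(f"ATE bounds: [{lo1 - hi0:.3f}, {hi1 - lo0:.3f}]")   # width is 1 by construction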

  13. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    NASA Astrophysics Data System (ADS)

    Arbanas, G.; Williams, M. L.; Leal, L. C.; Dunn, M. E.; Khuwaileh, B. A.; Wang, C.; Abdel-Khalik, H.

    2015-01-01

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, "AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications," Trans. Am. Nucl. Soc. 86, 118-119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, and doing this at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
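
    The generalized linear least-squares adjustment referred to above has a compact matrix form, sketched below with numpy; the prior covariance C, sensitivity matrix S, and IBE covariance V are illustrative stand-ins, not AMPX/INSURE data.

        import numpy as np

        x = np.array([1.00, 2.00])     # prior parameters (e.g. cross sections)
        C = np.diag([0.04, 0.09])      # prior parameter covariance
        S = np.array([[0.8, 0.3],
                      [0.2, 0.9]])     # sensitivities of IBE responses to x
        V = np.diag([0.01, 0.01])      # IBE measurement covariance
        m = np.array([1.55, 2.05])     # measured IBE responses

        K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)   # least-squares gain
        x_post = x + K @ (m - S @ x)                   # adjusted parameters
        C_post = C - K @ S @ C                         # reduced posterior covariance
        print("posterior x:", x_post)
        print("posterior variances:", np.diag(C_post))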

  14. Recent Advances of Cobalt(II/III) Redox Couples for Dye-Sensitized Solar Cell Applications.

    PubMed

    Giribabu, Lingamallu; Bolligarla, Ramababu; Panigrahi, Mallika

    2015-08-01

    In recent years dye-sensitized solar cells (DSSCs) have emerged as one of the alternatives for addressing the global energy crisis. DSSCs have achieved a certified efficiency of >11% using the I⁻/I₃⁻ redox couple. In order to commercialize the technology, almost all components of the device have to be improved. Among the various components of DSSCs, the redox couple that regenerates the oxidized sensitizer plays a crucial role in achieving high efficiency and durability of the cell. However, the I⁻/I₃⁻ redox couple has certain limitations, such as the absorption of triiodide up to 430 nm and the volatile nature of iodine, which also corrodes silver-based current collectors. These limitations are obstructing the commercialization of this technology, so alternative redox couples must be identified. In this regard, the Co(II/III) redox couple is found to be the best alternative to the existing I⁻/I₃⁻ redox couple. Recently, DSSC test cell efficiency has risen to 13% by using the cobalt redox couple. This review emphasizes recent developments in Co(II/III) redox couples for DSSC applications. PMID:26081939

  15. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    SciTech Connect

    Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany

    2015-01-01

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with their uncertainties, needed to achieve a given target response uncertainty for one or more nuclear applications, and to do so at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way, and how it can be used to optimize the uncertainties of IBEs and differential cross section data simultaneously.

  16. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    SciTech Connect

    Arbanas, G.; Williams, M.L.; Leal, L.C.; Dunn, M.E.; Khuwaileh, B.A.; Wang, C.; Abdel-Khalik, H.

    2015-01-15

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, “AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications,” Trans. Am. Nucl. Soc. 86, 118–119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with their uncertainties, needed to achieve a given target response uncertainty for one or more nuclear applications, and to do so at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way and how it can be used to optimize the uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.

  17. Recent advances in trace analysis of pharmaceutical genotoxic impurities.

    PubMed

    Liu, David Q; Sun, Mingjiang; Kord, Alireza S

    2010-04-01

    Genotoxic impurities (GTIs) in pharmaceuticals at trace levels are of increasing concern to both the pharmaceutical industry and regulatory agencies because of their potential for human carcinogenesis. Determination of these impurities at ppm levels requires highly sensitive analytical methodologies, which poses tremendous challenges for analytical communities in pharmaceutical R&D. Practical guidance with respect to the analytical determination of diverse classes of GTIs is currently lacking in the literature. This article provides an industrial perspective on the analysis of the various structural classes of GTIs that are commonly encountered during chemical development. The recent literature is reviewed, and several practical approaches for enhancing analyte detectability developed in recent years are highlighted. The article is organized into the following main sections: (1) a trace analysis toolbox covering sample introduction, separation, and detection techniques, as well as several 'general' approaches for enhancing detectability; (2) method development: chemical structure and property-based approaches; (3) method validation considerations; and (4) testing and control strategies in process chemistry. The general approaches for enhancing detection sensitivity include chemical derivatization, 'matrix deactivation', and 'coordination ion spray-mass spectrometry'. Leveraging these general approaches in method development greatly facilitates the analysis of poorly detectable or unstable/reactive GTIs. It is the authors' intent to provide a contemporary perspective on method development and validation that can guide analytical scientists in the pharmaceutical industry. PMID:20022442

  18. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV, and infrared spectroscopic analysis on spatial scales from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology, and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS; it describes some of the currently available microscopes and some of their early applications.

  19. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  20. Tool for Sizing Analysis of the Advanced Life Support System

    NASA Technical Reports Server (NTRS)

    Yeh, Hue-Hsie Jannivine; Brown, Cheryl B.; Jeng, Frank J.

    2005-01-01

    Advanced Life Support Sizing Analysis Tool (ALSSAT) is a computer model for sizing and analyzing designs of environmental control and life support systems (ECLSS) for spacecraft and surface habitats involved in the exploration of Mars and the Moon. It performs conceptual designs of advanced life support (ALS) subsystems that utilize physicochemical and biological processes to recycle air and water and to process wastes, in order to reduce the need for resource resupply. By assuming steady-state operations, ALSSAT provides a means of investigating combinations of such subsystem technologies and thereby assists in determining the most cost-effective technology combination available. ALSSAT can perform sizing analyses of ALS subsystems whose operation is dynamic or steady-state in nature. Developed in Microsoft Excel with the Visual Basic programming language, ALSSAT performs multiple-case trade studies based on the calculated ECLSS mass, volume, power, and Equivalent System Mass, as well as parametric studies obtained by varying the input parameters. ALSSAT's modular format is specifically designed for ease of future maintenance and upgrades.
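    The Equivalent System Mass figure of merit that ALSSAT reports collapses mass, volume, power, cooling, and crew time into a single mass-like number via location-dependent equivalency factors. A minimal sketch follows; the factor values and subsystem numbers are placeholders, not ALSSAT's data:

    ```python
    # ESM = M + V*Veq + P*Peq + C*Ceq + CT*CTeq, expressed in kg; the
    # equivalency factors below are illustrative placeholders.
    def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                               crewtime_h, v_eq=66.7, p_eq=237.0,
                               c_eq=60.0, ct_eq=0.46):
        return (mass_kg + volume_m3 * v_eq + power_kw * p_eq
                + cooling_kw * c_eq + crewtime_h * ct_eq)

    # Compare two hypothetical water-recovery subsystem options
    print(equivalent_system_mass(450.0, 2.1, 1.3, 1.3, 120.0))
    print(equivalent_system_mass(520.0, 1.5, 0.9, 0.9, 90.0))
    ```

    In a trade study, the option with the lower ESM wins even if its raw hardware mass is higher, which is exactly the comparison the multiple-case trade studies above automate.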

  1. Association between polymorphisms of BAG-1 and XPD and chemotherapy sensitivity in advanced non-small-cell lung cancer patients treated with vinorelbine combined cisplatin regimen.

    PubMed

    Li, Ping; Wang, Ya-Di; Cheng, Jian; Chen, Jun-Chen; Ha, Min-Wen

    2015-12-01

    BCL-2-associated athanogene 1 (BAG-1) and xeroderma pigmentosum group D (XPD) are involved in the nucleotide excision repair pathway and DNA repair. We aimed to investigate whether polymorphisms in BAG-1 and XPD affect chemotherapy sensitivity and survival in patients with advanced non-small-cell lung cancer (NSCLC) treated with a vinorelbine plus cisplatin (NP) regimen. A total of 142 patients with diagnosed advanced NSCLC were recruited into the current study, and the NP regimen was applied to all eligible patients. Polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) was used for BAG-1 (codon 324) and XPD (codons 312 and 751) genotyping. The treatment response was evaluated according to the RECIST guidelines. Progression-free survival (PFS) and overall survival (OS) were recorded as median and endpoint, respectively. For BAG-1 codon 324, the chemotherapy sensitivity in NSCLC patients with the CT genotype was 0.383 times that of patients with the CC genotype (P < 0.05). With respect to XPD codon 751, the chemotherapy sensitivity in NSCLC patients with the Lys/Gln genotype was 0.400 times that of patients with the Lys/Lys genotype (P < 0.05). In addition, NSCLC patients carrying the combined C/C genotype at codon 324 in BAG-1, Asp/Asp at XPD codon 312, and Lys/Lys at XPD codon 751 showed a higher efficacy of NP chemotherapy than those carrying mutant genotypes (all P < 0.05). Further, there were significant differences in PFS between patients with the combined C/C genotype of BAG-1 codon 324, Lys/Lys genotype of XPD codon 751, and Asp/Asp genotype of XPD codon 312 and patients carrying the BAG-1 codon 324 C/T genotype, XPD codon 751 Lys/Gln genotype, and XPD codon 312 Asp/Asn genotype (P < 0.05). Multivariate Cox regression analysis indicated that the combined wild-type of BAG-1 codon 324, XPD codon 751, and XPD codon 312 is a protective factor for OS and PFS, and clinical stage is a risk factor for OS and PFS. In conclusion, our research

  2. Advanced Automation for Ion Trap Mass Spectrometry-New Opportunities for Real-Time Autonomous Analysis

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, C. M.; Salmonson, J. D.; Yost, R. A.; Griffin, T. P.; Yates, N. A.; Lawless, James G. (Technical Monitor)

    1994-01-01

    The utility of MS/MS for both target compound analysis and the structure elucidation of unknowns has been described in a number of references. A broader acceptance of this technique has not yet been realized, as it requires large, complex, and costly instrumentation that has not been competitive with more conventional techniques. Recent advancements in ion trap mass spectrometry promise to change this situation. Although the ion trap's small size, sensitivity, and ability to perform multiple stages of mass spectrometry have made it eminently suitable for on-line, real-time monitoring applications, advanced automation techniques are required to make these capabilities more accessible to non-experts. Towards this end we have developed custom software for the design and implementation of MS/MS experiments. This software allows the user to take full advantage of the ion trap's versatility with respect to ionization techniques, scan proxies, and ion accumulation/ejection methods. Additionally, expert system software has been developed for autonomous target compound analysis. This software has been linked to ion trap control software and a commercial data system to bring all of the steps in the analysis cycle under control of the expert system. These software development efforts and their utilization for a number of trace analysis applications will be described.

  3. Integrated multidisciplinary design optimization using discrete sensitivity analysis for geometrically complex aeroelastic configurations

    NASA Astrophysics Data System (ADS)

    Newman, James Charles, III

    1997-10-01

    The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and by a Gauss-Seidel algorithm for the three-dimensional cases; at steady state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by treating the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of
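    The core linear-algebra step in the sensitivity solve can be illustrated compactly. For a shape parameter β, the discrete sensitivity dQ/dβ satisfies (∂R/∂Q)(dQ/dβ) = −∂R/∂β, which the work above solves with GMRES in incremental iterative form. The sketch below solves a stand-in sparse system with SciPy's GMRES; the tridiagonal matrix is a toy operator, not an actual Euler Jacobian:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    n = 200
    dRdQ = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
    dRdbeta = np.sin(np.linspace(0.0, np.pi, n))    # made-up residual derivative

    dQdbeta, info = gmres(dRdQ, -dRdbeta)           # sensitivity of the state
    assert info == 0, "GMRES did not converge"
    print("max |dQ/dbeta| =", np.abs(dQdbeta).max())
    ```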

  4. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high-fidelity geometry-based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line, and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., which facilitated rapid finite element analysis, sizing studies, and weight optimization. The high-quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications to the structural design of a conventional aircraft and of a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for the design and development of unconventional aircraft, the techniques for generating accurate geometry and internal sub-structure and the automated interface with high-fidelity analysis codes could also be applied to the design of vehicles for the NASA Exploration and Space Science Mission projects.

  5. Sensitivity analysis using computer calculus: A nuclear waste isolation application

    SciTech Connect

    Oblow, E.M.; Pin, F.G.; Wright, R.Q.

    1986-09-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models adding derivative-taking capabilities to the normal calculated results. The theory and applicability of the GRESS codes are described and tested against a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for a test problem involving groundwater flow in the vicinity of the Richton Salt Dome are discussed in detail. Sensitivity results are compared with analytical, perturbation, and alternate sensitivity approaches to the problem. Five-place accuracy in these sensitivity results is verified for all cases in which the effects of nonlinearities are made sufficiently small. Conclusions are drawn as to the applicability of GRESS in the problem studied and for more general large-scale modeling sensitivity studies.
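    The idea behind a derivative-augmenting precompiler can be demonstrated with forward-mode automatic differentiation in miniature: every arithmetic operation propagates a derivative alongside its value. The sketch below uses dual numbers in Python; GRESS did the analogous source transformation on full FORTRAN models. The toy model function is invented for illustration:

    ```python
    class Dual:
        """Value/derivative pair; arithmetic applies the chain rule."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def model(k):                  # toy response, stand-in for a flow model
        return 3.0 * k * k + 2.0 * k + 1.0

    y = model(Dual(2.0, 1.0))      # seed dk/dk = 1
    print(y.val, y.der)            # 17.0 and the exact sensitivity 14.0
    ```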

  6. Sensitivity analysis for OMOG and EUV photomasks characterized by UV-NIR spectroscopic ellipsometry

    NASA Astrophysics Data System (ADS)

    Heinrich, A.; Dirnstorfer, I.; Bischoff, J.; Meiner, K.; Richter, U.; Mikolajick, T.

    2013-09-01

    We investigated the potential, applicability, and advantages of spectroscopic ellipsometry (SE) for the characterization of high-end photomasks. The SE measurements were done in the ultraviolet-near infrared (UV-NIR) wavelength range from 300 nm to 980 nm, at angles of incidence (AOI) between 10° and 70°, and with a microspot size of 45 × 10 μm² (AOI = 70°). The measured Ψ and Δ spectra were modeled using rigorous coupled wave analysis (RCWA) to determine the structural parameters of a periodic array, i.e., the pitch and critical dimension (CD). Two different types of industrial photomasks consisting of line/space structures were evaluated: the reflecting extreme ultraviolet (EUV) mask and the transmitting opaque MoSi on glass (OMOG) mask. The Ψ and Δ spectra of the two masks show characteristic differences, which were related to the Rayleigh singularities and to the missing transmission diffraction in the EUV mask. In the second part of the paper, a simulation-based sensitivity analysis of the Fourier coefficients α and β is presented, which is used to define the measurement precision required to detect a CD deviation of 1%. This study was done for both mask types to investigate the influence of the stack transmission. It was found that the sensitivities to CD variations are comparable for OMOG and EUV masks. For both masks, the highest sensitivities appear close to the Rayleigh singularities and increase significantly at very low AOI. To detect a 1% CD deviation for pitches below 150 nm, a measurement precision on the order of 0.01 is required. This measurement precision can be realized with advanced optical hardware. It is concluded that UV-NIR ellipsometry is qualified to characterize photomasks down to the 13 nm technology node in 2020.

  7. Recent advances in alternative counter electrode materials for Co-mediated dye-sensitized solar cells.

    PubMed

    Yun, Sining; Liu, Yanfang; Zhang, Taihong; Ahmad, Shahzada

    2015-07-28

    Recently, considerable attention has been paid to dye-sensitized solar cells (DSSCs) based on Co²⁺/Co³⁺ redox shuttles, because of their unparalleled merits, including a higher redox potential, reduced corrosiveness towards metallic conductors, low costs, and high power conversion efficiencies (PCE) (13%). The counter electrode (CE) is an essential component in DSSCs and plays a crucial role in catalyzing Co³⁺ ion reduction in Co-based DSSCs. In this mini-review, we review recent developments in CE materials for Co-mediated DSSCs, including noble metal platinum (Pt), carbon materials, transition metal compounds (TMCs), polymers, and their corresponding hybrids, highlighting important contributions worldwide that promise low-cost, efficient, and robust Co-mediated DSSC systems. Additionally, the crucial challenges associated with employing these low-cost CE catalysts for Co-based redox couples in DSSCs are stressed. PMID:26132719

  8. Advanced probabilistic risk analysis using RAVEN and RELAP-7

    SciTech Connect

    Rabiti, Cristian; Alfonsi, Andrea; Mandelli, Diego; Cogliati, Joshua; Kinoshita, Robert

    2014-06-01

    RAVEN, under the support of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program [1], is advancing its capability to perform statistical analyses of stochastic dynamic systems. This is aligned with its mission to provide the tools needed by the Risk Informed Safety Margin Characterization (RISMC) pathway [2] under the Department of Energy (DOE) Light Water Reactor Sustainability program [3]. In particular, this task is focused on synergetic development with the RELAP-7 code [4] to advance the state of the art in the safety analysis of nuclear power plants (NPP). The investigation of the probabilistic evolution of accident scenarios for a complex system such as a nuclear power plant is not a trivial challenge. The complexity of the system to be modeled leads to demanding computational requirements even to simulate one of the many possible evolutions of an accident scenario (tens of CPU-hours). At the same time, the probabilistic analysis requires thousands of runs to investigate outcomes characterized by low probability and severe consequence (the tail problem). The milestone reported in June 2013 [5] described the capability of RAVEN to implement complex control logic and provide adequate support for the exploration of the probabilistic space using a Monte Carlo sampling strategy. Unfortunately, the Monte Carlo approach is ineffective for a problem of this complexity. In the following year of development, the RAVEN code has been extended with more sophisticated sampling strategies (grids, Latin hypercube, and adaptive sampling). This milestone report illustrates the effectiveness of those methodologies in assessing the probability of core damage following the onset of a station blackout (SBO) in a boiling water reactor (BWR). The first part of the report provides an overview of the available probabilistic analysis capabilities, ranging from the different types of distributions available, possible sampling
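    Of the sampling strategies named above, Latin hypercube sampling is simple to sketch: each parameter range is cut into n strata and exactly one sample falls in each stratum, giving better space coverage than plain Monte Carlo for the same number of expensive plant simulations. The parameter names, ranges, and damage criterion below are invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=42)
    unit = sampler.random(n=100)                   # 100 points in [0, 1]^2
    # e.g. battery depletion time [h] and diesel recovery time [h]
    samples = qmc.scale(unit, l_bounds=[2.0, 0.5], u_bounds=[8.0, 24.0])

    # Each row would drive one plant simulation; here a toy criterion
    # stands in for the simulated core-damage indicator.
    core_damage = samples[:, 0] + samples[:, 1] < 6.0
    print("estimated P(core damage) =", core_damage.mean())
    ```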

  9. Sorption of redox-sensitive elements: critical analysis

    SciTech Connect

    Strickert, R.G.

    1980-12-01

    The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes, which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion exchange probably plays a minor role in the sorption behavior of cations of the three actinide elements above. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form very insoluble compounds (oxides, hydroxides, etc.), undergo coprecipitation, or are incorporated into minerals. The exact nature of the insoluble compounds and the effects of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the expected geologic repository environment, but should be considered when extreme conditions (radiation, temperature, etc.) exist. Various experimental techniques, such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification, have been used to categorize the behavior of the various valence states.

  10. Taxonicity of anxiety sensitivity: a multi-national analysis.

    PubMed

    Bernstein, Amit; Zvolensky, Michael J; Kotov, Roman; Arrindell, Willem A; Taylor, Steven; Sandin, Bonifacio; Cox, Brian J; Stewart, Sherry H; Bouvard, Martine; Cardenas, Samuel Jurado; Eifert, Georg H; Schmidt, Norman B

    2006-01-01

    Taxometric coherent cut kinetic analyses were used to test the latent structure of anxiety sensitivity in samples from North America (Canada and United States of America), France, Mexico, Spain, and The Netherlands (total n = 2741). Anxiety sensitivity was indexed by the 36-item Anxiety Sensitivity Index--Revised (ASI-R; [J. Anxiety Disord. 12(5) (1998) 463]). Four manifest indicators of anxiety sensitivity were constructed using the ASI-R: fear of cardiovascular symptoms, fear of respiratory symptoms, fear of publicly observable anxiety reactions, and fear of mental incapacitation. Results from MAXCOV-HITMAX, internal consistency tests, analyses of simulated Monte Carlo data, and a MAMBAC external consistency test indicated that the latent structure of anxiety sensitivity was taxonic in each of the samples. The estimated base rate of the anxiety sensitivity taxon differed slightly between nations, ranging from 11.5 to 21.5%. In general, the four ASI-R based manifest indicators showed high levels of validity. Results are discussed in relation to the conceptual understanding of anxiety sensitivity, with specific emphasis on theoretical refinement of the construct. PMID:16325111
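    The MAXCOV procedure at the heart of such analyses is easy to illustrate: cases are ordered along one "input" indicator, and the covariance of two "output" indicators is computed within successive intervals; a taxonic mixture yields a peaked covariance curve, while a dimensional structure yields a comparatively flat one. The sketch below uses simulated data with a 15% base-rate taxon, not the ASI-R samples from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    taxon = rng.random(3000) < 0.15                  # 15% base-rate taxon
    shift = np.where(taxon[:, None], 1.5, 0.0)       # taxon members shifted
    x, y1, y2 = (rng.normal(size=(3000, 3)) + shift).T

    order = np.argsort(x)                            # sort cases on input x
    for chunk in np.array_split(order, 10):          # 10 ordered intervals
        cov = np.cov(y1[chunk], y2[chunk])[0, 1]
        print(f"mean x = {x[chunk].mean():5.2f}   cov(y1, y2) = {cov:5.2f}")
    ```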

  11. Advanced analysis of metal distributions in human hair

    SciTech Connect

    Kempson, Ivan M.; Skinner, William M.

    2008-06-09

    A variety of techniques (scanning electron microscopy with energy-dispersive X-ray analysis, time-of-flight secondary ion mass spectrometry, and synchrotron X-ray fluorescence) were utilized to distinguish metal contamination of hair from endogenous uptake in an individual exposed to a polluted environment, in this case a lead smelter. Evidence was sought for elements less affected by contamination and potentially indicative of biogenic activity. The unique combination of surface sensitivity, spatial resolution, and detection limits used here has provided new insight regarding hair analysis. Metals such as Ca, Fe, and Pb appeared to have little value as indicators of endogenous uptake and were mainly due to contamination. Cu and Zn, however, demonstrate behaviors worthy of further investigation into relating hair concentrations to endogenous function.

  12. The Advanced Energetic Pair Telescope (AdEPT), a High Sensitivity Medium-Energy Gamma-Ray Polarimeter

    NASA Astrophysics Data System (ADS)

    Hunter, Stanley D; De Nolfo, Georgia; Hanu, Andrei R; Krizmanic, John F; Stecker, Floyd W.; Timokhin, Andrey; Venters, Tonia M.

    2014-08-01

    Since the launch of AGILE and Fermi, the scientific progress in high-energy (Eγ > 200 MeV) gamma-ray science has been, and will continue to be, dramatic. Both of these telescopes cover a broad energy range from ~20 MeV to >10 GeV. However, neither instrument is optimized for observations below ~200 MeV, where many astrophysical objects exhibit unique, transitory behavior, such as spectral breaks, bursts, and flares. Hence, while significant progress from current observations is expected, a significant sensitivity gap will remain in the medium-energy regime (0.75-200 MeV) that has been explored only by COMPTEL and EGRET on CGRO. Tapping into this unexplored regime requires development of a telescope with significantly improved sensitivity. Our mission concept, covering ~5 to ~200 MeV, is the Advanced Energetic Pair Telescope (AdEPT). The AdEPT telescope will achieve an angular resolution of ~0.6 deg at 70 MeV, similar to the angular resolution of Fermi/LAT at ~1 GeV that brought tremendous success in identifying new sources. AdEPT will also provide unprecedented polarization sensitivity, ~1% for a 1 Crab source. The enabling technology for AdEPT is the Three-Dimensional Track Imager (3-DTI), a low-density, large-volume gas time-projection chamber with a two-dimensional readout. The 3-DTI provides high-resolution three-dimensional electron tracking with minimal Coulomb scattering, which is essential to achieve high angular resolution and polarization sensitivity. We describe the design, fabrication, and performance of the 3-DTI detector, describe the development of a 50×50×100 cm³ AdEPT prototype, and highlight a few of the key science questions that AdEPT will address.

  13. Decoupled direct method for sensitivity analysis in combustion kinetics

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1987-01-01

    An efficient, decoupled direct method for calculating the first-order sensitivity coefficients of homogeneous, batch combustion kinetic rate equations is presented. In this method the ordinary differential equations for the sensitivity coefficients are solved separately from, but sequentially with, those describing the combustion chemistry. The ordinary differential equations for the thermochemical variables are solved using an efficient, implicit method (LSODE) that automatically selects the steplength and order for each solution step. The solution procedure for the sensitivity coefficients maintains accuracy and stability by using exactly the same steplengths and numerical approximations. The method computes sensitivity coefficients with respect to any combination of the initial values of the thermochemical variables and the three rate constant parameters of the chemical reactions. The method is illustrated by application to several simple problems and, where possible, comparisons are made with exact solutions and with those obtained by other techniques.
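    The governing equations behind any direct sensitivity method are worth writing out: for dy/dt = f(y, k), the sensitivity s = ∂y/∂k obeys ds/dt = (∂f/∂y)s + ∂f/∂k. The sketch below integrates both together for a toy first-order reaction and checks against the exact answer; the decoupled method described above instead solves the (linear) sensitivity system after each chemistry step while reusing the same step sizes, a refinement this simplified sketch does not reproduce:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, u, k):
        y, s = u
        return [-k * y,           # chemistry:    dy/dt = -k y
                -k * s - y]       # sensitivity:  ds/dt = (df/dy) s + df/dk

    k = 0.5
    sol = solve_ivp(rhs, [0.0, 4.0], [1.0, 0.0], args=(k,),
                    method="LSODA", rtol=1e-10, atol=1e-12)
    y_end, s_end = sol.y[:, -1]
    print(s_end, -sol.t[-1] * y_end)   # matches exact dy/dk = -t exp(-k t)
    ```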

  14. Computational aspects of sensitivity calculations in linear transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear structural transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.

  15. AEG-1 as a predictor of sensitivity to neoadjuvant chemotherapy in advanced epithelial ovarian cancer

    PubMed Central

    Wang, Yao; Jin, Xin; Song, Hongtao; Meng, Fanling

    2016-01-01

    Objectives: Astrocyte elevated gene-1 (AEG-1) plays a critical role in tumor progression and chemoresistance. The aim of the present study was to investigate the protein expression of AEG-1 in patients with epithelial ovarian cancer (EOC) who underwent debulking surgery after neoadjuvant chemotherapy (NAC). Materials and methods: The protein expression of AEG-1 was analyzed using immunohistochemistry in 162 patients with EOC. The relationship between AEG-1 expression and chemotherapy resistance was assessed using univariate and multivariate logistic regression analyses with covariate adjustments. Results: High AEG-1 expression was significantly associated with the International Federation of Gynecology and Obstetrics stage, age, serum cancer antigen-125 concentration, histological grade, the presence of residual tumor after the interval debulking surgery, and lymph node metastasis. Furthermore, AEG-1 expression was significantly higher in NAC-resistant disease than in NAC-sensitive disease (P<0.05). Multivariate analyses indicated that elevated AEG-1 expression predicted poor survival. Conclusion: Our findings indicate that AEG-1 may be a potential new biomarker for predicting chemoresistance and poor prognoses in patients with EOC. PMID:27143933

  16. Sensitivity Analysis and Neutron Fluence Adjustment for VVER-1000 Rpv

    NASA Astrophysics Data System (ADS)

    Belousov, S.; Ilieva, Kr.; Kirilova, D.

    2003-06-01

    Adjustment of the neutron fluence at the VVER-1000 RPV inner wall has been carried out. For the purpose of this adjustment, the sensitivity of the neutron flux response to the main parameters contributing to calculation uncertainty has been computed. The obtained sensitivities, the parameter uncertainties, and activity measurement data from iron, copper, and niobium detectors positioned behind the RPV of Kozloduy NPP Unit 5 have been used in this adjustment.

  17. Phase I Study of Daily Irinotecan as a Radiation Sensitizer for Locally Advanced Pancreatic Cancer

    SciTech Connect

    Fouchardiere, Christelle de la; Negrier, Sylvie; Labrosse, Hugues; Martel Lafay, Isabelle; Desseigne, Francoise; Meeus, Pierre; Tavan, David; Petit-Laurent, Fabien; Rivoire, Michel; Perol, David; Carrie, Christian

    2010-06-01

    Purpose: The study aimed to determine the maximum tolerated dose of daily irinotecan given with concomitant radiotherapy in patients with locally advanced adenocarcinoma of the pancreas. Methods and Materials: Between September 2000 and March 2008, 36 patients with histologically proven unresectable pancreas adenocarcinoma were studied prospectively. Irinotecan was administered daily, 1 to 2 h before irradiation. Doses started at 6 mg/m² per day and were then escalated by increments of 2 mg/m² every 3 patients. Radiotherapy was administered in 2-Gy fractions, 5 fractions per week, up to a total dose of 50 Gy to the tumor volume. Inoperability was confirmed by a surgeon involved in a multidisciplinary team. All images and responses were centrally reviewed by radiologists. Results: Thirty-six patients were enrolled over a period of 8 years through eight dose levels (6 mg/m² to 20 mg/m² per day). The maximum tolerated dose was determined to be 18 mg/m² per day. The dose-limiting toxicities were nausea/vomiting, diarrhea, anorexia, dehydration, and hypokalemia. The median survival time was 12.6 months with a median follow-up of 53.8 months. The median progression-free survival time was 6.5 months, and 4 patients (11.4%) with very good responses could undergo surgery. Conclusions: The maximum tolerated dose of irinotecan is 18 mg/m² per day for 5 weeks. Dose-limiting toxicities are mainly gastrointestinal. Even though efficacy was not the aim of this study, the results are very promising, with a median survival time of 12.6 months.

  18. Sensitivity analysis of the age-structured malaria transmission model

    NASA Astrophysics Data System (ADS)

    Addawe, Joel M.; Lope, Jose Ernie C.

    2012-09-01

    We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two groups: pre-school humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission, and compute the sensitivity indices of the reproductive number and of the endemic equilibrium point with respect to each set. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito, while for the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportions of infectious pre-school humans and of the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shorten the time to acquiring immunity can be successful in preventing the spread of malaria.
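    Sensitivity indices of this kind are typically the normalized forward indices S_p = (p/R0) ∂R0/∂p. The sketch below evaluates them by central differences on a classical Ross-Macdonald R0 rather than the paper's age-structured model, and the baseline parameter values are illustrative only:

    ```python
    import numpy as np

    def r0(params):
        m, a, b, c, mu, r, tau = params   # classical Ross-Macdonald form
        return (m * a**2 * b * c * np.exp(-mu * tau)) / (r * mu)

    names = ["m", "a", "b", "c", "mu", "r", "tau"]
    base = np.array([10.0, 0.3, 0.5, 0.5, 0.1, 0.05, 10.0])

    for i, name in enumerate(names):
        hi, lo = base.copy(), base.copy()
        hi[i] *= 1.001
        lo[i] *= 0.999
        dr0 = (r0(hi) - r0(lo)) / (hi[i] - lo[i])
        print(f"S_{name} = {dr0 * base[i] / r0(base):+.3f}")
    # The biting rate a dominates (|S_a| = 2), mirroring the qualitative
    # conclusion that bite-rate parameters control transmission.
    ```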

  19. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  20. Advanced Stress, Strain And Geometrical Analysis In Semiconductor Devices

    SciTech Connect

    Neels, Antonia; Dommann, Alex; Niedermann, Philippe; Farub, Claudiu; Kaenel, Hans von

    2010-11-24

    High stresses and defect densities increase the risk of semiconductor device failure. Given the dramatically smaller volumes of semiconductor devices and the new bonding techniques applied to them, new testing and qualification methods are needed. Reliability studies of potential failure sources have an impact on design and are essential to assure the long-term functioning of the device. In this paper, the applications of advanced high-resolution X-ray diffraction (HRXRD) methods to strain, defect, and deformation analysis of semiconductor devices are discussed. HRXRD with rocking curves (RCs) and reciprocal space maps (RSMs) is used as an accurate, non-destructive experimental method to evaluate the crystalline quality and, more precisely for the given samples, the in-situ strain, defects, and geometrical parameters such as tilt and bending of the device. The combination with advanced FEM simulations makes it possible to support semiconductor device design efficiently.

  1. Carbonaceous materials and their advances as a counter electrode in dye-sensitized solar cells: challenges and prospects.

    PubMed

    Kouhnavard, Mojgan; Ludin, Norasikin Ahmad; Ghaffari, Babak V; Sopian, Kamarozzaman; Ikeda, Shoichiro

    2015-05-11

    Dye-sensitized solar cells (DSSCs) serve as low-cost alternatives to silicon solar cells because of their low material and fabrication costs. Usually, they utilize Pt as the counter electrode (CE) to catalyze the iodine redox couple and to complete the electric circuit. Given that Pt is a rare and expensive metal, various carbon materials have been intensively investigated because of their low costs, high surface areas, excellent electrochemical stabilities, reasonable electrochemical activities, and high corrosion resistance. In this feature article, we provide an overview of recent studies on the electrochemical properties and photovoltaic performance of carbon-based CEs (e.g., activated carbon, nanosized carbon, carbon black, graphene, graphite, carbon nanotubes, and composite carbon). We focus on the scientific challenges associated with each material and highlight recent advances achieved in overcoming these obstacles. Finally, we discuss possible future directions for this field of research aimed at obtaining highly efficient DSSCs. PMID:25925421

  2. Microstructure-sensitive extreme value probabilities of fatigue in advanced engineering alloys

    NASA Astrophysics Data System (ADS)

    Przybyla, Craig P.

    A novel microstructure-sensitive extreme value probabilistic framework is introduced to evaluate material performance/variability for damage evolution processes (e.g., fatigue, fracture, creep). This framework employs newly developed extreme value marked correlation functions (EVMCF) to identify the coupled microstructure attributes (e.g., phase/grain size, grain orientation, grain misorientation) that have the greatest statistical relevance to the extreme value response variables (e.g., stress, elastic/plastic strain) that describe the damage evolution processes of interest. This improves on previous approaches, which considered only the extreme value distributions of a single microstructure attribute and gave no consideration to how coupled microstructure attributes affect the distributions of extreme value response. The framework also utilizes computational modeling techniques to identify correlations between microstructure attributes that significantly raise or lower the magnitudes of the damage response variables of interest, through the simulation of multiple statistical volume elements (SVE). Each SVE for a given response is constructed to be a statistical sample of the entire microstructure ensemble (i.e., the bulk material); therefore, the response of interest is not expected to be the same in each SVE. This is in contrast to computational simulation of a single representative volume element (RVE), which is often untenably large for response variables that depend on extreme value microstructure attributes. The framework has been demonstrated in the context of characterizing microstructure-sensitive high cycle fatigue (HCF) variability due to the processes of fatigue crack formation (nucleation and microstructurally small crack growth) in polycrystalline metallic alloys. Specifically, the framework is exercised to
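    The statistical machinery of the framework can be illustrated in miniature: each SVE contributes one extreme response (e.g., its maximum fatigue indicator), and an extreme value distribution is fitted to the ensemble of SVE maxima. The sketch below uses a lognormal stand-in for the grain-scale response rather than a crystal plasticity simulation:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(7)
    # 200 SVEs, 5000 "grains" each; keep the max response per SVE
    sve_maxima = np.array([rng.lognormal(0.0, 0.25, 5000).max()
                           for _ in range(200)])

    shape, loc, scale = genextreme.fit(sve_maxima)
    print("GEV shape/loc/scale:", shape, loc, scale)
    print("99th-percentile extreme response:",
          genextreme.ppf(0.99, shape, loc=loc, scale=scale))
    ```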

  3. Systems analysis and futuristic designs of advanced biofuel factory concepts.

    SciTech Connect

    Chianelli, Russ; Leathers, James; Thoma, Steven George; Celina, Mathias Christopher; Gupta, Vipin P.

    2007-10-01

    The U.S. is addicted to petroleum--a dependency that periodically shocks the economy, compromises national security, and adversely affects the environment. If liquid fuels remain the main energy source for U.S. transportation for the foreseeable future, the system solution is the production of new liquid fuels that can directly displace diesel and gasoline. This study focuses on advanced concepts for biofuel factory production, describing three design concepts: biopetroleum, biodiesel, and higher alcohols. A general schematic is illustrated for each concept with technical description and analysis for each factory design. Looking beyond current biofuel pursuits by industry, this study explores unconventional feedstocks (e.g., extremophiles), out-of-favor reaction processes (e.g., radiation-induced catalytic cracking), and production of new fuel sources traditionally deemed undesirable (e.g., fusel oils). These concepts lay the foundation and path for future basic science and applied engineering to displace petroleum as a transportation energy source for good.

  4. Advanced XAS Analysis for Investigating Fuel Cell Electrocatalysts

    SciTech Connect

    Witkowska, Agnieszka; Principi, Emiliano; Di Cicco, Andrea; Marassi, Roberto

    2007-02-02

    In this paper we present an accurate structural study of a Pt-based electrode by means of XAS, accounting for both the size distribution of the catalytic nanoparticles and sample inhomogeneities. The morphology and size distribution of the nanoparticles were investigated by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray diffraction techniques. XAS data analysis was performed using advanced multiple-scattering techniques (GNXAS), disentangling the possible effects of surface-atom contributions in the nanoparticles and of sample inhomogeneity, both of which reduce the intensity of the structural signal. This approach to XAS investigation of the electrodes of FC devices can represent a viable and reliable way to understand structural details that are important for producing more efficient catalytic materials.

  5. Analysis of biofluids by paper spray MS: advances and challenges.

    PubMed

    Manicke, Nicholas E; Bills, Brandon J; Zhang, Chengsen

    2016-03-01

    Paper spray MS is part of a cohort of ambient ionization or direct analysis methods that seek to analyze complex samples without prior sample preparation. Extraction and electrospray ionization occur directly from the paper substrate upon which a dried matrix spot is stored. Paper spray MS is capable of detecting drugs directly from dried blood, plasma and urine spots at the low ng/ml to pg/ml levels without sample preparation. No front end separation is performed, so MS/MS or high-resolution MS is required. Here, we discuss paper spray methodology, give a comprehensive literature review of the use of paper spray MS for bioanalysis, discuss technological advancements and variations on this technique and discuss some of its limitations. PMID:26916068

  6. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data are entered through an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post-processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is implementation of computer optimization of both the geometry and the operating parameters. The principal features of the program and its capabilities are presented.

  7. Recent trends in the advanced analysis of bioactive fatty acids.

    PubMed

    Ruiz-Rodriguez, Alejandro; Reglero, Guillermo; Ibañez, Elena

    2010-01-20

    The consumption of dietary fats has long been associated with chronic diseases such as obesity, diabetes, cancer, arthritis, asthma, and cardiovascular disease. Although some controversy still exists about the role of dietary fats in human health, certain fats have demonstrated positive effects in the modulation of abnormal fatty acid and eicosanoid metabolism, both of which are associated with chronic diseases. Among the different fats, some fatty acids can be used as functional ingredients, such as alpha-linolenic acid (ALA), arachidonic acid (AA), eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), gamma-linolenic acid (GLA), stearidonic acid (STA), and conjugated linoleic acid (CLA), among others. The present review is focused on recent developments in fatty acid (FA) analysis, covering sample preparation methods such as extraction, fractionation, and derivatization, as well as new advances in chromatographic methods such as GC and HPLC. Special attention is paid to trans fatty acids because of their increasing interest to the food industry. PMID:19525080

  8. Beam Optics Analysis — An Advanced 3D Trajectory Code

    NASA Astrophysics Data System (ADS)

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-01

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data are entered through an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post-processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is implementation of computer optimization of both the geometry and the operating parameters. The principal features of the program and its capabilities are presented.

  9. Advanced functional network analysis in the geosciences: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Runge, Jakob; Schultz, Hanna C. H.; Wiedermann, Marc; Zech, Alraune; Feldhoff, Jan; Rheinwalt, Aljoscha; Kutza, Hannes; Radebach, Alexander; Marwan, Norbert; Kurths, Jürgen

    2013-04-01

    Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented, and easily parallelizable package written in the Python language. It allows for constructing functional networks (also known as climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory, such as measures for networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn makes it possible to study the complex dynamics of geoscientific systems, as recorded by time series, by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined, drawing on several examples from climatology.
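    The functional-network construction itself is compact enough to sketch in plain numpy (pyunicorn's own classes add the advanced measures on top; the sketch below illustrates the concept rather than the package's API, and all data are synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_nodes, n_time = 50, 500
    common = rng.normal(size=n_time)               # shared "climate" signal
    data = 0.6 * common + rng.normal(size=(n_nodes, n_time))

    corr = np.corrcoef(data)                       # node-by-node correlations
    np.fill_diagonal(corr, 0.0)
    adjacency = np.abs(corr) > 0.25                # threshold defines edges

    degree = adjacency.sum(axis=1)
    density = adjacency.sum() / (n_nodes * (n_nodes - 1))
    print("mean degree:", degree.mean(), "edge density:", density)
    ```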

  10. An analytical approach to grid sensitivity analysis for NACA four-digit wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics, with emphasis on grids and surface parameterization, is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to the design parameters of a wing section, and the resulting sensitivity is compared with that obtained using a finite difference approach. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
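    Because the NACA four-digit half-thickness distribution is linear in the thickness parameter, its analytic surface sensitivity is available in closed form and can be checked against a finite difference, mirroring (for the surface alone, not the full grid) the comparison made in the paper:

    ```python
    import numpy as np

    def half_thickness(x, t):
        """NACA four-digit half-thickness at chord stations x, thickness t."""
        return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x
                          - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

    x = np.linspace(0.0, 1.0, 11)
    t, h = 0.12, 1e-6
    analytic = half_thickness(x, t) / t            # y is linear in t
    fd = (half_thickness(x, t + h) - half_thickness(x, t - h)) / (2 * h)
    print(np.max(np.abs(analytic - fd)))           # agreement to roundoff
    ```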

  11. Advances in aerospace lubricant and wear metal analysis

    SciTech Connect

    Saba, C.S.; Centers, P.W.

    1995-09-01

    Wear metal analysis continues to play an effective diagnostic role in condition monitoring of gas turbine engines. Since the early 1960s the United States' military services have used the spectrometric oil analysis program (SOAP) to monitor the condition of aircraft engines. The SOAP has proven effective in increasing reliability and fleet readiness and in avoiding losses of lives and machinery. Even though historical data have demonstrated the success of the SOAP in detecting imminent engine failures verified by maintenance personnel, the SOAP is not a stand-alone technique and is limited in its detection of large metallic wear debris. In response, improved laboratory, portable, in-line, and on-line diagnostic techniques to perfect SOAP and oil condition monitoring have been sought. The status of research and development, as well as the direction of future developmental activities in oil analysis driven by technological opportunities, advances in engine development, and changes in military missions, are reviewed and discussed. 54 refs.

  12. Thermal-Hydrological Sensitivity Analysis of Underground Coal Gasification

    SciTech Connect

    Buscheck, T A; Hao, Y; Morris, J P; Burton, E A

    2009-10-05

    Specifically, we conducted a parameter sensitivity analysis of the influence of the thermal and hydrological properties of the host coal, caprock, and bedrock on cavity temperature and steam production.

  13. How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Razavi, Saman

    2016-04-01

    Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
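
    A minimal sketch of the variogram idea behind VARS, under the simplifying assumption of a two-parameter toy model and one-directional lags (not the authors' implementation): steeper small-lag variograms indicate more influential parameters.

    ```python
    # gamma(h) = 0.5 * mean[(y(x + h*e_i) - y(x))^2] along parameter axis i.
    import numpy as np

    def model(x1, x2):
        return np.sin(3.0 * x1) + 0.1 * x2 ** 2   # toy response surface

    def directional_variogram(f, axis, lags, n=2000, seed=0):
        rng = np.random.default_rng(seed)
        base = rng.uniform(0.0, 1.0, size=(n, 2))
        gamma = []
        for h in lags:
            shifted = base.copy()
            shifted[:, axis] = base[:, axis] + h  # toy model is defined
            dy = f(shifted[:, 0], shifted[:, 1]) - f(base[:, 0], base[:, 1])
            gamma.append(0.5 * np.mean(dy ** 2))  # everywhere, so shifted
        return np.array(gamma)                    # points may leave [0, 1]

    lags = np.array([0.05, 0.1, 0.2])
    print("x1:", directional_variogram(model, 0, lags))
    print("x2:", directional_variogram(model, 1, lags))
    ```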

  14. Global sensitivity analysis in control-augmented structural synthesis

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1989-01-01

    In this paper, an integrated approach to structural/control design is proposed in which variables in both the passive (structural) and active (control) disciplines of an optimization process are changed simultaneously. The global sensitivity equation (GSE) method of Sobieszczanski-Sobieski (1988) is used to obtain the behavior sensitivity derivatives necessary for the linear approximations used in the parallel multidisciplinary synthesis problem. The GSE allows for the decoupling of large systems into smaller subsystems and thus makes it possible to determine the local sensitivities of each subsystem's outputs to its inputs and parameters. The advantages of using the GSE method are demonstrated using a finite-element representation of a truss structure equipped with active lateral displacement controllers, which is undergoing forced vibration.
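
    The GSE itself reduces to a small linear solve once the local partial sensitivities are known. The sketch below uses made-up scalar partials for two coupled subsystems Y1 = f1(X, Y2) and Y2 = f2(X, Y1); only the structure of the equation follows Sobieszczanski-Sobieski's formulation.

    ```python
    # Global sensitivity equation: [[I, -dY1/dY2], [-dY2/dY1, I]] dY/dX = dF/dX.
    import numpy as np

    dY1_dY2 = np.array([[0.3]])   # local coupling partials (made-up numbers)
    dY2_dY1 = np.array([[0.5]])
    dY1_dX = np.array([[1.0]])    # local partials w.r.t. the design variable
    dY2_dX = np.array([[2.0]])

    I = np.eye(1)
    lhs = np.block([[I, -dY1_dY2], [-dY2_dY1, I]])
    rhs = np.vstack([dY1_dX, dY2_dX])
    total = np.linalg.solve(lhs, rhs)
    print(total)   # total derivatives dY1/dX, dY2/dX including coupling
    ```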

  15. Variability-based global sensitivity analysis of circuit response

    NASA Astrophysics Data System (ADS)

    Opalski, Leszek J.

    2014-11-01

    The research problem of interest in this paper is how to determine, efficiently and objectively, the most and the least influential parameters of a multimodule electronic system, given the system model f and the module parameter variation ranges. The author investigates whether existing generic global sensitivity methods are applicable to electronic circuit design, even though they were developed (and successfully applied) in quite distant engineering areas. The response time of a photodiode detector analog front-end system is used to reveal the capability of the selected global sensitivity approaches under study.

  16. Fractal Analysis of Stress Sensitivity of Permeability in Porous Media

    NASA Astrophysics Data System (ADS)

    Tan, Xiao-Hua; Li, Xiao-Ping; Liu, Jian-Yi; Zhang, Lie-Hui; Cai, Jianchao

    2015-12-01

    A permeability model for porous media considering stress sensitivity is derived based on the mechanics of materials and the fractal characteristics of the solid cluster size distribution. The permeability of porous media considering stress sensitivity is related to the solid cluster fractal dimension, the solid cluster fractal tortuosity dimension, the solid cluster minimum and maximum diameters, Young's modulus, Poisson's ratio, and a power index. Every parameter has a clear physical meaning without the use of empirical constants. The model predictions of permeability show good agreement with those obtained by the available experimental expression. The proposed model may be conducive to a better understanding of the mechanism of flow in elastic porous media.

  17. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods, i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
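
    The MPP search at the heart of both approaches can be written as a constrained minimization in standard-normal space. A generic sketch with a hypothetical linear limit state (not the paper's examples):

    ```python
    # FORM-style MPP search: minimize ||u|| subject to g(u) = 0; the
    # reliability index is beta = ||u*|| and Pf ~ Phi(-beta).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def g(u):
        # toy limit state: failure when u1 + u2 exceeds 3
        return 3.0 - u[0] - u[1]

    res = minimize(lambda u: np.dot(u, u), x0=[1.0, 1.0],
                   constraints={"type": "eq", "fun": g}, method="SLSQP")
    u_star = res.x                        # most probable point
    beta = np.linalg.norm(u_star)         # reliability index
    print(u_star, beta, norm.cdf(-beta))  # MPP ~ (1.5, 1.5), beta ~ 2.12
    ```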

  18. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).

  19. Large-scale transient sensitivity analysis of a radiation damaged bipolar junction transistor.

    SciTech Connect

    Hoekstra, Robert John; Gay, David M.; Bartlett, Roscoe Ainsworth; Phipps, Eric Todd

    2007-11-01

    Automatic differentiation (AD) is useful in transient sensitivity analysis of a computational simulation of a bipolar junction transistor subject to radiation damage. We used forward-mode AD, implemented in a new Trilinos package called Sacado, to compute analytic derivatives for implicit time integration and forward sensitivity analysis. Sacado addresses element-based simulation codes written in C++ and works well with forward sensitivity analysis as implemented in the Trilinos time-integration package Rythmos. The forward sensitivity calculation is significantly more efficient and robust than finite differencing.
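
    Forward-mode AD of the kind Sacado provides can be illustrated with a toy dual-number class; this Python sketch shows the principle only, and Sacado's actual C++ interface differs.

    ```python
    # Forward-mode AD via dual numbers: carry (value, derivative) through
    # overloaded arithmetic; the product rule lives in __mul__.
    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1    # df/dx = 6x + 2

    x = Dual(2.0, 1.0)                  # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)                 # 17.0, 14.0
    ```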

  20. Design, analysis, and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Minning, C.

    1982-01-01

    Design sensitivities are established for the development of photovoltaic module criteria and the definition of needed research tasks. The program consists of three phases. In Phase I, analytical models were developed to perform optical, thermal, electrical, and structural analyses on candidate encapsulation systems. From these analyses several candidate systems will be selected for qualification testing during Phase II. Additionally, during Phase II, test specimens of various types will be constructed and tested to determine the validity of the analysis methodology developed in Phase I. In Phse III, a finalized optimum design based on knowledge gained in Phase I and II will be developed. All verification testing was completed during this period. Preliminary results and observations are discussed. Descriptions of the thermal, thermal structural, and structural deflection test setups are included.

  1. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
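
    A minimal end-to-end example of the sampling-based workflow (steps 2, 3, and 5), using Latin hypercube sampling and Spearman rank correlations on a toy model; assumes scipy >= 1.7 for the qmc module.

    ```python
    import numpy as np
    from scipy.stats import qmc, spearmanr

    sampler = qmc.LatinHypercube(d=2, seed=1)
    u = sampler.random(500)                             # samples on [0, 1)^2
    x = qmc.scale(u, l_bounds=[0.5, 10.0], u_bounds=[1.5, 50.0])
    x1, x2 = x[:, 0], x[:, 1]

    y = x1 ** 2 + 0.01 * x2                             # toy analysis model

    for name, xi in [("x1", x1), ("x2", x2)]:
        rho, _ = spearmanr(xi, y)                       # rank correlation as
        print(name, round(rho, 3))                      # sensitivity indicator
    ```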

  2. Sensitivity Analysis for Hierarchical Models Employing "t" Level-1 Assumptions.

    ERIC Educational Resources Information Center

    Seltzer, Michael; Novak, John; Choi, Kilchan; Lim, Nelson

    2002-01-01

    Examines the ways in which level-1 outliers can impact the estimation of fixed effects and random effects in hierarchical models (HMs). Also outlines and illustrates the use of Markov Chain Monte Carlo algorithms for conducting sensitivity analyses under "t" level-1 assumptions, including algorithms for settings in which the degrees of freedom at…

  3. Fecal bacteria source characterization and sensitivity analysis of SWAT 2005

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Soil and Water Assessment Tool (SWAT) version 2005 includes a microbial sub-model to simulate fecal bacteria transport at the watershed scale. The objectives of this study were to demonstrate methods to characterize fecal coliform bacteria (FCB) source loads and to assess the model sensitivity t...

  4. Intelligence and Interpersonal Sensitivity: A Meta-Analysis

    ERIC Educational Resources Information Center

    Murphy, Nora A.; Hall, Judith A.

    2011-01-01

    A meta-analytic review investigated the association between general intelligence and interpersonal sensitivity. The review involved 38 independent samples with 2988 total participants. There was a highly significant small-to-medium effect for intelligence measures to be correlated with decoding accuracy (r = 0.19, p < 0.001). Significant…

  5. Advanced High Temperature Reactor Systems and Economic Analysis

    SciTech Connect

    Holcomb, David Eugene; Peretz, Fred J; Qualls, A L

    2011-09-01

    The Advanced High Temperature Reactor (AHTR) is a design concept for a large-output [3400 MW(t)] fluoride-salt-cooled high-temperature reactor (FHR). FHRs, by definition, feature low-pressure liquid fluoride salt cooling, coated-particle fuel, a high-temperature power cycle, and fully passive decay heat rejection. The AHTR's large thermal output enables direct comparison of its performance and requirements with other high-output reactor concepts. As high-temperature plants, FHRs can support either high-efficiency electricity generation or industrial process heat production. The AHTR analysis presented in this report is limited to the electricity generation mission. FHRs, in principle, have the potential to be low-cost electricity producers while maintaining full passive safety. However, no FHR has been built, and no FHR design has reached the stage of maturity where realistic economic analysis can be performed. The system design effort described in this report represents early steps along the design path toward being able to predict the cost and performance characteristics of the AHTR as well as toward being able to identify the technology developments necessary to build an FHR power plant. While FHRs represent a distinct reactor class, they inherit desirable attributes from other thermal power plants whose characteristics can be studied to provide general guidance on plant configuration, anticipated performance, and costs. Molten salt reactors provide experience on the materials, procedures, and components necessary to use liquid fluoride salts. Liquid metal reactors provide design experience on using low-pressure liquid coolants, passive decay heat removal, and hot refueling. High temperature gas-cooled reactors provide experience with coated particle fuel and graphite components. Light water reactors (LWRs) show the potential of transparent, high-heat-capacity coolants with low chemical reactivity. Modern coal-fired power plants provide design experience with

  6. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances

    PubMed Central

    Alarifi, Abdulrahman; Al-Salman, AbdulMalik; Alsaleh, Mansour; Alnafessah, Ahmad; Al-Hadhrami, Suheer; Al-Ammar, Mai A.; Al-Khalifa, Hend S.

    2016-01-01

    In recent years, indoor positioning has emerged as a critical function in many end-user applications, including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task, in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space. PMID:27196906

  7. Inside Single Cells: Quantitative Analysis with Advanced Optics and Nanomaterials

    PubMed Central

    Cui, Yi; Irudayaraj, Joseph

    2014-01-01

    Single cell explorations offer a unique window to inspect molecules and events relevant to mechanisms and heterogeneity constituting the central dogma of biology. A large number of nucleic acids, proteins, metabolites and small molecules are involved in determining and fine-tuning the state and function of a single cell at a given time point. Advanced optical platforms and nanotools provide tremendous opportunities to probe intracellular components with single-molecule accuracy, as well as promising tools to adjust single cell activity. In order to obtain quantitative information (e.g. molecular quantity, kinetics and stoichiometry) within an intact cell, achieving the observation with comparable spatiotemporal resolution is a challenge. For single cell studies, both the method of detection and the biocompatibility are critical factors as they determine the feasibility, especially when considering live cell analysis. Although a considerable proportion of single cell methodologies depend on specialized expertise and expensive instruments, it is our expectation that the information content and implication will outweigh the costs given the impact on life science enabled by single cell analysis. PMID:25430077

  8. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types to specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562

  9. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances.

    PubMed

    Alarifi, Abdulrahman; Al-Salman, AbdulMalik; Alsaleh, Mansour; Alnafessah, Ahmad; Al-Hadhrami, Suheer; Al-Ammar, Mai A; Al-Khalifa, Hend S

    2016-01-01

    In recent years, indoor positioning has emerged as a critical function in many end-user applications, including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task, in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space. PMID:27196906

  10. Inside single cells: quantitative analysis with advanced optics and nanomaterials.

    PubMed

    Cui, Yi; Irudayaraj, Joseph

    2015-01-01

    Single-cell explorations offer a unique window to inspect molecules and events relevant to mechanisms and heterogeneity constituting the central dogma of biology. A large number of nucleic acids, proteins, metabolites, and small molecules are involved in determining and fine-tuning the state and function of a single cell at a given time point. Advanced optical platforms and nanotools provide tremendous opportunities to probe intracellular components with single-molecule accuracy, as well as promising tools to adjust single-cell activity. To obtain quantitative information (e.g., molecular quantity, kinetics, and stoichiometry) within an intact cell, achieving the observation with comparable spatiotemporal resolution is a challenge. For single-cell studies, both the method of detection and the biocompatibility are critical factors as they determine the feasibility, especially when considering live-cell analysis. Although a considerable proportion of single-cell methodologies depend on specialized expertise and expensive instruments, it is our expectation that the information content and implication will outweigh the costs given the impact on life science enabled by single-cell analysis. PMID:25430077

  11. Advances in genome-wide DNA methylation analysis

    PubMed Central

    Gupta, Romi; Nagarajan, Arvindhan; Wajapeyee, Narendra

    2013-01-01

    The covalent DNA modification of cytosine at position 5 (5-methylcytosine; 5mC) has emerged as an important epigenetic mark most commonly present in the context of CpG dinucleotides in mammalian cells. In pluripotent stem cells and plants, it is also found in non-CpG and CpNpG contexts, respectively. 5mC has important implications in a diverse set of biological processes, including transcriptional regulation. Aberrant DNA methylation has been shown to be associated with a wide variety of human ailments and thus is the focus of active investigation. Methods used for detecting DNA methylation have revolutionized our understanding of this epigenetic mark and provided new insights into its role in diverse biological functions. Here we describe recent technological advances in genome-wide DNA methylation analysis and discuss their relative utility and drawbacks, providing specific examples from studies that have used these technologies for genome-wide DNA methylation analysis to address important biological questions. Finally, we discuss a newly identified covalent DNA modification, 5-hydroxymethylcytosine (5hmC), and speculate on its possible biological function, as well as describe a new methodology that can distinguish 5hmC from 5mC. PMID:20964631

  12. The role of experiments and of sensitivity analysis in simulation validation strategies with emphasis on reactor physics

    SciTech Connect

    Giuseppe Palmiotti; Massimo Salvatores

    2013-02-01

    The complementary role of experiments and of sensitivity analysis has been and still is a key feature of validation strategies used in the field of simulation tools for nuclear reactor design. The present paper gives a summary of the development of more and more sophisticated validation strategies, up to the present trend toward science-based validation approaches. Most examples, including some very recent original developments, are drawn from the field of neutronics, which has traditionally provided cutting-edge advances in simulation tool validation.

  13. Sensitivity analysis of the GNSS derived Victoria plate motion

    NASA Astrophysics Data System (ADS)

    Apolinário, João; Fernandes, Rui; Bos, Machiel

    2014-05-01

    Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were/are affected by missing data and offsets. In addition, some time-series were close to the minimal threshold value considered necessary to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data (a longer data-span for some stations) by extending the data-span used: Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations became available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al., 2013), which can use different noise models and estimate offsets and seasonal signals simultaneously. Seasonal signal estimation is another important issue: since the time-series are rather short or have large data gaps at some stations, the seasonal signals can still have some effect on the estimated trends, as shown by Blewitt and Lavallée (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they, detected and undetected, can influence the estimated plate motion. The times of offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the

  14. Advanced In-Situ Detection and Chemical Analysis of Interstellar Dust Particles

    NASA Astrophysics Data System (ADS)

    Sternovsky, Z.; Gemer, A.; Gruen, E.; Horanyi, M.; Kempf, S.; Maute, K.; Postberg, F.; Srama, R.; Williams, E.; O'Brien, L.; Rocha, J. R. R.

    2015-12-01

    The Ulysses dust detector discovered that interstellar dust particles pass through the solar system. The Hyperdust instrument is being developed for the in-situ detection and analysis of these particles to determine their elemental, chemical and isotopic compositions. Hyperdust builds on the heritage of previous successful instruments, e.g. the Cosmic Dust Analyzer (CDA) on Cassini. Hyperdust combines a highly sensitive Dust Trajectory Sensor (DTS) and the high mass resolution Chemical Analyzer (CA). The DTS will detect dust particles as small as 0.3 μm in radius, and the velocity vector information is used to confirm the interstellar origin and/or reveal the dynamics of the interactions within the solar system. The CA has an effective target area of > 600 cm2 and achieves a mass resolution in excess of 200, considerably higher than that of CDA, enabled by an advanced ion optics design. The Hyperdust instrument is in the final phases of development to TRL 6.

  15. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Space Shuttle Main Engine (SSME) has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis; performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data, and these techniques will be incorporated into a fully automated system; (2) develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB); this ATMS system will convert a tremendous amount of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information, and a high compression ratio can be achieved to allow a minimal storage requirement while providing fast signature retrieval, pattern comparison, and identification capabilities; and (3) integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of

  16. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1993-01-01

    The SSME has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) Develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system. (2) Develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert tremendous amounts of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow the minimal storage requirement, while providing fast signature retrieval, pattern comparison, and identification capabilities. (3) Integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for a quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate

  17. Advanced Diagnostic and Prognostic Testbed (ADAPT) Testability Analysis Report

    NASA Technical Reports Server (NTRS)

    Ossenfort, John

    2008-01-01

    As system designs become more complex, determining the best locations to add sensors and test points for the purpose of testing and monitoring these designs becomes more difficult. Not only must the designer take into consideration all real and potential faults of the system, he or she must also find efficient ways of detecting and isolating those faults. Because sensors and cabling take up valuable space and weight on a system, and given constraints on bandwidth and power, it is even more difficult to add sensors into these complex designs after the design has been completed. As a result, a number of software tools have been developed to assist the system designer in proper placement of these sensors during the system design phase of a project. One of the key functions provided by many of these software programs is a testability analysis of the system, essentially an evaluation of how observable the system behavior is using available tests. During the design phase, testability metrics can help guide the designer in improving the inherent testability of the design. This may include adding, removing, or modifying tests; breaking up feedback loops; or changing the system to reduce fault propagation. Given a set of test requirements, the analysis can also help to verify that the system will meet those requirements. Of course, a testability analysis requires that a software model of the physical system be available. For the analysis to be most effective in guiding system design, this model should ideally be constructed in parallel with these efforts. The purpose of this paper is to present the final testability results of the Advanced Diagnostic and Prognostic Testbed (ADAPT) after the system model was completed. The tool chosen to build the model and to perform the testability analysis is the Testability Engineering and Maintenance System Designer (TEAMS-Designer). The TEAMS toolset is intended to be a solution spanning all phases of the system, from design and

  18. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  19. Advanced AEM by Comprehensive Analysis and Modeling of System Drift

    NASA Astrophysics Data System (ADS)

    Schiller, Arnulf; Klune, Klaus; Schattauer, Ingrid

    2010-05-01

    The quality of the assessment of risks arising from environmental hazards strongly depends on the spatial and temporal distribution of the data collected in a survey area. Natural hazards generally emerge from wide areas, as in the case of volcanoes or landslides. Conventional surface measurements are restricted to a few lines or locations and often can't be conducted in difficult terrain, so they only give a spatially and temporally limited data set and therefore limit the reliability of risk analysis. Aero-geophysical measurements potentially provide a valuable tool for completing the data set, as they can be performed over a wide area, even above difficult terrain, within a short time. A most desirable opportunity in the course of such measurements is the ascertainment of the dynamics of such potentially hazardous environmental processes. This necessitates repeated and reproducible measurements. Current HEM systems can't accomplish this adequately due to their system-immanent drift and, in some cases, poor signal-to-noise ratio. So, to develop comprehensive concepts for advancing state-of-the-art HEM systems into a valuable tool for data acquisition in risk assessment or hydrological problems, different studies have been undertaken; these form the contents of the presented work conducted in the course of the project HIRISK (Helicopter Based Electromagnetic System for Advanced Environmental Risk Assessment - FWF L-354 N10, supported by the Austrian Science Fund). The methodology is based upon two paths: A - comprehensive experimental testing on an existing HEM system serving as an experimental platform; B - the setup of a numerical model which is continuously refined according to the experimental data. The model then serves to simulate the experimental as well as alternative configurations and to analyze them with respect to their drift behavior. Finally, concepts for minimizing the drift are derived and tested. Different test series - stationary on ground as well

  20. Long vs. short-term energy storage: sensitivity analysis.

    SciTech Connect

    Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (Advanced Energy Analysis, Piedmont, CA)

    2007-07-01

    This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology, asymmetric lead-carbon capacitors, has also been added. Energy storage technologies are examined for three application categories (bulk energy storage, distributed generation, and power quality) with significant variations in discharge time and storage capacity. Sensitivity analyses include the cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is the hours of storage available for discharge.

  1. Sensitive glow discharge ion source for aerosol and gas analysis

    DOEpatents

    Reilly, Peter T. A.

    2007-08-14

    A high sensitivity glow discharge ion source system for analyzing particles includes an aerodynamic lens having a plurality of constrictions for receiving an aerosol including at least one analyte particle in a carrier gas and focusing the analyte particles into a collimated particle beam. A separator separates the carrier gas from the analyte particle beam, wherein the analyte particle beam or vapors derived from the analyte particle beam are selectively transmitted out of the separator. A glow discharge ionization source includes a discharge chamber having an entrance orifice for receiving the analyte particle beam or analyte vapors, and a target electrode and discharge electrode therein. An electric field applied between the target electrode and discharge electrode generates an analyte ion stream from the analyte vapors, which is directed out of the discharge chamber through an exit orifice, such as to a mass spectrometer. High analyte sensitivity is obtained by pumping the discharge chamber exclusively through the exit orifice and the entrance orifice.

  2. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J.; Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint-based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
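
    The breakdown the authors address is easy to reproduce: a naive finite-difference estimate of the sensitivity of a long-time average does not settle as the perturbation shrinks. A toy demonstration on the logistic map (our example, not the paper's Lorenz case):

    ```python
    # Finite-difference estimate of d<x>/dr for the chaotic logistic map:
    # the estimate varies strongly with h because the sampling error of the
    # finite-time average is amplified by 1/(2h).
    def long_time_mean(r, n=100000, x0=0.3, burn=1000):
        x, total = x0, 0.0
        for i in range(n + burn):
            x = r * x * (1.0 - x)        # logistic map
            if i >= burn:
                total += x
        return total / n

    r = 3.8
    for h in [1e-2, 1e-4, 1e-6]:
        fd = (long_time_mean(r + h) - long_time_mean(r - h)) / (2 * h)
        print(h, fd)                     # no convergence as h shrinks
    ```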

  3. Sensitivity analysis of eigenvalues for an electro-hydraulic servomechanism

    NASA Astrophysics Data System (ADS)

    Stoia-Djeska, M.; Safta, C. A.; Halanay, A.; Petrescu, C.

    2012-11-01

    Electro-hydraulic servomechanisms (EHSM) are important components of flight control systems and their role is to control the movement of the flying control surfaces in response to the movement of the cockpit controls. As flight-control systems, the EHSMs have a fast dynamic response, a high power-to-inertia ratio and high control accuracy. The paper is devoted to the study of the sensitivity of an electro-hydraulic servomechanism used to actuate an aircraft aileron. The mathematical model of the EHSM used in this paper includes a large number of parameters whose actual values may vary within some ranges of uncertainty. It consists of a nonlinear ordinary differential equation system composed of the mass and energy conservation equations, the actuator movement equations and the controller equation. In this work the focus is on the sensitivities of the eigenvalues of the linearized homogeneous system, which are the partial derivatives of the eigenvalues of the state-space system with respect to the parameters. These are obtained using a modal approach based on the eigenvectors of the state-space direct and adjoint systems. To calculate the eigenvalues and their sensitivity, the system's Jacobian and its partial derivatives with respect to the parameters are determined. The calculation of the derivative of the Jacobian matrix with respect to the parameters is not a simple task and in many situations it must be done numerically. The system stability is studied in relation to three parameters: m, the equivalent inertial load of the primary control surface reduced to the actuator rod; B, the bulk modulus of the oil; and p, a pressure supply proportionality coefficient. All the sensitivities calculated in this work are in good agreement with those obtained through recalculations.
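
    The modal formula underlying this approach is the standard first-order eigenvalue sensitivity result. A generic sketch on a hypothetical 2x2 state matrix (not the servomechanism model):

    ```python
    # For A(p) with right eigenvector x and left eigenvector y of the same
    # eigenvalue, d(lambda)/dp = y^T (dA/dp) x / (y^T x).
    import numpy as np

    def A(p):
        return np.array([[0.0, 1.0], [-2.0, -p]])   # toy state matrix

    p, h = 0.5, 1e-6
    dA_dp = (A(p + h) - A(p - h)) / (2 * h)          # here exactly [[0,0],[0,-1]]

    w, V = np.linalg.eig(A(p))                       # right eigenvectors
    wl, U = np.linalg.eig(A(p).T)                    # left eigenvectors via A^T

    k = 0
    j = np.argmin(np.abs(wl - w[k]))                 # pair left with right
    x, y = V[:, k], U[:, j]
    dlam = (y @ dA_dp @ x) / (y @ x)
    print(w[k], dlam)   # analytic check: dlam = -lambda / (2*lambda + p)
    ```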

  4. Thermal analysis of microlens formation on a sensitized gelatin layer

    SciTech Connect

    Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko; Panic, Bratimir; Jelenkovic, Branislav

    2009-07-01

    We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.

  5. Simulation of the global contrail radiative forcing: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.

    2012-12-01

    The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The most up-to-date Community Atmosphere Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG Model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.

  6. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
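
    The third technique can be sketched on a single-degree-of-freedom oscillator: differentiating m x'' + c x' + k x = f(t) with respect to k gives a companion system m s'' + c s' + k s = -x for the sensitivity s = dx/dk, integrated alongside the state. The oscillator and its parameters below are our stand-in for the paper's five-span beam.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k = 1.0, 0.2, 4.0
    f = lambda t: np.sin(2.0 * t)

    def rhs2(t, z, kk):                  # plain equations of motion
        x, v = z
        return [v, (f(t) - c * v - kk * x) / m]

    def rhs4(t, z, kk):                  # motion plus sensitivity equations
        x, v, s, w = z
        return rhs2(t, [x, v], kk) + [w, (-x - c * w - kk * s) / m]

    tol = dict(rtol=1e-10, atol=1e-12)
    sol = solve_ivp(rhs4, [0, 10], [0, 0, 0, 0], args=(k,), **tol)

    dk = 1e-5                            # forward-difference comparison
    base = solve_ivp(rhs2, [0, 10], [0, 0], args=(k,), **tol)
    pert = solve_ivp(rhs2, [0, 10], [0, 0], args=(k + dk,), **tol)
    fd = (pert.y[0, -1] - base.y[0, -1]) / dk
    print(sol.y[2, -1], fd)              # direct vs finite-difference dx/dk
    ```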

  7. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth order partial differential equation is used to define a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. A graphic interface software is developed which dynamically changes the surface of the airplane configuration with the change in input design variables. The software is made user friendly and is targeted towards the initial conceptual development of aerodynamic configurations. Grid sensitivities with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the automatic differentiation precompiler ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.

  8. Temperature sensitivity analysis of dopingless charge-plasma transistor

    NASA Astrophysics Data System (ADS)

    Shrivastava, Vishwas; Kumar, Anup; Sahu, Chitrakant; Singh, Jawar

    2016-03-01

    Junctionless field-effect transistors (JLFETs) have shown potential for scaling down into the sub-10 nm regime due to a simplified fabrication process and fewer short-channel effects (SCEs); however, sensitivity to process parameter variation is a major concern. Therefore, in this paper, the sensitivity to temperature variation of the recently proposed dopingless (DL) double-gate field-effect transistor (DGFET) and the JL-DGFET is reported. Different digital and analog performance metrics were considered and compared for both devices with similar geometries. We observed that the drive current of the DL-DGFET decreases with temperature, while it increases in the JL-DGFET, because the two devices are affected by different scattering mechanisms at higher temperature. The variations in ION and IOFF in the DL-DGFET are only 0.095 μA/K and 0.2 nA/K, respectively, while in the JL-DGFET the changes are 0.25 μA/K and 0.34 nA/K, respectively, above room temperature. Below room temperature, it was found that the incomplete ionization effect in the JL-DGFET severely affects the drive current, while the DL-DGFET remains unaffected. Hence, the proposed DL-DGFET is less sensitive to temperature variation and can be employed from cryogenic to high-temperature applications.

  9. Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening

    SciTech Connect

    Rodnizki, J; Ben Aliz, Y; Grin, A; Horvitz, Z; Perry, A; Weissman, L; Davis, G Kirk; Delayen, Jean R.

    2014-12-01

    The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV 5 mA light ions superconducting RF linac. Phase-I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5 - 4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main limiting factor to reach higher ion energy and beam power is related to the HWR sensitivity to the liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities had been designed, a decade ago, to be soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.

  10. Imaging Fourier transform spectrometer (IFTS): parametric sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Keller, Robert A.; Lomheim, Terrence S.

    2005-06-01

    Imaging Fourier transform spectrometers (IFTS) allow for very high spectral resolution hyperspectral imaging while using moderate-size 2D focal plane arrays in a staring mode. This is not the case for slit-scanning dispersive imaging spectrometers, where spectral sampling is related to the focal plane pixel count along the spectral dimension of the 2D focal plane used in such an instrument. This can become a major issue in the longwave infrared (LWIR), where the operability and yield of highly sensitive arrays (i.e., HgCdTe) of large dimension are generally poor. However, using an IFTS introduces its own unique set of issues and tradeoffs. In this paper we develop simplified equations for describing the sensitivity of an IFTS, including the effects of data windowing. These equations provide useful insights into the optical, focal plane, and operational design trade space that must be considered when examining IFTS concepts aimed at a specific sensitivity and spectral resolution application. The approach is illustrated by computing the LWIR noise-equivalent spectral radiance (NESR) corresponding to the NASA Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) concept, assuming a proven and reasonable noise-equivalent irradiance (NEI) capability for the focal plane.

  11. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex, Hydrogeologic Systems

    NASA Astrophysics Data System (ADS)

    Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  12. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  13. Skin sensitization risk assessment model using artificial neural network analysis of data from multiple in vitro assays.

    PubMed

    Tsujita-Inoue, Kyoko; Hirota, Morihiko; Ashikaga, Takao; Atobe, Tomomi; Kouzuki, Hirokazu; Aiba, Setsuya

    2014-06-01

    The sensitizing potential of chemicals is usually identified and characterized using in vivo methods such as the murine local lymph node assay (LLNA). Due to regulatory constraints and ethical concerns, alternatives to animal testing are needed to predict skin sensitization potential of chemicals. For this purpose, combined evaluation using multiple in vitro and in silico parameters that reflect different aspects of the sensitization process seems promising. We previously reported that LLNA thresholds could be well predicted by using an artificial neural network (ANN) model, designated iSENS ver.1 (integrating in vitro sensitization tests version 1), to analyze data obtained from two in vitro tests: the human Cell Line Activation Test (h-CLAT) and the SH test. Here, we present a more advanced ANN model, iSENS ver.2, which additionally utilizes the results of antioxidant response element (ARE) assay and the octanol-water partition coefficient (LogP, reflecting lipid solubility and skin absorption). We found a good correlation between predicted LLNA thresholds calculated by iSENS ver.2 and reported values. The predictive performance of iSENS ver.2 was superior to that of iSENS ver.1. We conclude that ANN analysis of data from multiple in vitro assays is a useful approach for risk assessment of chemicals for skin sensitization. PMID:24444449
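
    As a hedged illustration of the general approach (a small feed-forward network mapping in vitro readouts plus LogP to a potency value), the sketch below trains on synthetic data; it is not iSENS, and the feature and target construction is invented for demonstration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(200, 4))   # stand-ins: h-CLAT, SH test, ARE, LogP
    y = 2.0 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2] - X[:, 3]  # synthetic "threshold"

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X[:150], y[:150])
    print("R^2 on held-out chemicals:", round(net.score(X[150:], y[150:]), 3))
    ```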

  14. Implementation of terbium-sensitized luminescence in sequential-injection analysis for automatic analysis of orbifloxacin.

    PubMed

    Llorent-Martínez, E J; Ortega-Barrales, P; Molina-Díaz, A; Ruiz-Medina, A

    2008-12-01

    Orbifloxacin (ORBI) is a third-generation fluoroquinolone developed exclusively for use in veterinary medicine, mainly in companion animals. This antimicrobial agent has bactericidal activity against numerous gram-negative and gram-positive bacteria. A few chromatographic methods for its analysis have been described in the scientific literature. Here, coupling of sequential-injection analysis and solid-phase spectroscopy is described in order to develop, for the first time, a terbium-sensitized luminescent optosensor for analysis of ORBI. The cationic resin Sephadex-CM C-25 was used as solid support and measurements were made at 275/545 nm. The system had a linear dynamic range of 10-150 ng mL(-1), with a detection limit of 3.3 ng mL(-1) and an R.S.D. below 3% (n = 10). The analyte was satisfactorily determined in veterinary drugs and dog and horse urine. PMID:18958455
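
    Figures of merit like the detection limit are typically derived from the calibration line. A generic sketch with synthetic data (not the authors' measurements), using the common 3.3*sigma/slope criterion:

    ```python
    # Fit a linear calibration of luminescence intensity vs. concentration,
    # then estimate the limit of detection from the residual scatter.
    import numpy as np

    conc = np.array([10, 25, 50, 75, 100, 150], float)   # ng/mL standards
    intensity = 4.2 * conc + 3.0 + np.random.default_rng(2).normal(0, 5, 6)

    slope, intercept = np.polyfit(conc, intensity, 1)
    resid = intensity - (slope * conc + intercept)
    s0 = resid.std(ddof=2)              # residual SD as blank-noise estimate
    lod = 3.3 * s0 / slope              # ICH-style detection limit
    print(round(slope, 2), round(lod, 2), "ng/mL")
    ```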

  15. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
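    The key property claimed above, that sensitivities follow from postprocessing data alone, can be seen in a two-bar example. For compliance the problem is self-adjoint, so the adjoint solution equals the displacement solution and each design derivative is dC/dA_i = -u^T (dK/dA_i) u, with no extra finite element solves. The structure and numbers are invented for illustration.

```python
import numpy as np

E, L, P = 70e9, 1.0, 1e4        # modulus [Pa], element length [m], load [N]
A = np.array([4e-4, 2e-4])      # design variables: two bar cross-sections

def solve(A):
    k1, k2 = E * A[0] / L, E * A[1] / L
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])    # assembled stiffness (2 free DOFs)
    f = np.array([0.0, P])
    return K, f, np.linalg.solve(K, f)

K, f, u = solve(A)
C = f @ u                        # compliance functional

# Adjoint method: postprocessing only, dC/dA_i = -u^T (dK/dA_i) u
dK_dA = [np.array([[E / L, 0.0], [0.0, 0.0]]),
         np.array([[E / L, -E / L], [-E / L, E / L]])]
grad_adjoint = np.array([-u @ dK @ u for dK in dK_dA])

# Finite-difference check -- the perturbation-size uncertainty noted above
eps = 1e-9
grad_fd = np.empty(2)
for j in range(2):
    Ap = A.copy(); Ap[j] += eps
    _, fp, up = solve(Ap)
    grad_fd[j] = (fp @ up - C) / eps

print("adjoint:", grad_adjoint, "\nfinite difference:", grad_fd)
```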

  16. PRACTICAL SENSITIVITY AND UNCERTAINTY ANALYSIS TECHNIQUES APPLIED TO AGRICULTURAL SYSTEMS MODELS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a practical evaluation framework for analysis of two complex, process-based agricultural system models, WEPP and RZWQM. The evaluation framework combines sensitivity analysis and the uncertainty analysis techniques of first order error analysis (FOA) and Monte Carlo simulation with Latin ...
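    A compact sketch of the two techniques named above, first-order error analysis (FOA) versus Monte Carlo with Latin hypercube sampling, applied to a toy response standing in for WEPP or RZWQM. The function and input statistics are invented.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy response with two uncertain inputs (placeholder for a full model)
def f(x):
    return x[..., 0] ** 2 + 3.0 * x[..., 1]

mu = np.array([2.0, 1.0])       # assumed input means
sigma = np.array([0.1, 0.2])    # assumed input standard deviations

# FOA: var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2, gradient taken at the mean
grad = np.array([2.0 * mu[0], 3.0])
var_foa = np.sum(grad ** 2 * sigma ** 2)

# Monte Carlo with Latin hypercube sampling for comparison
u = qmc.LatinHypercube(d=2, seed=0).random(10_000)   # uniform (0,1)^2
x = norm.ppf(u) * sigma + mu                         # map to normal inputs
var_mc = f(x).var()

print(f"FOA variance: {var_foa:.4f}  LHS Monte Carlo variance: {var_mc:.4f}")
```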

  17. Steady-state Analysis Model for Advanced Fuelcycle Schemes

    2006-05-12

    The model was developed as a part of the study, "Advanced Fuel Cycles and Waste Management", which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee in the OECD/NEA. The model was designed for the efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent, and offers users the capability to track down the cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study, including mass flows, waste generation, cost data, and other data such as activities, decay heat, and neutron sources of spent fuel and high-level waste over time, are included in the model and can be displayed. The user can easily modify the values of mass flows and/or cost parameters and see the corresponding changes in the results. The model calculates: front-end fuel cycle mass flows such as requirements of enrichment and conversion services and natural uranium; mass of waste based on the waste generation parameters and the mass flow; and all costs. It performs Monte Carlo simulations by varying the values of all unit costs within their respective ranges (from lower to upper bounds).
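    The Monte Carlo step described in the last sentence amounts to sampling each unit cost uniformly between its bounds and re-evaluating the total. The bounds and mass-flow coefficients below are invented placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Hypothetical unit-cost ranges (lower, upper) -- illustrative only
bounds = {
    "uranium $/kgU":      (30.0, 80.0),
    "conversion $/kgU":   (5.0, 15.0),
    "enrichment $/SWU":   (80.0, 160.0),
    "fabrication $/kgHM": (200.0, 400.0),
}
# Assumed mass-flow coefficients per MWh for a once-through scheme
per_mwh = {"uranium $/kgU": 0.020, "conversion $/kgU": 0.020,
           "enrichment $/SWU": 0.013, "fabrication $/kgHM": 0.0022}

total = np.zeros(n)
for name, (lo, hi) in bounds.items():
    total += rng.uniform(lo, hi, n) * per_mwh[name]   # $/MWh contribution

print(f"fuel cycle cost: mean {total.mean():.2f} $/MWh, "
      f"90% interval [{np.percentile(total, 5):.2f}, "
      f"{np.percentile(total, 95):.2f}]")
```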

  18. Advances in protein complex analysis using mass spectrometry.

    PubMed

    Gingras, Anne-Claude; Aebersold, Ruedi; Raught, Brian

    2005-02-15

    Proteins often function as components of larger complexes to perform a specific function, and formation of these complexes may be regulated. For example, intracellular signalling events often require transient and/or regulated protein-protein interactions for propagation, and protein binding to a specific DNA sequence, RNA molecule or metabolite is often regulated to modulate a particular cellular function. Thus, characterizing protein complexes can offer important insights into protein function. This review describes recent important advances in mass spectrometry (MS)-based techniques for the analysis of protein complexes. Following brief descriptions of how proteins are identified using MS, and general protein complex purification approaches, we address two of the most important issues in these types of studies: specificity and background protein contaminants. Two basic strategies for increasing specificity and decreasing background are presented: (1) tandem affinity purification (TAP) of tagged proteins of interest can dramatically improve the signal-to-noise ratio via the generation of cleaner samples, whereas (2) stable isotopic labelling of proteins may be used to discriminate between contaminants and bona fide binding partners using quantitative MS techniques. Examples, as well as advantages and disadvantages of each approach, are presented. PMID:15611014

  19. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Astrophysics Data System (ADS)

    Doyle, Monica M.; O'Neil, Daniel A.; Christensen, Carissa B.

    2005-02-01

    Forecasting technology capabilities requires a tool and a process for capturing state-of-the-art technology metrics and estimates for future metrics. A decision support tool, known as the Advanced Technology Lifecycle Analysis System (ATLAS), contains a Technology Tool Box (TTB) database designed to accomplish this goal. Sections of this database correspond to a Work Breakdown Structure (WBS) developed by NASA's Exploration Systems Research and Technology (ESRT) Program. These sections cover the waterfront of technologies required for human and robotic space exploration. Records in each section include technology performance, operations, and programmatic metrics. Timeframes in the database provide metric values for the state of the art (Timeframe 0) and forecasts for timeframes that correspond to spiral development milestones in NASA's Exploration Systems Mission Directorate (ESMD) development strategy. Collecting and vetting data for the TTB will involve technologists from across the agency, the aerospace industry and academia. Technologists will have opportunities to submit technology metrics and forecasts to the TTB development team. Semi-annual forums will facilitate discussions about the basis of forecast estimates. As the tool and process mature, the TTB will serve as a powerful communication and decision support tool for the ESRT program.

  20. Steady-State Analysis Model for Advanced Fuel Cycle Schemes.

    2008-03-17

    Version 00 SMAFS was developed as a part of the study, "Advanced Fuel Cycles and Waste Management", which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee in the OECD/NEA. The model was designed for the efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent, and offers users the capability to track down cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study, including mass flows, waste generation, cost data, and other data such as activities, decay heat, and neutron sources of spent fuel and high-level waste over time, are included in the model and can be displayed. The user can easily modify values of mass flows and/or cost parameters and see corresponding changes in the results. The model calculates: front-end fuel cycle mass flows such as requirements of enrichment and conversion services and natural uranium; mass of waste based on the waste generation parameters and the mass flow; and all costs.

  1. Safety Analysis of Soybean Processing for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Hentges, Dawn L.

    1999-01-01

    Soybean (cv. Hoyt) is one of the crops planned for food production within the Advanced Life Support System Integration Testbed (ALSSIT), a proposed habitat simulation for long-duration lunar/Mars missions. Soybeans may be processed into a variety of food products, including soymilk, tofu, and tempeh. Due to the closed environmental system and importance of crew health maintenance, food safety is a primary concern on long-duration space missions. Identification of the food safety hazards and critical control points associated with the closed ALSSIT system is essential for the development of safe food processing techniques and equipment. A Hazard Analysis Critical Control Point (HACCP) model was developed to reflect proposed production and processing protocols for ALSSIT soybeans. Soybean processing was placed in the type III risk category. During the processing of ALSSIT-grown soybeans, critical control points were identified to control microbiological hazards, particularly mycotoxins, and chemical hazards from antinutrients. Critical limits were suggested at each CCP. Food safety recommendations regarding the hazards and risks associated with growing, harvesting, and processing soybeans; biomass management; and use of multifunctional equipment were made in consideration of the limitations and constraints of the closed ALSSIT.

  2. Crashworthiness analysis using advanced material models in DYNA3D

    SciTech Connect

    Logan, R.W.; Burger, M.J.; McMichael, L.D.; Parkinson, R.D.

    1993-10-22

    As part of an electric vehicle consortium, LLNL and Kaiser Aluminum are conducting experimental and numerical studies on crashworthy aluminum spaceframe designs. They have jointly explored the effect of heat treat on crush behavior and duplicated the experimental behavior with finite-element simulations. The major technical contributions to the state of the art in numerical simulation arise from the development and use of advanced material model descriptions for LLNL's DYNA3D code. Constitutive model enhancements in both flow and failure have been employed for conventional materials such as low-carbon steels, and also for lighter weight materials such as aluminum and fiber composites being considered for future vehicles. The constitutive model enhancements are developed as extensions from LLNL's work in anisotropic flow and multiaxial failure modeling. Analysis quality as a function of level of simplification of material behavior and mesh is explored, as well as the penalty in computation cost that must be paid for using more complex models and meshes. The lightweight material modeling technology is being used at the vehicle component level to explore the safety implications of small neighborhood electric vehicles manufactured almost exclusively from these materials.

  3. Parameter identification and sensitivity analysis for a robotic manipulator arm

    NASA Technical Reports Server (NTRS)

    Brewer, D. W.; Gibson, J. S.

    1988-01-01

    The development of a nonlinear dynamic model for large oscillations of a robotic manipulator arm about a single joint is described. Optimization routines are formulated and implemented for the identification of electrical and physical parameters from dynamic data taken from an industrial robot arm. Special attention is given to difficulties caused by the large sensitivity of the model with respect to unknown parameters. Performance of the parameter identification algorithm is improved by choosing a control input that allows actuator emf to be included in an electro-mechanical model of the manipulator system.
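    A minimal sketch of output-error parameter identification for a toy electro-mechanical joint model; the model structure, values, and input signal are assumptions, not the paper's manipulator. Note that the inertia J, damping b, and torque constant Kt enter the dynamics only through the ratios Kt/J and b/J, so Kt is held fixed here, one illustration of the identifiability and sensitivity difficulties the abstract mentions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy joint model: J * dw/dt = Kt * i(t) - b * w, with current i(t) as input
dt = 0.01
t = np.arange(0.0, 5.0, dt)
current = np.sin(0.8 * t)        # persistently exciting input
Kt = 0.8                         # torque constant assumed known

def simulate(params):
    J, b = params
    w = np.zeros_like(t)
    for k in range(t.size - 1):  # forward Euler integration
        w[k + 1] = w[k] + dt * (Kt * current[k] - b * w[k]) / J
    return w

true = (0.05, 0.10)
data = simulate(true) + np.random.default_rng(3).normal(0.0, 0.01, t.size)

# Identify J and b by minimizing the measured-output residual
fit = least_squares(lambda p: simulate(p) - data, x0=[0.2, 0.5],
                    bounds=([0.01, 0.01], [1.0, 1.0]))
print("estimated [J, b]:", np.round(fit.x, 3))   # ~[0.05, 0.10]
```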

  4. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
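    ADIFOR transforms Fortran source, but the forward-mode idea underneath can be shown in a few lines with dual numbers: each value carries its derivative, and arithmetic propagates both. This sketch only illustrates the principle; it is unrelated to ADIFOR's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode AD value: v is the value, d the carried derivative."""
    v: float
    d: float = 0.0

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

# Toy response of one design variable x: f(x) = 3x^2 + 2x
def f(x):
    return 3 * x * x + 2 * x

x = Dual(1.5, 1.0)    # seed the derivative of the chosen design variable
y = f(x)
print(y.v, y.d)       # 9.75 and the exact derivative 6*1.5 + 2 = 11.0
```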

  5. Advanced Fuel Cycle Economic Analysis of Symbiotic Light-Water Reactor and Fast Burner Reactor Systems

    SciTech Connect

    D. E. Shropshire

    2009-01-01

    The Advanced Fuel Cycle Economic Analysis of Symbiotic Light-Water Reactor and Fast Burner Reactor Systems, prepared to support the U.S. Advanced Fuel Cycle Initiative (AFCI) systems analysis, provides a technology-oriented baseline system cost comparison between the open fuel cycle and closed fuel cycle systems. The intent is to understand their overall cost trends, cost sensitivities, and trade-offs. This analysis also improves the AFCI Program’s understanding of the cost drivers that will determine nuclear power’s cost competitiveness vis-a-vis other baseload generation systems. The common reactor-related costs consist of capital, operating, and decontamination and decommissioning costs. Fuel cycle costs include front-end (pre-irradiation) and back-end (post-irradiation) costs, as well as costs specifically associated with fuel recycling. This analysis reveals that there are large cost uncertainties associated with all the fuel cycle strategies, and that overall systems (reactor plus fuel cycle) using a closed fuel cycle are about 10% more expensive in terms of electricity generation cost than open cycle systems. The study concludes that further U.S. and joint international-based design studies are needed to reduce the cost uncertainties with respect to fast reactor, fuel separation and fabrication, and waste disposition. The results of this work can help provide insight to the cost-related factors and conditions needed to keep nuclear energy (including closed fuel cycles) economically competitive in the U.S. and worldwide. These results may be updated over time based on new cost information, revised assumptions, and feedback received from additional reviews.

  6. Sensitivity analysis as a general tool for model optimisation - examples for trajectory estimation

    NASA Astrophysics Data System (ADS)

    Schwieger, Volker

    2007-05-01

    This paper outlines the general characteristics of variance-based sensitivity analysis and their advantages with respect to other concepts of sensitivity analysis. The main benefit is qualitatively and quantitatively correct results, independent of the model characteristics. The author focuses on kinematic positioning as required for car navigation, driver assistance systems or machine guidance. The paper compares two different Kalman filter approaches using variance analysis and variance-based sensitivity analysis. The approaches differ with respect to their measurement quantities (input), their state quantities (output), as well as their dynamic vehicle model. The sensitivity analysis shows that each model has its own advantages and input-output relations. Furthermore, it is shown that variance-based sensitivity analysis is well suited to detect the share of the influence of the input quantities on the output quantities, here the estimated positions. Even more importantly, changes in deterministic and stochastic models lead to obvious effects in the respective variances and sensitivity measures. This emphasises the possibility of optimising the filter models by use of variance-based sensitivity analysis.
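    For readers unfamiliar with variance-based sensitivity analysis, the first-order index of input i is S_i = V(E[Y|X_i])/V(Y), and it can be estimated with a pick-freeze (Saltelli-style) scheme as sketched below. The toy model stands in for the Kalman filter position output; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
N, d = 20_000, 3

# Toy model with three independent standard-normal inputs
def model(X):
    return X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2]

A = rng.normal(size=(N, d))
B = rng.normal(size=(N, d))
yA = model(A)
var_y = yA.var()

# Pick-freeze estimator: column i of B is replaced by column i of A,
# so the two runs share only input i.
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S_i = np.mean(yA * (model(ABi) - model(B))) / var_y
    print(f"S_{i + 1} ~= {S_i:.2f}")   # expect ~0.19, ~0.76, ~0.00
```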

  7. Developing optical traps for ultra-sensitive analysis

    SciTech Connect

    Zhao, X.; Vieira, D.J.; Guckert, R.; Crane, S.

    1998-09-01

    The authors describe the coupling of a magneto-optical trap to a mass separator for the ultra-sensitive detection of selected radioactive species. As a proof of principle test, they have demonstrated the trapping of approximately 6 million 82Rb (t1/2 = 75 s) atoms using an ion implantation and heated foil release method for introducing the sample into a trapping cell with minimal gas loading. Gamma-ray counting techniques were used to determine the efficiencies of each step in the process. By far the weakest step in the process is the efficiency of the optical trap itself (0.3%). Further improvements in the quality of the nonstick dryfilm coating on the inside of the trapping cell and the possible use of larger diameter laser beams are indicated. In the presence of a large background of scattered light, this initial work achieved a detection sensitivity of approximately 4,000 trapped atoms. Improved detection schemes using a pulsed trap and gated photon detection method are outlined. Applications of this technology to the areas of environmental monitoring and nuclear proliferation are foreseen.

  8. Sensitivity analysis of vegetation-induced flow steering in channels

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Wilcox, A. C.; Lightbody, A.; Stella, J. C.

    2014-12-01

    Morphodynamic feedbacks result in alternating bars within channels, and the resulting convective accelerations dictate the cross-stream force balance of channels and in turn influence morphology. Pioneer woody riparian trees recruit on river bars and may steer flow and alter this force balance. This study uses two-dimensional hydraulic modeling to test the sensitivity of the flow field to riparian vegetation at the reach scale. We use two test systems with different width-to-depth ratios, substrate sizes, and vegetation structure: the gravel-bed Bitterroot River, MT and the sand-bed Santa Maria River, AZ. We model vegetation explicitly as a drag force by spatially specifying vegetation density, height, and drag coefficient, across varying hydraulic (e.g., discharge, eddy viscosity) conditions and compare velocity vectors between runs. We test variations in vegetation configurations, including the present-day configuration of vegetation in our field systems (extracted from LiDAR), removal of vegetation (e.g., from floods or management actions), and expansion of vegetation. Preliminary model runs suggest that the sensitivity of convective accelerations to vegetation reflects a balance between the extent and density of vegetation inundated and other sources of channel roughness. This research quantifies how vegetation alters hydraulics at the reach scale, a fundamental step to understanding vegetation-morphodynamic interactions.

  9. Advanced Coursework Rates by Ethnicity: An 11-Year, Statewide Analysis

    ERIC Educational Resources Information Center

    Fowler, Janis C.

    2013-01-01

    Purpose: The purpose of this study was to examine advanced coursework completion rates, Advanced Placement (AP)/International Baccalaureate (IB) testing rates, AP/IB exam passage rates, and the percentage of AP/IB exam scores at or above the criterion that may exist among Texas public high school students from 2001 to 2012 to ascertain (a) the…

  10. Male biological clock: a critical analysis of advanced paternal age

    PubMed Central

    Ramasamy, Ranjith; Chiba, Koji; Butler, Peter; Lamb, Dolores J.

    2016-01-01

    Extensive research defines the impact of advanced maternal age on couples’ fecundity and reproductive outcomes, but significantly less research has been focused on understanding the impact of advanced paternal age. Yet it is increasingly common for couples at advanced ages to conceive children. Limited research suggests that the importance of paternal age is significantly less than that of maternal age, but advanced age of the father is implicated in a variety of conditions affecting the offspring. This review examines three aspects of advanced paternal age: the potential problems with conception and pregnancy that couples with advanced paternal age may encounter, the concept of discussing a limit to paternal age in a clinical setting, and the risks of diseases associated with advanced paternal age. As paternal age increases, it presents no absolute barrier to conception, but it does present greater risks and complications. The current body of knowledge does not justify dissuading older men from trying to initiate a pregnancy, but the medical community must do a better job of communicating to couples the current understanding of the risks of conception with advanced paternal age. PMID:25881878

  11. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  12. Sensitivity analysis of DOA estimation algorithms to sensor errors

    NASA Astrophysics Data System (ADS)

    Li, Fu; Vaccaro, Richard J.

    1992-07-01

    A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.
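    For reference, the sketch below computes a MUSIC pseudospectrum for an ideal uniform linear array with finite snapshots (the finite-data case analyzed above) and no sensor errors. Array size, SNR, and source angles are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
M, spacing = 8, 0.5                    # sensors, spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0])  # two sources
N_snap, snr_db = 200, 10.0

def steering(theta):
    m = np.arange(M)[:, None]
    return np.exp(-2j * np.pi * spacing * m * np.sin(np.atleast_1d(theta)))

# Finite-snapshot array data: unit-power signals plus white noise
S = (rng.normal(size=(2, N_snap)) + 1j * rng.normal(size=(2, N_snap))) / np.sqrt(2)
noise = (rng.normal(size=(M, N_snap)) + 1j * rng.normal(size=(M, N_snap))) / np.sqrt(2)
X = steering(true_doas) @ S + 10 ** (-snr_db / 20) * noise

R = X @ X.conj().T / N_snap            # sample covariance
_, eigvec = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvec[:, :M - 2]                 # noise subspace (two sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)

# Pick the two largest local maxima of the pseudospectrum
lm = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
best = lm[np.argsort(P[lm])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))
```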

  13. Integrated design and analysis of advanced airfoil shapes for gas turbine engines

    SciTech Connect

    Hill, B.A.; Rooney, P.J.

    1986-01-01

    An integral process in the mechanical design of gas turbine airfoils is the conversion of hot or running geometry into cold or as-manufactured geometry. New and advanced methods of design and analysis must be created that parallel new and technologically advanced turbine components. In particular, to achieve the high performance required of today's gas turbine engines, the industry is forced to design and manufacture increasingly complex airfoil shapes using advanced analysis and modeling techniques. This paper describes a method of integrating advanced, general purpose finite element analysis techniques in the mechanical design process.

  14. Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.

    PubMed

    Goh, Wilson Wen Bin; Wong, Limsoon

    2016-09-01

    Despite advances in proteomic technologies, idiosyncratic data issues, for example, incomplete coverage and inconsistency, resulting in large data holes, persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts. PMID:27454466

  15. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods achieve a substantial gain in computational efficiency, i.e., computer time and memory, compared with finite difference sensitivity analysis.

  16. Generic Repository Concepts and Thermal Analysis for Advanced Fuel Cycles

    SciTech Connect

    Hardin, Ernest; Blink, James; Carter, Joe; Fratoni, Massimiliano; Greenberg, Harris; Howard, Rob L.

    2011-01-01

    The current posture of the used nuclear fuel management program in the U.S., following termination of the Yucca Mountain Project, is to pursue research and development (R&D) of generic (i.e., non-site specific) technologies for storage, transportation and disposal. Disposal R&D is directed toward understanding and demonstrating the performance of reference geologic disposal concepts selected to represent the current state of the art in geologic disposal. One of the principal constraints on waste packaging and emplacement in a geologic repository is management of the waste-generated heat. This paper describes the selection of reference disposal concepts, and thermal management strategies for waste from advanced fuel cycles. A geologic disposal concept for spent nuclear fuel (SNF) or high-level waste (HLW) consists of three components: waste inventory, geologic setting, and concept of operations. A set of reference geologic disposal concepts has been developed by the U.S. Department of Energy (DOE) Used Fuel Disposition Campaign, for crystalline rock, clay/shale, bedded salt, and deep borehole (crystalline basement) geologic settings. We performed thermal analysis of these concepts using waste inventory cases representing a range of advanced fuel cycles. Concepts of operation consisting of emplacement mode, repository layout, and engineered barrier descriptions, were selected based on international progress and previous experience in the U.S. repository program. All of the disposal concepts selected for this study use enclosed emplacement modes, whereby waste packages are in direct contact with encapsulating engineered or natural materials. The encapsulating materials (typically clay-based or rock salt) have low intrinsic permeability and plastic rheology that closes voids so that low permeability is maintained. Uniformly low permeability also contributes to chemically reducing conditions common in soft clay, shale, and salt formations. Enclosed modes are associated ...

  17. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM (Lithium Blanket Module) experiments at LOTUS

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1988-01-01

    In a recent common Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference S_N transport code ONEDANT, the two-dimensional finite element S_N transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. Within the framework of the present work a complete set of forward and adjoint two-dimensional TRISM calculations were performed both for the bare, as well as for the Pb- and Be-preceded, LBM using MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed. The goal of this analysis was the determination of the uncertainties of the calculated tritium production per source neutron from lithium along the central Li2O rod in the LBM. Considered were the contributions from 1H, 6Li, 7Li, 9Be, natC, 14N, 16O, 23Na, 27Al, natSi, natCr, natFe, natNi, and natPb. 22 refs., 1 fig., 3 tabs.
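    The uncertainty propagation such codes perform reduces to the "sandwich rule": the relative variance of a response R is S^T C S, where S holds relative sensitivities and C the relative covariance of the cross sections. The sketch below uses invented numbers; the actual sensitivity profiles in the paper come from SENSIBL.

```python
import numpy as np

# Hypothetical relative sensitivities of tritium production to three
# cross sections (dR/R per dsigma/sigma) -- invented for illustration
S = np.array([0.45, 0.30, -0.10])

# Assumed relative covariance matrix of those cross sections
C = np.array([[0.0025, 0.0010, 0.0000],
              [0.0010, 0.0040, 0.0005],
              [0.0000, 0.0005, 0.0100]])

rel_var = S @ C @ S    # sandwich rule: relative variance of the response
print(f"relative uncertainty of tritium production: {np.sqrt(rel_var):.2%}")
```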

  18. Advanced microgrid design and analysis for forward operating bases

    NASA Astrophysics Data System (ADS)

    Reasoner, Jonathan

    This thesis takes a holistic approach in creating an improved electric power generation system for a forward operating base (FOB) in the future through the design of an isolated microgrid. After an extensive literature search, this thesis found a need for drastic improvement of the FOB power system. A thorough design process analyzed FOB demand, researched demand-side management improvements, evaluated various generation sources and energy storage options, and performed a HOMER discrete optimization to determine the best microgrid design. Further sensitivity analysis was performed to see how changing parameters would affect the outcome. Lastly, this research also looks at some of the challenges which are associated with incorporating a design which relies heavily on inverter-based generation sources, and gives possible solutions to help make a renewable energy powered microgrid a reality. While this thesis uses a FOB as the case study, the process and discussion can be adapted to aid in the design of an off-grid small-scale power grid which utilizes high-penetration levels of renewable energy.

  19. Sensitivity analysis for Probabilistic Tsunami Hazard Assessment (PTHA)

    NASA Astrophysics Data System (ADS)

    Spada, M.; Basili, R.; Selva, J.; Lorito, S.; Sorensen, M. B.; Zonker, J.; Babeyko, A. Y.; Romano, F.; Piatanesi, A.; Tiberti, M.

    2012-12-01

    In modern societies, probabilistic hazard assessment of natural disasters is commonly used by decision makers for designing regulatory standards and, more generally, for prioritizing risk mitigation efforts. Systematic formalization of Probabilistic Tsunami Hazard Assessment (PTHA) has started only in recent years, mainly following the giant tsunami disaster of Sumatra in 2004. Typically, PTHA for earthquake sources exploits the long-standing practices developed in probabilistic seismic hazard assessment (PSHA), even though important differences are evident. In PTHA, for example, it is known that far-field sources are more important and that physical models for tsunami propagation are needed for the highly non-isotropic propagation of tsunami waves. However, considering the high impact that PTHA may have on societies, an important effort to quantify the effect of specific assumptions should be performed. Indeed, specific standard hypotheses made in PSHA may prove inappropriate for PTHA, since tsunami waves are sensitive to different aspects of sources (e.g. fault geometry, scaling laws, slip distribution) and propagate differently. In addition, the necessity of running an explicit calculation of wave propagation for every possible event (tsunami scenario) forces analysts to find strategies for diminishing the computational burden. In this work, we test the sensitivity of hazard results with respect to several assumptions that are peculiar to PTHA and others that are commonly accepted in PSHA. Our case study is located in the central Mediterranean Sea and considers the Western Hellenic Arc as the earthquake source with Crete and Eastern Sicily as near-field and far-field target coasts, respectively. Our suite of sensitivity tests includes: a) comparison of random seismicity distribution within area sources as opposed to systematically distributed ruptures on fault sources; b) effects of statistical and physical parameters (a- and b-value, Mc, Mmax, scaling laws ...

  20. A highly multiplexed and sensitive RNA-seq protocol for simultaneous analysis of host and pathogen transcriptomes.

    PubMed

    Avraham, Roi; Haseley, Nathan; Fan, Amy; Bloom-Ackermann, Zohar; Livny, Jonathan; Hung, Deborah T

    2016-08-01

    The ability to simultaneously characterize the bacterial and host expression programs during infection would facilitate a comprehensive understanding of pathogen-host interactions. Although RNA sequencing (RNA-seq) has greatly advanced our ability to study the transcriptomes of prokaryotes and eukaryotes separately, limitations in existing protocols for the generation and analysis of RNA-seq data have hindered simultaneous profiling of host and bacterial pathogen transcripts from the same sample. Here we provide a detailed protocol for simultaneous analysis of host and bacterial transcripts by RNA-seq. Importantly, this protocol details the steps required for efficient host and bacteria lysis, barcoding of samples, technical advances in sample preparation for low-yield sample inputs and a computational pipeline for analysis of both mammalian and microbial reads from mixed host-pathogen RNA-seq data. Sample preparation takes 3 d from cultured cells to pooled libraries. Data analysis takes an additional day. Compared with previous methods, the protocol detailed here provides a sensitive, facile and generalizable approach that is suitable for large-scale studies and will enable the field to obtain in-depth analysis of host-pathogen interactions in infection models. PMID:27442864