Science.gov

Sample records for advanced sensitivity analysis

  1. Advanced Fuel Cycle Economic Sensitivity Analysis

    SciTech Connect

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, the nuclear power related cost study from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  2. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

    Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information as derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly onto the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameter are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (upland and lowland catchment) to illustrate how parameter dominances change seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological

  3. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    NASA Astrophysics Data System (ADS)

    Martynov, Denis

    Laser interferometer gravitational wave observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10Hz - 5kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into physics of the Universe. The initial phase of LIGO started in 2002, and since then data was collected during the six science runs. Instrument sensitivity improved from run to run due to the effort of commissioning team. Initial LIGO has reached designed sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, LIGO group worked on research and development of the next generation of detectors. Major instrument upgrade from initial to advanced LIGO started in 2010 and lasted until 2014. This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015 in parallel with and after the installation of the instrument. This thesis also discusses new techniques and tools developed at the 40m prototype including adaptive filtering, estimation of quantization noise in digital filters and design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into linear regime when collection of data becomes possible. States of longitudinal and angular controls of interferometer degrees of freedom during lock acquisition process and in low noise configuration are discussed in details. Once interferometer is locked and transitioned to low noise regime, instrument produces astrophysics data that should be calibrated to units of meters or strain. The second part of this thesis describes online calibration technique set up in both observatories to monitor the quality of the collected data in

  4. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic- differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straight-forward, black-box reverse- mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric- shape, angle-of-attack, and freestream Mach number

  5. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode applicaiton of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-rder aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoinct) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hesian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  6. Demasking the integrated value of discharge - Advanced sensitivity analysis on the components of hydrological models

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Gafurov, Abror; Fohrer, Nicola; Gupta, Hoshin

    2016-04-01

    The hydrologic response variable most often used in sensitivity analysis is discharge which provides an integrated value of all catchment processes. The typical sensitivity analysis evaluates how changes in the model parameters affect the model output. However, due to discharge being the aggregated effect of all hydrological processes, the sensitivity signal of a certain model parameter can be strongly masked. A more advanced form of sensitivity analysis would be achieved if we could investigate how the sensitivity of a certain modelled process variable relates to the changes in a parameter. Based on this, the controlling parameters for different hydrological components could be detected. Towards this end, we apply the approach of temporal dynamics of parameter sensitivity (TEDPAS) to calculate the daily sensitivities for different model outputs with the FAST method. The temporal variations in parameter dominance are then analysed for both the modelled hydrological components themselves, and also for the rates of change (derivatives) in the modelled hydrological components. The daily parameter sensitivities are then compared with the modelled hydrological components using regime curves. Application of this approach shows that when the corresponding modelled process is investigated instead of discharge, we obtain both an increased indication of parameter sensitivity, and also a clear pattern showing how the seasonal patterns of parameter dominance change over time for each hydrological process. By relating these results with the model structure, we can see that the sensitivity of model parameters is influenced by the function of the parameter. While capacity parameters show more sensitivity to the modelled hydrological component, flux parameters tend to have a higher sensitivity to rates of change in the modelled hydrological component. By better disentangling the information hidden in the discharge values, we can use sensitivity analyses to obtain a clearer signal

  7. Sensitivity analysis

    MedlinePlus

    ... page: //medlineplus.gov/ency/article/003741.htm Sensitivity analysis To use the sharing features on this page, please enable JavaScript. Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  8. Sensitivity analysis and multidisciplinary optimization for aircraft design - Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  9. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  10. Probabilistic investigation of sensitivities of advanced test-analysis model correlation methods

    NASA Astrophysics Data System (ADS)

    Bergman, Elizabeth J.; Allen, Matthew S.; Kammer, Daniel C.; Mayes, Randall L.

    2010-06-01

    The industry standard method used to validate finite element models involves correlation of test and analysis mode shapes using reduced Test-Analysis Models (TAMs). Some organizations even require this model validation approach. Considerable effort is required to choose sensor locations and to create a suitable TAM so that the test and analysis mode shapes will be orthogonal to within the required tolerance. This work uses a probabilistic framework to understand and quantify the effect of small errors in the test mode shapes on test-analysis orthogonality. Using the proposed framework, test-orthogonality is a probabilistic metric and the problem becomes one of choosing sensor placement and TAM generation techniques that assure that the orthogonality has a high probability of being within an acceptable range if the model is correct, even though the test measurements are contaminated with random errors. A simple analytical metric is derived that is shown to give a good estimate of the sensitivity of a TAM to errors in the test mode shapes for a certain noise model. These ideas are then applied to a generic satellite system, using TAMs generated by the Static, Modal and Improved Reduced System (IRS) reduction methods. Experimental errors are simulated for a set of mode shapes and Monte Carlo simulation is used to estimate the probability that the orthogonality metric exceeds a threshold due to experimental error alone. For the satellite system considered here, the orthogonality calculation is highly sensitive to experimental errors, so a set of noisy mode shapes has a small probability of passing the orthogonality criteria for some of the TAMs. A number of sensor placement techniques are used in this study, and the comparison reveals that, for this system, the Modal TAM is twice as sensitive to errors on the test mode shapes when it is created on a sensor set optimized for the Static TAM rather than one that was optimized specifically for the Modal TAM. These findings

  11. Sensitivity Analysis for the Optimal Design and Control of Advanced Guidance Systems

    DTIC Science & Technology

    2007-06-01

    Springer-Verlag, New York, 1995. [32] L. Davis and F. Pahlevani , Sensitivity calculations for actuator location for a parabolic PDE. In preparation... Pahlevani , International Journal for Numerical Methods in Fluids, February 2006, Vol. 52, pages 381-392. Published in Un-Reviewed Conference Proceedings 1...Equa- tions”, F. Pahlevani , submitted to SIAM Journal on Numerical Analysis, in revision. 3. “Semi-Implicit Schemes for Transient Navier-Stokes Equations

  12. Advanced Simulation Capability for Environmental Management (ASCEM): Developments in Uncertainty Quantification and Sensitivity Analysis.

    NASA Astrophysics Data System (ADS)

    McKinney, S. W.

    2015-12-01

    Effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, leaving out benefits of the many unincluded methods. Many UQ and SA methods are useful for analyzing models with specific characteristics; therefore, programming these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased relative to each unique model.

  13. Integrated Sensitivity Analysis Workflow

    SciTech Connect

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.; Clay, Robert L.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  14. Advances in Sensitivity Analysis Capabilities with SCALE 6.0 and 6.1

    SciTech Connect

    Rearden, Bradley T; Petrie Jr, Lester M; Williams, Mark L

    2010-01-01

    The sensitivity and uncertainty analysis sequences of SCALE compute the sensitivity of k{sub eff} to each constituent multigroup cross section using perturbation theory based on forward and adjoint transport computations with several available codes. Versions 6.0 and 6.1 of SCALE, released in 2009 and 2010, respectively, include important additions to the TSUNAMI-3D sequence, which computes forward and adjoint solutions in multigroup with the KENO Monte Carlo codes. Previously, sensitivity calculations were performed with the simple and efficient geometry capabilities of KENO V.a, but now calculations can also be performed with the generalized geometry code KENO-VI. TSUNAMI-3D requires spatial refinement of the angular flux moment solutions for the forward and adjoint calculations. These refinements are most efficiently achieved with the use of a mesh accumulator. For SCALE 6.0, a more flexible mesh accumulator capability has been added to the KENO codes, enabling varying granularity of the spatial refinement to optimize the calculation for different regions of the system model. The new mesh capabilities allow the efficient calculation of larger models than were previously possible. Additional improvements in the TSUNAMI calculations were realized in the computation of implicit effects of resonance self-shielding on the final sensitivity coefficients. Multigroup resonance self-shielded cross sections are accurately computed with SCALE's robust deterministic continuous-energy treatment for the resolved and thermal energy range and with Bondarenko shielding factors elsewhere, including the unresolved resonance range. However, the sensitivities of the self-shielded cross sections to the parameters input to the calculation are quantified using only full-range Bondarenko factors.

  15. Advanced Microfabricated Devices for Sensitive Biomarker Detection and Analysis on Mars

    NASA Astrophysics Data System (ADS)

    Skelley, A. M.; Scherer, J. R.; Aubrey, A. D.; Grover, W. H.; Ehrenfreund, P.; Grunthaner, F. J.; Bada, J. L.; Willis, P.; Mathies, R. A.

    2006-12-01

    Detection of life on Mars requires definition of a suitable biomarker and development of sensitive instrumentation capable of performing in situ chemical analyses [1]. Our studies have focused on amino acid analysis because amino acids are more resistant to decomposition than other biomolecules, and because amino acid chirality is a well-defined biomarker. We have developed the Mars Organic Analyzer (MOA), a portable amino acid analysis system that consists of a compact instrument and a novel multi-layer CE microchip, and we have performed extensive laboratory and field validation of this instrument [2]. The heart of the MOA is the 100-mm diameter, 4-mm thick microchip that contains the CE separation channels as well as microfabricated valves and pumps for automated integrated sample preparation and handling. The microfabricated device is operated by a portable instrument that performs CE separation and LIF detection. The limits of detection of fluorescamine-labeled amino acids are in the nM to pM range corresponding to part- per-trillion sensitivities. The MOA has been field tested on soil samples rich in jarosite from the Panoche Valley, CA. These results demonstrate that amines and amino acids can be extracted from sulfate-rich acidic soils such as jarosite and analyzed using the MOA. The MOA was also recently field tested in the Yungay region of the Atacama Desert in Chile. The instrument was successfully operated in this challenging environment and performed over 300 amino acid analyses in a two week period. The MOA has also been used to label and analyze two of the four nucleobases, and methods are being developed to detect PAH's [3]. This presentation will discuss the unique challenges of developing microdevices for sensitive analysis of biomarker compounds. We will also describe current efforts to develop multichannel analysis systems and microfluidic automated analysis systems that will be used to enable flight versions of this instrument. For more details

  16. Advanced Nuclear Measurements - Sensitivity Analysis Emerging Safeguards, Problems and Proliferation Risk

    SciTech Connect

    Dreicer, J.S.

    1999-07-15

    During the past year this component of the Advanced Nuclear Measurements LDRD-DR has focused on emerging safeguards problems and proliferation risk by investigating problems in two domains. The first is related to the analysis, quantification, and characterization of existing inventories of fissile materials, in particular, the minor actinides (MA) formed in the commercial fuel cycle. Understanding material forms and quantities helps identify and define future measurement problems, instrument requirements, and assists in prioritizing safeguards technology development. The second problem (dissertation research) has focused on the development of a theoretical foundation for sensor array anomaly detection. Remote and unattended monitoring or verification of safeguards activities is becoming a necessity due to domestic and international budgetary constraints. However, the ability to assess the trustworthiness of a sensor array has not been investigated. This research is developing an anomaly detection methodology to assess the sensor array.

  17. Development of the High-Order Decoupled Direct Method in Three Dimensions for Particulate Matter: Enabling Advanced Sensitivity Analysis in Air Quality Models

    EPA Science Inventory

    The high-order decoupled direct method in three dimensions for particular matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...

  18. Successful analysis of anticancer drug sensitivity by CD-DST using pleural fluid and ascites from patients with advanced ovarian cancer: case reports.

    PubMed

    Kawaguchi, Makiko; Banno, Kouji; Susumu, Nobuyuki; Yanokura, Megumi; Kuwabara, Yoshiko; Hirao, Nobumaru; Tsukazaki, Katsumi; Nozawa, Shiro

    2005-01-01

    In vitro anticancer drug sensitivity tests have been performed for various types of cancers, and a relationship with clinical response has been observed. The collagen gel droplet-embedded culture drug sensitivity test (CD-DST) is a new in vitro anticancer drug sensitivity test by Yabushita et al., recently reported to be useful in ovarian cancer. CD-DST allows analysis of a small number of cells, compared to other anticancer drug sensitivity tests. Here, we report a successful analysis of anticancer drug sensitivity by CD-DST using cancerous ascites and pleural fluid samples from 2 patients with advanced ovarian cancer. To our knowledge, this is only the second report of the application of CD-DST in ovarian cancer, and our results suggest that CD-DST could be helpful in the selection of anticancer drugs for neoadjuvant chemotherapy in advanced ovarian cancer.

  19. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  20. A one- and two-dimensional cross-section sensitivity and uncertainty path of the AARE (Advanced Analysis for Reactor Engineering) modular code system

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.

    1988-01-01

    AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2- dimensional cross-section sensitivity and uncertainty code SENSIBL. IN the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.

  1. Advanced protein crystal growth programmatic sensitivity study

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The purpose of this study is to define the costs of various APCG (Advanced Protein Crystal Growth) program options and to determine the parameters which, if changed, impact the costs and goals of the programs and to what extent. This was accomplished by developing and evaluating several alternate programmatic scenarios for the microgravity Advanced Protein Crystal Growth program transitioning from the present shuttle activity to the man tended Space Station to the permanently manned Space Station. These scenarios include selected variations in such sensitivity parameters as development and operational costs, schedules, technology issues, and crystal growth methods. This final report provides information that will aid in planning the Advanced Protein Crystal Growth Program.

  2. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO{sub 2} with {sup 235}U enrichments {>=}5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility.Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  3. Brief analysis of causes of sensitive skin and advances in evaluation of anti-allergic activity of cosmetic products.

    PubMed

    Fan, L; He, C; Jiang, L; Bi, Y; Dong, Y; Jia, Y

    2016-04-01

    This review focuses on the causes of sensitive skin and elaborates on the relationship between skin sensitivity and skin irritations and allergies, which has puzzled cosmetologists. Here, an overview is presented of the research on active ingredients in cosmetic products for sensitive skin (anti-sensitive ingredients), which is followed by a discussion of their experimental efficacy. Moreover, several evaluation methods for the efficacy of anti-sensitive ingredients are classified and summarized. Through this review, we aim to provide the cosmetic industry with a better understanding of sensitive skin, which could in turn provide some theoretical guidance to the research on targeted cosmetic products.

  4. Advances in identifying beryllium sensitization and disease.

    PubMed

    Middleton, Dan; Kowalski, Peter

    2010-01-01

    Beryllium is a lightweight metal with unique qualities related to stiffness, corrosion resistance, and conductivity. While there are many useful applications, researchers in the 1930s and 1940s linked beryllium exposure to a progressive occupational lung disease. Acute beryllium disease is a pulmonary irritant response to high exposure levels, whereas chronic beryllium disease (CBD) typically results from a hypersensitivity response to lower exposure levels. A blood test, the beryllium lymphocyte proliferation test (BeLPT), was an important advance in identifying individuals who are sensitized to beryllium (BeS) and thus at risk for developing CBD. While there is no true "gold standard" for BeS, basic epidemiologic concepts have been used to advance our understanding of the different screening algorithms.

  5. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value 8, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in 8; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.

  6. Interference and Sensitivity Analysis

    PubMed Central

    VanderWeele, Tyler J.; Tchetgen Tchetgen, Eric J.; Halloran, M. Elizabeth

    2014-01-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the “no-interference” assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted. PMID:25620841

  7. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, {mu}, and the standard deviation, {sigma} of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both {mu} and {sigma} than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions, for {mu}, {sigma}, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.

  8. New sensitivity analysis attack

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Moulin, Pierre

    2005-03-01

    The sensitivity analysis attacks by Kalker et al. constitute a known family of watermark removal attacks exploiting a vulnerability in some watermarking protocols: the attacker's unlimited access to the watermark detector. In this paper, a new attack on spread spectrum schemes is designed. We first examine one of Kalker's algorithms and prove its convergence using the law of large numbers, which gives more insight into the problem. Next, a new algorithm is presented and compared to existing ones. Various detection algorithms are considered including correlation detectors and normalized correlation detectors, as well as other, more complicated algorithms. Our algorithm is noniterative and requires at most n+1 operations, where n is the dimension of the signal. Moreover, the new approach directly estimates the watermark by exploiting the simple geometry of the detection boundary and the information leaked by the detector.

  9. Recent advances in sensitized mesoscopic solar cells.

    PubMed

    Grätzel, Michael

    2009-11-17

    -intensive high vacuum and materials purification steps that are currently employed in the fabrication of all other thin-film solar cells. Organic materials are abundantly available, so that the technology can be scaled up to the terawatt scale without running into feedstock supply problems. This gives organic-based solar cells an advantage over the two major competing thin-film photovoltaic devices, i.e., CdTe and CuIn(As)Se, which use highly toxic materials of low natural abundance. However, a drawback of the current embodiment of OPV cells is that their efficiency is significantly lower than that for single and multicrystalline silicon as well as CdTe and CuIn(As)Se cells. Also, polymer-based OPV cells are very sensitive to water and oxygen and, hence, need to be carefully sealed to avoid rapid degradation. The research discussed within the framework of this Account aims at identifying and providing solutions to the efficiency problems that the OPV field is still facing. The discussion focuses on mesoscopic solar cells, in particular, dye-sensitized solar cells (DSCs), which have been developed in our laboratory and remain the focus of our investigations. The efficiency problem is being tackled using molecular science and nanotechnology. The sensitizer constitutes the heart of the DSC, using sunlight to pump electrons from a lower to a higher energy level, generating in this fashion an electric potential difference, which can exploited to produce electric work. Currently, there is a quest for sensitizers that achieve effective harnessing of the red and near-IR part of sunlight, converting these photons to electricity better than the currently used generation of dyes. Progress in this area has been significant over the past few years, resulting in a boost in the conversion efficiency of the DSC that will be reviewed.

  10. BIM Gene Polymorphism Lowers the Efficacy of EGFR-TKIs in Advanced Nonsmall Cell Lung Cancer With Sensitive EGFR Mutations: A Systematic Review and Meta-Analysis.

    PubMed

    Huang, Wu Feng; Liu, Ai Hua; Zhao, Hai Jin; Dong, Hang Ming; Liu, Lai Yu; Cai, Shao Xi

    2015-08-01

    The strong association between bcl-2-like 11 (BIM) triggered apoptosis and the presence of epidermal growth factor receptor (EGFR) mutations has been proven in nonsmall cell lung cancer (NSCLC). However, the relationship between EGFR-tyrosine kinase inhibitor's (TKI's) efficacy and BIM polymorphism in NSCLC EGFR is still unclear.Electronic databases were searched for eligible literatures. Data on objective response rates (ORRs), disease control rates (DCRs), and progression-free survival (PFS) stratified by BIM polymorphism status were extracted and synthesized based on random-effect model. Subgroup and sensitivity analyses were conducted.A total of 6 studies that involved a total of 773 EGFR mutant advanced NSCLC patients after EGFR-TKI treatment were included. In overall, non-BIM polymorphism patients were associated with significant prolonged PFS (hazard ratio 0.63, 0.47-0.83, P = 0.001) compared to patients with BIM polymorphism. However, only marginal improvements without statistical significance in ORR (odds ratio [OR] 1.71, 0.91-3.24, P = 0.097) and DCR (OR 1.56, 0.85-2.89, P = 0.153) were observed. Subgroup analyses showed that the benefits of PFS in non-BIM polymorphism group were predominantly presented in pooled results of studies involving chemotherapy-naive and the others, and retrospective studies. Additionally, we failed to observe any significant benefit from patients without BIM polymorphism in every subgroup for ORR and DCR.For advanced NSCLC EGFR mutant patients, non-BIM polymorphism ones are associated with longer PFS than those with BIM polymorphism after EGFR-TKIs treatment. BIM polymorphism status should be considered an essential factor in studies regarding EGFR-targeted agents toward EGFR mutant patients.

  11. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  12. D2PC sensitivity analysis

    SciTech Connect

    Lombardi, D.P.

    1992-08-01

    The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most likely and worst case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.

  13. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  14. Pain sensitivity profiles in patients with advanced knee osteoarthritis.

    PubMed

    Frey-Law, Laura A; Bohr, Nicole L; Sluka, Kathleen A; Herr, Keela; Clark, Charles R; Noiseux, Nicolas O; Callaghan, John J; Zimmerman, M Bridget; Rakel, Barbara A

    2016-09-01

    The development of patient profiles to subgroup individuals on a variety of variables has gained attention as a potential means to better inform clinical decision making. Patterns of pain sensitivity response specific to quantitative sensory testing (QST) modality have been demonstrated in healthy subjects. It has not been determined whether these patterns persist in a knee osteoarthritis population. In a sample of 218 participants, 19 QST measures along with pain, psychological factors, self-reported function, and quality of life were assessed before total knee arthroplasty. Component analysis was used to identify commonalities across the 19 QST assessments to produce standardized pain sensitivity factors. Cluster analysis then grouped individuals who exhibited similar patterns of standardized pain sensitivity component scores. The QST resulted in 4 pain sensitivity components: heat, punctate, temporal summation, and pressure. Cluster analysis resulted in 5 pain sensitivity profiles: a "low pressure pain" group, an "average pain" group, and 3 "high pain" sensitivity groups who were sensitive to different modalities (punctate, heat, and temporal summation). Pain and function differed between pain sensitivity profiles, along with sex distribution; however, no differences in osteoarthritis grade, medication use, or psychological traits were found. Residualizing QST data by age and sex resulted in similar components and pain sensitivity profiles. Furthermore, these profiles are surprisingly similar to those reported in healthy populations, which suggests that individual differences in pain sensitivity are a robust finding even in an older population with significant disease.

  15. Advances in sequence analysis.

    PubMed

    Califano, A

    2001-06-01

    In its early days, the entire field of computational biology revolved almost entirely around biological sequence analysis. Over the past few years, however, a number of new non-sequence-based areas of investigation have become mainstream, from the analysis of gene expression data from microarrays, to whole-genome association discovery, and to the reverse engineering of gene regulatory pathways. Nonetheless, with the completion of private and public efforts to map the human genome, as well as those of other organisms, sequence data continue to be a veritable mother lode of valuable biological information that can be mined in a variety of contexts. Furthermore, the integration of sequence data with a variety of alternative information is providing valuable and fundamentally new insight into biological processes, as well as an array of new computational methodologies for the analysis of biological data.

  16. Advanced Economic Analysis

    NASA Technical Reports Server (NTRS)

    Greenberg, Marc W.; Laing, William

    2013-01-01

    An Economic Analysis (EA) is a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. An EA helps guide decisions on the "worth" of pursuing an action that departs from status quo ... an EA is the crux of decision-support.

  17. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively \\"insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  18. Stiff DAE integrator with sensitivity analysis capabilities

    SciTech Connect

    Serban, R.

    2007-11-26

    IDAS is a general purpose (serial and parallel) solver for differential equation (ODE) systems with senstivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.

  19. Advances in total scattering analysis

    SciTech Connect

    Proffen, Thomas E; Kim, Hyunjeong

    2008-01-01

    In recent years the analysis of the total scattering pattern has become an invaluable tool to study disordered crystalline and nanocrystalline materials. Traditional crystallographic structure determination is based on Bragg intensities and yields the long range average atomic structure. By including diffuse scattering into the analysis, the local and medium range atomic structure can be unravelled. Here we give an overview of recent experimental advances, using X-rays as well as neutron scattering as well as current trends in modelling of total scattering data.

  20. Impact of neoadjuvant single or dual HER2 inhibition and chemotherapy backbone upon pathological complete response in operable and locally advanced breast cancer: Sensitivity analysis of randomized trials.

    PubMed

    Bria, Emilio; Carbognin, Luisa; Furlanetto, Jenny; Pilotto, Sara; Bonomi, Maria; Guarneri, Valentina; Vicentini, Cecilia; Brunelli, Matteo; Nortilli, Rolando; Pellini, Francesca; Sperduti, Isabella; Giannarelli, Diana; Pollini, Giovanni Paolo; Conte, Pierfranco; Tortora, Giampaolo

    2014-08-01

    The role of the dual HER2 inhibition, and the best chemotherapy backbone for neoadjuvant chemotherapy still represent an issue for clinical practice. A literature-based meta-analysis exploring single versus dual HER2 inhibition in terms of pathological complete response (pCR, breast plus axilla) rate and testing the interaction according to the chemotherapy (anthracyclines-taxanes or taxanes) was conducted. In addition, an event-based pooled analysis by extracting activity and safety events and deriving 95% confidence intervals (CI) was accomplished. Fourteen trials (4149 patients) were identified, with 6 trials (1820 patients) included in the meta-analysis and 31 arms (14 trials, 3580 patients) in the event-based pooled analysis. The dual HER2 inhibition significantly improves pCR rate, in the range of 16-19%, regardless of the chemotherapy backbone (relative risk 1.37, 95% CI 1.23-1.53, p<0.0001); pCR was significantly higher in the hormonal receptor negative population, regardless of the HER2 inhibition and type of chemotherapy. pCR and the rate of breast conserving surgery was higher when anthracyclines were added to taxanes, regardless of the HER2 inhibition. Severe neutropenia was higher with the addition of anthracyclines to taxanes, with an absolute difference of 19.7%, despite no differences in febrile neutropenia. While no significant differences according to the HER2 inhibition were found in terms of cardiotoxicity, a slightly difference for grade 3-4 (1.2%) against the addition of anthracyclines was calculated. The dual HER2 inhibition for the neoadjuvant treatment of HER2-positive breast cancer significantly increases pCR; the combination of anthracyclines, taxanes and anti-Her2 agents should be currently considered the standard of care.

  1. CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability

    EPA Pesticide Factsheets

    Methods for controlling natural variability, predicting environmental conditions from biological observations method, biological trait data, species sensitivity distributions, propensity scores, Advanced Analyses of Data Analysis references.

  2. CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability: SSD Plot Diagrams

    EPA Pesticide Factsheets

    Methods for controlling natural variability, predicting environmental conditions from biological observations method, biological trait data, species sensitivity distributions, propensity scores, Advanced Analyses of Data Analysis references.

  3. Analysis of Advanced Rotorcraft Configurations

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2000-01-01

    Advanced rotorcraft configurations are being investigated with the objectives of identifying vehicles that are larger, quieter, and faster than current-generation rotorcraft. A large rotorcraft, carrying perhaps 150 passengers, could do much to alleviate airport capacity limitations, and a quiet rotorcraft is essential for community acceptance of the benefits of VTOL operations. A fast, long-range, long-endurance rotorcraft, notably the tilt-rotor configuration, will improve rotorcraft economics through productivity increases. A major part of the investigation of advanced rotorcraft configurations consists of conducting comprehensive analyses of vehicle behavior for the purpose of assessing vehicle potential and feasibility, as well as to establish the analytical models required to support the vehicle development. The analytical work of FY99 included applications to tilt-rotor aircraft. Tilt Rotor Aeroacoustic Model (TRAM) wind tunnel measurements are being compared with calculations performed by using the comprehensive analysis tool (Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD 11)). The objective is to establish the wing and wake aerodynamic models that are required for tilt-rotor analysis and design. The TRAM test in the German-Dutch Wind Tunnel (DNW) produced extensive measurements. This is the first test to encompass air loads, performance, and structural load measurements on tilt rotors, as well as acoustic and flow visualization data. The correlation of measurements and calculations includes helicopter-mode operation (performance, air loads, and blade structural loads), hover (performance and air loads), and airplane-mode operation (performance).

  4. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.

  5. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a {open_quotes}sensitivity analysis.{close_quotes} A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.

  6. Iterative methods for design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Yoon, B. G.

    1989-01-01

    A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivity, as well as for eignevalues and eigenvector sensitivity, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.

  7. Advances in Barkhausen noise analysis

    NASA Astrophysics Data System (ADS)

    Meyendorf, Norbert; Hillmann, Susanne; Cikalova, Ulana; Schreiber, Juergen

    2014-03-01

    The magnetic Barkhausen Noise technique is a well suited method for the characterization of ferromagnetic materials. The Barkhausen effect results in an interaction between the magnetic structure and the microstructure of materials, and is sensitive to the stresses and microstructure related mechanical properties. Barkhausen noise is a complex signal that provides a large amount of information, for example frequency spectrum, amplitude, RMS value, dependence of magnetic field strength, magnetization frequency and fractal behavior. Although this technique has a lot potentials, it is not commonly used in nondestructive material testing. Large sensors and complex calibration procedures made the method impractical for many applications. However, research has progressed in recent years; new sensor designs were developed and evaluated, new algorithms to simplify the calibration and measurement procedures were developed as well as analysis of additional material properties have been introduced.

  8. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  9. Structural sensitivity analysis: Methods, applications and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.

  10. Structural sensitivity analysis: Methods, applications, and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Some innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. These techniques include a finite-difference step-size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, a simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Finally, some of the critical needs in the structural sensitivity area are indicated along with Langley plans for dealing with some of these needs.

  11. Sensitivity analysis, optimization, and global critical points

    SciTech Connect

    Cacuci, D.G.

    1989-11-01

    The title of this paper suggests that sensitivity analysis, optimization, and the search for critical points in phase-space are somehow related; the existence of such a kinship has been undoubtedly felt by many of the nuclear engineering practitioners of optimization and/or sensitivity analysis. However, a unified framework for displaying this relationship has so far been lacking, especially in a global setting. The objective of this paper is to present such a global and unified framework and to suggest, within this framework, a new direction for future developments for both sensitivity analysis and optimization of the large nonlinear systems encountered in practical problems.

  12. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for performing sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In such approaches, equations for the propagation of uncertainty are constructed and the sensitivities are solved for directly as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps alongside other physical parameters of interest, the simulation is allowed
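
    The contrast with the 'black box' approach can be made concrete on a scalar model equation dx/dt = -px: the forward sensitivity s = dx/dp obeys its own ODE and is integrated alongside the state. The sketch below is purely illustrative and unrelated to any specific reactor code.

        import numpy as np
        from scipy.integrate import solve_ivp

        p = 0.7

        def rhs(t, y):
            x, s = y                    # s = dx/dp, the forward sensitivity
            return [-p * x,             # model equation   dx/dt = -p x
                    -x - p * s]         # sensitivity eq.  ds/dt = (d rhs/dx) s + d rhs/dp

        sol = solve_ivp(rhs, [0.0, 2.0], [1.0, 0.0], rtol=1e-10, atol=1e-12)
        x_T, s_T = sol.y[:, -1]
        print(s_T, -2.0 * np.exp(-p * 2.0))   # analytic dx/dp at t=2 is -t*exp(-p*t)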

  13. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  14. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
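
    The motivation is easy to reproduce: when the input-output relationship is nonlinear, linear-regression R^2 understates the sensitivity while a smoother captures it. In the sketch below a simple Gaussian kernel smoother stands in for LOESS and the other stepwise procedures described in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-2, 2, 400)
        y = np.sin(2 * x) + 0.1 * rng.normal(size=x.size)   # nonlinear response

        # Linear-regression sensitivity measure (R^2) misses the relationship...
        A = np.vstack([x, np.ones_like(x)]).T
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2_lin = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)

        # ...while a nonparametric kernel smoother recovers it.
        def kernel_smooth(x, y, xq, h=0.2):
            w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
            return (w * y).sum(axis=1) / w.sum(axis=1)

        yhat = kernel_smooth(x, y, x)
        r2_smooth = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"linear R^2 = {r2_lin:.3f}, smoothed R^2 = {r2_smooth:.3f}")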

  15. Sensitivity analysis for solar plates

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1986-01-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  16. Sensitivity analysis for solar plates

    NASA Astrophysics Data System (ADS)

    Aster, R. W.

    1986-02-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  17. Recent Advances in Multidisciplinary Analysis and Optimization, part 1

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  18. Recent Advances in Multidisciplinary Analysis and Optimization, part 2

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  19. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  20. ADVANCED POWER SYSTEMS ANALYSIS TOOLS

    SciTech Connect

    Robert R. Jensen; Steven A. Benson; Jason D. Laumb

    2001-08-31

    The use of Energy and Environmental Research Center (EERC) modeling tools and improved analytical methods has provided key information for optimizing advanced power system design and operating conditions for efficiency, producing minimal air pollutant emissions, and utilizing a wide range of fossil fuel properties. This project was divided into four tasks: the demonstration of the ash transformation model, upgrading spreadsheet tools, enhancements to analytical capabilities using scanning electron microscopy (SEM), and improvements to the slag viscosity model. The ash transformation model, Atran, was used to predict the size and composition of ash particles, which have a major impact on the fate of ash in the combustion system. To optimize Atran, key factors such as mineral fragmentation and coalescence and the heterogeneous and homogeneous interactions of the organically associated elements must be considered as they apply to the operating conditions. The resulting model's ash composition compares favorably to measured results. Enhancements to existing EERC spreadsheet applications included upgrading interactive spreadsheets to calculate the thermodynamic properties of fuels, reactants, products, and steam, with Newton-Raphson algorithms to perform calculations on mass, energy, and elemental balances, isentropic expansion of steam, and gasifier equilibrium conditions. Derivative calculations can be performed to estimate fuel heating values, adiabatic flame temperatures, emission factors, comparative fuel costs, and per-unit carbon taxes from fuel analyses. Using state-of-the-art computer-controlled scanning electron microscopes and associated microanalysis systems, a method was developed to determine viscosity by incorporating grey-scale binning acquired from the SEM image. A backscattered electron image can be subdivided into various grey-scale ranges that can be analyzed separately. Since the grey scale's intensity is
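
    As an illustration of the Newton-Raphson balance calculations mentioned above, the sketch below solves a single sensible-heat energy balance; the cp(T) model and the heat input are invented for the example.

        # Minimal Newton-Raphson sketch for an energy balance of the kind the
        # spreadsheet tools solve; the enthalpy model below is purely illustrative.
        def residual(T, q_in=2.0e6):
            cp = 1000.0 + 0.2 * T           # hypothetical cp(T), J/(kg K)
            return cp * (T - 298.0) - q_in  # sensible-heat balance, J/kg

        def dresidual_dT(T):
            return 1000.0 + 0.4 * T - 0.2 * 298.0

        T = 1000.0                           # initial guess, K
        for _ in range(20):
            step = residual(T) / dresidual_dT(T)
            T -= step
            if abs(step) < 1e-8:
                break
        print(f"balance temperature: {T:.1f} K")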

  1. Pressure-Sensitive Paints Advance Rotorcraft Design Testing

    NASA Technical Reports Server (NTRS)

    2013-01-01

    The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center s Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. With PSP, each pixel recorded by the camera becomes an optical pressure

  2. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, James C., III; Taylor, Arthur C., III; Barnwell, Richard W.; Newman, Perry A.; Hou, Gene J.-W.

    1999-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics (CFD). The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid CFD algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid CFDs in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  3. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  4. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation and apply it to the problem of optimizing an ultrawideband antenna.
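
    The essence of the adjoint method is that one extra linear solve yields the sensitivity of an output functional to any number of parameters. The sketch below uses a small generic linear system as a stand-in for the H(curl) finite element discretization; the matrices are invented for the example.

        import numpy as np

        # Discrete adjoint sketch for J(p) = c^T x subject to A(p) x = b.
        def A(p):
            return np.array([[3.0, p], [p, 4.0]])

        dA_dp = np.array([[0.0, 1.0], [1.0, 0.0]])
        b = np.array([1.0, 2.0])
        c = np.array([1.0, -1.0])
        p0 = 0.5

        x = np.linalg.solve(A(p0), b)          # forward solve
        lam = np.linalg.solve(A(p0).T, c)      # one adjoint solve serves all parameters
        dJ_dp = -lam @ dA_dp @ x               # dJ/dp = -lambda^T (dA/dp) x
        print(dJ_dp)

        # Finite-difference check of the adjoint sensitivity
        h = 1e-6
        x_h = np.linalg.solve(A(p0 + h), b)
        print((c @ x_h - c @ x) / h)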

  5. Advanced materials: Information and analysis needs

    SciTech Connect

    Curlee, T.R.; Das, S.; Lee, R.; Trumble, D.

    1990-09-01

    This report presents the findings of a study to identify the types of information and analysis that are needed for advanced materials. The project was sponsored by the US Bureau of Mines (BOM). It includes a conceptual description of information needs for advanced materials and the development and implementation of a questionnaire on the same subject. This report identifies twelve fundamental differences between advanced and traditional materials and discusses the implications of these differences for data and analysis needs. Advanced and traditional materials differ significantly in terms of physical and chemical properties. Advanced material properties can be customized more easily. The production of advanced materials may differ from traditional materials in terms of inputs, the importance of by-products, the importance of different processing steps (especially fabrication), and scale economies. The potential for change in advanced materials characteristics and markets is greater and is derived from the marriage of radically different materials and processes. In addition to the conceptual study, a questionnaire was developed and implemented to assess the opinions of people who are likely users of BOM information on advanced materials. The results of the questionnaire, which was sent to about 1000 people, generally confirm the propositions set forth in the conceptual part of the study. The results also provide data on the categories of advanced materials and the types of information that are of greatest interest to potential users. 32 refs., 1 fig., 12 tabs.

  6. Advanced Technology Lifecycle Analysis System (ATLAS)

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures require multidisciplinary analysis based on physics calculations, and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet Each minute of a design and analysis study within a concurrent engineering environment is expensive due the size of the team and supporting equipment The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts into system oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst specified parameters to the models and data among the models. An integrator workbook calls a history based parametric analysis cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is

  7. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  8. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and would be obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD′, δD″) between these two curves were derived and used as the boundary
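
    For reference, a minimal 1-D implementation of the conventional gamma computation (not the derivative-based extension proposed here) is sketched below; tanh stands in for the error-function penumbra, and the 0.5 mm / 0.5% errors are illustrative.

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta=1.0, dd=1.0):
            # dta in mm, dd in % of a normalization dose (here the reference maximum).
            norm = d_ref.max()
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist = (x_eval - xr) / dta
                dose = 100.0 * (d_eval - dr) / (norm * dd)
                gammas[i] = np.sqrt(dist ** 2 + dose ** 2).min()
            return gammas

        x = np.linspace(-10, 10, 201)                 # position, mm
        ref = 0.5 * (1 - np.tanh(x))                  # penumbra-like profile
        ev = 0.5 * (1 - np.tanh(x - 0.5)) * 1.005     # 0.5 mm shift, 0.5% dose error
        g = gamma_1d(x, ref, x, ev)
        print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")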

  9. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters, while an experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  10. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems-theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful for improving the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
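
    The central quantity in VARS, the directional variogram of the response surface, can be estimated with plain random sampling (the STAR-VARS star-based sampling is more elaborate). The two-factor response below is hypothetical.

        import numpy as np

        def response(x1, x2):
            # Hypothetical model response over two factors on [0, 1].
            return np.sin(3 * x1) + 0.5 * x2 ** 2

        rng = np.random.default_rng(1)
        x2 = rng.uniform(0, 1, 2000)
        for h in np.linspace(0.05, 0.3, 6):
            x1 = rng.uniform(0, 1 - h, 2000)     # keep x1 + h inside the domain
            # Directional variogram along x1: gamma(h) = 0.5 E[(f(x+h) - f(x))^2]
            gamma_h = 0.5 * np.mean((response(x1 + h, x2) - response(x1, x2)) ** 2)
            print(f"h = {h:.2f}  gamma(h) = {gamma_h:.4f}")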

  11. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  12. LHC Olympics: Advanced Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Armour, Kyle; Larkoski, Andrew; Gray, Amanda; Ventura, Dan; Walsh, Jon; Schabinger, Rob

    2006-05-01

    The LHC Olympics is a series of workshops aimed at encouraging theorists and experimentalists to prepare for the soon-to-be-online Large Hadron Collider in Geneva, Switzerland. One aspect of the LHC Olympics program consists of the study of simulated data sets that represent various possible new-physics signals as they would be seen in LHC detectors. Through this exercise, LHC Olympians learn the phenomenology of possible new-physics models and gain experience in analyzing LHC data. Additionally, the LHC Olympics encourages discussion between theorists and experimentalists, and through this collaboration new techniques could be developed. The University of Washington LHC Olympics group consists of several first-year graduate and senior undergraduate students, in both theoretical and experimental particle physics. Presented here is a discussion of some of the more advanced techniques used and the recent results of one such LHC Olympics study.

  13. Advanced analysis methods in particle physics

    SciTech Connect

    Bhat, Pushpalatha C.

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous: more powerful, more complex, and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods, and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.

  14. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  15. Sensitive chiral analysis by CE: an update.

    PubMed

    Sánchez-Hernández, Laura; Crego, Antonio Luis; Marina, María Luisa; García-Ruiz, Carmen

    2008-01-01

    A general view of the different strategies used in recent years to enhance detection sensitivity in chiral analysis by CE is provided in this article. With this purpose, and in order to update the previous review by García-Ruiz et al., the articles that appeared on this subject from January 2005 to March 2007 are considered. Three main strategies were employed to increase detection sensitivity in chiral analysis by CE: (i) the use of off-line sample treatment techniques, (ii) the employment of in-capillary preconcentration techniques based on electrophoretic principles, and (iii) the use of alternative detection systems to the widely employed on-column UV-Vis absorption detection. Combinations of two or three of the above-mentioned strategies gave rise to concentration detection limits as low as 10(-10) M, enabling enantiomer analysis in a variety of real samples including complex biological matrices.

  16. Engaging Chinese American Adults in Advance Care Planning: A Community-Based, Culturally Sensitive Seminar.

    PubMed

    Lee, Mei Ching; Hinderer, Katherine A; Friedmann, Erika

    2015-08-01

    Ethnic minority groups are less engaged than Caucasian American adults in advance care planning (ACP). Knowledge deficits, language, and culture are barriers to ACP. Limited research exists on ACP and advance directives in the Chinese American adult population. Using a pre-posttest, repeated measures design, the current study explored the effectiveness of a nurse-led, culturally sensitive ACP seminar for Chinese American adults on (a) knowledge, completion, and discussion of advance directives; and (b) the relationship between demographic variables, advance directive completion, and ACP discussions. A convenience sample of 72 urban, community-dwelling Chinese American adults (mean age=61 years) was included. Knowledge, advance directive completion, and ACP discussions increased significantly after attending the nurse-led seminar (p<0.01). Increased age correlated with advance directive completion and ACP discussions; female gender correlated with ACP discussions. Nursing education in a community setting increased advance directive knowledge and ACP engagement in Chinese American adults.

  17. Advanced nuclear energy analysis technology.

    SciTech Connect

    Gauntt, Randall O.; Murata, Kenneth K.; Romero, Vicente José; Young, Michael Francis; Rochau, Gary Eugene

    2004-05-01

    A two-year effort focused on applying ASCI technology developed for the analysis of weapons systems to the state-of-the-art accident analysis of a nuclear reactor system was proposed. The Sandia SIERRA parallel computing platform for ASCI codes includes high-fidelity thermal, fluids, and structural codes whose coupling through SIERRA can be specifically tailored to the particular problem at hand to analyze complex multiphysics problems. Presently, however, the suite lacks several physics modules unique to the analysis of nuclear reactors. The NRC MELCOR code, not presently part of SIERRA, was developed to analyze severe accidents in present-technology reactor systems. We attempted to: (1) evaluate the SIERRA code suite for its current applicability to the analysis of next generation nuclear reactors, and the feasibility of implementing MELCOR models into the SIERRA suite, (2) examine the possibility of augmenting ASCI codes or alternatives by coupling to the MELCOR code, or portions thereof, to address physics particular to nuclear reactor issues, especially those facing next generation reactor designs, and (3) apply the coupled code set to a demonstration problem involving a nuclear reactor system. We were successful in completing the first two in sufficient detail to determine that an extensive demonstration problem was not feasible at this time. In the future, completion of this research would demonstrate the feasibility of performing high fidelity and rapid analyses of safety and design issues needed to support the development of next generation power reactor systems.

  18. Sensitivity Analysis for Oscillating Dynamical Systems

    PubMed Central

    Wilkins, A. Katharina; Tidor, Bruce; White, Jacob; Barton, Paul I.

    2012-01-01

    Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
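
    The exact boundary-value formulations are beyond a short sketch, but the finite-difference baseline against which such results are verified is simple to reproduce, here for the period of the Van der Pol limit cycle (chosen as a generic stand-in oscillator).

        import numpy as np
        from scipy.integrate import solve_ivp

        def period(mu):
            # Integrate the Van der Pol oscillator past its transient, then measure
            # the limit-cycle period from successive upward zero crossings of x.
            def f(t, y):
                return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]
            ev = lambda t, y: y[0]
            ev.direction = 1
            sol = solve_ivp(f, [0, 200], [2.0, 0.0], events=ev,
                            rtol=1e-10, atol=1e-12)
            t = sol.t_events[0]
            return np.mean(np.diff(t[-10:]))   # average over the last few cycles

        mu0, h = 1.0, 1e-4
        dT_dmu = (period(mu0 + h) - period(mu0 - h)) / (2 * h)
        print(f"dT/dmu ~ {dT_dmu:.4f}")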

  19. Demonstration sensitivity analysis for RADTRAN III

    SciTech Connect

    Neuhauser, K S; Reardon, P C

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions of accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.

  20. [Sensitivity analysis in health investment projects].

    PubMed

    Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C

    1994-01-01

    This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.

  1. Optimal control concepts in design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.

    1987-01-01

    A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.

  2. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the States' Policy Analysis identifies key components of…

  3. Sensitivity Studies of Advanced Reactors Coupled to High Temperature Electrolysis (HTE) Hydrogen Production Processes

    SciTech Connect

    Edwin A. Harvego; Michael G. McKellar; James E. O'Brien; J. Stephen Herring

    2007-04-01

    High Temperature Electrolysis (HTE), when coupled to an advanced nuclear reactor capable of operating at reactor outlet temperatures of 800 °C to 950 °C, has the potential to efficiently produce the large quantities of hydrogen needed to meet future energy and transportation needs. To evaluate the potential benefits of nuclear-driven hydrogen production, the UniSim process analysis software was used to evaluate different reactor concepts coupled to a reference HTE process design concept. The reference HTE concept included an intermediate heat exchanger and intermediate helium loop to separate the reactor primary system from the HTE process loops, and additional heat exchangers to transfer reactor heat from the intermediate loop to the HTE process loops. The two process loops consisted of the water/steam loop feeding the cathode side of a HTE electrolysis stack, and the steam or air sweep loop used to remove oxygen from the anode side. The UniSim model of the process loops included pumps to circulate the working fluids and heat exchangers to recover heat from the oxygen and hydrogen product streams to improve the overall hydrogen production efficiencies. The reference HTE process loop model was coupled to separate UniSim models developed for three different advanced reactor concepts (a high-temperature helium cooled reactor concept and two different supercritical CO2 reactor concepts). Sensitivity studies were then performed to evaluate the effect of reactor outlet temperature on the power cycle efficiency and overall hydrogen production efficiency for each of the reactor power cycles. The results of these sensitivity studies showed that overall power cycle and hydrogen production efficiencies increased with reactor outlet temperature, but the power cycle producing the highest efficiencies varied depending on the temperature range considered.

  4. Sensitivity Analysis of Automated Ice Edge Detection

    NASA Astrophysics Data System (ADS)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

    The importance of highly detailed and time-sensitive ice charts has increased with the growing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by the national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.

  5. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. The research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (origin-destination) pairs, and the road network. A sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, can improve calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all of the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
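
    The capacity-degradation idea can be illustrated on a single segment with the standard BPR travel-time function; the paper's traffic utility index and network-level models are richer than this sketch, and the volume and capacity figures are invented.

        # Capacity-degradation sensitivity for one road segment using the
        # standard BPR travel-time function t = t0 * (1 + alpha * (v/c)^beta).
        def bpr_time(volume, capacity, t0=10.0, alpha=0.15, beta=4.0):
            return t0 * (1.0 + alpha * (volume / capacity) ** beta)

        v, c = 900.0, 1000.0
        base = bpr_time(v, c)
        for degrade in (0.1, 0.2, 0.3):          # fraction of capacity lost
            t = bpr_time(v, c * (1.0 - degrade))
            print(f"{int(degrade * 100)}% capacity loss: travel time +{100 * (t / base - 1):.1f}%")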

  6. Advanced Interval Management: A Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Timer, Sebastian; Peters, Mark

    2016-01-01

    This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

  7. Advances in Mid-Infrared Spectroscopy for Chemical Analysis

    NASA Astrophysics Data System (ADS)

    Haas, Julian; Mizaikoff, Boris

    2016-06-01

    Infrared spectroscopy in the 3-20 μm spectral window has evolved from a routine laboratory technique into a state-of-the-art spectroscopy and sensing tool by benefitting from recent progress in increasingly sophisticated spectra acquisition techniques and advanced materials for generating, guiding, and detecting mid-infrared (MIR) radiation. Today, MIR spectroscopy provides molecular information with trace to ultratrace sensitivity, fast data acquisition rates, and high spectral resolution catering to demanding applications in bioanalytics, for example, and to improved routine analysis. In addition to advances in miniaturized device technology without sacrificing analytical performance, selected innovative applications for MIR spectroscopy ranging from process analysis to biotechnology and medical diagnostics are highlighted in this review.

  8. Chemistry in Protoplanetary Disks: A Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Vasyunin, A. I.; Semenov, D.; Henning, Th.; Wakelam, V.; Herbst, Eric; Sobolev, A. M.

    2008-01-01

    We study how uncertainties in the rate coefficients of chemical reactions in the RATE 06 database affect abundances and column densities of key molecules in protoplanetary disks. We randomly varied the gas-phase reaction rates within their uncertainty limits and calculated the time-dependent abundances and column densities using a gas-grain chemical model and a flaring steady-state disk model. We find that key species can be separated into two distinct groups according to the sensitivity of their column densities to the rate uncertainties. The first group includes CO, C+, H3+, H2O, NH3, N2H+, and HCNH+. For these species the column densities are not very sensitive to the rate uncertainties, but the abundances in specific regions are. The second group includes CS, CO2, HCO+, H2CO, C2H, CN, HCN, HNC, and other, more complex species, for which high abundances and abundance uncertainties coexist in the same disk region, leading to larger scatter in column densities. However, even for complex and heavy molecules, the dispersion in their column densities is not more than a factor of ~4. We perform a sensitivity analysis of the computed abundances to rate uncertainties and identify those reactions with the most problematic rate coefficients. We conclude that the rate coefficients of about a hundred chemical reactions need to be determined more accurately in order to greatly improve the reliability of modern astrochemical models. This improvement should be an ultimate goal of future laboratory studies and theoretical investigations.
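
    The perturbation strategy can be mimicked on a toy two-step network A -> B -> C, sampling each rate log-uniformly within an uncertainty factor and recording the spread in a species abundance; the real study does this for a full gas-grain network, and the rates below are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        k1, k2, F = 1.0, 0.3, 2.0      # nominal rates and uncertainty factor
        T = 3.0

        def b_abundance(k1, k2, t=T, a0=1.0):
            # Analytic solution of the linear chain A -> B -> C for species B.
            return a0 * k1 * (np.exp(-k1 * t) - np.exp(-k2 * t)) / (k2 - k1)

        # Monte Carlo: draw each rate log-uniformly within [k/F, k*F].
        samples = [b_abundance(k1 * F ** rng.uniform(-1, 1),
                               k2 * F ** rng.uniform(-1, 1)) for _ in range(5000)]
        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"nominal {b_abundance(k1, k2):.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")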

  9. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias into LCA outcomes, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.
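
    A minimal sketch of this kind of Monte Carlo treatment, with invented lognormal inventories standing in for the industrial datasets: the probability that one product's global warming potential undercuts the other's is read directly off the samples.

        import numpy as np

        # Illustrative uncertainty propagation; all numbers below are invented.
        rng = np.random.default_rng(3)
        n = 10000
        gwp_bio = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=n)    # kg CO2-eq
        gwp_petro = rng.lognormal(mean=np.log(2.4), sigma=0.20, size=n)  # kg CO2-eq
        p = np.mean(gwp_bio < gwp_petro)
        print(f"P(biopolymer GWP < petrochemical GWP) = {p:.2f}")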

  10. Recent Advances in Morphological Cell Image Analysis

    PubMed Central

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have made great contributions to a doctor's results and conclusions. Morphological cell analysis addresses cell shape, cell regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristics extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  11. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
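
    One of the simpler sensitivity measures of this kind, estimating success probability conditioned on low and high values of each dispersed input, is sketched below; the factor names and the miss-distance model are hypothetical, not the CFT's actual internals.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 20000
        X = rng.normal(size=(n, 3))                      # dispersed inputs (toy)
        miss = 1.0 + 0.8 * X[:, 0] + 0.1 * X[:, 1] ** 2  # toy miss-distance model
        success = miss < 2.0                             # requirement: miss < 2 (units)

        # Split each factor at its median and compare conditional success rates;
        # note a median split can miss symmetric (e.g., quadratic) influences.
        for j, name in enumerate(["thrust", "mass", "wind"]):   # hypothetical names
            med = np.median(X[:, j])
            p_lo = success[X[:, j] <= med].mean()
            p_hi = success[X[:, j] > med].mean()
            print(f"{name}: P(success | low) = {p_lo:.3f}, P(success | high) = {p_hi:.3f}")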

  12. Advanced Climate Analysis and Long Range Forecasting

    DTIC Science & Technology

    2014-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Advanced Climate Analysis and Long Range Forecasting ... project is to improve the long range and climate support provided by the U.S. Naval Oceanography Enterprise (NOe) for planning, conducting, and ... months, several seasons, several years). The primary transition focus is on improving the long range and climate support capabilities of the Fleet

  13. Design, analysis and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Garcia, A., III

    1982-01-01

    An analytical methodology for advanced encapsulation designs was developed. From these methods, design sensitivities are established for the development of photovoltaic module criteria and the definition of needed research tasks. Analytical models were developed to perform optical, thermal, and electrical analyses on candidate encapsulation systems. From these analyses, several candidate systems were selected for qualification testing. Additionally, test specimens of various types were constructed and tested to determine the validity of the analysis methodology developed. Identified deficiencies and/or discrepancies between the analytical models and the relevant test data were corrected, and the prediction capability of the analytical models was improved. Encapsulation engineering generalities, principles, and design aids for photovoltaic module designers were generated.

  14. Hormetic modulation of hepatic insulin sensitivity by advanced glycation end products.

    PubMed

    Fabre, Nelly T; Thieme, Karina; Silva, Karolline S; Catanozi, Sérgio; Cavaleiro, Ana Mercedes; Pinto, Danilo A C; Okamoto, Maristela M; Morais, Mychel Raony P T; Falquetto, Bárbara; Zorn, Telma M; Machado, Ubiratan F; Passarelli, Marisa; Correa-Giannella, Maria Lúcia

    2017-05-15

    Because of the paucity of information regarding the metabolic effects of advanced glycation end products (AGEs) on the liver, we evaluated the effects of chronic AGE administration on (1) insulin sensitivity; (2) hepatic expression of genes involved in AGE, glucose, and fat metabolism, oxidative stress, and inflammation; and (3) hepatic morphology and glycogen content. Rats received intraperitoneal injections of albumin modified by advanced glycation (AlbAGE) or unmodified albumin for 12 weeks. AlbAGE induced whole-body insulin resistance concomitantly with increased hepatic insulin sensitivity, evidenced by activation of AKT, inactivation of GSK3, increased hepatic glycogen content, and decreased expression of gluconeogenesis genes. Additionally, there was a reduction in hepatic fat content and in the expression of lipogenic, pro-inflammatory, and pro-oxidative genes, and an increase in reactive oxygen species and in the nuclear expression of NRF2, a transcription factor essential to the cytoprotective response. Although considered toxic, AGEs become protective when administered chronically, stimulating AKT signaling, which is involved in cellular defense and insulin sensitivity.

  15. The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

  16. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    DTIC Science & Technology

    2015-04-01

    ARL-TR-7250, April 2015. US Army Research Laboratory. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN), by William M Sherrill, Weapons and Materials Research Directorate.

  17. A Sensitivity Analysis of SOLPS Plasma Detachment

    NASA Astrophysics Data System (ADS)

    Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration

    2016-10-01

    Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.

  18. Stormwater quality models: performance and sensitivity analysis.

    PubMed

    Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W

    2010-01-01

    The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
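
    A bare-bones random-walk Metropolis sampler, in the spirit of the simplified MCMC used in the paper, can be sketched as follows; the exponential washoff model, prior, and synthetic data are illustrative only, not the models tested in the study.

      import numpy as np

      rng = np.random.default_rng(1)

      def washoff(k, rain):                  # toy exponential washoff model
          return 1.0 - np.exp(-k * rain)

      rain = np.linspace(0.5, 20, 30)        # synthetic storm depths (mm)
      obs = washoff(0.15, rain) + rng.normal(0, 0.05, rain.size)

      def log_post(k):                       # flat prior on k in (0, 1)
          if not 0 < k < 1:
              return -np.inf
          resid = obs - washoff(k, rain)
          return -0.5 * np.sum(resid**2) / 0.05**2

      k, chain = 0.5, []
      for _ in range(20_000):                # random-walk Metropolis
          prop = k + rng.normal(0, 0.02)
          if np.log(rng.uniform()) < log_post(prop) - log_post(k):
              k = prop
          chain.append(k)

      post = np.array(chain[5000:])          # discard burn-in
      print(f"k = {post.mean():.3f} +/- {post.std():.3f}")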

  19. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
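
    LSENS itself is a Fortran code; as a hedged illustration of the quantity it reports, the Python below estimates one sensitivity coefficient, the derivative of a species concentration with respect to a rate coefficient, by central finite differences on a two-step kinetics ODE (this is not LSENS's internal algorithm, and the mechanism is a toy).

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, k1, k2):            # A -> B -> C, first-order steps
          a, b, c = y
          return [-k1 * a, k1 * a - k2 * b, k2 * b]

      def b_at_t(k1, k2=0.5, t_end=2.0):
          sol = solve_ivp(rhs, (0, t_end), [1.0, 0.0, 0.0],
                          args=(k1, k2), rtol=1e-10, atol=1e-12)
          return sol.y[1, -1]           # [B] at t_end

      k1, h = 1.0, 1e-5                 # central-difference sensitivity
      sens = (b_at_t(k1 + h) - b_at_t(k1 - h)) / (2 * h)
      print(f"d[B]/dk1 at t = 2: {sens:.4f}")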

  20. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  1. Sensitivity Analysis of OECD Benchmark Tests in BISON

    SciTech Connect

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
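
    The correlation part of such a study is straightforward to reproduce in outline; the sketch below computes Pearson and Spearman coefficients per input from a sample matrix, with input names, ranges, and the response model all hypothetical stand-ins for the BISON/Dakota setup.

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(7)
      n_samples, names = 300, ["gap_width", "clad_thickness", "power"]

      X = rng.uniform(0.9, 1.1, (n_samples, len(names)))  # scaled inputs
      # Toy response standing in for a fuel centerline temperature (K).
      y = (1200 + 300 * X[:, 0] + 50 * X[:, 2] ** 2
           + rng.normal(0, 5, n_samples))

      for j, name in enumerate(names):
          pr, _ = pearsonr(X[:, j], y)
          sr, _ = spearmanr(X[:, j], y)
          print(f"{name:>15}: Pearson {pr:+.2f}  Spearman {sr:+.2f}")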

  2. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and on noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  3. Advanced Power Plant Development and Analysis Methodologies

    SciTech Connect

    A.D. Rao; G.S. Samuelsen; F.L. Robson; B. Washom; S.G. Berenyi

    2006-06-30

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into advanced power plant systems with goals of achieving high efficiency and minimized environmental impact while using fossil fuels. These power plant concepts include 'Zero Emission' power plants and the 'FutureGen' H2 co-production facilities. The study is broken down into three phases. Phase 1 of this study consisted of utilizing advanced technologies that are expected to be available in the 'Vision 21' time frame such as mega scale fuel cell based hybrids. Phase 2 includes current state-of-the-art technologies and those expected to be deployed in the nearer term such as advanced gas turbines and high temperature membranes for separating gas species and advanced gasifier concepts. Phase 3 includes identification of gas turbine based cycles and engine configurations suitable to coal-based gasification applications and the conceptualization of the balance of plant technology, heat integration, and the bottoming cycle for analysis in a future study. Also included in Phase 3 is the task of acquiring/providing turbo-machinery in order to gather turbo-charger performance data that may be used to verify simulation models as well as establishing system design constraints. The results of these various investigations will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  4. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  5. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, the output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
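
    A compact sketch of the variance-based indices described above, using the SALib package (an assumption of convenience, not necessarily the tool used in the study) with hypothetical bounds on two Weibull parameters and a turbine availability factor; the energy model is a deliberately crude stand-in.

      import numpy as np
      from math import gamma
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["weibull_shape", "weibull_scale", "availability"],
          "bounds": [[1.5, 2.5], [6.0, 9.0], [0.90, 0.99]],
      }

      X = saltelli.sample(problem, 1024)      # Sobol'-sequence sampling

      # Crude proxy for lifetime energy production: mean wind speed from
      # the Weibull moments, cubed, derated by availability.
      v_mean = X[:, 1] * np.array([gamma(1 + 1 / k) for k in X[:, 0]])
      Y = X[:, 2] * v_mean ** 3

      Si = sobol.analyze(problem, Y)          # first-order and total effects
      for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
          print(f"{name:>14}: S1 = {s1:.2f}  ST = {st:.2f}")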

  6. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, ensuring the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  7. Advanced analysis and event reconstruction for the CTA Observatory

    NASA Astrophysics Data System (ADS)

    Becherini, Y.; Khélifi, B.; Pita, S.; Punch, M.; CTA Consortium

    2012-12-01

    The planned Cherenkov Telescope Array (CTA) is a future observatory for very-high-energy (VHE) gamma-ray astronomy composed of one site per hemisphere [1]. It aims at 10 times better sensitivity, better angular resolution, and wider energy coverage than current installations such as H.E.S.S., MAGIC and VERITAS. In order to achieve this level of performance, both the design of the telescopes and the analysis algorithms are being studied and optimized within the CTA Monte-Carlo working group. Here, we present ongoing work on the data analysis for both the event reconstruction (energy, direction) and gamma/hadron separation, carried out within the HAP (H.E.S.S. Analysis Package) software framework of the H.E.S.S. collaboration, for this initial study. The event reconstruction uses both Hillas-parameter-based algorithms and an improved version of the 3D-Model algorithm [2]. For the gamma/hadron discrimination, original and robust discriminant variables are used and treated with Boosted Decision Trees (BDTs) in the TMVA [3] (Toolkit for Multivariate Data Analysis) framework. With this advanced analysis, known as Paris-MVA [4], the sensitivity is improved by a factor of ~ 2 in the core range of CTA relative to the standard analyses. We present the algorithms used for the reconstruction and discrimination, together with the resulting performance characteristics, which are reported with good confidence since the method has already been successfully applied to H.E.S.S. data.

  8. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario for how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from its intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
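
    A hedged sketch of that workflow (made-up numbers, with scipy's weibull_min standing in for the actual tooling): fit a Weibull to historical failure times, then Monte Carlo a population's spare demand as the wear-out (shape) parameter is pushed upward.

      import numpy as np
      from scipy.stats import weibull_min

      rng = np.random.default_rng(3)

      # Hypothetical historical failure times (hours) for one ORU type.
      failures = weibull_min.rvs(1.4, scale=40_000, size=60,
                                 random_state=rng)
      shape, _, scale = weibull_min.fit(failures, floc=0)

      def p_sufficient(c, scale, spares=3, horizon=50_000, units=10,
                       n=5000):
          """P(at most `spares` of `units` fail within `horizon`)."""
          t = weibull_min.rvs(c, scale=scale, size=(n, units),
                              random_state=rng)
          return np.mean((t < horizon).sum(axis=1) <= spares)

      # Shift the wear-out characteristic and watch sufficiency move.
      for c in (shape, 2.0, 3.0, 5.0):
          print(f"shape {c:.2f}: P(sufficiency) = "
                f"{p_sufficient(c, scale):.3f}")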

  9. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
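
    The epsilon-pseudoeigenvalue concept can be probed numerically: wherever the smallest singular value of (zI - A) falls below epsilon, the point z is an epsilon-pseudoeigenvalue, and for non-normal operators this region extends far beyond the spectrum. The sketch below uses a toy non-normal matrix, not an actual discretized stability operator.

      import numpy as np

      n = 30                                   # strongly non-normal matrix:
      A = np.diag(np.full(n - 1, 2.0), k=1)    # all eigenvalues are zero,
      I = np.eye(n)                            # yet hugely sensitive

      xs = np.linspace(-2, 2, 81)
      ys = np.linspace(-2, 2, 81)
      smin = np.empty((ys.size, xs.size))
      for i, y in enumerate(ys):
          for j, x in enumerate(xs):
              z = x + 1j * y
              # epsilon-pseudospectrum = region where this is < epsilon
              smin[i, j] = np.linalg.svd(z * I - A,
                                         compute_uv=False)[-1]

      eps = 1e-6
      print(f"grid fraction inside the {eps:g}-pseudospectrum: "
            f"{np.mean(smin < eps):.2%}")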

  10. Advanced digital I&C systems in nuclear power plants: Risk-sensitivities to environmental stressors

    SciTech Connect

    Hassan, M.; Vesely, W.E.

    1996-06-01

    Microprocessor-based advanced digital systems are being used for upgrading analog instrumentation and control (I&C) systems in nuclear power plants (NPPs) in the United States. A concern with using such advanced systems for safety-related applications in NPPs is the limited experience with this equipment in these environments. In this study, we investigate the risk effects of environmental stressors by quantifying the plant's risk-sensitivities to them. The risk-sensitivities are changes in plant risk caused by the stressors, and are quantified by estimating their effects on I&C failure occurrences and the consequent increase in risk in terms of core damage frequency (CDF). We used available data, including military and NPP operating experience, on the effects of environmental stressors on the reliability of digital I&C equipment. The methods developed are applied to determine and compare risk-sensitivities to temperature, humidity, vibration, EMI (electromagnetic interference) from lightning, and smoke as stressors in an example plant using a PRA (Probabilistic Risk Assessment). Uncertainties in the estimates of the stressor effects on the equipment's reliability are expressed in terms of ranges for risk-sensitivities. The results show that environmental stressors can potentially cause a significant increase in I&C contributions to the CDF. Further, considerable variations can be expected in some stressor effects, depending on where the equipment is located.

  11. Plasma mRNA as liquid biopsy predicts chemo-sensitivity in advanced gastric cancer patients.

    PubMed

    Shen, Jie; Kong, Weiwei; Wu, Yuanna; Ren, Haozhen; Wei, Jia; Yang, Yang; Yang, Yan; Yu, Lixia; Guan, Wenxian; Liu, Baorui

    2017-01-01

    Predictive biomarker-based individualized chemotherapy can improve efficacy. However, for advanced patients it may be impossible to obtain tissues from an operation, and tissues from biopsy may not always be sufficient for gene detection. Thus, biomarkers from blood could be a non-invasive and useful tool providing real-time information during the course of treatment. To further understand the role of plasma mRNA in chemo-efficiency prediction, several mRNA expression levels were assessed in plasma and paired tumor tissues from 133 locally advanced gastric cancer patients (stage III), and mRNA levels were correlated with chemosensitivity to docetaxel, pemetrexed, platinum, and irinotecan. mRNA expression levels in 64 advanced gastric cancer patients (stage IV) were also examined (55 in the test group and 9 in the control), and chemotherapy in the test group was given according to the plasma gene detection. As a result, in the 133 patients with locally advanced gastric cancer (stage III), correlations were observed between the mRNA expression of plasma/tumor BRCA1 levels and docetaxel sensitivity (P<0.001), plasma/tumor TS and pemetrexed sensitivity (P<0.001), plasma/tumor BRCA1 and platinum sensitivity (plasma, P=0.016; tumor, P<0.001), and plasma/tumor TOPO1 and irinotecan sensitivity (plasma, P=0.015; tumor, P=0.011). Among the other 64 patients with advanced cancer (stage IV), the median OS of the test group was 15.5 months (95% CI=10.1 to 20.9 months) and the PFS was 9.1 months (95% CI=8.0 to 10.2 months), both significantly longer than the control (P=0.047 for OS, P=0.038 for PFS). The mortality risk was higher in the control than in patients treated according to the plasma gene detection (HR in the control=2.34, 95% CI=0.93 to 5.88, P=0.071). Plasma mRNA as a liquid biopsy could be an ideal resource for predicting chemo-sensitivity in gastric cancer.

  13. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

    Vertebroplasty is one of the newest surgical approaches for treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, and promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed from the CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabecular structure. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean trabecular width, area of the intertrabecular space, and trabecular area. Finally, the area of the intertrabecular space was selected as the parameter for estimating an optimal bone cement volume, and a strong linear relationship was found between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level was varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball was varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest was varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, and the linear equation Y = 0.001722 X - 2
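
    A toy version of the image step described above (synthetic data throughout): binarize a fake CT slice at a threshold, measure the intertrabecular (pore) area, and regress cement volume on that area; the coefficients emerge from the synthetic data and only mimic the form of the study's linear fit.

      import numpy as np

      rng = np.random.default_rng(5)

      def pore_area(img, threshold):
          """Pixel count of intertrabecular space below the threshold."""
          return int((img < threshold).sum())

      areas, volumes = [], []
      for p in np.linspace(0.4, 0.8, 9):            # synthetic 'patients'
          img = rng.uniform(0, 16, size=(64, 64))   # fake gray levels
          img[rng.random((64, 64)) < p] *= 0.5      # darken pore pixels
          areas.append(pore_area(img, threshold=10))
          volumes.append(0.0017 * areas[-1] - 2 + rng.normal(0, 0.1))

      slope, intercept = np.polyfit(areas, volumes, 1)
      r = np.corrcoef(areas, volumes)[0, 1]
      print(f"V = {slope:.5f} * A + {intercept:.2f}, r = {r:.3f}")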

  14. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges and probabilities as well as fuzzy expressions and fractal metrics. As a specific approach, work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study, which focuses on DEM characteristics, factors like land cover, meteorological recordings, and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  15. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally, but not exclusively, aimed at finite element modeled structures. Topics included are: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
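
    The first topic listed, finite difference step-size selection, balances truncation error against round-off error; the short sweep below (on an arbitrary smooth test function, not a structural response) makes the trade-off visible.

      import numpy as np

      f = np.sin                        # stand-in for a response function
      x, exact = 1.0, np.cos(1.0)       # exact derivative for comparison

      for h in (1e-1, 1e-3, 1e-5, 1e-8, 1e-11):
          central = (f(x + h) - f(x - h)) / (2 * h)
          print(f"h = {h:.0e}: error = {abs(central - exact):.2e}")
      # The error falls roughly as h**2, then rises again once
      # floating-point round-off dominates at very small h.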

  16. Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics

    SciTech Connect

    Shi, Tujin; Su, Dian; Liu, Tao; Tang, Keqi; Camp, David G.; Qian, Weijun; Smith, Richard D.

    2012-04-01

    Selected reaction monitoring (SRM), also known as multiple reaction monitoring (MRM), has emerged as a promising high-throughput targeted protein quantification technology for candidate biomarker verification and systems biology applications. A major bottleneck for current SRM technology, however, is insufficient sensitivity for, e.g., detecting low-abundance biomarkers likely present at the pg/mL to low ng/mL range in human blood plasma or serum, or extremely low-abundance signaling proteins in cells or tissues. Herein we review recent advances in methods and technologies, including front-end immunoaffinity depletion, fractionation, selective enrichment of target proteins/peptides or their posttranslational modifications (PTMs), as well as advances in MS instrumentation, which have significantly enhanced the overall sensitivity of SRM assays and enabled the detection of low-abundance proteins at the low to sub-ng/mL level in human blood plasma or serum. General perspectives on the potential of achieving sufficient sensitivity for the detection of pg/mL level proteins in plasma are also discussed.

  17. GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis

    NASA Astrophysics Data System (ADS)

    Kennedy, Christopher Brandon

    model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses with respect to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady-state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher-order response sensitivity profiles is a natural area for future research.

  18. Advanced techniques in current signature analysis

    NASA Astrophysics Data System (ADS)

    Smith, S. F.; Castleberry, K. N.

    1992-02-01

    In general, both ac and dc motors can be characterized as weakly nonlinear systems, in which both linear and nonlinear effects occur simultaneously. Fortunately, the nonlinearities are generally well behaved and understood and can be handled via several standard mathematical techniques already well developed in the systems modeling area; examples are piecewise linear approximations and Volterra series representations. Field measurements of numerous motors and motor-driven systems confirm the rather complex nature of motor current spectra and illustrate both linear and nonlinear effects (including line harmonics and modulation components). Although previous current signature analysis (CSA) work at Oak Ridge and other sites has principally focused on the modulation mechanisms and detection methods (AM, PM, and FM), more recent studies have been conducted on linear spectral components (those appearing in the electric current at their actual frequencies and not as modulation sidebands). For example, large axial-flow compressors (approximately 3300 hp) in the US gaseous diffusion uranium enrichment plants exhibit running-speed (approximately 20 Hz) and high-frequency vibrational information (greater than 1 kHz) in their motor current spectra. Several signal-processing techniques developed to facilitate analysis of these components, including specialized filtering schemes, are presented. Finally, concepts for the designs of advanced digitally based CSA units are offered, which should serve to foster the development of much more computationally capable 'smart' CSA instrumentation in the next several years.
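
    The linear-plus-modulation structure described above is easy to synthesize: the sketch below builds a 60 Hz supply current that is amplitude-modulated at a 20 Hz running speed and locates the line component and its sidebands in the FFT; all parameters are hypothetical.

      import numpy as np

      fs, T = 5000.0, 4.0                 # sample rate (Hz), duration (s)
      t = np.arange(0, T, 1 / fs)

      # 60 Hz line current, AM-modulated at 20 Hz (2% modulation depth).
      i_t = ((1 + 0.02 * np.cos(2 * np.pi * 20 * t))
             * np.cos(2 * np.pi * 60 * t))
      i_t += 0.001 * np.random.default_rng(2).normal(size=t.size)

      spec = np.abs(np.fft.rfft(i_t)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)

      # AM at f_m produces sidebands at f_line +/- f_m (40 and 80 Hz).
      for f0 in (40, 60, 80):
          k = np.argmin(np.abs(freqs - f0))
          print(f"{f0:>3} Hz component: {20 * np.log10(spec[k]):.1f} dB")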

  19. Advances in radiation biology: Relative radiation sensitivities of human organ systems. Volume 12

    SciTech Connect

    Lett, J.T.; Altman, K.I.; Ehmann, U.K.; Cox, A.B.

    1987-01-01

    This volume is a thematically focused issue of Advances in Radiation Biology. The topic surveyed is the relative radiosensitivity of human organ systems. Topics considered include the relative radiosensitivities of the thymus, spleen, and lymphohemopoietic systems; relative radiosensitivities of the small and large intestine; relative radiosensitivities of the oral cavity, larynx, pharynx, and esophagus; relative radiation sensitivity of the integumentary system; dose responses of the epidermal, microvascular, and dermal populations; relative radiosensitivity of the human lung; relative radiosensitivity of fetal tissues; and tolerance of the central and peripheral nervous system to therapeutic irradiation.

  20. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
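
    The breakdown that motivates LSS is easy to reproduce: for the Lorenz system, finite-difference estimates of the sensitivity of the long-time average of z to the parameter rho scatter widely instead of converging (the true value is close to 1). LSS itself is beyond a short sketch; the Python below only illustrates the problem it solves.

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, u, rho):
          x, y, z = u
          return [10 * (y - x), x * (rho - z) - y, x * y - 8 / 3 * z]

      def mean_z(rho, T=50.0, seed=0):
          u0 = np.random.default_rng(seed).normal(0, 1, 3) + [1, 1, 25]
          sol = solve_ivp(lorenz, (0, T), u0, args=(rho,),
                          dense_output=True, rtol=1e-8)
          t = np.linspace(T / 2, T, 2000)    # discard the transient
          return sol.sol(t)[2].mean()

      for seed in range(4):                  # chaotic noise dominates
          fd = mean_z(28.5, seed=seed) - mean_z(27.5, seed=seed)
          print(f"seed {seed}: d<z>/drho estimate = {fd:+.2f}")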

  1. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  2. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  3. Advanced Coal Wind Hybrid: Economic Analysis

    SciTech Connect

    Phadke, Amol; Goldman, Charles; Larson, Doug; Carr, Tom; Rath, Larry; Balash, Peter; Yih-Huei, Wan

    2008-11-28

    Growing concern over climate change is prompting new thinking about the technologies used to generate electricity. In the future, it is possible that new government policies on greenhouse gas emissions may favor electric generation technology options that release zero or low levels of carbon emissions. The Western U.S. has abundant wind and coal resources. In a world with carbon constraints, the future of coal for new electrical generation is likely to depend on the development and successful application of new clean coal technologies with near zero carbon emissions. This scoping study explores the economic and technical feasibility of combining wind farms with advanced coal generation facilities and operating them as a single generation complex in the Western US. The key questions examined are whether an advanced coal-wind hybrid (ACWH) facility provides sufficient advantages through improvements to the utilization of transmission lines and the capability to firm up variable wind generation for delivery to load centers to compete effectively with other supply-side alternatives in terms of project economics and emissions footprint. The study was conducted by an Analysis Team that consists of staff from the Lawrence Berkeley National Laboratory (LBNL), National Energy Technology Laboratory (NETL), National Renewable Energy Laboratory (NREL), and Western Interstate Energy Board (WIEB). We conducted a screening level analysis of the economic competitiveness and technical feasibility of ACWH generation options located in Wyoming that would supply electricity to load centers in California, Arizona or Nevada. Figure ES-1 is a simple stylized representation of the configuration of the ACWH options. The ACWH consists of a 3,000 MW coal gasification combined cycle power plant equipped with carbon capture and sequestration (G+CC+CCS plant), a fuel production or syngas storage facility, and a 1,500 MW wind plant. The ACWH project is connected to load centers by a 3,000 MW

  4. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eight-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wider variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  5. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  6. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  7. Pressure Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Bencic, T.; Sullivan, J. P.

    1999-01-01

    This article reviews new advances and applications of pressure sensitive paints in aerodynamic testing. Emphasis is placed on important technical aspects of pressure sensitive paint including instrumentation, data processing, and uncertainty analysis.

  8. Sensitivity Analysis of Situational Awareness Measures

    NASA Technical Reports Server (NTRS)

    Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    A great deal of effort has been invested in attempts to define situational awareness, and subsequently to measure this construct. However, relatively little work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA (SAGAT, objective performance measures, and SART) against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point at which it was frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that indisputably affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine not only the sensitivity within a measure but also that between measures. The SART questionnaire and the NASA-TLX, a measure of workload, were administered after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames

  9. Sensitivity of the Advanced LIGO detectors at the beginning of gravitational wave astronomy

    NASA Astrophysics Data System (ADS)

    Martynov, D. V.; Hall, E. D.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, R. A.; Anderson, S. B.; Arai, K.; Arain, M. A.; Aston, S. M.; Austin, L.; Ballmer, S. W.; Barbet, M.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Barton, M. A.; Bartos, I.; Batch, J. C.; Bell, A. S.; Belopolski, I.; Bergman, J.; Betzwieser, J.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Black, E.; Blair, C. D.; Bogan, C.; Bork, R.; Bridges, D. O.; Brooks, A. F.; Celerier, C.; Ciani, G.; Clara, F.; Cook, D.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Damjanic, M.; Dannenberg, R.; Danzmann, K.; Costa, C. F. Da Silva; Daw, E. J.; DeBra, D.; DeRosa, R. T.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fair, H.; Feldbaum, D.; Fisher, R. P.; Foley, S.; Frede, M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Galdi, V.; Giaime, J. A.; Giardina, K. D.; Gleason, J. R.; Goetz, R.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Grote, H.; Guido, C. J.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hammond, G.; Hanks, J.; Hanson, J.; Hardwick, T.; Harry, G. M.; Heefner, J.; Heintze, M. C.; Heptonstall, A. W.; Hoak, D.; Hough, J.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jones, R.; Kandhasamy, S.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kells, W.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kokeyama, K.; Korth, W. Z.; Kuehn, G.; Kwee, P.; Landry, M.; Lantz, B.; Le Roux, A.; Levine, B. M.; Lewis, J. B.; Lhuillier, V.; Lockerbie, N. A.; Lormand, M.; Lubinski, M. J.; Lundgren, A. P.; MacDonald, T.; MacInnis, M.; Macleod, D. M.; Mageswaran, M.; Mailand, K.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Massinger, T. J.; Matichard, F.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McIntyre, G.; McIver, J.; Merilh, E. L.; Meyer, M. S.; Meyers, P. M.; Miller, J.; Mittleman, R.; Moreno, G.; Mueller, C. L.; Mueller, G.; Mullavey, A.; Munch, J.; Nuttall, L. K.; Oberling, J.; O'Dell, J.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Osthelder, C.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Patrick, Z.; Pele, A.; Penn, S.; Phelps, M.; Pickenpack, M.; Pierro, V.; Pinto, I.; Poeld, J.; Principe, M.; Prokhorov, L.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. J.; Radkins, H.; Raffai, P.; Ramet, C. R.; Reed, C. M.; Reid, S.; Reitze, D. H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Sannibale, V.; Savage, R. L.; Schofield, R. M. S.; Schultz, B.; Schwinberg, P.; Sellers, D.; Sevigny, A.; Shaddock, D. A.; Shao, Z.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, J. R.; Smith, M. R.; Smith-Lefebvre, N. D.; Sorazu, B.; Staley, A.; Stein, A. J.; Stochino, A.; Strain, K. A.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vargas, M.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Waldman, S. J.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Welborn, T.; Weßels, P.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Zhang, L.; Zucker, M. E.; Zweizig, J.

    2016-06-01

    The Laser Interferometer Gravitational Wave Observatory (LIGO) consists of two widely separated 4 km laser interferometers designed to detect gravitational waves from distant astrophysical sources in the frequency range from 10 Hz to 10 kHz. The first observation run of the Advanced LIGO detectors started in September 2015 and ended in January 2016. A strain sensitivity of better than 10^-23/√Hz was achieved around 100 Hz. Understanding both the fundamental and the technical noise sources was critical for increasing the astrophysical strain sensitivity. The average distance at which coalescing binary black hole systems with individual masses of 30 M⊙ could be detected above a signal-to-noise ratio (SNR) of 8 was 1.3 Gpc, and the range for binary neutron star inspirals was about 75 Mpc. With respect to the initial detectors, the observable volume of the Universe increased by factors of 69 and 43, respectively. These improvements helped Advanced LIGO detect the gravitational wave signal from the binary black hole coalescence known as GW150914.

  10. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test, or method of Morris; Regional Sensitivity Analysis; and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
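
    The bootstrap convergence check can be sketched schematically: resample the rows of an existing (input, output) sample, recompute a sensitivity index each time, and inspect the spread of the resulting values and rankings; the model and the squared-correlation index below are deliberately simple stand-ins for the hydrological models and SA methods tested.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 2000

      X = rng.uniform(0, 1, (n, 3))           # three model parameters
      y = (5 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2]
           + rng.normal(0, 0.5, n))

      def index(Xs, ys):                      # cheap stand-in index
          return np.array([np.corrcoef(Xs[:, j], ys)[0, 1] ** 2
                           for j in range(Xs.shape[1])])

      boots = []
      for _ in range(500):                    # bootstrap resamples
          i = rng.integers(0, n, n)
          boots.append(index(X[i], y[i]))
      boots = np.array(boots)

      lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
      for j in range(3):
          print(f"x{j + 1}: index = {boots[:, j].mean():.3f} "
                f"95% CI [{lo[j]:.3f}, {hi[j]:.3f}]")
      # Ranking/screening has converged when these intervals separate.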

  11. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  12. Central sensitization and neuropathic features of ongoing pain in a rat model of advanced osteoarthritis

    PubMed Central

    Havelin, Joshua; Imbert, Ian; Cormier, Jennifer; Allen, Joshua; Porreca, Frank; King, Tamara

    2015-01-01

    Osteoarthritis (OA) pain is most commonly characterized by movement-triggered joint pain. However, in advanced disease, OA pain becomes persistent, ongoing and resistant to treatment with NSAIDs. The mechanisms underlying ongoing pain in advanced OA are poorly understood. We recently showed that intra-articular (i.a.) injection of monosodium iodoacetate (MIA) into the rat knee joint produces concentration-dependent outcomes. Thus, a low dose of i.a. MIA produces NSAID-sensitive weight asymmetry without evidence of ongoing pain while a high i.a. MIA dose produces weight asymmetry and NSAID-resistant ongoing pain. In the present studies, palpation of the ipsilateral hindlimb of rats treated 14 days previously with high, but not low, doses of i.a. MIA produced FOS expression in the spinal dorsal horn. Inactivation of descending pain facilitatory pathways by microinjection of lidocaine within the rostral ventromedial medulla (RVM) induced conditioned place preference (CPP) selectively in rats treated with the high dose of MIA. CPP to intra-articular lidocaine was blocked by pretreatment with duloxetine (30 mg/kg, i.p. at −30 min). These observations are consistent with the likelihood of a neuropathic component of OA that elicits ongoing, NSAID resistant pain and central sensitization that is mediated, in part, by descending modulatory mechanisms. This model provides a basis for exploration of underlying mechanisms promoting neuropathic components of OA pain and for the identification of mechanisms that may guide drug discovery for treatment of advanced OA pain without the need for joint replacement. PMID:26694132

  13. Sensitivity analysis and optimization of the nuclear fuel cycle

    SciTech Connect

    Passerini, S.; Kazimi, M. S.; Shwageraus, E.

    2012-07-01

    A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: capacity factors of the fuel cycle facilities, spent fuel cooling time, thermal reprocessing introduction date, and in-core and out-of-core TRU inventory requirements for recycling technology. An optimization scheme of the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single- and multi-variable and single- and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)

  14. Advanced Materials and Solids Analysis Research Core (AMSARC)

    EPA Science Inventory

    The Advanced Materials and Solids Analysis Research Core (AMSARC), centered at the U.S. Environmental Protection Agency's (EPA) Andrew W. Breidenbach Environmental Research Center in Cincinnati, Ohio, is the foundation for the Agency's solids and surfaces analysis capabilities. ...

  15. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for the computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes the response of univariately perturbed models economically, without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
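
    The computational trick, reusing the factorization of the nominal system inside a fixed-point reanalysis, can be illustrated on a generic dense linear system; a sketch under assumed matrix sizes and a hypothetical parameter dependence, not the paper's BEA formulation:

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Hypothetical stand-in for a discretized system K(p) u = f, with K
    # depending on one design parameter p.
    rng = np.random.default_rng(1)
    n = 50
    K = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned nominal matrix
    f = rng.standard_normal(n)
    dK_dp = 0.01 * rng.standard_normal((n, n))       # assumed derivative of K w.r.t. p

    lu, piv = lu_factor(K)        # factor once, at the nominal design
    u = lu_solve((lu, piv), f)    # nominal solution

    # Univariate perturbation / finite difference step with iterative reanalysis:
    dp = 1e-3
    dK = dp * dK_dp
    u_pert = u.copy()
    for _ in range(20):           # fixed-point iteration reusing the nominal factors
        u_pert = lu_solve((lu, piv), f - dK @ u_pert)

    du_dp = (u_pert - u) / dp     # finite-difference sensitivity, no new factorization
    ```

    The iteration converges whenever the perturbation is small relative to the nominal operator, which is exactly the univariate-perturbation regime the UPFD step works in.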

  16. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  17. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
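
    The step-size independent technique referred to here is the complex-variable (complex-step) derivative. A minimal sketch on a scalar analytic test function; the aero-structural application wraps the same idea around the coupled flow/structure solvers:

    ```python
    import numpy as np

    def f(x):
        # Analytic test response; any real-analytic function of the design works.
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x0, h = 1.5, 1e-200                   # the step can be made absurdly small
    dfdx = np.imag(f(x0 + 1j * h)) / h    # complex-step derivative
    fd = (f(x0 + 1e-8) - f(x0)) / 1e-8    # real finite difference, for comparison
    print(dfdx, fd)
    ```

    Because the derivative is read off the imaginary part rather than a difference of nearly equal numbers, there is no subtractive cancellation, so no step-size tuning is required.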

  18. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    SciTech Connect

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether the particle-phase transformation of semi-volatile SOA to non-volatile SOA is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance
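
    A sketch of the quasi-Monte Carlo sampling step using SciPy's scrambled Sobol' sequence; the seven dimensions are taken from the abstract, but the bounds are placeholders, not the study's actual parameter ranges:

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical multiplicative bounds for 7 tunable parameters.
    lower = np.full(7, 0.5)
    upper = np.full(7, 2.0)

    sampler = qmc.Sobol(d=7, scramble=True, seed=42)
    unit = sampler.random_base2(m=8)          # 2**8 = 256 low-discrepancy points
    params = qmc.scale(unit, lower, upper)    # map to the parameter bounds
    ```

    Each row of params then parameterizes one ensemble member (the study ran a 250-member ensemble).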

  19. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  20. Baseline Industry Analysis, Advance Ceramics Industry

    DTIC Science & Technology

    1993-04-01

    Commerce, Department of Defense, and the National Critical Technologies Panel. Advanced ceramics, which include ceramic matrix composites, are found in... ceramics and materials industry being identified as a National Critical Technology, Commerce Emerging Technology, and Defense Critical Technology. There is... total procurement cost in advanced systems, and as much as ten percent of the electronics portion of those weapons. Ceramic capacitors are almost as

  1. The GOES-R Advanced Baseline Imager: polarization sensitivity and potential impacts

    NASA Astrophysics Data System (ADS)

    Pearlman, Aaron J.; Cao, Changyong; Wu, Xiangqian

    2015-09-01

    In contrast to the National Oceanic and Atmospheric Administration's (NOAA's) current geostationary imagers for operational weather forecasting, the next generation imager, the Advanced Baseline Imager (ABI) aboard the Geostationary Operational Environmental Satellite R-Series (GOES-R), will have six reflective solar bands - five more than currently available. These bands will be used for applications such as aerosol retrievals, which are influenced by polarization effects. These effects are determined by two factors: the instrument polarization sensitivity and the polarization states of the observed scenes. The former is measured as part of the pre-launch testing program performed by the instrument vendor. We analyzed the results of the pre-launch polarization sensitivity measurements of the 0.47 μm and 0.64 μm channels and used them in conjunction with simulated scene polarization states to estimate potential on-orbit radiometric impacts. The pre-launch test setups involved illuminating the ABI with an integrating sphere through either one or two polarizers. The measurement with one (rotating) polarizer yields the degree of linear polarization of ABI, and the measurements using two polarizers (one rotating and one fixed) characterize the non-ideal properties of the polarizers. To estimate the radiometric performance impacts of the instrument polarization sensitivity, we simulated polarized scenes using a radiative transfer code and accounted for the instrument polarization sensitivity over its field of regard. The results show that the variation in the polarization impacts over the day and by region of the full disk can reach up to 3.2% for the 0.47 μm channel and 4.8% for the 0.64 μm channel. Geostationary imagers like ABI offer a unique opportunity to observe these impacts throughout the day, in contrast to low-Earth orbiters, which are limited to certain times of day. This work may enhance the ability to diagnose anomalies on-orbit.

  2. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  3. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...

  4. Selecting step sizes in sensitivity analysis by finite differences

    NASA Technical Reports Server (NTRS)

    Iott, J.; Haftka, R. T.; Adelman, H. M.

    1985-01-01

    This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives, with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several derivatives simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
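
    The idea behind such step-size selection is to balance the truncation error of the forward difference, which grows with the step h, against the condition error from finite-precision function values, which grows as h shrinks. A rough sketch of a near-optimum step, not the paper's full FD algorithm:

    ```python
    import numpy as np

    def near_optimal_step(f, x):
        # Balance truncation error (~ h*|f''|/2) against condition error
        # (~ 2*eps_a/h); the minimizer is h* = 2*sqrt(eps_a/|f''|).
        eps = np.finfo(float).eps
        h0 = eps ** 0.25 * max(abs(x), 1.0)                 # crude probe step
        f2 = (f(x + h0) - 2.0 * f(x) + f(x - h0)) / h0**2   # rough f'' estimate
        eps_a = eps * max(abs(f(x)), 1.0)                   # absolute function error
        return 2.0 * np.sqrt(eps_a / max(abs(f2), eps))

    x0 = 1.0
    h = near_optimal_step(np.sin, x0)
    print(h, (np.sin(x0 + h) - np.sin(x0)) / h, np.cos(x0))  # step, FD, exact
    ```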

  5. Sensitivity Analysis and Computation for Partial Differential Equations

    DTIC Science & Technology

    2008-03-14

    Example, Journal of Mathematical Analysis and Applications, to appear. [22] John R. Singler, Transition to Turbulence, Small Disturbances, and... Sensitivity Analysis II: The Navier-Stokes Equations, Journal of Mathematical Analysis and Applications, to appear. [23] A. M. Stuart and A. R. Humphries

  6. Sensitivity analysis for electromagnetic topology optimization problems

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Li, Wei; Li, Qing

    2010-06-01

    This paper presents a level set based method to design the metal shape in an electromagnetic field such that the induced current flow on the metal surface is minimized or maximized. We represent the interface between free space and the conducting material (solid phase) by the zero-level contour of a higher-dimensional level set function. Only the electrical component of the incident wave is considered in the current study, and the distribution of the induced current flow on the metallic surface is governed by the electric field integral equation (EFIE). By minimizing or maximizing a cost function of the current flow, its distribution can be controlled to some extent. This method opens a new avenue for many electromagnetic applications, such as antennas and metamaterials, whose performance or properties are dominated by their surface current flow. The sensitivity of the objective function to shape changes, an integral formulation involving the solutions of both the electric field integral equation and its adjoint equation, is obtained using a variational method and the shape derivative. The advantages of the level set model lie in its flexibility in handling complex topological changes and in facilitating the mathematical expression of the electromagnetic configuration. Moreover, the level set model makes the optimization an elegant evolution process during which the volume of the metallic component remains constant while the free space/metal interface gradually approaches its optimal position. The effectiveness of this method is demonstrated through a self-adjoint 2D topology optimization example.

  7. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    PubMed Central

    Curtis, Janelle M.R.

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along

  8. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along

  9. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation, regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
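
    The key property, one extra adjoint solve yielding sensitivities with respect to all parameters at once, can be shown on a generic discretized linear system. This frequency-domain stand-in is assumed for illustration only; the paper works with the time-domain FDTD equations:

    ```python
    import numpy as np

    # Discretized linear system A(p) u = b (stand-in for a field solve);
    # the response is J = c^T u. The adjoint gives dJ/dp_i for all i with
    # a single extra solve.
    rng = np.random.default_rng(3)
    n, n_params = 40, 10
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    b = rng.standard_normal(n)
    c = rng.standard_normal(n)
    dA_dp = [0.01 * rng.standard_normal((n, n)) for _ in range(n_params)]

    u = np.linalg.solve(A, b)       # forward solve
    lam = np.linalg.solve(A.T, c)   # one adjoint solve, independent of n_params

    # dJ/dp_i = -lam^T (dA/dp_i) u, for every parameter at once.
    grad = np.array([-(lam @ (dA @ u)) for dA in dA_dp])
    ```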

  10. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    PubMed Central

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2015-01-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  11. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    NASA Astrophysics Data System (ADS)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  12. Advance ultra sensitive multi-layered nano plasmonic devices for label free biosensing targeting immunodiagnostics

    NASA Astrophysics Data System (ADS)

    Sharma, Divya; Dwivedi, R. P.

    2016-09-01

    Rapid technological advances have pushed optical bio-sensing toward label-free and multiplexed detection exploiting surface plasmon polaritons, an approach that has become a commercial gold standard; such instruments, however, are bulky and difficult to scale up for high-throughput detection. The integration of plasmonic crystals with microfluidics on the bio-sensing frontier offers multi-level validation of results with the ease of real-time detection and imaging, and holds great promise for developing ultra-sensitive, fast, portable devices for point-of-care diagnostics. The paper describes a fast, low-cost approach to designing and simulating a label-free biosensor for immunodiagnostics using the open-source MEEP package and other software tools.

  13. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  14. Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds

    DTIC Science & Technology

    2009-01-01

    ...properties, such as log P, would aid in estimating a chemical's environmental fate and toxicology when applied to QSAR modeling. Granted, QSAR models, such... Strategic Environmental Research and Development Program, ERDC TR-09-3, January 2009: Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds

  15. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization.

    PubMed

    Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M

    2013-11-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization.

  16. On the sensitivity analysis of separated-loop MRS data

    NASA Astrophysics Data System (ADS)

    Behroozmand, A.; Auken, E.; Fiandaca, G.

    2013-12-01

    In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying a separate MRS receiver system from the transmitter system, compare the parameter determination of the separated-loop with the conventional coincident-loop MRS data. MRS has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content. The method works based on the physical principle of NMR during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop (typically 25 - 100 m in side length/diameter) deployed on the surface which typically acts as both a transmitter and a receiver, the so-called coincident-loop configuration. An alternating current is passed through the loop deployed and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop; a decaying NMR signal called Free Induction Decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record SNMR signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the sensitivity kernels of different separated-loop MRS soundings are studied and compared with

  17. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented here is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as the cross-sectional areas of beams or the thicknesses of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  18. A study of turbulent flow with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Dwyer, H. A.; Peterson, T.

    1980-07-01

    In this paper a new type of analysis is introduced that can be used in numerical fluid mechanics. The method is known as sensitivity analysis, and it has been widely used in the field of automatic control theory. Sensitivity analysis addresses in a systematic way the question of how the solution to an equation will change due to variations in the equation's parameters and boundary conditions. An important application is turbulent flow, where there exists large uncertainty in the models used for closure. In the present work the analysis is applied to the three-dimensional planetary boundary layer equations, and sensitivity equations are generated for various parameters in the turbulence model. The solution of these equations with the proper techniques leads to considerable insight into the flow field and its dependence on turbulence parameters. The analysis also allows for unique decompositions of the parameter dependence and is efficient.
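
    Sensitivity equations are obtained by differentiating the governing equations with respect to a model parameter and integrating the result alongside the primal solution. A minimal sketch for a scalar ODE with one uncertain coefficient standing in for a turbulence-model constant (hypothetical, far simpler than the planetary boundary layer equations):

    ```python
    from scipy.integrate import solve_ivp

    # Toy governing equation du/dt = -k*u + q, with k an uncertain coefficient.
    # Differentiating the equation w.r.t. k gives the sensitivity equation for
    # s = du/dk:   ds/dt = -k*s - u.
    def rhs(t, y, k, q):
        u, s = y
        return [-k * u + q, -k * s - u]

    k, q = 0.8, 1.0
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], args=(k, q))
    u_end, dudk_end = sol.y[:, -1]
    print(u_end, dudk_end)  # near steady state: u = q/k, du/dk = -q/k**2
    ```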

  19. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
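
    For the deterministic special case, Sobol' indices can be estimated with standard pick-and-freeze sampling; the plain Monte Carlo sketch below uses the Saltelli and Jansen estimators on a toy function and does not cover the paper's extension to inherent stochastic variability:

    ```python
    import numpy as np

    def sobol_indices(f, d, n, rng):
        # Pick-and-freeze estimates of first-order (S) and total (T) indices.
        A = rng.uniform(size=(n, d))
        B = rng.uniform(size=(n, d))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S, T = np.empty(d), np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # freeze all columns except the i-th
            fABi = f(ABi)
            S[i] = np.mean(fB * (fABi - fA)) / var        # Saltelli estimator
            T[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen estimator
        return S, T

    # Toy stand-in for the simulator output.
    f = lambda X: np.sin(2 * np.pi * X[:, 0]) + 5 * X[:, 1] ** 2 + 0.1 * X[:, 2]
    S, T = sobol_indices(f, d=3, n=20000, rng=np.random.default_rng(0))
    print(S, T)
    ```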

  20. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite

  1. Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.

    1994-01-01

    During the first year of the project, we developed a theoretical analysis, and wrote a computer code based on it, to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparing the results obtained by 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that, using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important

  2. Sensitivity analysis of small circular cylinders as wake control

    NASA Astrophysics Data System (ADS)

    Meneghini, Julio; Patino, Gustavo; Gioria, Rafael

    2016-11-01

    We apply sensitivity analysis with respect to a steady external force for controlling vortex shedding from a circular cylinder using active and passive small control cylinders. We evaluate the changes produced by the device on the flow near the primary instability, the transition to a wake. By means of sensitivity analysis we numerically predict the effective regions in which to place the control devices. The quantitative effect of the hydrodynamic forces produced by the control devices is also obtained by a sensitivity analysis supporting the prediction of the minimum rotation rate. These results are extrapolated to higher Reynolds numbers. The analysis also provided the positions of combined passive control cylinders that suppress the wake; these particular positions for the devices are adequate to suppress the wake unsteadiness. In both cases the results agree very well with previously published experimental cases of control devices.

  3. Proposed neutron activation analysis facilities in the Advanced Neutron Source

    SciTech Connect

    Robinson, L.; Dyer, F.F.; Emery, J.F.

    1990-01-01

    A number of analytical chemistry experimental facilities are being proposed for the Advanced Neutron Source. Experimental capabilities will include gamma-ray analysis and neutron depth profiling. This paper describes the various systems proposed and some of their important characteristics.

  4. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M&S environments and infrastructure.

  5. Advanced Fingerprint Analysis Project Fingerprint Constituents

    SciTech Connect

    GM Mong; CE Petersen; TRW Clauss

    1999-10-29

    The work described in this report was focused on generating fundamental data on fingerprint components which will be used to develop advanced forensic techniques to enhance fluorescent detection, and visualization of latent fingerprints. Chemical components of sweat gland secretions are well documented in the medical literature and many chemical techniques are available to develop latent prints, but there have been no systematic forensic studies of fingerprint sweat components or of the chemical and physical changes these substances undergo over time.

  6. Advanced Trending Analysis/EDS Data Program.

    DTIC Science & Technology

    1982-01-01

    Fault Detection and Isolation (TEFDI) Program, SCT was to use the Advanced Trend... detailed discussion of the algorithm and its underlying theory, the reader is directed to SCT's Turbine Engine Fault Detection and Isolation (TEFDI) Program... SCT's Turbine Engine Fault Detection and Isolation (TEFDI) Program Final Report, scheduled for release in early 1982. 2. DISCUSSION OF RESULTS -

  7. Advanced nuclear rocket engine mission analysis

    SciTech Connect

    Ramsthaler, J.; Farbman, G.; Sulmeisters, T.; Buden, D.; Harris, P.

    1987-12-01

    The use of a derivative of the NERVA engine developed from 1955 to 1973 was evaluated for potential application to Air Force orbital transfer and maneuvering missions in the time period 1995 to 2020. The NERVA stage was found to have lower life cycle costs (LCC) than an advanced chemical stage for performing low earth orbit (LEO) to geosynchronous orbit (GEO) missions at any level of activity greater than three missions per year. It had lower life cycle costs than a high performance nuclear electric engine at any level of LEO to GEO mission activity. An examination of all unmanned orbital transfer and maneuvering missions from the Space Transportation Architecture Study (STAS III-3) indicated a LCC advantage for the NERVA stage over the advanced chemical stage of fifteen million dollars. The cost advantage accrued from both the orbital transfer and the maneuvering missions. Parametric analyses showed that the specific impulse of the NERVA stage and the cost of delivering material to low earth orbit were the most significant factors in the LCC advantage over the chemical stage. Lower development costs and a higher thrust gave the NERVA engine an LCC advantage over the nuclear electric stage. An examination of technical data from the Rover/NERVA program indicated that development of the NERVA stage has low technical risk and the potential for high reliability and safe operation. The data indicated the NERVA engine had great flexibility, permitting a single stage to perform all Air Force missions.

  8. Image analysis in medical imaging: recent advances in selected examples.

    PubMed

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments.

  9. A comprehensive sensitivity analysis of central-loop MRS data

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad; Auken, Esben; Dalgaard, Esben; Rejkjaer, Simon

    2014-05-01

    In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying a separate MRS receiver system from the transmitter system, compare the parameter determination of the central-loop with the conventional coincident-loop MRS data. MRS, also called surface NMR, has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content and, through empirical relations, is linked to hydraulic properties of the subsurface such as hydraulic conductivity. The method works based on the physical principle of NMR during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop deployed on the surface which typically acts as both a transmitter and a receiver, the so-called coincident-loop configuration. An alternating current is passed through the loop deployed and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop; a decaying NMR signal called Free Induction Decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record MRS signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the

  10. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. It is called partial because adjustments are made for the linear effects of all the other input values in the calculation of the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
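
    The PRCC construction itself, rank-transform everything, remove the linear (in ranks) effect of the other inputs, and correlate the residuals, is compact enough to sketch; the condition counts and output below are synthetic stand-ins, not IMM data:

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def prcc(X, y):
        # Partial rank correlation of each input column of X with output y.
        Xr = np.column_stack([rankdata(c) for c in X.T])
        yr = rankdata(y)
        n, d = Xr.shape
        coeffs = np.empty(d)
        for i in range(d):
            # Remove the linear (in ranks) effect of all the other inputs.
            Z = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
            rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
            ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
            coeffs[i] = np.corrcoef(rx, ry)[0, 1]
        return coeffs

    rng = np.random.default_rng(1)
    X = rng.poisson(lam=[3.0, 1.0, 0.5], size=(200, 3))  # hypothetical condition counts
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=200)  # stand-in for QTL
    print(prcc(X, y))
    ```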

  11. Advanced Cd(II) complexes as high efficiency co-sensitizers for enhanced dye-sensitized solar cell performance.

    PubMed

    Gao, Song; Fan, Rui Qing; Wang, Xin Ming; Qiang, Liang Sheng; Wei, Li Guo; Wang, Ping; Yang, Yu Lin; Wang, Yu Lei

    2015-11-07

    This work reports on two new complexes with the general formulae [Cd3(IBA)3(Cl)2(HCOO)(H2O)]n (1) and {[Cd1.5(IBA)3(H2O)6]·3.5H2O}n (2), which can be synthesized by the reaction of Cd(II) with the rigid linear ligand 4-HIBA containing imidazolyl and carboxylate functional groups [4-HIBA = 4-(1H-imidazol-1-yl)benzoic acid]. Single-crystal X-ray diffraction analyses indicate that complex 1 is a 2D "wave-like" layer structure constructed from trinuclear units, while complex 2 is a mononuclear structure. Surprisingly, both complexes 1 and 2 assemble into a 3D supramolecular network via intermolecular hydrogen bonding interactions. Moreover, owing to their strong UV-visible absorption, 1 and 2 can be employed as co-sensitizers in combination with N719 to enhance dye-sensitized solar cell (DSSC) performance. Both complexes compensate for the weak absorption of the ruthenium complex N719 in the ultraviolet and blue-violet region, and the charge collection efficiency is also improved when 1 and 2 are used as co-sensitizers, both of which favor enhanced performance. The DSSC devices using the co-sensitizers 1/N719 and 2/N719 show overall conversion efficiencies of 8.27% and 7.73%, with short circuit current densities of 17.48 mA cm(-2) and 17.39 mA cm(-2), and open circuit voltages of 0.75 V and 0.74 V, respectively. The overall conversion efficiencies are 27.23% and 18.92% higher than that of a device solely sensitized by N719 (6.50%). Consequently, the prepared complexes are high-efficiency co-sensitizers for enhancing the performance of N719-sensitized solar cells.

  12. Advanced Software Methods for Physics Analysis

    NASA Astrophysics Data System (ADS)

    Lista, L.

    2006-01-01

    Unprecedented data analysis complexity is experienced in modern High Energy Physics experiments. The complexity arises from the growing size of recorded data samples, the large number of data analyses performed by different users in each single experiment, and the level of complexity of each single analysis. For this reason, the requirements on software for data analysis impose a very high level of reliability. We present two concrete examples: the first from BaBar's experience with the migration to a new Analysis Model, with the definition of a new model for the Event Data Store; the second a toolkit for multivariate statistical and parametric Monte Carlo analysis developed using generic programming.

  13. Advanced tracking systems design and analysis

    NASA Technical Reports Server (NTRS)

    Potash, R.; Floyd, L.; Jacobsen, A.; Cunningham, K.; Kapoor, A.; Kwadrat, C.; Radel, J.; Mccarthy, J.

    1989-01-01

    The results of an assessment of several types of high-accuracy tracking systems proposed to track the spacecraft in the National Aeronautics and Space Administration (NASA) Advanced Tracking and Data Relay Satellite System (ATDRSS) are summarized. Tracking systems based on the use of interferometry and ranging are investigated. For each system, the top-level system design and operations concept are provided. A comparative system assessment is presented in terms of orbit determination performance, ATDRSS impacts, life-cycle cost, and technological risk.

  14. Advanced surface design for logistics analysis

    NASA Astrophysics Data System (ADS)

    Brown, Tim R.; Hansen, Scott D.

    The development of anthropometric arm/hand and tool models and their manipulation in a large system model for maintenance simulation are discussed. The use of Advanced Surface Design and s-fig technology in anthropometrics, together with three-dimensional graphics simulation tools, is found to achieve a good balance between model manipulation speed and model accuracy. The present second-generation models are shown to be twice as fast to manipulate as the first-generation b-surf models, to be easier to manipulate into various configurations, and to more closely approximate human contours.

  15. Advances in the analysis of iminocyclitols: Methods, sources and bioavailability.

    PubMed

    Amézqueta, Susana; Torres, Josep Lluís

    2016-05-01

    Iminocyclitols are chemically and metabolically stable, naturally occurring sugar mimetics. Their biological activities make them interesting and extremely promising as both drug leads and functional food ingredients. The first iminocyclitols were discovered using preparative isolation and purification methods followed by chemical characterization using nuclear magnetic resonance spectroscopy. In addition to this classical approach, gas and liquid chromatography coupled to mass spectrometry are increasingly used; they are highly sensitive techniques capable of detecting minute amounts of analytes in a broad spectrum of sources after only minimal sample preparation. These techniques have been applied to identify new iminocyclitols in plants, microorganisms and synthetic mixtures. The separation of iminocyclitol mixtures by chromatography is particularly difficult however, as the most commonly used matrices have very low selectivity for these highly hydrophilic structurally similar molecules. This review critically summarizes recent advances in the analysis of iminocyclitols from plant sources and findings regarding their quantification in dietary supplements and foodstuffs, as well as in biological fluids and organs, from bioavailability studies.

  16. Automation of primal and sensitivity analysis of transient coupled problems

    NASA Astrophysics Data System (ADS)

    Korelc, Jože

    2009-10-01

    The paper describes a hybrid symbolic-numeric approach to the automation of primal and sensitivity analysis of computational models formulated and solved by the finite element method. The necessary apparatus for the automation of steady-state, steady-state coupled, transient and transient coupled problems is introduced as a combination of a symbolic system, an automatic differentiation (AD) technique and automatic code generation. For this purpose the paper extends the classical formulation of AD with additional operators necessary for a highly abstract description of primal and sensitivity analysis of typical computational models. An appropriate abstract description for the fully implicit primal and sensitivity analysis of hyperelastic and elasto-plastic problems, and a symbolic input for the generation of the necessary user subroutines for a two-dimensional hyperelastic finite element, are presented at the end.
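    As background for the AD operators described above, the core idea of forward-mode automatic differentiation can be shown with a minimal dual-number sketch. This is an illustrative toy under stated assumptions, not the paper's symbolic system: the residual function is hypothetical, and real implementations generate code rather than overload operators at run time.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative only: the paper combines a symbolic system with extended
# AD operators and automatic code generation, not this toy class.

class Dual:
    """Carries a value and its derivative exactly through arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def residual(u, p):
    # Hypothetical scalar residual R(u, p), standing in for a finite
    # element residual that depends on state u and parameter p.
    return u * u + p * u + 3.0

p = Dual(1.0, 1.0)   # seed dp/dp = 1
u = Dual(2.0, 0.0)   # state held fixed for a partial derivative
R = residual(u, p)
print(R.val, R.der)  # 9.0 and the exact dR/dp = u = 2.0
```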

  17. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so the calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
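    As a hedged illustration of how such tables can be generated, the sketch below uses the sample size formulation often attributed to Buderer (1996), which sizes a study so that sensitivity (or specificity) is estimated to within a desired precision d. Whether PASS uses exactly this form is an assumption here, and all input values are illustrative.

```python
# Sample size for estimating sensitivity/specificity to precision d,
# using the formulation often attributed to Buderer (1996). Illustrative
# inputs; not reproduced from the review's tables.
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)               # 1.96 for alpha = 0.05
    n_cases = z**2 * se * (1 - se) / d**2     # diseased subjects needed
    return n_cases / prevalence               # total subjects to screen

def n_for_specificity(sp, d, prevalence, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_controls = z**2 * sp * (1 - sp) / d**2  # non-diseased subjects needed
    return n_controls / (1 - prevalence)

print(round(n_for_sensitivity(0.90, 0.05, 0.20)))  # about 691
print(round(n_for_specificity(0.85, 0.05, 0.20)))  # about 245
```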

  18. Sensitivity analysis for missing data in regulatory submissions.

    PubMed

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions follow rather different lines from what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper examines previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It also discusses, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  19. New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi

    2012-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second, the Least Squares Sensitivity method, finds a ``shadow trajectory'' in phase space for which perturbations do not grow exponentially; it is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
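    The difficulty that motivates both methods is easy to reproduce. The sketch below is a demonstration of the problem rather than of the talk's methods: it estimates the sensitivity of the long-time-averaged Lorenz coordinate <z> to the parameter rho by plain finite differences. The estimate lands near the known value of roughly 1, but it fluctuates with the averaging time and step because nearby trajectories diverge exponentially.

```python
# Finite-difference sensitivity of time-averaged z in the Lorenz system
# with respect to rho: a demonstration of why naive methods struggle for
# chaotic flows, not an implementation of the talk's methods.
import numpy as np

def lorenz_mean_z(rho, T=1000.0, dt=0.005, t_spinup=50.0,
                  sigma=10.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 20.0
    zsum, n = 0.0, 0
    for i in range(int(T / dt)):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz  # forward Euler
        if i * dt > t_spinup:        # discard the initial transient
            zsum, n = zsum + z, n + 1
    return zsum / n

drho = 1.0
grad = (lorenz_mean_z(28.0 + drho) - lorenz_mean_z(28.0)) / drho
print(grad)  # near 1, but converges very slowly as T grows
```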

  20. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, usually addressed with homogenized models that depend on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distributions of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. The tests show that correlation has a substantial impact on the results of sensitivity analysis, and the effect of correlation strength among input variables on the sensitivity analysis is also assessed.

  1. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations of hive population trajectories are performed, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios against control simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate that queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on the timescale. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
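    For readers who want to try this style of analysis, the sketch below uses the open-source SALib package with a hypothetical stand-in response; the package choice and the toy model are assumptions of this illustration, since the study does not specify its tooling and VarroaPop is not available here as a Python function.

```python
# Variance-based Sobol' sensitivity analysis with SALib, using a
# hypothetical stand-in for a VarroaPop run (assumption: SALib tooling).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_ld50"],
    "bounds": [[1.0, 5.0], [4.0, 16.0], [0.01, 1.0]],
}

def colony_size(x):
    # Toy response surface standing in for a daily colony simulation.
    q, f, ld50 = x
    return 1000.0 * q * np.log(f) - 300.0 / ld50

X = saltelli.sample(problem, 1024)          # N*(2D+2) model evaluations
Y = np.apply_along_axis(colony_size, 1, X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])  # first-order indices
print(Si["S2"])  # second-order (interaction) indices
print(Si["ST"])  # total-order indices
```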

  2. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems whose multiple, often conflicting, objectives arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC for the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
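    The Morris screening step used here admits a compact implementation: it averages "elementary effects" obtained by perturbing one input at a time along random trajectories. The following plain-numpy sketch uses a hypothetical four-parameter response in place of MOBIDIC; a large mu* marks an influential input, while a large sigma flags nonlinearity or interactions.

```python
# Morris elementary-effects screening in plain numpy. The model below is
# a hypothetical stand-in for a hydrologic simulation with inputs scaled
# to [0, 1]; it is not MOBIDIC.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    return 3.0 * x[0] + x[1] ** 2 + 0.5 * x[0] * x[2] + 0.1 * x[3]

D, r, delta = 4, 50, 0.5
effects = np.zeros((r, D))
for t in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=D)  # leave room for +delta
    y0 = model(x)
    for j in rng.permutation(D):               # one-at-a-time steps
        x[j] += delta
        y1 = model(x)
        effects[t, j] = (y1 - y0) / delta
        y0 = y1

mu_star = np.abs(effects).mean(axis=0)  # mean absolute elementary effect
sigma = effects.std(axis=0)             # spread: nonlinearity/interaction
print(mu_star, sigma)
```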

  3. Sensitivity analysis of dynamic biological systems with time-delays

    PubMed Central

    2010-01-01

    Background Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay; these systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Conclusions By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex

  4. Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures

    DTIC Science & Technology

    2005-03-01

    Final report on sensitivity analysis for dynamic failure and damage in metallic structures, prepared for the Office of Naval Research, 800 North Quincy Street, Arlington, for the period ending 3/31/05. The surviving abstract fragments include sensitivities computed with respect to the nominal alloy composition at the center of the weld surface (Point 6 of Figure 7); the remainder of the scanned text (figure axes and report form fields) is illegible.

  5. Sensitivity analysis of the fission gas behavior model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  6. Preliminary sensitivity analysis of the Devonian shale in Ohio

    SciTech Connect

    Covatch, G.L.

    1985-06-01

    A preliminary sensitivity analysis of gas reserves in Devonian shale in Ohio was made on the six partitioned areas, based on a payout time of 3 years. Data sets were obtained from Lewin and Associates for the six partitioned areas in Ohio and used as a base case for the METC sensitivity analysis. A total of five different well stimulation techniques were evaluated in both the METC and Lewin studies. The five techniques evaluated were borehole shooting, a small radial stimulation, a large radial stimulation, a small vertical fracture, and a large vertical fracture.

  7. Stable locality sensitive discriminant analysis for image recognition.

    PubMed

    Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang

    2014-06-01

    Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in an unstable representation of the intra-class geometrical structure and in unsatisfactory algorithm performance. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it into the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach.

  8. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  9. Advanced assessment of the physicochemical characteristics of Remicade® and Inflectra® by sensitive LC/MS techniques.

    PubMed

    Fang, Jing; Doneanu, Catalin; Alley, William R; Yu, Ying Qing; Beck, Alain; Chen, Weibin

    2016-01-01

    In this study, we demonstrate the utility of ultra-performance liquid chromatography coupled to mass spectrometry (MS) and ion-mobility spectrometry (IMS) to characterize and compare reference and biosimilar monoclonal antibodies (mAbs) at an advanced level. Specifically, we focus on infliximab and compared the glycan profiles, higher order structures, and their host cell proteins (HCPs) of the reference and biosimilar products, which have the brand names Remicade® and Inflectra®, respectively. Overall, the biosimilar attributes mirrored those of the reference product to a very high degree. The glycan profiling analysis demonstrated a high degree of similarity, especially among the higher abundance glycans. Some differences were observed for the lower abundance glycans. Glycans terminated with N-glycolylneuraminic acid were generally observed to be at higher normalized abundance levels on the biosimilar mAb, while those possessing α-linked galactose pairs were more often expressed at higher levels on the reference molecule. Hydrogen deuterium exchange (HDX) analyses further confirmed the higher-order similarity of the 2 molecules. These results demonstrated only very slight differences between the 2 products, which, interestingly, seemed to be in the area where the N-linked glycans reside. The HCP analysis by a 2D-UPLC IMS-MS approach revealed that the same 2 HCPs were present in both mAb samples. Our ability to perform these types of analyses and acquire insightful data for biosimilarity assessment is based upon our highly sensitive UPLC MS and IMS methods.

  10. Stochastic averaging and sensitivity analysis for two scale reaction networks

    NASA Astrophysics Data System (ADS)

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.

    2016-02-01

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.

  11. Efficient sensitivity analysis method for chaotic dynamical systems

    SciTech Connect

    Liao, Haitao

    2016-05-15

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, forming the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves convergence behavior and reduces computational expense. Numerical experiments on a set of problems selected from the literature illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  12. Recent Advances in Anthocyanin Analysis and Characterization

    PubMed Central

    Welch, Cara R.; Wu, Qingli; Simon, James E.

    2009-01-01

    Anthocyanins are a class of polyphenols responsible for the orange, red, purple and blue colors of many fruits, vegetables, grains, flowers and other plants. Consumption of anthocyanins has been linked as protective agents against many chronic diseases and possesses strong antioxidant properties leading to a variety of health benefits. In this review, we examine the advances in the chemical profiling of natural anthocyanins in plant and biological matrices using various chromatographic separations (HPLC and CE) coupled with different detection systems (UV, MS and NMR). An overview of anthocyanin chemistry, prevalence in plants, biosynthesis and metabolism, bioactivities and health properties, sample preparation and phytochemical investigations are discussed while the major focus examines the comparative advantages and disadvantages of each analytical technique. PMID:19946465

  13. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.

  14. Analysis of an advanced technology subsonic turbofan incorporating revolutionary materials

    NASA Technical Reports Server (NTRS)

    Knip, Gerald, Jr.

    1987-01-01

    Successful implementation of revolutionary composite materials in an advanced turbofan offers the possibility of further improvements in engine performance and thrust-to-weight ratio relative to current metallic materials. The present analysis determines the approximate engine cycle and configuration for an early 21st-century subsonic turbofan incorporating all-composite materials. The advanced engine is evaluated relative to a current-technology baseline engine in terms of its potential fuel savings for an intercontinental quadjet having a design range of 5500 nmi and a payload of 500 passengers. The resultant near-optimum, uncooled, two-spool advanced engine has an overall pressure ratio of 87, a bypass ratio of 18, a geared fan, and a turbine rotor inlet temperature of 3085 R. These improvements result in a 33-percent fuel saving for the specified mission. Various advanced composite materials are used throughout the engine; for example, advanced polymer composite materials are used for the fan and the low-pressure compressor (LPC).

  15. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
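    Sensitivity indices of this kind are usually the normalized forward sensitivity indices Y_p = (dR0/dp)(p/R0). A short symbolic sketch follows, using a generic SIR-style R0 as a stand-in, since the paper's five-compartment expression is not reproduced here.

```python
# Normalized forward sensitivity indices Y_p = (dR0/dp) * p / R0,
# computed symbolically. The R0 below is a generic illustration, not the
# five-compartment Lassa model of the paper.
import sympy as sp

beta, gamma, mu, Lam = sp.symbols("beta gamma mu Lambda", positive=True)
R0 = beta * Lam / (mu * (gamma + mu))   # hypothetical R0

for p in (beta, gamma, mu, Lam):
    index = sp.simplify(sp.diff(R0, p) * p / R0)
    print(p, index)
# beta and Lambda give +1 (R0 is proportional to them);
# gamma gives -gamma/(gamma + mu); mu gives -1 - mu/(gamma + mu).
```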

  16. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
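    The importance-sampling idea at the heart of AIS can be shown in a few lines. The sketch below uses one fixed shift of the sampling density toward a hypothetical failure surface; the paper's method instead adapts the sampling domain incrementally, so this is only the non-adaptive core of the technique.

```python
# Importance-sampling estimate of a failure probability P[g(X) < 0] with
# the sampling density recentred near the failure region. Single-step
# illustration of the idea behind AIS, not the adaptive algorithm itself.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def g(x):
    # Hypothetical limit state: failure when x1 + x2 > 5 (i.e., g < 0).
    return 5.0 - x[..., 0] - x[..., 1]

n = 20000
center = np.array([2.5, 2.5])            # shift toward the failure surface
x = rng.normal(loc=center, size=(n, 2))  # sampling density q
# Weights w = p(x)/q(x) for standard-normal p and shifted-normal q.
logw = (norm.logpdf(x).sum(axis=1)
        - norm.logpdf(x, loc=center).sum(axis=1))
pf = np.mean((g(x) < 0) * np.exp(logw))
print(pf)  # close to the exact 1 - Phi(5/sqrt(2)) ~ 2.0e-4
```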

  17. Sensitivity analysis applied to stalled airfoil wake and steady control

    NASA Astrophysics Data System (ADS)

    Patino, Gustavo; Gioria, Rafael; Meneghini, Julio

    2014-11-01

    The sensitivity of an eigenvalue to base flow modifications induced by an external force is applied to the global unstable modes associated with the onset of vortex shedding in the wake of a stalled airfoil. In this work, the flow regime is close to the first instability of the system, and its associated eigenvalue/eigenmode is determined. The sensitivity analysis to a general pointwise external force establishes the regions where control devices must be placed in order to stabilize the global modes. Different types of steady control devices, passive and active, are used in the regions predicted by the sensitivity analysis to check the suppression of vortex shedding, i.e., that the primary instability bifurcation is delayed. The new eigenvalue, modified by the action of the device, is also calculated. Finally, the spectral finite element method is employed to determine flow characteristics before and after the bifurcation in order to cross-check the results.

  18. Uncertainty and sensitivity analysis and its applications in OCD measurements

    NASA Astrophysics Data System (ADS)

    Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio

    2009-03-01

    This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective shortcut for optimizing OCD models. By including real system noise in the model, an accurate method for predicting measurement uncertainties is demonstrated. Assessing, at an early stage, the uncertainties, sensitivities and correlations of the parameters to be measured guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two different cases: (1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and (2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can be easily selected to achieve the best OCD model performance.

  19. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
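    A minimal sketch of the "blurring" step follows, assuming a simple `value +/- tolerance` token syntax; the actual tool's grammar and its sampling distributions are not specified here, and uniform sampling is an assumption. Because the pattern is matched anywhere in the text, the approach is independent of any particular input file format, which is the point of the method.

```python
# Sketch: find "value +/- tolerance" patterns anywhere in a text input
# deck and emit Monte Carlo perturbed copies. Uniform sampling and the
# regex syntax are assumptions of this illustration.
import random
import re

TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

def blur(text, rng):
    def repl(match):
        nominal, tol = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(nominal - tol, nominal + tol))
    return TOL.sub(repl, text)

rng = random.Random(42)
deck = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.80 +/- 0.05\n"
for _ in range(3):   # three Monte Carlo realizations of the input deck
    print(blur(deck, rng))
```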

  20. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  1. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
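    The linear case can be stated compactly: for a linear model with statistically independent inputs, the first-order Sobol' indices are exactly the normalized terms of the law of propagation of uncertainties, and all interaction indices vanish, which is why variance-based analysis adds nothing new there.

```latex
Y = c_0 + \sum_i c_i X_i , \qquad
u^2(y) = \sum_i c_i^2\, u^2(x_i)
\quad\Longrightarrow\quad
S_i = \frac{\operatorname{Var}\big[\mathbb{E}(Y \mid X_i)\big]}{\operatorname{Var}(Y)}
    = \frac{c_i^2\, u^2(x_i)}{u^2(y)} , \qquad \sum_i S_i = 1 .
```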

  2. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    ERIC Educational Resources Information Center

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  3. Sensitivity analysis of the Ohio phosphorus risk index

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  4. Advances in microfluidics for environmental analysis.

    PubMed

    Jokerst, Jana C; Emory, Jason M; Henry, Charles S

    2012-01-07

    During the past few years, a growing number of groups have recognized the utility of microfluidic devices for environmental analysis. Microfluidic devices offer a number of advantages and in many respects are ideally suited to environmental analyses. Challenges faced in environmental monitoring, including the ability to handle complex and highly variable sample matrices, lead to continued growth and research. Additionally, the need to operate for days to months in the field requires further development of robust, integrated microfluidic systems. This review examines recently published literature on the applications of microfluidic systems for environmental analysis and provides insight in the future direction of the field.

  5. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.; ,

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.

  6. Advanced Durability Analysis. Volume 1. Analytical Methods

    DTIC Science & Technology

    1987-07-31

    ...for microstructural behavior. This approach for representing the IFQ, when properly used, can provide reasonable durability analysis results for ... equivalent initial flaw size distribution (EIFSD) function. Engineering principles rather than mechanistic-based theories for microstructural behavior are ... accurate EIFS distribution and a service crack growth behavior. The determinations of EIFS distribution have been described in detail previously. In this

  7. Modeling and analysis of advanced binary cycles

    SciTech Connect

    Gawlik, K.

    1997-12-31

    A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model, followed by an economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied, spanning the range of 265°F to 375°F. A variety of isobutane- and propane-based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.

  8. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis research on hydraulic drive units is not accurate enough and has limited reference value, because the mathematical models used are relatively simple, changes of the load and of the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. Comparison of the experimental and simulated step-response curves under different constant loads indicates that the developed nonlinear mathematical model is adequate. The sensitivity function time-history curves of seventeen parameters are then obtained from the state vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown in histograms under different working conditions, and their change rules are analyzed. Then the sensitivity
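    The sensitivity-equation machinery described above can be illustrated on a much simpler plant: for x' = f(x, p), the sensitivity S = dx/dp obeys S' = (df/dx)S + df/dp, with the same time-varying coefficient structure. The sketch below integrates state and sensitivity together for a mass-spring-damper, a hypothetical stand-in for the far richer hydraulic drive unit model.

```python
# Forward sensitivity equations integrated alongside the state: for
# x' = f(x, p), the sensitivity S = dx/dp satisfies S' = (df/dx)S + df/dp.
# A mass-spring-damper stands in for the nonlinear hydraulic drive unit.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k, F = 1.0, 0.8, 4.0, 1.0   # mass, damping, stiffness, step force

def rhs(t, y):
    x, v, sx, sv = y              # state and its sensitivity to damping c
    ax = (F - c * v - k * x) / m
    # (df/dx) S + df/dc, where df/dc of the acceleration is -v/m:
    sax = (-k * sx - c * sv - v) / m
    return [v, ax, sv, sax]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
x_hist, s_hist = sol.y[0], sol.y[2]   # displacement and dx/dc histories
print(np.max(np.abs(s_hist)))         # a scalar sensitivity index
```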

  9. Recent advances in statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Heron, K. H.

    1992-01-01

    Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary feature of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.

  10. Advancing Usability Evaluation through Human Reliability Analysis

    SciTech Connect

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.

  11. Progress in Advanced Spectral Analysis of Radioxenon

    SciTech Connect

    Haas, Derek A.; Schrom, Brian T.; Cooper, Matthew W.; Ely, James H.; Flory, Adam E.; Hayes, James C.; Heimbigner, Tom R.; McIntyre, Justin I.; Saunders, Danielle L.; Suckow, Thomas J.

    2010-09-21

    Improvements to a Java based software package developed at Pacific Northwest National Laboratory (PNNL) for display and analysis of radioxenon spectra acquired by the International Monitoring System (IMS) are described here. The current version of the Radioxenon JavaViewer implements the region of interest (ROI) method for analysis of beta-gamma coincidence data. Upgrades to the Radioxenon JavaViewer will include routines to analyze high-purity germanium detector (HPGe) data, Standard Spectrum Method to analyze beta-gamma coincidence data and calibration routines to characterize beta-gamma coincidence detectors. These upgrades are currently under development; the status and initial results will be presented. Implementation of these routines into the JavaViewer and subsequent release is planned for FY 2011-2012.

  12. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  13. Sensitive Chiral Analysis via Microwave Three-Wave Mixing

    NASA Astrophysics Data System (ADS)

    Patterson, David; Doyle, John M.

    2013-07-01

    We demonstrate chirality-induced three-wave mixing in the microwave regime, using rotational transitions in cold gas-phase samples of 1,2-propanediol and 1,3-butanediol. We show that bulk three-wave mixing, which can only be realized in a chiral environment, provides a sensitive, species-selective probe of enantiomeric excess and is applicable to a broad class of molecules. The doubly resonant condition provides simultaneous identification of species and of handedness, which should allow sensitive chiral analysis even within a complex mixture.

  14. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    SciTech Connect

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  15. Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris

    2015-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.

  16. Advanced CMOS Radiation Effects Testing and Analysis

    NASA Technical Reports Server (NTRS)

    Pellish, J. A.; Marshall, P. W.; Rodbell, K. P.; Gordon, M. S.; LaBel, K. A.; Schwank, J. R.; Dodds, N. A.; Castaneda, C. M.; Berg, M. D.; Kim, H. S.; Phan, A. M.; Seidleck, C. M.

    2014-01-01

    Presentation at the annual NASA Electronic Parts and Packaging (NEPP) Program Electronic Technology Workshop (ETW). The material includes an update of progress in this NEPP task area over the past year, including testing, evaluation, and analysis of radiation effects data on the IBM 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) process. The testing was conducted using test vehicles supplied directly by IBM.

  17. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than traditional methods.

  18. Adjoint based sensitivity analysis of a reacting jet in crossflow

    NASA Astrophysics Data System (ADS)

    Sashittal, Palash; Sayadi, Taraneh; Schmid, Peter

    2016-11-01

    With current advances in computational resources, high fidelity simulations of reactive flows are increasingly being used as predictive tools in various industrial applications. In order to capture the combustion process accurately, detailed/reduced chemical mechanisms are employed, which in turn rely on various model parameters. Therefore, it would be of great interest to quantify the sensitivities of the predictions with respect to the introduced models. Due to the high dimensionality of the parameter space, methods such as finite differences which rely on multiple forward simulations prove to be very costly and adjoint based techniques are a suitable alternative. The complex nature of the governing equations, however, renders an efficient strategy in finding the adjoint equations a challenging task. In this study, we employ the modular approach of Fosas de Pando et al. (2012), to build a discrete adjoint framework applied to a reacting jet in crossflow. The developed framework is then used to extract the sensitivity of the integrated heat release with respect to the existing combustion parameters. Analyzing the sensitivities in the three-dimensional domain provides insight towards the specific regions of the flow that are more susceptible to the choice of the model.

  19. Advanced Risk Analysis for High-Performing Organizations

    DTIC Science & Technology

    2006-01-01

    The operational environment for many types of organizations is changing. Changes in operational environments are driving the need for advanced risk analysis techniques. Many types of risk prevalent in today's operational environments (e.g., event risks, inherited risk) are not readily identified using traditional risk analysis techniques. Mission Assurance Analysis Protocol (MAAP) is one technique that high performers can use to identify and mitigate the risks arising from operational complexity.

  20. Ultra sensitive magnetic sensors integrating the giant magnetoelectric effect with advanced microelectronics

    NASA Astrophysics Data System (ADS)

    Fang, Zhao

    consisting of magnetostrictive and piezoelectric components shows promise for novel ultra-sensitive magnetic sensors capable of operating at room temperature. To achieve such high sensitivity (~pT level), piezoelectric sensors are realized through ME composite laminates, since piezo-sensors are among the most sensitive while also being passive devices. To further improve the sensitivity and reduce the 1/f noise level, several approaches are used, such as the magnetic flux concentration effect, which is a function of the Metglas sheet aspect ratio, and resonance enhancement. Taking advantage of this effect, the ME voltage coefficient reaches α_ME = 21.46 V/cm·Oe for Metglas 2605SA1/PVDF laminates and α_ME = 46.7 V/cm·Oe for Metglas 2605CO/PVDF laminates. The resonance response of Metglas/PZT laminates in FF (Free-Free), FC (Free-Clamped), and CC (Clamped-Clamped) modes is also investigated; α_ME = 301.6 V/cm·Oe and the corresponding SNR = 4×10^7 √Hz/Oe are achieved for the FC mode at resonance frequencies. In addition, testing setups were built to characterize the magnetic sensors, and LabVIEW codes were developed to automate the measurements and obtain accurate results. Two commonly used integration methods, the hybrid method and system in package (SIP), are then discussed, followed by an intrinsic noise analysis covering dielectric loss noise, which dominates the intrinsic noise sources, and magnetostrictive noise. A charge-mode readout circuit is made for the hybrid method and a voltage-mode readout circuit for the SIP method. Since the SNR determines the minimum signal a sensor can detect, the SNR of each configuration is discussed in detail. For the charge-mode circuit, by taking advantage of the multilayer PVDF configuration, SNR = 7.2×10^5 √Hz/Oe is achieved at non-resonance frequencies and SNR = 2×10^7 √Hz/Oe at resonance frequencies. For the voltage-mode circuit, a constant SNR = 3×10^3 √Hz/Oe

  1. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  2. Metabolic systems analysis to advance algal biotechnology.

    PubMed

    Schmidt, Brian J; Lin-Schmidt, Xiefan; Chamberlin, Austin; Salehi-Ashtiani, Kourosh; Papin, Jason A

    2010-07-01

    Algal fuel sources promise unsurpassed yields in a carbon neutral manner that minimizes resource competition between agriculture and fuel crops. Many challenges must be addressed before algal biofuels can be accepted as a component of the fossil fuel replacement strategy. One significant challenge is that the cost of algal fuel production must become competitive with existing fuel alternatives. Algal biofuel production presents the opportunity to fine-tune microbial metabolic machinery for an optimal blend of biomass constituents and desired fuel molecules. Genome-scale model-driven algal metabolic design promises to facilitate both goals by directing the utilization of metabolites in the complex, interconnected metabolic networks to optimize production of the compounds of interest. Network analysis can direct microbial development efforts towards successful strategies and enable quantitative fine-tuning of the network for optimal product yields while maintaining the robustness of the production microbe. Metabolic modeling yields insights into microbial function, guides experiments by generating testable hypotheses, and enables the refinement of knowledge on the specific organism. While the application of such analytical approaches to algal systems is limited to date, metabolic network analysis can improve understanding of algal metabolic systems and play an important role in expediting the adoption of new biofuel technologies.

  3. Advances in Mössbauer data analysis

    NASA Astrophysics Data System (ADS)

    de Souza, Paulo A.

    1998-08-01

    The whole Mössbauer community has generated a huge amount of data in several fields of human knowledge since the first publication by Rudolf Mössbauer. Interlaboratory measurements of the same substance may result in minor differences in the Mössbauer parameters (MP) of isomer shift, quadrupole splitting and internal magnetic field. Therefore, a conventional data bank of published MP is of limited help in the identification of substances: an exact-match search cannot recognize that values differing within the experimental error (e.g., IS = 0.22 mm/s and IS = 0.23 mm/s) are physically the same. An artificial neural network (ANN), by contrast, is able to identify a substance and its crystalline structure from measured MP, and such slight variations do not represent an obstacle to the ANN identification. A barrier to the popularization of Mössbauer spectroscopy as an analytical technique is the absence of fully automated equipment, since the analysis of a Mössbauer spectrum is normally time-consuming and requires a specialist. In this work, the fitting of Mössbauer spectra was completely automated through the use of genetic algorithms and fuzzy logic. Both software and hardware systems were implemented, resulting in a fully automated Mössbauer data analysis system. The developed system will be presented.
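
    The point about exact-match searches can be made concrete: a databank lookup should treat parameter values that agree within experimental error as the same. A minimal sketch, with invented databank entries and an assumed tolerance of 0.03 mm/s:

      # Match measured (IS, QS) against databank entries, treating values
      # that differ by less than the experimental error as identical.
      databank = {
          "hematite": {"IS": 0.37, "QS": -0.20},
          "pyrite":   {"IS": 0.31, "QS":  0.61},
          "siderite": {"IS": 1.22, "QS":  1.80},
      }  # mm/s; illustrative entries only

      def identify(is_meas, qs_meas, err=0.03):
          return [phase for phase, mp in databank.items()
                  if abs(mp["IS"] - is_meas) <= err
                  and abs(mp["QS"] - qs_meas) <= err]

      print(identify(0.30, 0.60))   # -> ['pyrite'] despite 0.01 mm/s offsets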

  4. Advanced stability analysis for laminar flow control

    NASA Technical Reports Server (NTRS)

    Orszag, S. A.

    1981-01-01

    Five classes of problems are addressed: (1) the extension of the SALLY stability analysis code to the full eighth order compressible stability equations for three dimensional boundary layer; (2) a comparison of methods for prediction of transition using SALLY for incompressible flows; (3) a study of instability and transition in rotating disk flows in which the effects of Coriolis forces and streamline curvature are included; (4) a new linear three dimensional instability mechanism that predicts Reynolds numbers for transition to turbulence in planar shear flows in good agreement with experiment; and (5) a study of the stability of finite amplitude disturbances in axisymmetric pipe flow showing the stability of this flow to all nonlinear axisymmetric disturbances.

  5. Performance analysis of advanced spacecraft TPS

    NASA Technical Reports Server (NTRS)

    Pitts, William C.

    1987-01-01

    The analysis of the feasibility of using metal hydrides in the thermal protection system of cryogenic tanks in space was based on the heat capacity of ice as the phase change material (PCM). It was found that with ice the thermal protection system weight could be reduced by, at most, about 20 percent relative to an all-LI-900 insulation. For this concept to be viable, a metal hydride with considerably more heat capacity than water would be required; none were found. Special metal hydrides were developed for hydrogen fuel storage applications, and it may be possible to do the same for the current application. Until this appears promising, further effort on this feasibility study does not seem warranted.

  6. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
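
    The trade-off described here, a global Chebyshev fit struggling with local variations that piecewise interpolation captures easily, can be reproduced in one dimension. The sketch below fits a pressure-like profile with a sharp suction peak both ways; the profile and polynomial degree are illustrative choices, not the paper's wing data:

      import numpy as np
      from numpy.polynomial import chebyshev as C

      f = lambda t: 0.2 * t - np.exp(-40.0 * (t - 0.3) ** 2)  # sharp local peak

      x = np.linspace(-1, 1, 41)           # "panel" sample points
      xf = np.linspace(-1, 1, 401)         # fine evaluation grid

      coef = C.chebfit(x, f(x), deg=12)    # global Chebyshev fit
      err_global = np.max(np.abs(C.chebval(xf, coef) - f(xf)))
      err_local = np.max(np.abs(np.interp(xf, x, f(x)) - f(xf)))  # panel-wise linear
      print(f"global Chebyshev max error: {err_global:.3f}")
      print(f"piecewise-linear max error: {err_local:.3f}")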

  7. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.

  8. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
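
    The variance propagation step is the standard first-order "sandwich rule" of perturbation theory: var(R) = S^T C S for a sensitivity profile S and a covariance matrix C of the input cross sections. A minimal sketch with illustrative three-group numbers:

      import numpy as np

      S = np.array([0.8, 1.5, -0.4])            # sensitivity profile dR/dsigma_g
      C = np.array([[0.010, 0.004, 0.000],
                    [0.004, 0.020, 0.002],
                    [0.000, 0.002, 0.005]])     # cross-section covariances
      var_R = S @ C @ S
      print("estimated std. dev. of the response:", np.sqrt(var_R))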

  9. Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania

    PubMed Central

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2016-01-01

    This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next generation method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with control strategies and the chances of infectious humans, exposed humans and the vector population becoming infected. Numerical simulations are carried out with the help of a fourth-order Runge-Kutta procedure. PMID:27505634
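
    The normalized forward sensitivity index commonly used in such analyses is Gamma_p = (p / R0) * (dR0/dp), so a 1% change in parameter p produces roughly a Gamma_p % change in R0. The sketch below evaluates the indices numerically for a hypothetical vector-host R0 expression that stands in for the paper's model:

      import numpy as np

      def R0(beta_h, beta_v, mu_v, gamma):
          # Hypothetical next-generation-style expression, for illustration.
          return np.sqrt(beta_h * beta_v / (mu_v * gamma))

      params = {"beta_h": 0.30, "beta_v": 0.25, "mu_v": 0.10, "gamma": 0.07}

      def sensitivity_index(name, h=1e-6):
          base = R0(**params)
          bumped = dict(params); bumped[name] *= 1 + h
          return (R0(**bumped) - base) / (base * h)   # (p / R0) dR0/dp

      for name in params:
          print(f"{name}: {sensitivity_index(name):+.3f}")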

  10. Analysis of interior noise ground and flight test data for advanced turboprop aircraft applications

    NASA Technical Reports Server (NTRS)

    Simpson, M. A.; Tran, B. N.

    1991-01-01

    Interior noise ground tests conducted on a DC-9 aircraft test section are described. The objectives were to study ground test and analysis techniques for evaluating the effectiveness of interior noise control treatments for advanced turboprop aircraft, and to study the sensitivity of the ground test results to changes in various test conditions. Noise and vibration measurements were conducted under simulated advanced turboprop excitation, for two interior noise control treatment configurations. These ground measurement results were compared with results of earlier UHB (Ultra High Bypass) Demonstrator flight tests with comparable interior treatment configurations. The Demonstrator is an MD-80 test aircraft with the left JT8D engine replaced with a prototype UHB advanced turboprop engine.

  11. Advanced analysis techniques for uranium assay

    SciTech Connect

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.; Beard, C. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  12. Advances in carbonate exploration and reservoir analysis

    USGS Publications Warehouse

    Garland, J.; Neilson, J.; Laubach, S.E.; Whidden, Katherine J.

    2012-01-01

    The development of innovative techniques and concepts, and the emergence of new plays in carbonate rocks are creating a resurgence of oil and gas discoveries worldwide. The maturity of a basin and the application of exploration concepts have a fundamental influence on exploration strategies. Exploration success often occurs in underexplored basins by applying existing established geological concepts. This approach is commonly undertaken when new basins ‘open up’ owing to previous political upheavals. The strategy of using new techniques in a proven mature area is particularly appropriate when dealing with unconventional resources (heavy oil, bitumen, stranded gas), while the application of new play concepts (such as lacustrine carbonates) to new areas (i.e. ultra-deep South Atlantic basins) epitomizes frontier exploration. Many low-matrix-porosity hydrocarbon reservoirs are productive because permeability is controlled by fractures and faults. Understanding basic fracture properties is critical in reducing geological risk and therefore reducing well costs and increasing well recovery. The advent of resource plays in carbonate rocks, and the long-standing recognition of naturally fractured carbonate reservoirs means that new fracture and fault analysis and prediction techniques and concepts are essential.

  13. Advances in the environmental analysis of polychlorinated naphthalenes and toxaphene.

    PubMed

    Kucklick, John R; Helm, Paul A

    2006-10-01

    Recent advances in the analysis of the chlorinated environmental pollutants polychlorinated naphthalenes (PCNs) and toxaphene are highlighted in this review. Method improvements have been realized for PCNs over the past decade in isomer-specific quantification, peak resolution, and the availability of mass-labeled standards. Toxaphene method advancements include the application of new capillary gas chromatographic (GC) stationary phases, mass spectrometry (MS), especially ion trap MS, and the availability of Standard Reference Materials that are value-assigned for total toxaphene and selected congener concentrations. An area of promise for the separation of complex mixtures such as PCNs and toxaphene is the development of multidimensional GC techniques. The need for continued advancements and efficiencies in the analysis of contaminants such as PCNs and toxaphene remains as monitoring requirements for these compound classes are established under international agreements.

  14. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 1993-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities and in close coordination with other national laboratories and oil industry partners.

  15. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  16. Spectrograph sensitivity analysis: an efficient tool for different design phases

    NASA Astrophysics Data System (ADS)

    Genoni, M.; Riva, M.; Pariani, G.; Aliverti, M.; Moschetti, M.

    2016-08-01

    In this paper we present an efficient tool developed to perform opto-mechanical tolerance and sensitivity analysis for both the preliminary and final design phases of a spectrograph. With this tool it is possible to evaluate the effect of mechanical perturbations of each single spectrograph optical element in terms of image stability, i.e. the motion of the echellogram on the spectrograph focal plane, and of image quality, i.e. the spot size of the different echellogram wavelengths. We present the MATLAB-Zemax script architecture of the tool. In addition, we present detailed results concerning its application to the sensitivity analysis of the ESPRESSO spectrograph (the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, which will soon be installed on ESO's Very Large Telescope) in the framework of the upcoming assembly, alignment and integration phases.

  17. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-01

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  18. Adaptive approach for nonlinear sensitivity analysis of reaction kinetics.

    PubMed

    Horenko, Illia; Lorenz, Sönke; Schütte, Christof; Huisinga, Wilhelm

    2005-07-15

    We present a unified approach for linear and nonlinear sensitivity analysis of models of reaction kinetics that are stated in terms of systems of ordinary differential equations (ODEs). The approach is based on the reformulation of the ODE problem as a density transport problem described by a Fokker-Planck equation. The resulting multidimensional partial differential equation is herein solved by extending the TRAIL algorithm originally introduced by Horenko and Weiser in the context of molecular dynamics (J. Comp. Chem. 2003, 24, 1921), and the extension is discussed in comparison with Monte Carlo techniques. The extended TRAIL approach is fully adaptive and readily allows one to study the influence of nonlinear dynamical effects. We illustrate the scheme in an application to an enzyme-substrate model problem, with sensitivity analysis carried out with respect to initial concentrations and parameter values.

  19. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    SciTech Connect

    Wang, Qiqi Hu, Rui Blonigan, Patrick

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
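
    The failure mode that motivates LSS is easy to reproduce: finite-difference estimates of the derivative of a long-time-averaged quantity in a chaotic system do not converge as the step shrinks, because nearby trajectories decorrelate exponentially. A small illustrative demonstration on the Lorenz system (integration settings are arbitrary; the LSS literature reports d<z>/drho of roughly 1 near rho = 28):

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, u, rho):
          x, y, z = u
          return [10.0 * (y - x), x * (rho - z) - y, x * y - 8.0 / 3.0 * z]

      def mean_z(rho, T=200.0):
          sol = solve_ivp(lorenz, (0, T), (1.0, 1.0, 25.0), args=(rho,),
                          rtol=1e-6, atol=1e-9, dense_output=True)
          t = np.linspace(20.0, T, 5001)            # discard the transient
          return sol.sol(t)[2].mean()

      # The estimates wander instead of converging as h decreases.
      for h in (1.0, 0.1, 0.01):
          print(h, (mean_z(28 + h) - mean_z(28 - h)) / (2 * h))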

  20. Objective analysis of the ARM IOP data: method and sensitivity

    SciTech Connect

    Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H

    1999-04-01

    Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes, and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure, for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement Program's (ARM) July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.

  1. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  2. Graphical methods for the sensitivity analysis in discriminant analysis

    SciTech Connect

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
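
    The omission diagnostic can be prototyped directly: refit the classifier without case i and record how every observation's posterior probability moves. A minimal sketch using scikit-learn's linear discriminant analysis on synthetic two-class data (the data and the two example cases are invented):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0.0, 1.0, (30, 2)),
                     rng.normal(1.5, 1.0, (30, 2))])
      y = np.repeat([0, 1], 30)

      p_full = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)[:, 1]

      for i in (0, 45):                       # influence of omitting case i
          mask = np.arange(len(y)) != i
          loo = LinearDiscriminantAnalysis().fit(X[mask], y[mask])
          shift = loo.predict_proba(X)[:, 1] - p_full
          print(f"omit case {i}: max |posterior shift| = {np.abs(shift).max():.4f}")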

  3. Advanced assessment of the physicochemical characteristics of Remicade® and Inflectra® by sensitive LC/MS techniques

    PubMed Central

    Fang, Jing; Doneanu, Catalin; Alley, William R.; Yu, Ying Qing; Beck, Alain; Chen, Weibin

    2016-01-01

    In this study, we demonstrate the utility of ultra-performance liquid chromatography coupled to mass spectrometry (MS) and ion-mobility spectrometry (IMS) to characterize and compare reference and biosimilar monoclonal antibodies (mAbs) at an advanced level. Specifically, we focus on infliximab and compare the glycan profiles, higher order structures, and host cell proteins (HCPs) of the reference and biosimilar products, which have the brand names Remicade® and Inflectra®, respectively. Overall, the biosimilar attributes mirrored those of the reference product to a very high degree. The glycan profiling analysis demonstrated a high degree of similarity, especially among the higher abundance glycans. Some differences were observed for the lower abundance glycans. Glycans terminated with N-glycolylneuraminic acid were generally observed at higher normalized abundance levels on the biosimilar mAb, while those possessing α-linked galactose pairs were more often expressed at higher levels on the reference molecule. Hydrogen deuterium exchange (HDX) analyses further confirmed the higher-order similarity of the 2 molecules. These results demonstrated only very slight differences between the 2 products, which, interestingly, seemed to be in the area where the N-linked glycans reside. The HCP analysis by a 2D-UPLC IMS-MS approach revealed that the same 2 HCPs were present in both mAb samples. Our ability to perform these types of analyses and acquire insightful data for biosimilarity assessment is based upon our highly sensitive UPLC MS and IMS methods. PMID:27260215

  4. A Sensitivity Analysis of Entry Age Normal Military Retirement Costs.

    DTIC Science & Technology

    1983-09-01

    sensitivity analysis of both the individual and aggregate entry age normal actuarial cost models under differing economic, managerial and legal assumptions... actuarial cost models under differing economic, managerial and legal assumptions. In addition to the above, a set of simple estimating equations... actuarially computed variables are listed since the model uses each paygrade's individual actuarial data (e.g. the life expectancy of a retiring

  5. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
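
    For a general non-Hermitian matrix the analytic eigenvalue sensitivity referred to here is dlambda_k/dp = y_k^H (dA/dp) x_k / (y_k^H x_k), with x_k and y_k the right and left eigenvectors. The sketch below implements that formula and checks it against a perturbed eigensolve on a random matrix; it illustrates the bare formula only, not the paper's normalization or sparsity refinements:

      import numpy as np
      from scipy.linalg import eig

      def eigenvalue_sensitivities(A, dA):
          lam, Y, X = eig(A, left=True, right=True)   # Y: left eigenvectors
          return lam, np.array([(Y[:, k].conj() @ dA @ X[:, k])
                                / (Y[:, k].conj() @ X[:, k])
                                for k in range(len(lam))])

      rng = np.random.default_rng(1)
      A, dA = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
      lam, sens = eigenvalue_sensitivities(A, dA)

      h = 1e-7                                        # finite-difference check
      lam_fd = eig(A + h * dA, right=False)
      err = np.abs(np.sort_complex(lam + h * sens) - np.sort_complex(lam_fd))
      print("max mismatch vs. perturbed eigensolve:", err.max())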

  6. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  7. Polybrominated Diphenyl Ethers in Dryer Lint: An Advanced Analysis Laboratory

    ERIC Educational Resources Information Center

    Thompson, Robert Q.

    2008-01-01

    An advanced analytical chemistry laboratory experiment is described that involves environmental analysis and gas chromatography-mass spectrometry. Students analyze lint from clothes dryers for traces of flame retardant chemicals, polybrominated diphenylethers (PBDEs), compounds receiving much attention recently. In a typical experiment, ng/g…

  8. Advanced GIS Exercise: Predicting Rainfall Erosivity Index Using Regression Analysis

    ERIC Educational Resources Information Center

    Post, Christopher J.; Goddard, Megan A.; Mikhailova, Elena A.; Hall, Steven T.

    2006-01-01

    Graduate students from a variety of agricultural and natural resource fields are incorporating geographic information systems (GIS) analysis into their graduate research, creating a need for teaching methodologies that help students understand advanced GIS topics for use in their own research. Graduate-level GIS exercises help students understand…

  9. Advances in NMR-based biofluid analysis and metabolite profiling.

    PubMed

    Zhang, Shucha; Nagana Gowda, G A; Ye, Tao; Raftery, Daniel

    2010-07-01

    Significant improvements in NMR technology and methods have propelled NMR studies to play an important role in a rapidly expanding number of applications involving the profiling of metabolites in biofluids. This review discusses recent technical advances in NMR spectroscopy based metabolite profiling methods, data processing and analysis over the last three years.

  10. METHODS ADVANCEMENT FOR MILK ANALYSIS: THE MAMA STUDY

    EPA Science Inventory

    The Methods Advancement for Milk Analysis (MAMA) study was designed by US EPA and CDC investigators to provide data to support the technological and study design needs of the proposed National Children's Study (NCS). The NCS is a multi-Agency-sponsored study, authorized under the...

  11. NASTRAN documentation for flutter analysis of advanced turbopropellers

    NASA Technical Reports Server (NTRS)

    Elchuri, V.; Gallo, A. M.; Skalski, S. C.

    1982-01-01

    An existing capability developed to conduct modal flutter analysis of tuned bladed-shrouded discs was modified to facilitate investigation of the subsonic unstalled flutter characteristics of advanced turbopropellers. The modifications pertain to the inclusion of oscillatory modal aerodynamic loads of blades with large (backward and forward) varying sweep.

  12. On the variational data assimilation problem solving and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Arcucci, Rossella; D'Amore, Luisa; Pistoia, Jenny; Toumi, Ralf; Murli, Almerico

    2017-04-01

    We consider the Variational Data Assimilation (VarDA) problem in an operational framework, namely, as it results when it is employed for the analysis of temperature and salinity variations of data collected in closed and semi-closed seas. We present a computing approach to solve the main computational kernel at the heart of the VarDA problem, which outperforms the technique currently employed by operational oceanographic software. The new approach is obtained by means of Tikhonov regularization. We provide a sensitivity analysis of this approach and also study its performance in terms of the accuracy gain on the computed solution. We provide validations on two realistic oceanographic data sets.
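
    Tikhonov regularization of such a kernel amounts to solving min ||Ax - b||^2 + lambda^2 ||x||^2, which can be written as an ordinary augmented least-squares problem. A minimal sketch on a synthetic ill-conditioned system (matrix, noise level and lambda values are illustrative):

      import numpy as np

      def tikhonov_solve(A, b, lam):
          # Solve [A; lam*I] x = [b; 0] in the least-squares sense.
          n = A.shape[1]
          A_aug = np.vstack([A, lam * np.eye(n)])
          b_aug = np.concatenate([b, np.zeros(n)])
          return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

      rng = np.random.default_rng(2)
      U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
      A = U[:, :10] * np.logspace(0, -8, 10)     # singular values 1 .. 1e-8
      b = A @ rng.normal(size=10) + 1e-6 * rng.normal(size=50)

      for lam in (0.0, 1e-5, 1e-3):              # unregularized solution blows up
          x = tikhonov_solve(A, b, lam)
          print(f"lambda={lam:g}  ||x|| = {np.linalg.norm(x):.2e}")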

  13. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  14. 6D phase space electron beam analysis and machine sensitivity studies for ELI-NP GBS

    NASA Astrophysics Data System (ADS)

    Giribono, A.; Bacci, A.; Curatolo, C.; Drebot, I.; Palumbo, L.; Petrillo, V.; Rossi, A. R.; Serafini, L.; Vaccarezza, C.; Vannozzi, A.; Variola, A.

    2016-09-01

    The ELI-NP Gamma Beam Source (GBS) is now under construction in Magurele-Bucharest (RO). Here an advanced source of gamma photons with unprecedented specifications of brilliance (>10²¹), monochromaticity (0.5%) and energy tunability (0.2-19.5 MeV) is being built, based on Inverse Compton Scattering in the head-on configuration between an electron beam of maximum energy 750 MeV and a high-quality, high-power ps laser beam. These requirements make the ELI-NP GBS an advanced and challenging gamma ray source. The electron beam dynamics analysis and control regarding the machine sensitivity to possible jitter and misalignments are presented. The effects on the beam quality are illustrated, providing the basis for the alignment procedure and jitter tolerances.

  15. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Advanced stress analysis methods applicable to turbine engine structures are investigated. The construction of special elements containing traction-free circular boundaries is investigated. New versions of the mixed variational principle and of hybrid stress elements are formulated. A method is established for the suppression of kinematic deformation modes. SemiLoof plate and shell elements are constructed by the assumed stress hybrid method. An elastic-plastic analysis is conducted by viscoplasticity theory using the mechanical subelement model.

  16. Safety analysis of the advanced thermionic initiative reactor

    NASA Astrophysics Data System (ADS)

    Lee, Hsing H.; Klein, Andrew C.

    1995-01-01

    Previously, a detailed analysis was conducted to assess the technology developed for the Advanced Thermionic Initiative (ATI) reactor. This analysis included the development of an overall system design code capability and the improvement of analytical models necessary for the assessment of the use of single-cell thermionic fuel elements in a low power space nuclear reactor. The present analysis extends this effort to assess the nuclear criticality safety of the ATI reactor for various scenarios. The analysis discusses the efficacy of different methods of reactor control, such as control rods and control drums.

  17. Immunoassay Methods and their Applications in Pharmaceutical Analysis: Basic Methodology and Recent Advances.

    PubMed

    Darwish, Ibrahim A

    2006-09-01

    Immunoassays are bioanalytical methods in which the quantitation of the analyte depends on the reaction of an antigen (analyte) and an antibody. Immunoassays have been widely used in many important areas of pharmaceutical analysis such as diagnosis of diseases, therapeutic drug monitoring, and clinical pharmacokinetic and bioequivalence studies in drug discovery and the pharmaceutical industry. The importance and widespread use of immunoassay methods in pharmaceutical analysis are attributed to their inherent specificity, high throughput, and high sensitivity for the analysis of a wide range of analytes in biological samples. Recently, marked improvements have been achieved in the field of immunoassay development for the purposes of pharmaceutical analysis. These improvements involved the preparation of unique immunoanalytical reagents, the analysis of new categories of compounds, methodology, and instrumentation. The basic methodologies and recent advances in immunoassay methods applied in different fields of pharmaceutical analysis are reviewed here.

  18. Sensitivity analysis of fine sediment models using heterogeneous data

    NASA Astrophysics Data System (ADS)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transportation and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important in drawing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important in improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and in determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast, built with the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the

  19. Species sensitivity analysis of heavy metals to freshwater organisms.

    PubMed

    Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou

    2015-10-01

    Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distribution (SSD) curves for vertebrates and invertebrates were constructed separately with a log-logistic model. Comprehensive comparisons of the sensitivities of different trophic species to the six typical heavy metals were performed. The results indicated that invertebrate taxa exhibited higher sensitivity to each heavy metal than vertebrates. With respect to the same taxa, however, Cu had the most adverse effect on vertebrates, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrates and fish were complicated, as the SSD curves of Pb intersected with those of other heavy metals, while the SSD curves of Pb constructed from all species no longer crossed the others. The hazardous concentrations for 5% of the species (HC5) were derived to determine the concentration protecting 95% of species. The HC5 values of the six heavy metals were in the descending order Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in the opposite order. Moreover, potentially affected fractions were calculated to assess the ecological risks of the different heavy metals at given concentrations. Evaluation of the sensitivities of species at various trophic levels and toxicity analysis of heavy metals are necessary prior to the derivation of water quality criteria and further environmental protection.
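
    The SSD-plus-HC5 workflow can be sketched with scipy: fit a log-logistic distribution (scipy's fisk) to species-level toxicity values and read off the 5th percentile. The LC50 values below are invented for illustration; the real analysis would use the screened datasets described above:

      import numpy as np
      from scipy import stats

      # Hypothetical acute LC50 values (ug/L) for one metal across species.
      lc50 = np.array([5.1, 8.7, 12.0, 21.5, 33.0, 47.8,
                       90.2, 140.0, 310.0, 520.0])

      # Log-logistic SSD: scipy's 'fisk' distribution, location fixed at 0.
      c, loc, scale = stats.fisk.fit(lc50, floc=0)

      # HC5: concentration hazardous to 5% of species (protects 95%).
      hc5 = stats.fisk.ppf(0.05, c, loc=loc, scale=scale)
      print(f"shape = {c:.2f}, scale = {scale:.1f}, HC5 = {hc5:.2f} ug/L")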

  20. Isolation and analysis of ginseng: advances and challenges

    PubMed Central

    Wang, Chong-Zhi

    2011-01-01

    Ginseng occupies a prominent position in the list of best-selling natural products in the world. Because of its complex constituents, multidisciplinary techniques are needed to validate the analytical methods that support ginseng’s use worldwide. In the past decade, rapid development of technology has advanced many aspects of ginseng research. The aim of this review is to illustrate the recent advances in the isolation and analysis of ginseng, and to highlight their new applications and challenges. Emphasis is placed on recent trends and emerging techniques. The current article reviews the literature between January 2000 and September 2010. PMID:21258738

  1. Sensitivity-analysis techniques: self-teaching curriculum

    SciTech Connect

    Iman, R.L.; Conover, W.J.

    1982-06-01

    This self-teaching curriculum on sensitivity analysis techniques consists of three parts: (1) Use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) Use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) Application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High Level Waste Methodology Development Program.
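
    Latin hypercube sampling itself is compact enough to sketch: split each variable's range into n equal-probability strata, sample once per stratum, and shuffle the strata orderings independently across variables (scipy.stats.qmc.LatinHypercube offers a maintained implementation; the version below is illustrative):

      import numpy as np

      def latin_hypercube(n, d, seed=None):
          rng = np.random.default_rng(seed)
          # One point per stratum [k/n, (k+1)/n) in every dimension ...
          u = (np.arange(n)[:, None] + rng.random((n, d))) / n
          for j in range(d):                 # ... then decouple the columns
              rng.shuffle(u[:, j])
          return u

      X = latin_hypercube(10, 3, seed=0)
      print(X)    # each column hits each interval [k/10, (k+1)/10) exactly once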

  2. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
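
    One way to see what such sensitivity coefficients are: s = dC/dk obeys its own ODE, ds/dt = (df/dC) s + df/dk, integrated alongside the kinetics. The sketch below does this in coupled form for a single reaction A -> B using scipy's LSODA integrator (LSENS itself uses LSODE and a decoupled direct method; the rate constant and times here are illustrative):

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, k):
          A, s = y                  # s = dA/dk, the sensitivity coefficient
          return [-k * A,           # dA/dt = f(A, k)
                  -k * s - A]       # ds/dt = (df/dA) s + df/dk

      k = 2.0
      sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], args=(k,),
                      method="LSODA", rtol=1e-8, atol=1e-10)
      A_T, s_T = sol.y[:, -1]
      print("dA(T)/dk integrated:", s_T, " analytic:", -2.0 * np.exp(-4.0))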

  3. An analytic method for sensitivity analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping Alexandre; Li, Wei; Cai, Xu

    2017-03-01

    Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in inputs and with identifying which inputs are important in contributing to the prediction imprecision. Uncertainty determination in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which can exactly evaluate the uncertainty in the output as a function of the output's derivatives and the inputs' central moments, is first deduced for general multivariate models with a given relationship between output and inputs in terms of a Taylor series expansion. A γ-order relative uncertainty for the output, denoted by R_v^γ, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering only the first-order contribution from the variance of each input variable can satisfactorily express the output uncertainty only when the input variance is very small or the input-output function is almost linear. Two applications of the analytic formula are performed for power grid and economic systems, where the sensitivities of both an actual power output model and the Economic Order Quantity model are analyzed. The importance of each input variable in response to the model output is quantified by the analytic formula.
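
    The paper's central caveat, that the first-order formula var(y) ~ sum_i (df/dx_i)^2 var(x_i) holds only for small input variances or near-linear models, is easy to check numerically. The sketch below compares the first-order estimate with Monte Carlo for a mildly nonlinear toy model (the function and input distributions are invented):

      import numpy as np

      f = lambda x1, x2: x1 * np.exp(0.5 * x2)
      mu = np.array([2.0, 1.0])

      def first_order_var(sig):
          g = np.array([np.exp(0.5 * mu[1]),                  # df/dx1 at the mean
                        0.5 * mu[0] * np.exp(0.5 * mu[1])])   # df/dx2 at the mean
          return np.sum(g ** 2 * sig ** 2)

      rng = np.random.default_rng(3)
      for s in (0.05, 0.5):                  # small vs. large input uncertainty
          x = rng.normal(mu, s, size=(200_000, 2))
          mc = np.var(f(x[:, 0], x[:, 1]))
          print(f"sigma={s}: first-order {first_order_var(np.array([s, s])):.4f}, "
                f"Monte Carlo {mc:.4f}")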

  4. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  5. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
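
    Sample entropy is short enough to sketch: count template pairs of length m (and m+1) whose Chebyshev distance is within a tolerance r, excluding self-matches, and take -ln of the ratio. The implementation below follows the common SampEn definition (minor boundary-handling variants exist); it confirms that a periodic series scores lower, i.e. more regular, than noise:

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          x = np.asarray(x, dtype=float)
          r = 0.2 * np.std(x) if r is None else r
          def pairs(mm):
              templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
              return (np.sum(d <= r) - len(templ)) / 2   # drop self-matches
          return -np.log(pairs(m + 1) / pairs(m))

      rng = np.random.default_rng(4)
      t = np.arange(400)
      print("periodic:", sample_entropy(np.sin(0.3 * t)))       # low  -> regular
      print("noise:   ", sample_entropy(rng.normal(size=400)))  # high -> irregular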

  6. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus providing the possibility of delivering much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set illustrating this effect.

  7. Advanced Post-Irradiation Examination Capabilities Alternatives Analysis Report

    SciTech Connect

    Jeff Bryan; Bill Landman; Porter Hill

    2012-12-01

    An alternatives analysis was performed for the Advanced Post-Irradiation Examination Capabilities (APIEC) project in accordance with U.S. Department of Energy (DOE) Order DOE O 413.3B, “Program and Project Management for the Acquisition of Capital Assets”. The alternatives analysis considered six major alternatives: no action; modifying existing DOE facilities, with capabilities distributed among multiple locations; modifying existing DOE facilities, with capabilities consolidated at a few locations; constructing a new facility; a commercial partnership; and international partnerships. Based on the alternatives analysis documented herein, it is recommended to DOE that the advanced post-irradiation examination capabilities be provided by a new facility constructed at the Materials and Fuels Complex at the Idaho National Laboratory.

  8. "ATLAS" Advanced Technology Life-cycle Analysis System

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.; Mankins, John C.; ONeil, Daniel A.

    2004-01-01

    Making good decisions concerning research and development portfolios - and concerning the best systems concepts to pursue - as early as possible in the life cycle of advanced technologies is a key goal of R&D management. This goal depends upon the effective integration of information from a wide variety of sources, as well as focused, high-level analyses intended to inform such decisions. The presentation provides a summary of the Advanced Technology Life-cycle Analysis System (ATLAS) methodology and tool kit. ATLAS encompasses a wide range of methods and tools. A key foundation for ATLAS is the NASA-created Technology Readiness Level (TRL) system. The toolkit is largely spreadsheet based (as of August 2003). This product is being funded by the Human and Robotics Technology Program Office, Office of Exploration Systems, NASA Headquarters, Washington D.C., and is being integrated by Dan O'Neil of the Advanced Projects Office, NASA/MSFC, Huntsville, AL.

  9. Sensitivity Analysis of Hardwired Parameters in GALE Codes

    SciTech Connect

    Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.

    2008-12-01

    The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.

  10. Multiplexed analysis of chromosome conformation at vastly improved sensitivity

    PubMed Central

    Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.

    2015-01-01

    Since methods for analysing chromosome conformation in mammalian cells are either low resolution or low throughput, and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method, producing a new approach called next generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced from as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209

  11. Numerical Sensitivity Analysis of a Composite Impact Absorber

    NASA Astrophysics Data System (ADS)

    Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.

    2008-08-01

    This work deals with a numerical investigation of the energy-absorbing capability of structural composite components. There are several difficulties associated with the numerical simulation of a composite impact absorber, such as high geometric non-linearities, boundary contact conditions, failure criteria and material behaviour; all these aspects make the calibration of numerical models, and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters, one of the main objectives of any numerical investigation. The latter aspect is very important for designers in order to make the application of the model to real cases robust from both a physical and a numerical point of view. First, on the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber was carried out; a sensitivity analysis with respect to the variation of the main geometrical and material parameters was then developed, using explicit finite element algorithms implemented in the Ls-Dyna code.

  12. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C.; Edwards, T.; Wilmarth, B.

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. A sensitivity study of the predictions of the model over intervals of values for the influential factors affecting the model was conducted; these intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. These factor settings are thus considered to be those that yield the "worst-case" scenario for the TPB degradation rate for Tank 50 aggregation, and, as such, they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.

  13. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
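
    To make the incremental (delta, or correction) form concrete, the following is a minimal sketch of a defect-correction iteration on a generic linear system: the exact residual drives a cheap approximate solve for the correction. This is illustrative only; it is not the paper's spatially-split approximate-factorization flow solver, and the matrix, preconditioner, and sizes are assumed for demonstration.

```python
import numpy as np

# Defect-correction iteration in incremental form: instead of iterating on
# x directly, each step solves M dx = b - A x for a correction dx using an
# approximate operator M ~ A that is cheap to solve, then updates x.

def incremental_solve(A, b, M, tol=1e-10, max_iter=500):
    """Solve A x = b by corrections dx driven by the exact residual."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                # residual of the exact equations
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        dx = np.linalg.solve(M, r)   # cheap approximate solve for the delta
        x += dx
    return x

rng = np.random.default_rng(0)
A = 4.0 * np.eye(50) + 0.05 * rng.standard_normal((50, 50))
M = np.diag(np.diag(A))              # stand-in approximate factorization
b = rng.standard_normal(50)
x = incremental_solve(A, b, M)
print("final residual norm:", np.linalg.norm(A @ x - b))
```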

  14. Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1997-01-01

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis was developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify intelligent information hidden in a measured signal which is often unidentifiable using conventional signal analysis methods. Currently, due to the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is performed manually, which requires immense man-hours and extensive human interaction. To overcome this manual process, NASA implemented this program to develop an Advanced nonlinear signal Analysis Topographical Mapping System (ATMS) to provide automatic/unsupervised engine diagnostic capabilities. The ATMS utilizes a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract. In summary, the program objectives to design, develop, test, and conduct performance evaluation for an automated engine diagnostic system have been successfully achieved. Software implementation of the entire ATMS system on MSFC's OISPS computer has been completed. The significance of the ATMS developed under this program lies in its fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can significantly save time and man-hours in performing engine test/flight data analysis and performance evaluation of large volumes of dynamic test data.

  15. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    SciTech Connect

    M. Wasiolek

    2004-10-15

    This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, the biosphere dose conversion factors (BDCFs), for the groundwater and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.

  16. Squaraine dyes for dye-sensitized solar cells: recent advances and future challenges.

    PubMed

    Qin, Chuanjiang; Wong, Wai-Yeung; Han, Liyuan

    2013-08-01

    In the past few years, squaraine dyes have received increasing attention as sensitizers for application in dye-sensitized solar cells. This class of dyes not only offers a good opportunity to complement conventional high-performance dyes but also holds great promise for applications in transparent solar cells, owing to its low absorption intensity in the eye-sensitive region. This review provides a summary of the developments on squaraine dyes in the field of dye-sensitized solar cells and of the strategies used to improve their overall energy conversion efficiency. In particular, the main factors responsible for the low values of open-circuit voltage, short-circuit photocurrent and fill factor are discussed in detail. Future directions in the research and development of near-infrared (NIR) organic materials and their applications are proposed from a personal perspective.

  17. Advances in nanowire transistors for biological analysis and cellular investigation.

    PubMed

    Li, Bor-Ran; Chen, Chiao-Chen; Kumar, U Rajesh; Chen, Yit-Tsong

    2014-04-07

    Electrical biosensors based on silicon nanowire field-effect transistors (SiNW-FETs) have attracted enormous interest in the biosensing field. SiNW-FETs have proven to be significant and efficient in detecting diverse biomolecular species, with the advantages of high probing sensitivity, target selectivity, real-time recording and label-free detection. In recent years, significant advances in biosensors have been achieved, particularly for cellular investigation and biomedical diagnosis. In this critical review, we report on the latest developments in biosensing with SiNW-FETs and discuss recent advancements in the innovative designs of SiNW-FET devices. The review introduces the basic instrumental setup and working principle of SiNW-FETs. Technical approaches that attempt to enhance the detection sensitivity and target selectivity of SiNW-FET sensors are discussed. In terms of applications, we review the recent achievements with SiNW-FET biosensors in the investigation of protein-protein interactions, DNA/RNA/PNA hybridization, virus detection, cellular recording, biological kinetics, and clinical diagnosis. In addition, novel architecture designs of SiNW-FET devices are highlighted in studies of live neuron cells, electrophysiological measurements and other signal transduction pathways. Despite these remarkable achievements, certain improvements remain necessary in the device performance and clinical applications of FET-based biosensors; thus, several prospects for the future development of nanowire transistor-based instruments for biosensing applications are discussed at the end of this review.

  18. Numerical analysis of the V-Y shaped advancement flap.

    PubMed

    Remache, D; Chambert, J; Pauchot, J; Jacquet, E

    2015-10-01

    The V-Y advancement flap is a common technique for the closure of skin defects. A triangular flap is incised adjacent to a skin defect of rectangular shape. As the flap is advanced to close the initial defect, two smaller defects in the shape of a parallelogram are formed with respect to a reflection symmetry. The height of the defects depends on the apex angle of the flap, and the closure efforts are related to the defect height. Andrades et al. (2005) performed a geometrical analysis of the V-Y flap technique in order to reach a compromise between the flap size and the defect width. However, the geometrical approach does not consider the mechanical properties of the skin. The present analysis, based on the finite element method, is proposed as a complement to the geometrical one. This analysis aims to highlight the major role of skin elasticity in a full analysis of the V-Y advancement flap. Furthermore, the study of this technique shows that closing at the flap apex seems mechanically the most interesting step. Thus, different strategies of defect closure at the flap apex stemming from surgeons' know-how have been tested by numerical simulations.

  19. Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej

    2010-06-01

    Modeling of blood flow with respect to the rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs which control the blood flow to a significant extent. The paper is part of a larger work on the identification of parameters controlling the process of clotting.
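
    As context for the Casson-type model named above, the following is a small illustrative sketch (not the paper's ADINA-CFD setup) of the Casson constitutive relation and its shear-thinning effective viscosity; the parameter values are assumed, order-of-magnitude numbers for blood.

```python
import numpy as np

# Casson-type blood model: sqrt(tau) = sqrt(tau_y) + sqrt(mu_p * gamma_dot),
# so the effective viscosity rises sharply at low shear rates.

TAU_Y = 0.005   # yield stress [Pa] (assumed)
MU_P = 0.0035   # plasma (Newtonian) viscosity [Pa s] (assumed)

def casson_effective_viscosity(gamma_dot):
    """Effective viscosity mu_eff = tau / gamma_dot of the Casson model."""
    tau = (np.sqrt(TAU_Y) + np.sqrt(MU_P * gamma_dot)) ** 2
    return tau / gamma_dot

# The non-Newtonian rise of mu_eff at low shear is the behaviour whose
# rheological parameters a sensitivity analysis would probe.
for gamma_dot in (1.0, 10.0, 100.0):   # shear rates [1/s]
    print(f"shear {gamma_dot:6.1f} 1/s -> mu_eff "
          f"{casson_effective_viscosity(gamma_dot):.5f} Pa s")
```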

  20. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    SciTech Connect

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor, 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analysis employs a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  1. Path-sensitive analysis for reducing rollback overheads

    DOEpatents

    O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong

    2014-07-22

    A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.

  2. Sensitivity and uncertainty analysis of a polyurethane foam decomposition model

    SciTech Connect

    HOBBS,MICHAEL L.; ROBINSON,DAVID G.

    2000-03-14

    Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
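
    The variance-propagation idea described above can be sketched in a few lines: central finite differences give the local sensitivities, and a first-order expansion turns parameter standard deviations into a response standard deviation. The toy response below stands in for the finite-element burn-velocity model, and all numbers are assumed.

```python
import numpy as np

# First-order (local) uncertainty propagation:
#   sigma_v^2 ~ sum_i (dv/dp_i)^2 * sigma_i^2,
# with dv/dp_i from central finite differences. Step sizes matter when the
# response is itself a derivative and is sensitive to numerical noise.

def response(p):
    return np.sqrt(p[0]) * np.exp(-p[1]) + 0.1 * p[2]

p0 = np.array([4.0, 1.0, 2.0])      # nominal parameter values (assumed)
sigma = np.array([0.2, 0.05, 0.1])  # parameter standard deviations (assumed)

grads = np.empty_like(p0)
for i in range(p0.size):
    h = 1e-4 * max(abs(p0[i]), 1.0)  # relative step to limit numerical noise
    up, dn = p0.copy(), p0.copy()
    up[i] += h
    dn[i] -= h
    grads[i] = (response(up) - response(dn)) / (2.0 * h)

contrib = (grads * sigma) ** 2       # per-parameter variance contributions
print("std dev of response :", np.sqrt(contrib.sum()))
print("primary effect var  :", int(np.argmax(contrib)))
```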

  3. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
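
    A minimal sketch of the per-cell sensitivity index described above: the relative change of the output (VWC) divided by the relative change of one input parameter, perturbed one at a time. The VWC stand-in model and its arguments are invented for illustration; it is not the actual soil-water-balance model.

```python
# One-at-a-time sensitivity index: SI = (dVWC/VWC) / (dp/p).

def vwc_model(planting_day, season_length):
    # hypothetical relation: longer seasons raise total evapotranspiration
    return 0.5 * season_length * (1.0 + 0.002 * planting_day)

def sensitivity_index(f, args, i, rel_step=0.01):
    base = f(*args)
    pert = list(args)
    pert[i] *= 1.0 + rel_step        # perturb one parameter at a time
    return ((f(*pert) - base) / base) / rel_step

args = (120.0, 150.0)  # planting day [doy], growing-season length [days]
print("SI, planting date :", sensitivity_index(vwc_model, args, 0))
print("SI, season length :", sensitivity_index(vwc_model, args, 1))
```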

  4. Advanced three-dimensional dynamic analysis by boundary element methods

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Ahmad, S.

    1985-01-01

    Advanced formulations of boundary element method for periodic, transient transform domain and transient time domain solution of three-dimensional solids have been implemented using a family of isoparametric boundary elements. The necessary numerical integration techniques as well as the various solution algorithms are described. The developed analysis has been incorporated in a fully general purpose computer program BEST3D which can handle up to 10 subregions. A number of numerical examples are presented to demonstrate the accuracy of the dynamic analyses.

  5. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.

  6. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  7. Parametric sensitivity analysis for temperature control in outdoor photobioreactors.

    PubMed

    Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando

    2013-09-01

    In this study, a critical analysis of the input parameters of a model describing the broth temperature in flat plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using the design of experiments approach, variations of selected parameters were introduced and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those from the reactor wall and the shading factor, both related to the direct and reflected solar irradiation. Another parameter which plays an important role in the temperature is the distance between the plates. This study provides information to improve the design and establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems.

  8. A highly sensitive and multiplexed method for focused transcript analysis.

    PubMed

    Kataja, Kari; Satokari, Reetta M; Arvas, Mikko; Takkinen, Kristiina; Söderlund, Hans

    2006-10-01

    We describe a novel, multiplexed method for focused transcript analysis of tens to hundreds of genes. In this method (TRAC, transcript analysis with aid of affinity capture), mRNA targets, a set of amplifiable detection probes of distinct sizes, and a biotinylated oligo(dT) capture probe are hybridized in solution. The formed sandwich hybrids are collected on magnetic streptavidin-coated microparticles and washed. The hybridized probes are eluted, optionally amplified by PCR using a universal primer pair, and detected with laser-induced fluorescence and capillary electrophoresis. The probes were designed using a computer program developed for the purpose. The TRAC method was adapted to 96-well format by utilizing an automated magnetic particle processor. Here we demonstrate a simultaneous analysis of 18 Saccharomyces cerevisiae transcripts from two experimental conditions and show a comparison with a qPCR system. The sensitivity of the method is significantly increased by PCR amplification of the hybridized and eluted probes. Our data demonstrate a bias-free use of at least 16 cycles of PCR amplification to increase probe signal, allowing transcript analysis from 2.5 ng of the total mRNA sample. The method is fast and simple and avoids cDNA conversion. These qualifications make it a potential new means for routine analysis and a complementary method to microarrays and high-density chips.

  9. Cost-utility analysis of an advanced pressure ulcer management protocol followed by trained wound, ostomy, and continence nurses.

    PubMed

    Kaitani, Toshiko; Nakagami, Gojiro; Iizaka, Shinji; Fukuda, Takashi; Oe, Makoto; Igarashi, Ataru; Mori, Taketoshi; Takemura, Yukie; Mizokami, Yuko; Sugama, Junko; Sanada, Hiromi

    2015-01-01

    The high prevalence of severe pressure ulcers (PUs) is an important issue that needs to be highlighted in Japan. In a previous study, we devised an advanced PU management protocol to enable early detection of and intervention for deep tissue injury and critical colonization. This protocol was effective for preventing more severe PUs. The present study aimed to compare, from a medical provider's perspective, the cost-effectiveness of care provided by trained wound, ostomy, and continence nurses (WOCNs) using the advanced PU management protocol with that of conventional care provided by a control group of WOCNs. A Markov model was constructed for a 1-year time horizon to determine the incremental cost-effectiveness ratio of advanced PU management compared with conventional care. The number of quality-adjusted life-years gained and the cost in Japanese yen (¥) ($US1 = ¥120; 2015) were used as the outcomes. Model inputs for clinical probabilities and related costs were based on our previous clinical trial results. Univariate sensitivity analyses were performed. Furthermore, a Bayesian multivariate probabilistic sensitivity analysis was performed using Monte Carlo simulations with advanced PU management. Two different models were created for the initial cohort distribution. For both models, the expected effectiveness for the intervention group using advanced PU management techniques was high, with a low expected cost value. The sensitivity analyses suggested that the results were robust. Intervention by WOCNs using advanced PU management techniques was more effective and cost-effective than conventional care.
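
    To illustrate the structure of such an analysis, here is a heavily simplified cohort sketch: a three-state Markov model is cycled monthly over a 1-year horizon for both strategies and the incremental cost-effectiveness ratio (ICER) is computed. The states, transition matrices, costs, and utilities are invented placeholders, not the study's inputs.

```python
import numpy as np

# Markov cohort model + ICER, in miniature.

def run_cohort(P, monthly_cost, annual_utility, cycles=12):
    dist = np.array([0.0, 1.0, 0.0])   # cohort starts with a mild PU
    cost = qaly = 0.0
    for _ in range(cycles):
        dist = dist @ P                # advance the cohort one month
        cost += dist @ monthly_cost
        qaly += (dist @ annual_utility) / 12.0
    return cost, qaly

# states: healed, mild PU, severe PU (rows/columns in that order)
P_advanced = np.array([[0.97, 0.03, 0.00],
                       [0.25, 0.70, 0.05],
                       [0.05, 0.25, 0.70]])
P_control  = np.array([[0.95, 0.05, 0.00],
                       [0.15, 0.70, 0.15],
                       [0.02, 0.18, 0.80]])
cost_state = np.array([0.0, 30000.0, 90000.0])  # yen per month (assumed)
utility    = np.array([0.85, 0.70, 0.50])       # annual utilities (assumed)

c1, q1 = run_cohort(P_advanced, cost_state, utility)
c0, q0 = run_cohort(P_control, cost_state, utility)
# A negative ICER with q1 > q0 means the intervention dominates
# (more effective and less costly).
print("ICER [yen/QALY]:", (c1 - c0) / (q1 - q0))
```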

  10. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool identified input variables such as moments, mass, thrust dispersions, and date of launch as significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.

  11. Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Burgreen, Gregory W.

    1995-01-01

    An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, the combined effect of implementing all the extensions simultaneously results in a significant factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.

  12. Parametric sensitivity analysis of avian pancreatic polypeptide (APP).

    PubMed

    Zhang, H; Wong, C F; Thacher, T; Rabitz, H

    1995-10-01

    Computer simulations utilizing a classical force field have been widely used to study biomolecular properties. It is important to identify the key force field parameters or structural groups controlling the molecular properties. In the present paper the sensitivity analysis method is applied to study how various partial charges and solvation parameters affect the equilibrium structure and free energy of avian pancreatic polypeptide (APP). The general shape of APP is characterized by its three principal moments of inertia. A molecular dynamics simulation of APP was carried out with the OPLS/Amber force field and a continuum model of solvation energy. The analysis pinpoints the parameters which have the largest (or smallest) impact on the protein equilibrium structure (i.e., the moments of inertia) or free energy. A display of the protein with its atoms colored according to their sensitivities illustrates the patterns of the interactions responsible for the protein stability. The results suggest that the electrostatic interactions play a more dominant role in protein stability than the part of the solvation effect modeled by the atomic solvation parameters.

  13. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    PubMed

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.

  14. Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)

    SciTech Connect

    Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiani, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.

    2012-10-01

    No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.

  15. [Advanced data analysis and visualization for clinical laboratory].

    PubMed

    Inada, Masanori; Yoneyama, Akiko

    2011-01-01

    This paper describes visualization techniques that help identify hidden structures in clinical laboratory data. The visualization of data is helpful for a rapid and better understanding of the characteristics of data sets. Various charts help the user identify trends in data. Scatter plots help prevent misinterpretations due to invalid data by identifying outliers. The representation of experimental data in figures is always useful for communicating results to others. Currently, flexible methods such as smoothing methods and latent structure analysis are available owing to the presence of advanced hardware and software. Principal component analysis, which is a well-known technique used to reduce multidimensional data sets, can be carried out on a personal computer. These methods could lead to advanced visualization with regard to exploratory data analysis. In this paper, we present 3 examples in order to introduce advanced data analysis. In the first example, a smoothing spline was fitted to a time series from a control chart which is not in a state of statistical control. The trend line was clearly extracted from the daily measurements of the control samples. In the second example, principal component analysis was used to identify a new diagnostic indicator for Graves' disease. The multi-dimensional data obtained from patients were reduced to lower dimensions, and the principal components thus obtained summarized the variation in the data set. In the final example, a latent structure analysis for a Gaussian mixture model was used to draw complex density functions suitable for actual laboratory data. As a result, 5 clusters were extracted. The mixed density function of these clusters represented the data distribution graphically. The methods used in the above examples make the creation of complicated models for clinical laboratories simpler and more flexible.
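
    As a flavor of the techniques mentioned above, the following sketch applies principal component analysis followed by a Gaussian mixture model to synthetic data standing in for multidimensional laboratory measurements (scikit-learn assumed; the data are random placeholders, not patient results).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 6)),
               rng.normal(3.0, 1.0, (100, 6))])  # two latent groups

# Principal component analysis: reduce 6 "analytes" to 2 components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Latent structure via a Gaussian mixture model on the reduced data.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
print("cluster sizes:", np.bincount(gmm.predict(scores)))
```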

  16. Chapter 5: Modulation Excitation Spectroscopy with Phase-Sensitive Detection for Surface Analysis

    SciTech Connect

    Shulda, Sarah; Richards, Ryan M.

    2016-02-19

    Advancements in in situ spectroscopic techniques have led to significant progress in elucidating heterogeneous reaction mechanisms. The potential of these progressive methods is often limited only by the complexity of the system and noise in the data. Short-lived intermediates can be challenging, if not impossible, to identify by conventional spectral analysis. Often equally difficult is separating signals that arise from active and inactive species. Modulation excitation spectroscopy combined with phase-sensitive detection analysis is a powerful tool for removing noise from the data while simultaneously revealing the underlying kinetics of the reaction. A stimulus is applied at a constant frequency to the reaction system, for example, a reactant cycled with an inert phase. Through mathematical manipulation of the data, any signal contributing to the overall spectra but not oscillating with the same frequency as the stimulus will be dampened or removed. With phase-sensitive detection, signals oscillating with the stimulus frequency but with various lag times are amplified, providing valuable kinetic information. In this chapter, examples are provided from the literature that have successfully used modulation excitation spectroscopy with phase-sensitive detection to uncover previously unobserved reaction intermediates and kinetics. Examples from a broad range of spectroscopic methods are included to provide perspective to the reader.
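
    The core demodulation step can be sketched compactly: a signal acquired over a whole modulation period is multiplied by a reference oscillating at the stimulus frequency and integrated, which damps everything that does not oscillate at that frequency; sweeping the detection phase recovers the responding species' lag. All signals and numbers below are synthetic.

```python
import numpy as np

T = 10.0                              # one modulation period [s]
t = np.linspace(0.0, T, 2000, endpoint=False)
omega = 2.0 * np.pi / T

# "Spectroscopic" trace: a species lagging the stimulus by 40 degrees,
# plus a slow drift and random noise (both suppressed by demodulation).
rng = np.random.default_rng(0)
signal = (0.8 * np.sin(omega * t - np.deg2rad(40.0))
          + 0.3 * t / T + 0.2 * rng.standard_normal(t.size))

def psd_amplitude(signal, t, omega, phi_deg, period):
    """Demodulated first-harmonic amplitude at detection phase phi."""
    phi = np.deg2rad(phi_deg)
    return 2.0 / period * np.trapz(signal * np.sin(omega * t + phi), t)

phases = np.arange(0, 360, 10)
amps = [psd_amplitude(signal, t, omega, p, T) for p in phases]
print("response peaks at detection phase:",
      phases[int(np.argmax(amps))], "deg")   # ~320 deg, i.e. a -40 deg lag
```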

  17. Aerodynamic Shape Sensitivity Analysis and Design Optimization of Complex Configurations Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.

    1997-01-01

    A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and by a Gauss-Seidel algorithm for the three-dimensional geometry; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.

  18. Advances in Computational Stability Analysis of Composite Aerospace Structures

    SciTech Connect

    Degenhardt, R.; Araujo, F. C. de

    2010-09-30

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances from the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, the main results of the completed EU project COCOMAT are given; it investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  19. Analysis of advanced solid rocket motor ignition phenomena

    NASA Astrophysics Data System (ADS)

    Foster, Winfred A., Jr.; Jenkins, Rhonald M.

    1995-07-01

    This report presents the results obtained from an experimental analysis of the flow field in the slots of the star grain section in the head-end of the advanced solid rocket motor during the ignition transient. This work represents an extension of the previous tests and analysis to include the effects of using a center port in conjunction with multiple canted igniter ports. The flow field measurements include oil smear data on the star slot walls, pressure and heat transfer coefficient measurements on the star slot walls and velocity measurements in the star slot.

  20. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, Theodore H. H.

    1991-01-01

    The following tasks from the study of advanced stress analysis methods applicable to turbine engine structures are described: (1) construction of special elements which contain traction-free circular boundaries; (2) formulation of new versions of mixed variational principles and of hybrid stress elements; (3) establishment of methods for suppression of kinematic deformation modes; (4) construction of semi-Loof plate and shell elements by the assumed-stress hybrid method; and (5) elastic-plastic analysis by viscoplasticity theory using the mechanical subelement model.

  1. Advanced Signal Analysis for Forensic Applications of Ground Penetrating Radar

    SciTech Connect

    Steven Koppenjan; Matthew Streeton; Hua Lee; Michael Lee; Sashi Ono

    2004-06-01

    Ground penetrating radar (GPR) systems have traditionally been used to image subsurface objects. The main focus of this paper is to evaluate an advanced signal analysis technique. Instead of compiling spatial data for the analysis, this technique conducts object recognition procedures based on spectral statistics. The identification feature of an object type is formed from the training vectors by a singular-value decomposition procedure. To illustrate its capability, this procedure is applied to experimental data and compared to the performance of the neural-network approach.
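
    A minimal sketch of the SVD-based identification idea described above: the leading left singular vectors of a matrix of training spectra form a signature subspace per object type, and a test spectrum is assigned to the class whose subspace reconstructs it with the smallest residual. The data are synthetic placeholders, not GPR measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_class(freq, n=20, dim=64):
    # synthetic spectral vectors for one object type, plus noise
    base = np.sin(np.outer(np.ones(n), np.linspace(0.0, freq, dim)))
    return base + 0.05 * rng.standard_normal((n, dim))

training = {"pipe": make_class(3.0), "void": make_class(7.0)}

# Training: keep the leading left singular vectors per class as its
# identification feature (signature subspace).
subspaces = {}
for name, X in training.items():
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    subspaces[name] = U[:, :3]          # rank-3 signature subspace

# Classification: smallest residual after orthogonal projection wins.
test = make_class(3.0, n=1)[0]          # unseen "pipe"-like spectrum
residuals = {name: np.linalg.norm(test - U @ (U.T @ test))
             for name, U in subspaces.items()}
print("classified as:", min(residuals, key=residuals.get))
```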

  2. Cost/benefit analysis of advanced materials technology candidates for the 1980's, part 2

    NASA Technical Reports Server (NTRS)

    Dennis, R. E.; Maertins, H. F.

    1980-01-01

    Cost/benefit analyses to evaluate advanced material technologies projects considered for general aviation and turboprop commuter aircraft through estimated life-cycle costs, direct operating costs, and development costs are discussed. Specifically addressed is the selection of technologies to be evaluated; development of property goals; assessment of candidate technologies on typical engines and aircraft; sensitivity analysis of the changes in property goals on performance and economics, cost, and risk analysis for each technology; and ranking of each technology by relative value. The cost/benefit analysis was applied to a domestic, nonrevenue producing, business-type jet aircraft configured with two TFE731-3 turbofan engines, and to a domestic, nonrevenue producing, business type turboprop aircraft configured with two TPE331-10 turboprop engines. In addition, a cost/benefit analysis was applied to a commercial turboprop aircraft configured with a growth version of the TPE331-10.

  3. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

    Method (Satisficing Method), Disjunctive Method, Standard Level, Elimination by Aspects, Lexicographic Semiorder, Lexicographic Method, Ordinal Weighted Sum...framework for sensitivity analysis of hierarchical additive value models and standardizes the sensitivity analysis notation and terminology. Finally

  4. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  5. Structural Configuration Systems Analysis for Advanced Aircraft Fuselage Concepts

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Welstead, Jason R.; Quinlan, Jesse R.; Guynn, Mark D.

    2016-01-01

    Structural configuration analysis of an advanced aircraft fuselage concept is investigated. This concept is characterized by a double-bubble section fuselage with rear-mounted engines. Based on lessons learned from structural systems analysis of unconventional aircraft, high-fidelity finite-element models (FEM) are developed for evaluating the structural performance of three double-bubble section configurations. Structural sizing and stress analysis are applied for design improvement and weight reduction. Among the three double-bubble configurations, the double-D cross-section fuselage design was found to have a relatively lower structural weight. The structural FEM weights of these three double-bubble fuselage section concepts are also compared with several cylindrical fuselage models. Since these fuselage concepts are different in size, shape and material, the fuselage structural FEM weights are normalized by the corresponding passenger floor area for a relative comparison. This structural systems analysis indicates that an advanced composite double-D section fuselage may have a relative structural weight ratio advantage over a conventional aluminum fuselage. Ten commercial and conceptual aircraft fuselage structural weight estimates, which are empirically derived from the corresponding maximum takeoff gross weight, are also presented and compared with the FEM-based estimates for possible correlation. A conceptual full vehicle FEM model with a double-D fuselage is also developed for preliminary structural analysis and weight estimation.

  6. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX derives. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most outputs and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the input variables' probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission peaks time matching was elaborated in order to complement
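
    For reference, a compact sketch of the Morris screening method used here: randomized one-at-a-time trajectories yield "elementary effects" per input factor, whose mean absolute value (mu*) ranks influence and whose standard deviation flags nonlinearity or interactions. The model below is a toy stand-in for the dispersion code, with inputs scaled to [0, 1].

```python
import numpy as np

def model(x):
    # toy response standing in for an aggregated dispersion output
    return x[0] ** 2 + 0.5 * x[1] + 0.01 * x[1] * x[2]

def morris(model, k, r=50, delta=0.1, seed=3):
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, k))
    for traj in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        y_prev = model(x)
        for i in rng.permutation(k):          # move one factor at a time
            x[i] += delta
            y = model(x)
            effects[traj, i] = (y - y_prev) / delta  # elementary effect
            y_prev = y
    return np.abs(effects).mean(axis=0), effects.std(axis=0)  # mu*, sigma

mu_star, sigma = morris(model, k=3)
print("mu*:", mu_star)
print("ranking, most to least influential:", np.argsort(mu_star)[::-1])
```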

  7. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
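
    The kind of computation LSENS automates can be sketched on a toy A -> B -> C mechanism: integrate the kinetic ODEs, then estimate the sensitivity of a species concentration to a rate coefficient by re-solving with the coefficient perturbed. This brute-force finite-difference approach stands in for the code's built-in sensitivity analysis (SciPy assumed).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2):
    # kinetic ODEs for A -> B -> C with rate coefficients k1, k2
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def b_at(tf, k1, k2):
    sol = solve_ivp(rhs, (0.0, tf), [1.0, 0.0, 0.0], args=(k1, k2),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]              # concentration of B at final time

k1, k2, tf = 2.0, 1.0, 1.5
base = b_at(tf, k1, k2)
h = 1e-4 * k1                        # small relative perturbation of k1
dB_dk1 = (b_at(tf, k1 + h, k2) - b_at(tf, k1 - h, k2)) / (2.0 * h)
print("B(tf):", base)
print("normalized sensitivity (k1/B) dB/dk1:", dB_dk1 * k1 / base)
```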

  8. GPU-based Integration with Application in Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar

    2010-05-01

    The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modelling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One important class of methods for sensitivity analysis is Monte Carlo based, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time step, and the number of time steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, and (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but also for parallel implementations. Good scrambling is
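
    A CPU-side sketch of the scrambled quasirandom sequences discussed above (the GPU generation itself is not reproduced): Owen-type scrambled Sobol points are available through scipy.stats.qmc and can feed a plain quasi-Monte Carlo integration estimate. The integrand is a toy stand-in for an aggregated pollution measure.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=True, seed=42)
points = sampler.random(2 ** 12)      # 4096 scrambled points in [0, 1)^2

# Toy integrand with a known exact integral over the unit square.
f = np.exp(-points[:, 0]) * np.sin(np.pi * points[:, 1])
estimate = f.mean()
exact = (1.0 - np.exp(-1.0)) * 2.0 / np.pi
print("QMC estimate:", estimate, " abs error:", abs(estimate - exact))
```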

  9. Nano risk analysis: advancing the science for nanomaterials risk management.

    PubMed

    Shatkin, Jo Anne; Abbott, Linda Carolyn; Bradley, Ann E; Canady, Richard Alan; Guidotti, Tee; Kulinowski, Kristen M; Löfstedt, Ragnar E; Louis, Garrick; MacDonell, Margaret; Maynard, Andrew D; Paoli, Greg; Sheremeta, Lorraine; Walker, Nigel; White, Ronald; Williams, Richard

    2010-11-01

    Scientists, activists, industry, and governments have raised concerns about health and environmental risks of nanoscale materials. The Society for Risk Analysis convened experts in September 2008 in Washington, DC to deliberate on issues relating to the unique attributes of nanoscale materials that raise novel concerns about health risks. This article reports on the overall themes and findings of the workshop, uncovering the underlying issues for each of these topics that become recurring themes. The attributes of nanoscale particles and other nanomaterials that present novel issues for risk analysis are evaluated in a risk analysis framework, identifying challenges and opportunities for risk analysts and others seeking to assess and manage the risks from emerging nanoscale materials and nanotechnologies. Workshop deliberations and recommendations for advancing the risk analysis and management of nanotechnologies are presented.

  10. Sensitivity analysis of water quality for Delhi stretch of the River Yamuna, India.

    PubMed

    Parmar, D L; Keshari, Ashok K

    2012-03-01

    Simulation models are used to aid decision makers in water pollution control and management in river systems. However, uncertainty in model parameters affects the model predictions and hence the pollution control decision. Therefore, it is often necessary to identify the model parameters that significantly affect the model output uncertainty prior to, or as a supplement to, model application to water pollution control and planning problems. In this study, sensitivity analysis, as a tool for uncertainty analysis, was carried out to assess the sensitivity of water quality to (a) model parameters and (b) pollution abatement measures such as wastewater treatment, waste discharge, and flow augmentation from an upstream reservoir. In addition, sensitivity analysis for the "best practical solution" was carried out to help decision makers choose an appropriate option. The Delhi stretch of the river Yamuna was considered as a case study. The QUAL2E model is used for water quality simulation. The results obtained indicate that parameters K(1) (deoxygenation constant) and K(3) (settling oxygen demand), the rate of biochemical decomposition of organic matter and the rate of BOD removal by settling, respectively, are the most sensitive parameters for the considered river stretch. Different combinations of variations in K(1) and K(2) also revealed similar results, giving a better understanding of the interdependence of K(1) and K(2). Also, among the pollution abatement methods, a change (perturbation) in the wastewater treatment level (primary, secondary, tertiary, or advanced) has the greatest effect on the uncertainty of the simulated dissolved oxygen and biochemical oxygen demand concentrations.

  11. Advanced hydrogen/oxygen thrust chamber design analysis

    NASA Technical Reports Server (NTRS)

    Shoji, J. M.

    1973-01-01

    The results are reported of the advanced hydrogen/oxygen thrust chamber design analysis program. The primary objectives of this program were to: (1) provide an in-depth analytical investigation to develop thrust chamber cooling and fatigue life limitations of an advanced, high pressure, high performance H2/O2 engine design of 20,000-pound (88,960 N) thrust; and (2) integrate the existing heat transfer analysis, thermal fatigue and stress aspects for advanced chambers into a comprehensive computer program. Thrust chamber designs and analyses were performed to evaluate various combustor materials, coolant passage configurations (tubes and channels), and cooling circuits to define the nominal 1900 psia (1.31 x 10^7 N/sq m) chamber pressure, 300-cycle life thrust chamber. The cycle life capability of the selected configuration was then determined for three duty cycles. Also, the influence of cycle life and chamber pressure on thrust chamber design was investigated by varying the cycle life requirements at the nominal chamber pressure and by varying the chamber pressure at the nominal cycle life requirement.

  12. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide
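
    The computational saving claimed above can be illustrated with a small linear-algebra sketch (a hypothetical 2x2 system, not the authors' finite-element discretization): once the nominal system A x = b is factored as LU, each sensitivity dx/dp costs only one extra back-substitution, since A (dx/dp) = db/dp - (dA/dp) x.

        # Minimal sketch: reusing a nominal LU factorization for sensitivities.
        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        A = np.array([[4.0, 1.0],
                      [1.0, 3.0]])          # nominal system matrix (illustrative)
        b = np.array([1.0, 2.0])
        dA_dp = np.array([[1.0, 0.0],
                          [0.0, 0.0]])      # assumed dependence of A on a parameter p
        db_dp = np.zeros(2)

        lu, piv = lu_factor(A)              # factor once
        x = lu_solve((lu, piv), b)          # nominal solution
        dx_dp = lu_solve((lu, piv), db_dp - dA_dp @ x)   # sensitivity from the same LU
        print(x, dx_dp)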

  13. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations

    SciTech Connect

    Petzold, L; Cao, Y; Li, S; Serban, R

    2005-08-09

    Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
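
    For a plain ODE (the paper treats the more general DAE and PDE cases), the forward method amounts to augmenting y' = f(y, p) with the sensitivity equation s' = (df/dy) s + df/dp, where s = dy/dp. A minimal sketch with an assumed scalar example whose exact answer is known:

        # Minimal sketch: forward sensitivity for y' = -p*y, y(0) = 1.
        # Exact solution: y = exp(-p t), so s = dy/dp = -t exp(-p t).
        import numpy as np
        from scipy.integrate import solve_ivp

        p = 0.5

        def rhs(t, z):
            y, s = z
            return [-p * y,        # state equation
                    -p * s - y]    # forward sensitivity equation

        sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
        t_end = sol.t[-1]
        print(sol.y[1, -1], -t_end * np.exp(-p * t_end))   # computed vs. exact dy/dp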

  14. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero-order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The radiative transfer model (RTM) behaves more non-linearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with each 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order sensitivities as well as interactions between the parameters change with water- and energy-rich environments.
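
    The interaction percentages quoted above come from variance-based GSA, where the gap between total-order (ST) and first-order (S1) Sobol indices measures each factor's interactions. A minimal sketch of that computation, assuming the SALib package and a toy model with illustrative factor names and bounds (not the ZRT model itself):

        # Minimal sketch: parameter interactions as ST - S1 with SALib.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 3,
                   "names": ["soil_moisture", "rms_height", "vwc"],   # illustrative
                   "bounds": [[0.05, 0.45], [0.1, 3.0], [0.0, 5.0]]}

        X = saltelli.sample(problem, 1024)
        Y = X[:, 0] * X[:, 1] + X[:, 2]       # toy model with one interaction term
        Si = sobol.analyze(problem, Y)
        interactions = Si["ST"] - Si["S1"]    # per-factor interaction contribution
        print(dict(zip(problem["names"], interactions.round(3))))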

  15. Sensitivity analysis of channel-bend hydraulics influenced by vegetation

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.

    2015-12-01

    Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.

  16. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD in some cases.
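
    The AD-versus-DD comparison can be reproduced in miniature with any modern AD tool. The sketch below uses JAX (not the Fortran-oriented AD applied in the paper) on an assumed toy function standing in for a solver output, contrasting the exact gradient with central divided differences:

        # Minimal sketch: automatic differentiation vs. divided differences.
        import jax
        import jax.numpy as jnp

        def coeff(params):
            # toy stand-in for a force coefficient depending on two damping parameters
            k2, k4 = params
            return jnp.sin(k2) * jnp.exp(-k4) + k2 * k4**2

        p = jnp.array([0.3, 0.7])
        exact = jax.grad(coeff)(p)            # AD: exact to machine precision

        h = 1e-5                              # divided differences: step-size dependent
        eye = jnp.eye(2)
        dd = jnp.array([(coeff(p + h * eye[i]) - coeff(p - h * eye[i])) / (2 * h)
                        for i in range(2)])
        print(exact, dd)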

  17. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
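
    For reference, the headline metric is straightforward to compute. A minimal sketch, normalizing the RMSE by plant capacity (one common convention; the paper's exact normalization is not stated in the abstract):

        # Minimal sketch: normalized root mean squared error (NRMSE).
        import numpy as np

        def nrmse(forecast, observed, capacity):
            rmse = np.sqrt(np.mean((forecast - observed) ** 2))
            return rmse / capacity

        obs = np.array([10.0, 25.0, 40.0, 30.0])   # kW, illustrative values
        fc  = np.array([12.0, 22.0, 43.0, 27.0])
        print(nrmse(fc, obs, capacity=51.0))       # 51-kW plant as in the study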

  18. Dynamic global sensitivity analysis in bioreactor networks for bioethanol production.

    PubMed

    Ochoa, M P; Estrada, V; Di Maggio, J; Hoch, P M

    2016-01-01

    Dynamic global sensitivity analysis (GSA) was performed for three different dynamic bioreactor models of increasing complexity, in order to identify the parameters that most contribute to uncertainty in model outputs: a fermenter for bioethanol production; a bioreactor network, in which two types of bioreactors were considered (aerobic for biomass production and anaerobic for bioethanol production); and a co-fermenter bioreactor. Sobol's method was used to calculate time profiles for the sensitivity indices. Numerical results have shown the time-variant influence of uncertain parameters on model variables, and the most influential model parameters have been determined. For the model of the bioethanol fermenter, μmax (maximum growth rate) and Ks (half-saturation constant) are the parameters with the largest contribution to model variable uncertainty; in the bioreactor network, the most influential parameter is μmax,1 (maximum growth rate in bioreactor 1); whereas λ (glucose-to-total sugars concentration ratio in the feed) is the most influential parameter over all model variables in the co-fermentation bioreactor.

  19. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-10-02

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.

  20. Space Shuttle Orbiter entry guidance and control system sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Stone, H. W.; Powell, R. W.

    1976-01-01

    An approach has been developed to determine the guidance and control system sensitivity to off-nominal aerodynamics for the Space Shuttle Orbiter during entry. This approach, which uses a nonlinear six-degree-of-freedom interactive, digital simulation, has been applied to both the longitudinal and lateral-directional axes for a portion of the orbiter entry. Boundary values for each of the aerodynamic parameters have been identified, the key parameters have been determined, and system modifications that will increase system tolerance to off-nominal aerodynamics have been recommended. The simulations were judged by specified criteria and the performance was evaluated by use of key dependent variables. The analysis is now being expanded to include the latest shuttle guidance and control systems throughout the entry speed range.

  1. Neutron activation analysis: A sensitive test for trace elements

    SciTech Connect

    Hossain, T.Z. (Ward Lab.)

    1992-01-01

    This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10^12 neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10^-8) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.

  2. Sensitivity analysis and optimization of thin-film thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Harsha Choday, Sri; Roy, Kaushik

    2013-06-01

    The cooling performance of a thermoelectric (TE) material depends on the figure of merit (ZT = S²σT/κ), where S is the Seebeck coefficient and σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to the power factor (S²σ) and the thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter for the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in the best cooling is lower than the one that yields the highest ZT. We also establish the level of contact parasitics below which their impact on TE cooling is negligible.
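
    The relative weighting that ZT assigns to its ingredients is easy to make explicit with logarithmic sensitivities: since ZT = S²σT/κ, a 1% change in S moves ZT by about 2%, while σ and κ each contribute about 1% (with opposite signs). A minimal sketch with illustrative, Bi2Te3-like parameter values:

        # Minimal sketch: normalized (logarithmic) sensitivities of ZT = S^2 * sigma * T / kappa.
        def zt(S, sigma, T, kappa):
            return S**2 * sigma * T / kappa

        S, sigma, T, kappa = 200e-6, 1e5, 300.0, 1.5    # illustrative values
        base = zt(S, sigma, T, kappa)
        eps = 1e-6
        for name in ("S", "sigma", "kappa"):
            args = {"S": S, "sigma": sigma, "T": T, "kappa": kappa}
            args[name] *= 1 + eps
            print(name, (zt(**args) / base - 1) / eps)   # approx. 2, 1, -1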

  3. Sensitivity and uncertainty analysis of the recharge boundary condition

    NASA Astrophysics Data System (ADS)

    Jyrkama, M. I.; Sykes, J. F.

    2006-01-01

    The reliability analysis method is integrated with MODFLOW to study the impact of recharge on the groundwater flow system at a study area in New Jersey. The performance function is formulated in terms of head or flow rate at a pumping well, while the recharge sensitivity vector is computed efficiently by implementing the adjoint method in MODFLOW. The developed methodology not only quantifies the reliability of head at the well in terms of uncertainties in the recharge boundary condition, but it also delineates areas of recharge that have the highest impact on the head and flow rate at the well. The results clearly identify the most important land use areas that should be protected in order to maintain the head and hence production at the pumping well. These areas extend far beyond the steady state well capture zone used for land use planning and management within traditional wellhead protection programs.

  4. Sensitivity analysis for causal inference using inverse probability weighting.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C

    2011-09-01

    Evaluation of the impact of potential uncontrolled confounding is an important component of causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view of how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score, which can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
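
    The flavor of the approach can be conveyed with a small simulation: perturb the propensity score by a multiplicative factor and watch the inverse-probability-weighted effect estimate drift. The error model and numbers below are illustrative assumptions, not the authors' parametrization:

        # Minimal sketch: IPW estimate under a multiplicative propensity error.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        x = rng.normal(size=n)                    # confounder
        p_true = 1 / (1 + np.exp(-x))             # true propensity score
        a = rng.binomial(1, p_true)               # treatment assignment
        y = 2.0 * a + x + rng.normal(size=n)      # outcome; true effect = 2

        def ipw_effect(lam):
            """Effect estimate when the working propensity is off by factor lam."""
            p = np.clip(p_true * lam, 1e-3, 1 - 1e-3)
            w1, w0 = a / p, (1 - a) / (1 - p)
            return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

        for lam in (0.8, 1.0, 1.2):
            print(lam, round(ipw_effect(lam), 3))   # bias grows as lam departs from 1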

  5. Control sensitivity indices for stability analysis of HVdc systems

    SciTech Connect

    Nayak, O.B.; Gole, A.M.; Chapman, D.G.; Davies, J.B.

    1995-10-01

    This paper presents a new concept called the "Control Sensitivity Index" or CSI for the stability analysis of HVdc converters connected to weak ac systems. The CSI for a particular control mode can be defined as the ratio of incremental changes in the two system variables that are most relevant to that control mode. The index provides valuable information on the stability of the system and, unlike other approaches, aids in the design of the controller. It also plays an important role in defining non-linear gains for the controller. This paper offers a generalized formulation of the CSI and demonstrates its application through an analysis of the CSI for three modes of HVdc control. The conclusions drawn from the analysis are confirmed by a detailed electromagnetic transients simulation of the ac/dc system. The paper concludes that the CSI can be used to improve the controller design and that, for an inverter in a weak ac system, the conventional voltage control mode is more stable than the conventional γ control mode.

  6. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  7. Toward Sensitive and Accurate Analysis of Antibody Biotherapeutics by Liquid Chromatography Coupled with Mass Spectrometry

    PubMed Central

    An, Bo; Zhang, Ming

    2014-01-01

    Remarkable methodological advances in the past decade have expanded the application of liquid chromatography coupled with mass spectrometry (LC/MS) analysis of biotherapeutics. Currently, LC/MS represents a promising alternative or supplement to the traditional ligand binding assay (LBA) in the pharmacokinetic, pharmacodynamic, and toxicokinetic studies of protein drugs, owing to the rapid and cost-effective method development, high specificity and reproducibility, low sample consumption, the capacity of analyzing multiple targets in one analysis, and the fact that a validated method can be readily adapted across various matrices and species. While promising, technical challenges associated with sensitivity, sample preparation, method development, and quantitative accuracy need to be addressed to enable full utilization of LC/MS. This article introduces the rationale and technical challenges of LC/MS techniques in biotherapeutics analysis and summarizes recently developed strategies to alleviate these challenges. Applications of LC/MS techniques on quantification and characterization of antibody biotherapeutics are also discussed. We speculate that despite the highly attractive features of LC/MS, it will not fully replace traditional assays such as LBA in the foreseeable future; instead, the forthcoming trend is likely the conjunction of biochemical techniques with versatile LC/MS approaches to achieve accurate, sensitive, and unbiased characterization of biotherapeutics in highly complex pharmaceutical/biologic matrices. Such combinations will constitute powerful tools to tackle the challenges posed by the rapidly growing needs for biotherapeutics development. PMID:25185260

  8. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean region basin (the Llobregat basin, Catalonia, Spain). Of all InVEST modules and sub-modules, only the behaviour of the water provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each one of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and between the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
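
    A minimal sketch of the Morris screening step, assuming the SALib package and a toy water-yield surrogate in place of InVEST (the factor names mirror the three coefficients above, but the bounds and model are invented for illustration):

        # Minimal sketch: Morris OAT screening with SALib.
        import numpy as np
        from SALib.sample.morris import sample as morris_sample
        from SALib.analyze import morris

        problem = {"num_vars": 3,
                   "names": ["Z", "prec", "eto"],
                   "bounds": [[1.0, 30.0], [300.0, 1200.0], [400.0, 900.0]]}

        X = morris_sample(problem, N=100, num_levels=4)
        Y = 0.9 * X[:, 1] - 0.8 * X[:, 2] + 0.01 * X[:, 0]  # toy surrogate model
        Si = morris.analyze(problem, X, Y, num_levels=4)
        print(dict(zip(problem["names"], np.round(Si["mu_star"], 2))))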

  9. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
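
    The propagation scheme is simple to emulate: draw from each step's empirical residual distribution and push the perturbed value through the model chain. The model forms, gains and residual spreads below are invented placeholders for the paper's fitted models:

        # Minimal sketch: Monte Carlo propagation of per-model residuals.
        import numpy as np

        rng = np.random.default_rng(1)
        res_poa = rng.normal(0, 0.02, 500)     # stand-ins for empirical residual samples
        res_eff = rng.normal(0, 0.01, 500)
        res_dc  = rng.normal(0, 0.015, 500)

        ghi = 800.0                            # measured irradiance, W/m^2
        out = []
        for _ in range(2000):
            poa = ghi * 1.1 * (1 + rng.choice(res_poa))    # step 1: POA irradiance
            eff = poa * 0.95 * (1 + rng.choice(res_eff))   # step 2: effective irradiance
            pdc = eff * 0.18 * (1 + rng.choice(res_dc))    # step 3: DC power
            out.append(pdc)
        print(np.mean(out), np.std(out) / np.mean(out))    # mean and relative spread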

  10. A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity

    NASA Astrophysics Data System (ADS)

    Tierney, G.; Posselt, D. J.; Booth, J. F.

    2015-12-01

    The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies the atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds), as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC-relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in the control parameter space is performed via an ensemble of WRF runs coupled with

  11. An educationally inspired illustration of two-dimensional Quantitative Microbiological Risk Assessment (QMRA) and sensitivity analysis.

    PubMed

    Vásquez, G A; Busschaert, P; Haberbeck, L U; Uyttendaele, M; Geeraerd, A H

    2014-11-03

    Quantitative Microbiological Risk Assessment (QMRA) is a structured methodology used to assess the risk involved in ingestion of a pathogen. It applies mathematical models combined with an accurate exploitation of data sets, represented by distributions and - in the case of two-dimensional Monte Carlo simulations - their hyperparameters. This research aims to highlight the background information, assumptions and truncations of a two-dimensional QMRA and an advanced sensitivity analysis. We believe that such a detailed listing is not always clearly presented in actual risk assessment studies, while it is essential to ensure reliable and realistic simulations and interpretations. As a case study, we consider the occurrence of listeriosis from smoked fish products in Belgium during the period 2008-2009, using two-dimensional Monte Carlo simulation and two sensitivity analysis methods (Spearman correlation and Sobol sensitivity indices) to estimate the most relevant factors of the final risk estimate. A risk estimate of 0.018% per consumption of contaminated smoked fish by an immunocompromised person was obtained. The final estimate of listeriosis cases (23) is within the actual reported result obtained for the same period and for the same population. Variability in the final risk estimate is determined by the variability regarding (i) consumer refrigerator temperatures, (ii) the reference growth rate of L. monocytogenes, (iii) the minimum growth temperature of L. monocytogenes and (iv) consumer portion size. Variability in the initial contamination level of L. monocytogenes tends to appear as a determinant of risk variability only when the minimum growth temperature is not included in the sensitivity analysis; when it is included, the impact of the variability in the initial contamination level of L. monocytogenes disappears. Uncertainty determinants of the final risk indicated the need of gathering more information on the reference growth rate and the minimum
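
    The defining feature of a two-dimensional Monte Carlo is the nested loop: the outer loop samples uncertain hyperparameters, and the inner loop samples variability across servings given those hyperparameters. A minimal sketch with invented distributions and a toy dose-response model (not the study's fitted model):

        # Minimal sketch: two-dimensional (nested) Monte Carlo simulation.
        import numpy as np

        rng = np.random.default_rng(2)
        mean_risks = []
        for _ in range(200):                          # outer loop: uncertainty
            mu_T = rng.normal(7.0, 0.5)               # uncertain mean fridge temperature (C)
            slope = rng.normal(0.12, 0.02)            # uncertain log10 growth slope
            T = rng.normal(mu_T, 2.0, size=5000)      # inner loop: variability
            dose = 10 ** (2.0 + slope * (T - 5.0))    # toy exposure model
            mean_risks.append(np.mean(1 - np.exp(-1e-7 * dose)))  # exponential dose-response
        print(np.percentile(mean_risks, [2.5, 50, 97.5]))  # uncertainty band on mean risk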

  12. Composite Structure Modeling and Analysis of Advanced Aircraft Fuselage Concepts

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Sorokach, Michael R.

    2015-01-01

    The NASA Environmentally Responsible Aviation (ERA) project and the Boeing Company are collaborating to advance unitized, damage-arresting composite airframe technology with application to the Hybrid-Wing-Body (HWB) aircraft. The testing of a HWB fuselage section with Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) construction is presently being conducted at NASA Langley. Based on lessons learned from previous HWB structural design studies, improved finite-element models (FEM) of the HWB multi-bay and bulkhead assembly are developed to evaluate the performance of the PRSEUS construction. In order to assess the comparative weight reduction benefits of the PRSEUS technology, conventional skin-stringer-frame models of a cylindrical and a double-bubble section fuselage concept are developed. Stress analysis with the design cabin-pressure load and scenario-based case studies are conducted for design improvement in each case. Alternate analyses with stitched composite hat stringers and C-frames are also presented, in addition to the foam-core sandwich frame and pultruded rod-stringer construction. The FEM structural stresses, strains and weights are computed and compared for relative weight/strength benefit assessment. The structural analysis and specific weight comparison of these stitched composite advanced aircraft fuselage concepts demonstrated that the pressurized HWB fuselage section assembly can be structurally as efficient as the conventional cylindrical fuselage section with composite stringer-frame and PRSEUS construction, and significantly better than the conventional aluminum construction and the double-bubble section concept.

  13. ADVISOR: a systems analysis tool for advanced vehicle modeling

    NASA Astrophysics Data System (ADS)

    Markel, T.; Brooker, A.; Hendricks, T.; Johnson, V.; Kelly, K.; Kramer, B.; O'Keefe, M.; Sprik, S.; Wipke, K.

    This paper provides an overview of the Advanced Vehicle Simulator (ADVISOR), the US Department of Energy's (DOE's) advanced vehicle simulation tool, written in the MATLAB/Simulink environment and developed by the National Renewable Energy Laboratory. ADVISOR provides the vehicle engineering community with an easy-to-use, flexible, yet robust and supported analysis package for advanced vehicle modeling. It is primarily used to quantify the fuel economy, the performance, and the emissions of vehicles that use alternative technologies including fuel cells, batteries, electric motors, and internal combustion engines in hybrid (i.e. multiple power sources) configurations. It excels at quantifying the relative change that can be expected due to the implementation of a technology compared to a baseline scenario. ADVISOR's capabilities and limitations are presented, and the power source models that are included in ADVISOR are discussed. Finally, several applications of the tool are presented to highlight ADVISOR's functionality. The content of this paper is based on a presentation made at the 'Development of Advanced Battery Engineering Models' workshop held in Crystal City, Virginia in August 2001.

  14. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    SciTech Connect

    Arbanas, G.; Williams, M.L.; Leal, L.C.; Dunn, M.E.; Khuwaileh, B.A.; Wang, C.; Abdel-Khalik, H.

    2015-01-15

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, “AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications,” Trans. Am. Nucl. Soc. 86, 118–119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties needed to yield a given nuclear application's target response uncertainty, at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way, and how it can be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.

  15. Recent Advances of Cobalt(II/III) Redox Couples for Dye-Sensitized Solar Cell Applications.

    PubMed

    Giribabu, Lingamallu; Bolligarla, Ramababu; Panigrahi, Mallika

    2015-08-01

    In recent years, dye-sensitized solar cells (DSSCs) have emerged as one of the alternatives for the global energy crisis. DSSCs have achieved a certified efficiency of >11% by using the I⁻/I₃⁻ redox couple. In order to commercialize the technology, almost all components of the device have to be improved. Among the various components of DSSCs, the redox couple that regenerates the oxidized sensitizer plays a crucial role in achieving high efficiency and durability of the cell. However, the I⁻/I₃⁻ redox couple has certain limitations, such as the absorption of triiodide up to 430 nm and the volatile nature of iodine, which also corrodes silver-based current collectors. These limitations are obstructing the commercialization of this technology. For this reason, one has to identify alternative redox couples. In this regard, the Co(II/III) redox couple is found to be the best alternative to the existing I⁻/I₃⁻ redox couple. Recently, DSSC test cell efficiency has risen to 13% by using a cobalt redox couple. This review emphasizes the recent development of Co(II/III) redox couples for DSSC applications.

  16. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    NASA Astrophysics Data System (ADS)

    Arbanas, G.; Williams, M. L.; Leal, L. C.; Dunn, M. E.; Khuwaileh, B. A.; Wang, C.; Abdel-Khalik, H.

    2015-01-01

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, "AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications," Trans. Am. Nucl. Soc. 86, 118-119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties needed to yield a given nuclear application's target response uncertainty, at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way, and how it can be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.

  17. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    SciTech Connect

    Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany

    2015-01-01

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties needed to yield a given nuclear application's target response uncertainty, at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way, and how it can be used to optimize uncertainties of IBEs and differential cross section data simultaneously.

  18. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.

  19. Advanced Wireless Power Transfer Vehicle and Infrastructure Analysis (Presentation)

    SciTech Connect

    Gonder, J.; Brooker, A.; Burton, E.; Wang, J.; Konan, A.

    2014-06-01

    This presentation discusses current research at NREL on advanced wireless power transfer vehicle and infrastructure analysis. The potential benefits of E-roadway include more electrified driving miles from battery electric vehicles, plug-in hybrid electric vehicles, or even properly equipped hybrid electric vehicles (i.e., more electrified miles could be obtained from a given battery size, or electrified driving miles could be maintained while using smaller and less expensive batteries, thereby increasing cost competitiveness and potential market penetration). The system optimization aspect is key given the potential impact of this technology on the vehicles, the power grid and the road infrastructure.

  20. Creep analysis of fuel plates for the Advanced Neutron Source

    SciTech Connect

    Swinson, W.F.; Yahr, G.T.

    1994-11-01

    The reactor for the planned Advanced Neutron Source will use closely spaced arrays of fuel plates. The plates are thin and will have a core containing enriched uranium silicide fuel clad in aluminum. The heat load caused by the nuclear reactions within the fuel plates will be removed by flowing high-velocity heavy water through narrow channels between the plates. However, the plates will still be at elevated temperatures while in service, and the potential for excessive plate deformation because of creep must be considered. An analysis including creep has been performed to determine the deformation and stresses caused by temperature over a given time span, and the results are reported herein.

  1. Ecological sensitivity analysis in Fengshun County based on GIS

    NASA Astrophysics Data System (ADS)

    Zhou, Xia; Zhang, Hong-ou

    2008-10-01

    Ecological sensitivity in Fengshun County was analyzed using GIS technology. Several factors were considered, including sensitivity to acid rain, soil erosion, flooding and geological disaster. Meanwhile, nature reserves and economic indicators were also considered. After the single-factor sensitivity assessments, the overall ecological sensitivity was computed with GIS software. Ranging from low to extreme, the ecological sensitivity was divided into five levels: not sensitive, low sensitive, moderately sensitive, highly sensitive and extremely sensitive. The results showed high sensitivity in south-east Fengshun. From the sensitivity levels and environmental characteristics, ecological function zones were also worked out, comprising three major ecological function zones and ten sub-ecological zones. The three major ecological function zones were the hill eco-environmental function zone, the platform and plain ecological construction zone, and the ecological restoration and control zone. Based on the analyzed results, strategies for environmental protection in each zone were put forward, providing a basis for urban planning and environmental protection planning in Fengshun.

  2. Spatial risk assessment for critical network infrastructure using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Möderl, Michael; Rauch, Wolfgang

    2011-12-01

    The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate performance decrease under the investigated threat scenarios. Thereby, parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data of the same threat scenario derived from structured interviews and cluster analysis of events in the past. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is applicable likewise to other critical network infrastructure. The aim of the approach is to help decision makers in choosing zones for preventive measures.

  3. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known. Decision making tools are therefore needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented for several structural configurations. The FAST method is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
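
    A minimal sketch of a FAST computation, assuming the SALib package, with invented parameter names, bounds and a toy stiffness-like response standing in for the vibroacoustic TL models:

        # Minimal sketch: first-order FAST indices with SALib.
        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        problem = {"num_vars": 3,
                   "names": ["thickness", "youngs_modulus", "density"],
                   "bounds": [[1e-3, 5e-3], [6e10, 8e10], [2.6e3, 2.9e3]]}

        X = fast_sampler.sample(problem, 1000)
        Y = X[:, 0] ** 2 * X[:, 1] / X[:, 2]      # toy response standing in for TL
        Si = fast.analyze(problem, Y)
        print(dict(zip(problem["names"], np.round(Si["S1"], 3))))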

  4. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through an extensive literature review. The reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°/75°C, -55°/100°C, and -55°/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA designs has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of the stress/strain in a CCGA assembly under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.

  5. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  6. Relative performance of academic departments using DEA with sensitivity analysis.

    PubMed

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

    The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries like the USA, UK, and Australia, but to the best of our knowledge we are using it for the first time in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.

  7. Robust and sensitive video motion detection for sleep analysis.

    PubMed

    Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard

    2014-05-01

    In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to the movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by a factor of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score, with a slight temporal misalignment of the starting time (<1 s) for one movement. We conclude that camera-based PLM detection during sleep is feasible and can give an indication of the PLMI score.

  8. Sensitivity and uncertainty analysis of a regulatory risk model

    SciTech Connect

    Kumar, A.; Manocha, A.; Shenoy, T.

    1999-07-01

    Health Risk Assessments (HRAs) are increasingly being used in the environmental decision-making process, from problem identification to the final clean-up activities. A key issue concerning the results of these risk assessments is the uncertainty associated with them. This uncertainty has been associated with highly conservative estimates of risk assessment parameters in past studies. The primary purpose of this study was to investigate error propagation through a risk model. A hypothetical glass plant situated in the state of California was studied. Air emissions from this plant were modeled using the ISCST2 model, and the risk was calculated using the ACE2588 model. Downwash was also considered during the concentration calculations. A sensitivity analysis on the risk computations identified five parameters - mixing depth for human consumption, deposition velocity, weathering constant, interception factor for vine crops, and average leaf vegetable consumption - which had the greatest impact on the calculated risk. A Monte Carlo analysis using these five parameters resulted in a distribution with a smaller percentage standard deviation than that of the input parameters.

  9. Time course analysis of baroreflex sensitivity during postural stress.

    PubMed

    Westerhof, Berend E; Gisolf, Janneke; Karemaker, John M; Wesseling, Karel H; Secher, Niels H; van Lieshout, Johannes J

    2006-12-01

    Postural stress requires immediate autonomic nervous action to maintain blood pressure. We determined time-domain cardiac baroreflex sensitivity (BRS) and the time delay (tau) between systolic blood pressure and interbeat interval variations during stepwise changes in the angle of the vertical body axis (alpha). The assumption was that with increasing postural stress, BRS becomes attenuated, accompanied by a shift in tau toward higher values. In 10 healthy young volunteers, alpha included 20 degrees head-down tilt (-20 degrees), supine (0 degrees), 30 and 70 degrees head-up tilt (30 degrees, 70 degrees), and free standing (90 degrees). Noninvasive blood pressures were analyzed over 6-min periods before and after each change in alpha. The BRS was determined by frequency-domain analysis and with xBRS, a cross-correlation time-domain method. On average, between 28 (-20 degrees) and 45 (90 degrees) xBRS estimates per minute became available. Following a change in alpha, xBRS reached a different mean level in the first minute in 78% of the cases, and in 93% after 6 min. With increasing alpha, BRS decreased: BRS = -10.1 sin(alpha) + 18.7 (r^2 = 0.99), with a tight correlation between xBRS and cross-spectral gain (r^2 approximately 0.97). Delay tau shifted toward higher values. In conclusion, in healthy subjects the sensitivity of the cardiac baroreflex obtained from the time domain decreases linearly with sin(alpha), and baroreflex adaptation to a physiological perturbation like postural stress starts rapidly. The decrease of BRS and the reduction of short tau values may be the result of reduced vagal activity with increasing alpha.

  10. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized: (1) subjective judgement in the PIRT process; (2) high cost due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and use of the same numerical grids for both scaled experiments and real plant applications; (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, these issues persist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgement and improve efficiency: treat numerical errors as special sensitivities against other physical uncertainties; only parameters having large uncertainty effects on design criteria are considered; (3) greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
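
    A minimal sketch of forward sensitivity analysis on a scalar ODE (a stand-in for the PDE systems discussed): the sensitivity s = du/dp is integrated alongside the solution by augmenting the right-hand side with ds/dt = (∂f/∂u)s + ∂f/∂p, then checked against the analytic derivative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Forward sensitivity for du/dt = -p*u:  s = du/dp obeys ds/dt = -p*s - u.
    def rhs(t, y, p):
        u, s = y
        return [-p * u, -p * s - u]

    p = 0.5
    sol = solve_ivp(rhs, (0.0, 4.0), [1.0, 0.0], args=(p,), rtol=1e-8)
    u_end, s_end = sol.y[:, -1]

    # Check against the analytic sensitivity d/dp exp(-p*t) = -t*exp(-p*t).
    t_end = sol.t[-1]
    print(s_end, -t_end * np.exp(-p * t_end))
    ```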

  11. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
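
    As a hedged illustration of this kind of parametric reliability model, the sketch below scores stage success for an n-engine cluster with and without an engine-out allowance; the binomial form and the numerical values (engine reliability, catastrophic fraction, switching reliability) are hypothetical, not the paper's model.

    ```python
    from math import comb

    def stage_reliability(n, r, cat_frac, switch_rel, engine_out=0):
        """Probability that an n-engine stage completes its burn, assuming up
        to `engine_out` benign failures are survivable. A failure counts as
        survivable only if it is non-catastrophic (prob 1 - cat_frac) and the
        engine-out switching logic works (prob switch_rel). Toy model."""
        benign = (1 - r) * (1 - cat_frac) * switch_rel
        return sum(comb(n, k) * r ** (n - k) * benign ** k
                   for k in range(engine_out + 1))

    for n in (4, 9):
        print(n,
              round(stage_reliability(n, 0.995, 0.2, 0.98), 5),
              round(stage_reliability(n, 0.995, 0.2, 0.98, engine_out=1), 5))
    ```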

  12. Tool for Sizing Analysis of the Advanced Life Support System

    NASA Technical Reports Server (NTRS)

    Yeh, Hue-Hsie Jannivine; Brown, Cheryl B.; Jeng, Frank J.

    2005-01-01

    Advanced Life Support Sizing Analysis Tool (ALSSAT) is a computer model for sizing and analyzing designs of environmental-control and life support systems (ECLSS) for spacecraft and surface habitats involved in the exploration of Mars and the Moon. It performs conceptual designs of advanced life support (ALS) subsystems that utilize physicochemical and biological processes to recycle air and water, and process wastes, in order to reduce the need for resource resupply. By assuming steady-state operations, ALSSAT is a means of investigating combinations of such subsystem technologies and thereby assisting in determining the most cost-effective technology combination available. ALSSAT can perform sizing analyses of ALS subsystems whose operation is dynamic or steady-state in nature. Developed in the Microsoft Excel spreadsheet software with the Visual Basic programming language, ALSSAT performs multiple-case trade studies based on the calculated ECLSS mass, volume, power, and Equivalent System Mass, as well as parametric studies obtained by varying the input parameters. ALSSAT's modular format is specifically designed for ease of future maintenance and upgrades.
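
    Equivalent System Mass, one of the metrics ALSSAT reports, can be sketched as a weighted sum of mass, volume, power, cooling, and crew time. The equivalency factors below are placeholders; real studies take them from mission-specific ALS reference tables.

    ```python
    def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                               crewtime_hr_per_yr, duration_yr,
                               v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.466):
        """Equivalent System Mass in kg. Equivalency factors (kg/m^3, kg/kW,
        kg per crew-hour) are illustrative placeholders only."""
        return (mass_kg
                + volume_m3 * v_eq
                + power_kw * p_eq
                + cooling_kw * c_eq
                + crewtime_hr_per_yr * duration_yr * ct_eq)

    # Hypothetical subsystem: 1200 kg, 8 m^3, 3.5 kW power and cooling,
    # 120 crew-hours/year of maintenance over a 2-year mission.
    print(equivalent_system_mass(1200, 8.0, 3.5, 3.5, 120, 2.0))
    ```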

  13. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  14. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on spatial scales from a few microns down to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. A table in the full article lists all of the currently available microscopes at the ALS; this article describes some of the microscopes and some of the early applications.

  15. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and, in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
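
    The simplest form of the sensitivity question LSENS addresses can be sketched by estimating the sensitivity of a species concentration to a rate coefficient via central finite differences on a toy A -> B -> C mechanism; LSENS itself uses the more efficient decoupled direct method rather than repeated solves.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy mechanism A -> B -> C with rate constants k1, k2.
    def rhs(t, y, k1, k2):
        a, b, c = y
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    def b_at(t_end, k1, k2):
        sol = solve_ivp(rhs, (0, t_end), [1.0, 0.0, 0.0],
                        args=(k1, k2), rtol=1e-9)
        return sol.y[1, -1]

    # Central-difference sensitivity dB/dk1 at t = 3.
    k1, k2, h = 2.0, 1.0, 1e-5
    s = (b_at(3.0, k1 + h, k2) - b_at(3.0, k1 - h, k2)) / (2 * h)
    print("dB/dk1 at t=3:", s)
    ```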

  16. Spectral comb mitigation to improve continuous-wave search sensitivity in Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Neunzert, Ansel; LIGO Scientific Collaboration; Virgo Collaboration

    2017-01-01

    Searches for continuous gravitational waves, such as those emitted by rapidly spinning non-axisymmetric neutron stars, are degraded by the presence of narrow noise "lines" in detector data. These lines either reduce the spectral band available for analysis (if identified as noise and removed) or cause spurious outliers (if unidentified). Many belong to larger structures known as combs: series of evenly spaced lines which appear across wide frequency ranges. This talk will focus on the challenges of comb identification and mitigation. I will discuss tools and methods for comb analysis, and case studies of comb mitigation at the LIGO Hanford detector site.
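
    A crude comb finder can be sketched as a scan over candidate spacings and offsets that counts spectral peaks landing near offset + n*spacing; the tolerance, grids, and peak list below are invented for illustration, and production LIGO line identification is considerably more careful.

    ```python
    import numpy as np

    def comb_score(peak_freqs, spacing, offset, tol=0.01):
        """Count spectral peaks lying within tol (Hz) of offset + n*spacing."""
        n = np.round((peak_freqs - offset) / spacing)
        residual = np.abs(peak_freqs - (offset + n * spacing))
        return int(np.sum(residual < tol))

    # Example: peaks contaminated by a 1 Hz comb offset by 0.25 Hz.
    peaks = np.array([1.25, 2.25, 3.25, 4.25, 7.25, 9.31, 11.02])
    best = max((comb_score(peaks, s, o), s, o)
               for s in np.arange(0.5, 2.01, 0.25)
               for o in np.arange(0.0, 0.51, 0.25))
    print("hits, spacing, offset:", best)
    ```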

  17. Advanced Automation for Ion Trap Mass Spectrometry-New Opportunities for Real-Time Autonomous Analysis

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, C. M.; Salmonson, J. D.; Yost, R. A.; Griffin, T. P.; Yates, N. A.; Lawless, James G. (Technical Monitor)

    1994-01-01

    The utility of MS/MS for both target compound analysis and the structure elucidation of unknowns has been described in a number of references. A broader acceptance of this technique has not yet been realized, as it requires large, complex, and costly instrumentation which has not been competitive with more conventional techniques. Recent advancements in ion trap mass spectrometry promise to change this situation. Although the ion trap's small size, sensitivity, and ability to perform multiple stages of mass spectrometry have made it eminently suitable for on-line, real-time monitoring applications, advanced automation techniques are required to make these capabilities more accessible to non-experts. Towards this end, we have developed custom software for the design and implementation of MS/MS experiments. This software allows the user to take full advantage of the ion trap's versatility with respect to ionization techniques, scan proxies, and ion accumulation/ejection methods. Additionally, expert system software has been developed for autonomous target compound analysis. This software has been linked to the ion trap control software and a commercial data system to bring all of the steps in the analysis cycle under control of the expert system. These software development efforts and their utilization for a number of trace analysis applications will be described.

  18. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high-fidelity geometry-based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line, and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., which facilitated rapid finite element analysis, sizing studies, and weight optimization. The high-quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications for the structural design of a conventional aircraft and a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for design and development of unconventional aircraft, the techniques for generation of accurate geometry and internal sub-structure and the automated interface with the high-fidelity analysis codes could also be applied towards the design of vehicles for the NASA Exploration and Space Science Mission projects.

  19. Advanced stoichiometric analysis of metabolic networks of mammalian systems.

    PubMed

    Orman, Mehmet A; Berthiaume, Francois; Androulakis, Ioannis P; Ierapetritou, Marianthi G

    2011-01-01

    Metabolic engineering tools have been widely applied to living organisms to gain a comprehensive understanding of cellular networks and to improve cellular properties. Metabolic flux analysis (MFA), flux balance analysis (FBA), and metabolic pathway analysis (MPA) are among the most popular tools in stoichiometric network analysis. Although these tools have been applied extensively to well-known microbial systems in the literature, various barriers prevent them from being utilized in mammalian cells. Limited experimental data, complex regulatory mechanisms, and the requirement for more complex nutrient media are some major obstacles in mammalian cell systems. However, mammalian cells have been used to produce therapeutic proteins, to characterize disease states or related abnormal metabolic conditions, and to analyze the toxicological effects of some medicinally important drugs. Therefore, there is a growing need for extending metabolic engineering principles to mammalian cells in order to understand their underlying metabolic functions. In this review article, advanced metabolic engineering tools developed for stoichiometric analysis, including MFA, FBA, and MPA, are described. Applications of these tools in mammalian cells are discussed in detail, and the challenges and opportunities are highlighted.
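
    Flux balance analysis reduces to a linear program: maximize a biomass objective subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a toy three-reaction network, using scipy's linprog (the network and bounds are invented for illustration):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Metabolites: A, B. Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass.
    S = np.array([[ 1, -1,  0],   # A balance
                  [ 0,  1, -1]])  # B balance
    bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10

    res = linprog(c=[0, 0, -1],          # maximize v3 == minimize -v3
                  A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)      # expect [10, 10, 10]
    ```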

  20. AEG-1 as a predictor of sensitivity to neoadjuvant chemotherapy in advanced epithelial ovarian cancer

    PubMed Central

    Wang, Yao; Jin, Xin; Song, Hongtao; Meng, Fanling

    2016-01-01

    Objectives: Astrocyte elevated gene-1 (AEG-1) plays a critical role in tumor progression and chemoresistance. The aim of the present study was to investigate the protein expression of AEG-1 in patients with epithelial ovarian cancer (EOC) who underwent debulking surgery after neoadjuvant chemotherapy (NAC). Materials and methods: The protein expression of AEG-1 was analyzed using immunohistochemistry in 162 patients with EOC. The relationship between AEG-1 expression and chemotherapy resistance was assessed using univariate and multivariate logistic regression analyses with covariate adjustments. Results: High AEG-1 expression was significantly associated with the International Federation of Gynecology and Obstetrics stage, age, serum cancer antigen-125 concentration, histological grade, the presence of residual tumor after the interval debulking surgery, and lymph node metastasis. Furthermore, AEG-1 expression was significantly higher in NAC-resistant disease than in NAC-sensitive disease (P<0.05). Multivariate analyses indicated that elevated AEG-1 expression predicted poor survival. Conclusion: Our findings indicate that AEG-1 may be a potential new biomarker for predicting chemoresistance and poor prognoses in patients with EOC. PMID:27143933

  1. Phase I Study of Daily Irinotecan as a Radiation Sensitizer for Locally Advanced Pancreatic Cancer

    SciTech Connect

    Fouchardiere, Christelle de la; Negrier, Sylvie; Labrosse, Hugues; Martel Lafay, Isabelle; Desseigne, Francoise; Meeus, Pierre; Tavan, David; Petit-Laurent, Fabien; Rivoire, Michel; Perol, David; Carrie, Christian

    2010-06-01

    Purpose: The study aimed to determine the maximum tolerated dose of daily irinotecan given with concomitant radiotherapy in patients with locally advanced adenocarcinoma of the pancreas. Methods and Materials: Between September 2000 and March 2008, 36 patients with histologically proven unresectable pancreas adenocarcinoma were studied prospectively. Irinotecan was administered daily, 1 to 2 h before irradiation. Doses were started at 6 mg/m² per day and then escalated by increments of 2 mg/m² every 3 patients. Radiotherapy was administered in 2-Gy fractions, 5 fractions per week, up to a total dose of 50 Gy to the tumor volume. Inoperability was confirmed by a surgeon involved in a multidisciplinary team. All images and responses were centrally reviewed by radiologists. Results: Thirty-six patients were enrolled over a period of 8 years through eight dose levels (6 mg/m² to 20 mg/m² per day). The maximum tolerated dose was determined to be 18 mg/m² per day. The dose-limiting toxicities were nausea/vomiting, diarrhea, anorexia, dehydration, and hypokalemia. The median survival time was 12.6 months with a median follow-up of 53.8 months. The median progression-free survival time was 6.5 months, and 4 patients (11.4%) with very good responses could undergo surgery. Conclusions: The maximum tolerated dose of irinotecan is 18 mg/m² per day for 5 weeks. Dose-limiting toxicities are mainly gastrointestinal. Even though efficacy was not the aim of this study, the results are very promising, with a median survival time of 12.6 months.

  2. Sensitivity analysis of a two-dimensional probabilistic risk assessment model using analysis of variance.

    PubMed

    Mokhtari, Amirhossein; Frey, H Christopher

    2005-12-01

    This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
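
    The ANOVA screening idea can be sketched by binning each input into quantile groups and ranking inputs by the one-way F value of the output across groups; the toy model and bin count below are illustrative, not the MFSPR model.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.uniform(size=(n, 3))                  # three model inputs
    y = 4 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=n)

    # Rank inputs by one-way ANOVA F value across quantile bins of each input.
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], [0.2, 0.4, 0.6, 0.8])
        groups = np.digitize(X[:, j], edges)      # five bins: 0..4
        F, p = f_oneway(*[y[groups == g] for g in range(5)])
        print(f"x{j}: F = {F:.1f}")
    ```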

  3. Carbonaceous materials and their advances as a counter electrode in dye-sensitized solar cells: challenges and prospects.

    PubMed

    Kouhnavard, Mojgan; Ludin, Norasikin Ahmad; Ghaffari, Babak V; Sopian, Kamarozzaman; Ikeda, Shoichiro

    2015-05-11

    Dye-sensitized solar cells (DSSCs) serve as low-cost alternatives to silicon solar cells because of their low material and fabrication costs. Usually, they utilize Pt as the counter electrode (CE) to catalyze the iodine redox couple and to complete the electric circuit. Given that Pt is a rare and expensive metal, various carbon materials have been intensively investigated because of their low costs, high surface areas, excellent electrochemical stabilities, reasonable electrochemical activities, and high corrosion resistances. In this feature article, we provide an overview of recent studies on the electrochemical properties and photovoltaic performances of carbon-based CEs (e.g., activated carbon, nanosized carbon, carbon black, graphene, graphite, carbon nanotubes, and composite carbon). We focus on scientific challenges associated with each material and highlight recent advances achieved in overcoming these obstacles. Finally, we discuss possible future directions for this field of research aimed at obtaining highly efficient DSSCs.

  4. High sensitivity far infrared laser diagnostics for the C-2U advanced beam-driven field-reversed configuration plasmas.

    PubMed

    Deng, B H; Beall, M; Schroeder, J; Settles, G; Feng, P; Kinley, J S; Gota, H; Thompson, M C

    2016-11-01

    A high-sensitivity multi-channel far-infrared laser diagnostic with switchable interferometry and polarimetry operation modes for the advanced neutral-beam-driven C-2U field-reversed configuration (FRC) plasmas is described. The interferometer achieved a superior resolution of 1 × 10¹⁶ m⁻² at >1.5 MHz bandwidth, illustrated by measurement of small-amplitude high-frequency fluctuations. The polarimetry achieved 0.04° instrument resolution and 0.1° actual resolution in the challenging high density gradient environment with >0.5 MHz bandwidth, making it suitable for weak internal magnetic field measurements in the C-2U plasmas, where the maximum Faraday rotation angle is less than 1°. The polarimetry resolution data are analyzed, and high-resolution Faraday rotation data in C-2U are presented together with direct evidence of field reversal in the FRC magnetic structure, obtained for the first time by a non-perturbative method.

  5. Advanced probabilistic risk analysis using RAVEN and RELAP-7

    SciTech Connect

    Rabiti, Cristian; Alfonsi, Andrea; Mandelli, Diego; Cogliati, Joshua; Kinoshita, Robert

    2014-06-01

    RAVEN, under the support of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program [1], is advancing its capability to perform statistical analyses of stochastic dynamic systems. This is aligned with its mission to provide the tools needed by the Risk Informed Safety Margin Characterization (RISMC) path-lead [2] under the Department of Energy (DOE) Light Water Reactor Sustainability program [3]. In particular, this task is focused on the synergetic development with the RELAP-7 [4] code to advance the state of the art in the safety analysis of nuclear power plants (NPP). The investigation of the probabilistic evolution of accident scenarios for a complex system such as a nuclear power plant is not a trivial challenge. The complexity of the system to be modeled leads to demanding computational requirements even to simulate one of the many possible evolutions of an accident scenario (tens of CPU-hours). At the same time, the probabilistic analysis requires thousands of runs to investigate outcomes characterized by low probability and severe consequence (tail problem). The milestone reported in June of 2013 [5] described the capability of RAVEN to implement complex control logic and provide adequate support for the exploration of the probabilistic space using a Monte Carlo sampling strategy. Unfortunately, the Monte Carlo approach is ineffective for a problem of this complexity. In the following year of development, the RAVEN code has been extended with more sophisticated sampling strategies (grids, Latin hypercube, and adaptive sampling). This milestone report illustrates the effectiveness of those methodologies in performing the assessment of the probability of core damage following the onset of a Station Black Out (SBO) situation in a boiling water reactor (BWR). The first part of the report provides an overview of the available probabilistic analysis capabilities, ranging from the different types of distributions available, possible sampling
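
    Of the sampling strategies mentioned, Latin hypercube sampling is easy to sketch: one point per equal-probability stratum in each dimension, with strata shuffled independently per dimension. A minimal numpy implementation on the unit hypercube:

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_dims, rng=None):
        """Latin hypercube sample on [0, 1]^d: exactly one point falls in each
        of n_samples equal-probability strata along every dimension."""
        rng = np.random.default_rng(rng)
        u = (np.arange(n_samples)[:, None]
             + rng.uniform(size=(n_samples, n_dims))) / n_samples
        for j in range(n_dims):                 # decouple strata across dims
            u[:, j] = rng.permutation(u[:, j])
        return u

    print(latin_hypercube(8, 2, rng=1))
    ```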

  6. Microstructure-sensitive extreme value probabilities of fatigue in advanced engineering alloys

    NASA Astrophysics Data System (ADS)

    Przybyla, Craig P.

    A novel microstructure-sensitive extreme value probabilistic framework is introduced to evaluate material performance/variability for damage evolution processes (e.g., fatigue, fracture, creep). This framework employs newly developed extreme value marked correlation functions (EVMCF) to identify the coupled microstructure attributes (e.g., phase/grain size, grain orientation, grain misorientation) that have the greatest statistical relevance to the extreme value response variables (e.g., stress, elastic/plastic strain) that describe the damage evolution processes of interest. This is an improvement on previous approaches, which characterized the distributed extreme value response variables of interest based only on the extreme value distributions of a single microstructure attribute and gave no consideration to how coupled microstructure attributes affect the distributions of extreme value response. This framework also utilizes computational modeling techniques to identify correlations between microstructure attributes that significantly raise or lower the magnitudes of the damage response variables of interest, through the simulation of multiple statistical volume elements (SVE). Each SVE for a given response is constructed to be a statistical sample of the entire microstructure ensemble (i.e., bulk material); therefore, the response of interest in each SVE is not expected to be the same. This is in contrast to computational simulation of a single representative volume element (RVE), which is often untenably large for response variables dependent on the extreme value microstructure attributes. This framework has been demonstrated in the context of characterizing microstructure-sensitive high cycle fatigue (HCF) variability due to the processes of fatigue crack formation (nucleation and microstructurally small crack growth) in polycrystalline metallic alloys. Specifically, the framework is exercised to

  7. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticities of the parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
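
    A minimal sketch of this workflow, assuming a toy stand-in for the thermodynamic model: fit a least-squares response surface to simulated samples, then read elasticities off at the mean operating point. The parameter ranges and the toy power expression are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400
    flux  = rng.uniform(600, 1000, n)   # W/m^2 (illustrative range)
    load  = rng.uniform(0.5, 2.0, n)    # ohm
    t_amb = rng.uniform(280, 310, n)    # K

    # Toy stand-in for the thermodynamic model's simulated power output.
    power = (1e-3 * flux * load / (1 + load) * (1 - t_amb / 320)
             + rng.normal(0, 0.002, n))

    # Linear least-squares response surface, in the spirit of the paper.
    A = np.column_stack([np.ones(n), flux, load, t_amb])
    coef, *_ = np.linalg.lstsq(A, power, rcond=None)

    # Elasticity of power w.r.t. each parameter at the mean operating point.
    means = np.array([flux.mean(), load.mean(), t_amb.mean()])
    p_mean = A.mean(axis=0) @ coef
    print("elasticities (flux, load, T_amb):",
          np.round(coef[1:] * means / p_mean, 2))
    ```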

  8. Sensitivity analysis on parameters and processes affecting vapor intrusion risk.

    PubMed

    Picone, Sara; Valstar, Johan; van Gaans, Pauline; Grotenhuis, Tim; Rijnaarts, Huub

    2012-05-01

    A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (the ratio between the concentration in the crawl space and the source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of the controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the dissolved contaminant source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion.

  9. Sensitivity analysis of near-infrared functional lymphatic imaging

    PubMed Central

    Weiler, Michael; Kassis, Timothy

    2012-01-01

    Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time. PMID:22734775

  10. Nonparametric Bounds and Sensitivity Analysis of Treatment Effects

    PubMed Central

    Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.

    2015-01-01

    This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
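
    Approach (i) can be made concrete with Manski-style worst-case bounds for a binary outcome, which require no assumptions beyond the outcome being bounded in [0, 1]; the data below are simulated purely for illustration.

    ```python
    import numpy as np

    def manski_bounds(y, t):
        """No-assumptions bounds on the average treatment effect for binary
        outcome y and binary treatment indicator t (worst-case width 1)."""
        p1 = t.mean()
        ey1_obs = y[t == 1].mean()   # E[Y | T = 1]
        ey0_obs = y[t == 0].mean()   # E[Y | T = 0]
        # Unobserved potential outcomes can lie anywhere in [0, 1].
        ey1_lo, ey1_hi = ey1_obs * p1, ey1_obs * p1 + (1 - p1)
        ey0_lo, ey0_hi = ey0_obs * (1 - p1), ey0_obs * (1 - p1) + p1
        return ey1_lo - ey0_hi, ey1_hi - ey0_lo

    rng = np.random.default_rng(7)
    t = rng.integers(0, 2, 1000)
    y = rng.binomial(1, 0.3 + 0.2 * t)
    print(manski_bounds(y, t))   # an interval of width 1 containing the ATE
    ```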

  11. Experimental sensitivity analysis of oxygen transfer in the capillary fringe.

    PubMed

    Haberer, Christina M; Cirpka, Olaf A; Rolle, Massimo; Grathwohl, Peter

    2014-01-01

    Oxygen transfer in the capillary fringe (CF) is of primary importance for a wide variety of biogeochemical processes occurring in shallow groundwater systems. In the case of a fluctuating groundwater table, two distinct mechanisms of oxygen transfer within the capillary zone can be identified: vertical, predominantly diffusive mass flux of oxygen, and mass transfer between entrapped gas and groundwater. In this study, we perform a systematic experimental sensitivity analysis in order to assess the influence of different parameters on oxygen transfer from entrapped air within the CF to underlying anoxic groundwater. We carry out quasi two-dimensional flow-through experiments focusing on the transient phase following imbibition to investigate the influence of the horizontal flow velocity, the average grain diameter of the porous medium, as well as the magnitude and the speed of the water table rise. We present a numerical flow and transport model that quantitatively represents the main mechanisms governing oxygen transfer. Assuming local equilibrium between the aqueous and the gaseous phase, the partitioning process from entrapped air can be satisfactorily simulated. The different experiments are monitored by measuring vertical oxygen concentration profiles at high spatial resolution with a noninvasive optode technique as well as by determining oxygen fluxes at the outlet of the flow-through chamber. The results show that all parameters investigated have a significant effect and determine different amounts of oxygen transferred to the oxygen-depleted groundwater. Particularly relevant are the magnitude of the water table rise and the grain size of the porous medium.

  12. Plans for a sensitivity analysis of bridge-scour computations

    USGS Publications Warehouse

    Dunn, David D.; Smith, Peter N.

    1993-01-01

    Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.

  14. Sensitivity analysis on an AC600 aluminum skin component

    NASA Astrophysics Data System (ADS)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced into the car body in order to reduce weight and fulfil international CO2 emission regulations. Among them, the application of aluminum alloys is increasing for skin panels. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g., hardening, anisotropy, lubrication, and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high-strength steel components and on developing new anisotropic models for aluminum cup drawings. However, the impact of correct modelling of the latest aluminum alloys on the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient, and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U-channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  15. Sensitivity analysis of a wide-field telescope

    NASA Astrophysics Data System (ADS)

    Lim, Juhee; Lee, Sangon; Moon, Il Kweon; Yang, Ho-Soon; Lee, Jong Ung; Choi, Young-Jun; Park, Jang-Hyun; Jin, Ho

    2013-07-01

    We are developing three ground-based wide-field telescopes. A wide-field Cassegrain telescope consists of two hyperbolic mirrors, aberration correctors and a field flattener for a 2-degree field of view. The diameters of the primary mirror and the secondary mirror are 500 mm and 200 mm, respectively. Corrective optics combined with four lenses, a filter and a window are also considered. For the imaging detection device, we use a charge coupled device (CCD) which has a 4096 × 4096 array with a 9-µm pixel size. One of the requirements is that the image motion of the opto-mechanical structure be less than 1 CCD pixel on the image plane. To meet this requirement, we carried out an optical design evaluation and a misalignment analysis. Line-of-sight sensitivity equations are obtained from the rigid-body rotations in three directions and the rigid-body translations in three directions. These equations express the image motions at the image plane in terms of the independent motions of the optical components. We conducted a response simulation to evaluate the finite element models under static load conditions, and the result is represented by the static response function. We show that the wide-field telescope system is stiff and stable enough to be supported and operated during its operating time.

  17. Advanced analysis of metal distributions in human hair

    SciTech Connect

    Kempson, Ivan M.; Skinner, William M.

    2008-06-09

    A variety of techniques (scanning electron microscopy with energy-dispersive X-ray analysis, time-of-flight secondary ion mass spectrometry, and synchrotron X-ray fluorescence) were utilized to distinguish metal contamination of hair arising from a polluted environment, in this case a lead smelter, from endogenous uptake. Evidence was sought for elements less affected by contamination and potentially indicative of biogenic activity. The unique combination of surface sensitivity, spatial resolution, and detection limits used here has provided new insight regarding hair analysis. Metals such as Ca, Fe, and Pb appeared to have little representative value for endogenous uptake and were mainly due to contamination. Cu and Zn, however, demonstrate behaviors worthy of further investigation into relating hair concentrations to endogenous function.

  18. Probabilistic seismic demand analysis using advanced ground motion intensity measures

    USGS Publications Warehouse

    Tothong, P.; Luco, N.

    2007-01-01

    One of the objectives in performance-based earthquake engineering is to quantify the seismic reliability of a structure at a site. For that purpose, probabilistic seismic demand analysis (PSDA) is used as a tool to estimate the mean annual frequency of exceeding a specified value of a structural demand parameter (e.g. interstorey drift). This paper compares and contrasts the use, in PSDA, of certain advanced scalar versus vector and conventional scalar ground motion intensity measures (IMs). One of the benefits of using a well-chosen IM is that more accurate evaluations of seismic performance are achieved without the need to perform detailed ground motion record selection for the nonlinear dynamic structural analyses involved in PSDA (e.g. record selection with respect to seismic parameters such as earthquake magnitude, source-to-site distance, and ground motion epsilon). For structural demands that are dominated by a first mode of vibration, using inelastic spectral displacement (Sdi) can be advantageous relative to the conventionally used elastic spectral acceleration (Sa) and the vector IM consisting of Sa and epsilon (ε). This paper demonstrates that this is true for ordinary and for near-source pulse-like earthquake records. The latter ground motions cannot be adequately characterized by either Sa alone or the vector of Sa and ε. For structural demands with significant higher-mode contributions (under either of the two types of ground motions), even Sdi (alone) is not sufficient, so an advanced scalar IM that additionally incorporates higher modes is used.
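
    The PSDA estimate can be sketched numerically by convolving a fragility-style exceedance probability with increments of the IM hazard curve; the hazard values and lognormal fragility parameters below are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Discretized hazard curve: mean annual frequency of exceeding each IM level.
    im = np.array([0.1, 0.2, 0.4, 0.8, 1.6])           # e.g. Sa(T1) in g
    lam_im = np.array([2e-2, 8e-3, 2e-3, 3e-4, 2e-5])  # MAF of exceedance

    # P(drift > 2% | IM = im): a lognormal stand-in fitted to hypothetical
    # nonlinear dynamic analysis results (median 0.9 g, dispersion 0.5).
    p_exceed = norm.cdf(np.log(im / 0.9) / 0.5)

    # PSDA integral approximated as a sum over hazard-curve increments.
    d_lam = -np.diff(lam_im, append=0.0)               # |d lambda_IM|
    maf_dm = np.sum(p_exceed * d_lam)
    print(f"mean annual frequency of drift > 2%: {maf_dm:.2e}")
    ```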

  19. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  20. Systems analysis and futuristic designs of advanced biofuel factory concepts.

    SciTech Connect

    Chianelli, Russ; Leathers, James; Thoma, Steven George; Celina, Mathias Christopher; Gupta, Vipin P.

    2007-10-01

    The U.S. is addicted to petroleum--a dependency that periodically shocks the economy, compromises national security, and adversely affects the environment. If liquid fuels remain the main energy source for U.S. transportation for the foreseeable future, the system solution is the production of new liquid fuels that can directly displace diesel and gasoline. This study focuses on advanced concepts for biofuel factory production, describing three design concepts: biopetroleum, biodiesel, and higher alcohols. A general schematic is illustrated for each concept with technical description and analysis for each factory design. Looking beyond current biofuel pursuits by industry, this study explores unconventional feedstocks (e.g., extremophiles), out-of-favor reaction processes (e.g., radiation-induced catalytic cracking), and production of new fuel sources traditionally deemed undesirable (e.g., fusel oils). These concepts lay the foundation and path for future basic science and applied engineering to displace petroleum as a transportation energy source for good.

  1. Thermodynamic analysis of the advanced zero emission power plant

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Job, Marcin

    2016-03-01

    The paper presents the structure and parameters of the advanced zero emission power plant (AZEP). This concept is based on the replacement of the combustion chamber in a gas turbine by a membrane reactor. The reactor has three basic functions: (i) oxygen separation from the air through the membrane, (ii) combustion of the fuel, and (iii) heat transfer to heat the oxygen-depleted air. In the discussed unit, hot depleted air is expanded in a turbine and further feeds a bottoming steam cycle (BSC) through the main heat recovery steam generator (HRSG). Flue gas leaving the membrane reactor feeds the second HRSG. The flue gas consists mainly of CO2 and water vapor; thus, CO2 separation involves only flue gas drying. Results of the thermodynamic analysis of the described power plant are presented.

  2. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered using an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  3. Recent trends in the advanced analysis of bioactive fatty acids.

    PubMed

    Ruiz-Rodriguez, Alejandro; Reglero, Guillermo; Ibañez, Elena

    2010-01-20

    The consumption of dietary fats has long been associated with chronic diseases such as obesity, diabetes, cancer, arthritis, asthma, and cardiovascular disease; although some controversy still exists about the role of dietary fats in human health, certain fats have demonstrated a positive effect in the modulation of abnormal fatty acid and eicosanoid metabolism, both associated with chronic diseases. Among the different fats, some fatty acids can be used as functional ingredients, such as alpha-linolenic acid (ALA), arachidonic acid (AA), eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), gamma-linolenic acid (GLA), stearidonic acid (STA) and conjugated linoleic acid (CLA), among others. The present review is focused on recent developments in FA analysis, covering sample preparation methods such as extraction, fractionation and derivatization, as well as new advances in chromatographic methods such as GC and HPLC. Special attention is paid to trans fatty acids due to the increasing interest of the food industry.

  4. Analysis of biofluids by paper spray MS: advances and challenges.

    PubMed

    Manicke, Nicholas E; Bills, Brandon J; Zhang, Chengsen

    2016-03-01

    Paper spray MS is part of a cohort of ambient ionization or direct analysis methods that seek to analyze complex samples without prior sample preparation. Extraction and electrospray ionization occur directly from the paper substrate upon which a dried matrix spot is stored. Paper spray MS is capable of detecting drugs directly from dried blood, plasma and urine spots at the low ng/ml to pg/ml levels without sample preparation. No front end separation is performed, so MS/MS or high-resolution MS is required. Here, we discuss paper spray methodology, give a comprehensive literature review of the use of paper spray MS for bioanalysis, discuss technological advancements and variations on this technique and discuss some of its limitations.

  5. Advances in aerospace lubricant and wear metal analysis

    SciTech Connect

    Saba, C.S.; Centers, P.W.

    1995-09-01

    Wear metal analysis continues to play an effective diagnostic role for condition monitoring of gas turbine engines. Since the early 1960s, the United States' military services have been using the spectrometric oil analysis program (SOAP) to monitor the condition of aircraft engines. The SOAP has proven to be effective in increasing reliability and fleet readiness and in avoiding losses of lives and machinery. Even though historical data have demonstrated the success of the SOAP in terms of detecting imminent engine failure verified by maintenance personnel, the SOAP is not a stand-alone technique and is limited in its detection of large metallic wear debris. In response, improved laboratory, portable, in-line, and on-line diagnostic techniques to perfect SOAP and oil condition monitoring have been sought. The status of research and development, as well as the direction of future developmental activities in oil analysis arising from technological opportunities, advances in engine development, and changes in military mission, is reviewed and discussed. 54 refs.

  6. Sorption of redox-sensitive elements: critical analysis

    SciTech Connect

    Strickert, R.G.

    1980-12-01

    The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.

  7. Sensitivity analysis of a pharmaceutical tablet production process from the control engineering perspective.

    PubMed

    Rehrl, Jakob; Gruber, Arlin; Khinast, Johannes G; Horn, Martin

    2017-01-30

    This paper presents a sensitivity analysis of a pharmaceutical direct compaction process. Sensitivity analysis is an important tool for gaining valuable process insights and designing a process control concept. Examining its results in a systematic manner makes it possible to assign actuating signals to controlled variables. This paper presents mathematical models for individual unit operations, on which the sensitivity analysis is based. Two sensitivity analysis methods are outlined: (i) based on the so-called Sobol indices and (ii) based on the steady-state gains and the frequency response of the proposed plant model.
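    For readers who want to try the first of these methods, a variance-based Sobol analysis can be sketched in a few lines. The sketch below assumes the open-source SALib Python package; the model function, parameter names, and bounds are invented placeholders rather than the paper's tablet-press model.

    ```python
    # Sketch: variance-based (Sobol) sensitivity analysis with SALib.
    # The model and parameter bounds are illustrative placeholders.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["feed_rate", "compression_force", "fill_depth"],  # hypothetical
        "bounds": [[10.0, 50.0], [5.0, 25.0], [2.0, 12.0]],
    }

    def model(x):
        # Stand-in for the plant model: a nonlinear scalar response.
        return 0.8 * x[0] + 0.1 * x[1] ** 2 + 0.5 * x[0] * x[2]

    X = saltelli.sample(problem, 1024)      # N * (2D + 2) input samples
    Y = np.apply_along_axis(model, 1, X)    # evaluate model for each sample
    Si = sobol.analyze(problem, Y)          # first-order and total indices
    print(Si["S1"], Si["ST"])
    ```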

  8. Decoupled direct method for sensitivity analysis in combustion kinetics

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1987-01-01

    An efficient, decoupled direct method for calculating the first-order sensitivity coefficients of homogeneous, batch combustion kinetic rate equations is presented. In this method the ordinary differential equations for the sensitivity coefficients are solved separately from, but sequentially with, those describing the combustion chemistry. The ordinary differential equations for the thermochemical variables are solved using an efficient, implicit method (LSODE) that automatically selects the steplength and order for each solution step. The solution procedure for the sensitivity coefficients maintains accuracy and stability by using exactly the same steplengths and numerical approximations. The method computes sensitivity coefficients with respect to any combination of the initial values of the thermochemical variables and the three rate constant parameters for the chemical reactions. The method is illustrated by application to several simple problems and, where possible, comparisons are made with exact solutions and those obtained by other techniques.
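    The decoupled direct method itself is code-specific, but the underlying sensitivity ODE is easy to illustrate. The sketch below integrates a one-species example jointly with its rate-constant sensitivity (jointly for brevity; the paper's method solves the sensitivity system separately from, but in step with, the chemistry):

    ```python
    # Sketch: first-order sensitivity s = dy/dk for dy/dt = -k*y, y(0) = y0.
    # The sensitivity ODE is ds/dt = -k*s - y (differentiate the rate law in k).
    import numpy as np
    from scipy.integrate import solve_ivp

    k, y0 = 1.5, 2.0

    def rhs(t, z):
        y, s = z
        return [-k * y, -k * s - y]          # state equation and its k-sensitivity

    sol = solve_ivp(rhs, (0.0, 2.0), [y0, 0.0], rtol=1e-10)
    y_T, s_T = sol.y[:, -1]
    exact = -y0 * 2.0 * np.exp(-k * 2.0)     # d/dk [y0 * exp(-k*t)] at t = 2
    print(s_T, exact)                        # the two values should agree closely
    ```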

  9. Sensitivity analysis of static resistance of slender beam under bending

    NASA Astrophysics Data System (ADS)

    Valeš, Jan

    2016-06-01

    The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was computed by the geometrically nonlinear finite element method in the program Ansys. The beams are modelled with initial geometrical imperfections following the first buckling eigenmode. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to carry out the statistical and sensitivity analyses of the resistance.
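    The sampling step used here is generic; a minimal Latin Hypercube Sampling sketch via scipy, with invented input names and bounds standing in for the random beam quantities, looks like:

    ```python
    # Sketch: Latin Hypercube Sampling of random beam inputs (illustrative bounds).
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=3, seed=0)
    unit = sampler.random(n=200)                 # samples in [0, 1)^3
    # Hypothetical inputs: yield strength [MPa], flange thickness [mm], imperfection [mm]
    lo, hi = [235.0, 8.0, 0.0], [355.0, 12.0, 5.0]
    samples = qmc.scale(unit, lo, hi)            # rescale to physical ranges

    # Each row would drive one nonlinear FE resistance computation.
    print(samples[:3])
    ```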

  10. Sensitivity analysis of the age-structured malaria transmission model

    NASA Astrophysics Data System (ADS)

    Addawe, Joel M.; Lope, Jose Ernie C.

    2012-09-01

    We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two groups: pre-school humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission. We compute the sensitivity indices of the reproductive number and the endemic equilibrium point with respect to the two sets of baseline parameters. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito. For the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportions of infectious pre-school humans and of the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shorten the time to acquiring immunity can be successful in preventing the spread of malaria.
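    The abstract does not state how the sensitivity indices are defined; a common choice in such compartmental-model studies (an assumption here, not taken from the paper) is the normalized forward sensitivity index:

    ```latex
    % Normalized forward sensitivity index of R_0 with respect to a parameter p:
    % the fractional change in R_0 per fractional change in p.
    \Upsilon^{R_0}_{p} = \frac{\partial R_0}{\partial p} \cdot \frac{p}{R_0}
    ```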

  11. Design, analysis, and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Minning, C.

    1982-01-01

    Design sensitivities are established for the development of photovoltaic module criteria and the definition of needed research tasks. The program consists of three phases. In Phase I, analytical models were developed to perform optical, thermal, electrical, and structural analyses on candidate encapsulation systems. From these analyses several candidate systems will be selected for qualification testing during Phase II. Additionally, during Phase II, test specimens of various types will be constructed and tested to determine the validity of the analysis methodology developed in Phase I. In Phase III, a finalized optimum design based on the knowledge gained in Phases I and II will be developed. All verification testing was completed during this period. Preliminary results and observations are discussed. Descriptions of the thermal, thermal-structural, and structural deflection test setups are included.

  12. Advanced High Temperature Reactor Systems and Economic Analysis

    SciTech Connect

    Holcomb, David Eugene; Peretz, Fred J; Qualls, A L

    2011-09-01

    The Advanced High Temperature Reactor (AHTR) is a design concept for a large-output [3400 MW(t)] fluoride-salt-cooled high-temperature reactor (FHR). FHRs, by definition, feature low-pressure liquid fluoride salt cooling, coated-particle fuel, a high-temperature power cycle, and fully passive decay heat rejection. The AHTR's large thermal output enables direct comparison of its performance and requirements with other high-output reactor concepts. As high-temperature plants, FHRs can support either high-efficiency electricity generation or industrial process heat production. The AHTR analysis presented in this report is limited to the electricity generation mission. FHRs, in principle, have the potential to be low-cost electricity producers while maintaining full passive safety. However, no FHR has been built, and no FHR design has reached the stage of maturity where realistic economic analysis can be performed. The system design effort described in this report represents early steps along the design path toward being able to predict the cost and performance characteristics of the AHTR as well as toward being able to identify the technology developments necessary to build an FHR power plant. While FHRs represent a distinct reactor class, they inherit desirable attributes from other thermal power plants whose characteristics can be studied to provide general guidance on plant configuration, anticipated performance, and costs. Molten salt reactors provide experience on the materials, procedures, and components necessary to use liquid fluoride salts. Liquid metal reactors provide design experience on using low-pressure liquid coolants, passive decay heat removal, and hot refueling. High temperature gas-cooled reactors provide experience with coated particle fuel and graphite components. Light water reactors (LWRs) show the potential of transparent, high-heat-capacity coolants with low chemical reactivity. Modern coal-fired power plants provide design experience with

  13. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances

    PubMed Central

    Alarifi, Abdulrahman; Al-Salman, AbdulMalik; Alsaleh, Mansour; Alnafessah, Ahmad; Al-Hadhrami, Suheer; Al-Ammar, Mai A.; Al-Khalifa, Hend S.

    2016-01-01

    In recent years, indoor positioning has emerged as a critical function in many end-user applications, including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task, in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space. PMID:27196906

  14. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances.

    PubMed

    Alarifi, Abdulrahman; Al-Salman, AbdulMalik; Alsaleh, Mansour; Alnafessah, Ahmad; Al-Hadhrami, Suheer; Al-Ammar, Mai A; Al-Khalifa, Hend S

    2016-05-16

    In recent years, indoor positioning has emerged as a critical function in many end-user applications, including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task, in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.

  15. Inside Single Cells: Quantitative Analysis with Advanced Optics and Nanomaterials

    PubMed Central

    Cui, Yi; Irudayaraj, Joseph

    2014-01-01

    Single cell explorations offer a unique window to inspect molecules and events relevant to mechanisms and heterogeneity constituting the central dogma of biology. A large number of nucleic acids, proteins, metabolites and small molecules are involved in determining and fine-tuning the state and function of a single cell at a given time point. Advanced optical platforms and nanotools provide tremendous opportunities to probe intracellular components with single-molecule accuracy, as well as promising tools to adjust single cell activity. In order to obtain quantitative information (e.g. molecular quantity, kinetics and stoichiometry) within an intact cell, achieving the observation with comparable spatiotemporal resolution is a challenge. For single cell studies both the method of detection and the biocompatibility are critical factors as they determine the feasibility, especially when considering live cell analysis. Although a considerable proportion of single cell methodologies depend on specialized expertise and expensive instruments, it is our expectation that the information content and implication will outweigh the costs given the impact on life science enabled by single cell analysis. PMID:25430077

  16. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types to specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562
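    As a toy illustration of the HU-based composition analysis described above, the following sketch classifies the voxels of a synthetic CT volume by Hounsfield Unit ranges; the cutoff values are illustrative assumptions, not the thresholds used in the reviewed studies.

    ```python
    # Sketch: classify muscle CT voxels by Hounsfield Unit (HU) ranges.
    # Cutoffs are illustrative; the reviewed studies define their own ranges.
    import numpy as np

    hu = np.random.default_rng(0).normal(20, 40, size=(64, 64, 64))  # fake CT volume

    fat = (hu >= -200) & (hu < -10)
    loose_or_atrophic = (hu >= -10) & (hu < 30)
    normal_muscle = (hu >= 30) & (hu < 100)

    total = fat.sum() + loose_or_atrophic.sum() + normal_muscle.sum()
    for name, mask in [("fat", fat), ("loose/atrophic", loose_or_atrophic),
                       ("normal muscle", normal_muscle)]:
        print(f"{name}: {100.0 * mask.sum() / total:.1f}% of segmented volume")
    ```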

  17. Nuclear methods of analysis in the advanced neutron source

    SciTech Connect

    Robinson, L.; Dyer, F.F.

    1994-12-31

    The Advanced Neutron Source (ANS) research reactor is presently in the conceptual design phase. The thermal power of this heavy-water-cooled and -moderated reactor will be about 350 megawatts. The 27-liter core volume is designed to provide the optimum neutron fluence rate for the numerous experimental facilities. The peak thermal neutron fluence rate is expected to be slightly less than 10²⁰ neutrons/m²·s. In addition to the more than 40 neutron scattering stations, there will be extensive facilities for isotope production, material irradiation and analytical chemistry, including neutron activation analysis (NAA) and a slow positron source. The highlight of this reactor will be the capability that it will provide for conducting research using cold neutrons. Two cryostats containing helium-cooled liquid deuterium will be located in the heavy water reflector tank. Each cryostat will provide low-temperature neutrons to researchers via numerous guides. A hot source with two beam tubes and several thermal beam tubes will also be available. The NAA facilities in the ANS will consist of seven pneumatic tubes, one cold neutron guide for prompt gamma-ray neutron activation analysis (PGNAA), and one cold neutron slanted guide for neutron depth profiling (NDP). In addition to these neutron interrogation systems, a gamma-ray irradiation facility for materials testing will be housed in a spent fuel storage pool. This paper will provide detailed information regarding the design and use of these various experimental systems.

  18. Advancement in analysis of Salviae miltiorrhizae Radix et Rhizoma (Danshen).

    PubMed

    Li, Yong-Guo; Song, Long; Liu, Mei; Hu, Zhi-Bi; Wang, Zheng-Tao

    2009-03-13

    This review summarizes the recent advances in the chemical analysis of Danshen and its finished products, including an introduction to the identified bioactive components, analytical methods for the quantitative determination of target analytes and fingerprinting authentication, quality criteria for the Danshen crude herb and its preparations, as well as pharmacokinetic and pharmacodynamic studies on the active components of Danshen and its finished products. Danshen contains mainly two types of constituents, the hydrophilic depsides and the lipophilic diterpenoid quinones, and both are responsible for the pharmacological activities of Danshen. In order to monitor simultaneously both types of components, which have different physicochemical properties, numerous analytical methods have been reported using various chromatographic and spectrophotometric technologies. In this review, 110 papers on the analysis of Danshen are discussed; various analytical methods and their chromatographic conditions are briefly described, and their advantages and disadvantages are compared. For obtaining a quick, accurate and applicable analytical approach for quality evaluation and establishing harmonized criteria for Danshen and its finished products, the authors' suggestions and opinions are given, including the reasonable selection of marker compounds with high concentration and commercial availability, a simple sample preparation procedure with high recoveries of both the hydrophilic phenols and the lipophilic tanshinones, and an optimized chromatographic condition with ideal resolution of all the target components. The chemical degradation and transformation of the predominant constituent salvianolic acid B in Danshen during processing and manufacturing are also emphasized in order to assure the quality consistency of Danshen-containing products.

  19. An analytical approach to grid sensitivity analysis. [of NACA wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, Ideen; Smith, Robert E.; Tiwari, Surendra N.

    1992-01-01

    Sensitivity analysis in Computational Fluid Dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to the design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite-difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.

  20. An analytical approach to grid sensitivity analysis for NACA four-digit wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to the design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite-difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
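    Both abstracts verify analytical sensitivities against finite differences; the pattern is the standard one sketched below, where the response function is a stand-in for a grid or aerodynamic quantity depending on one design parameter.

    ```python
    # Sketch: verify an analytical sensitivity against central finite differences.
    # f stands in for a grid/aero quantity depending on a design parameter a.
    import numpy as np

    def f(a):
        return np.sin(a) + a ** 2          # placeholder response

    def df_analytic(a):
        return np.cos(a) + 2 * a           # hand-derived sensitivity

    a0, h = 0.7, 1e-6
    df_fd = (f(a0 + h) - f(a0 - h)) / (2 * h)
    print(df_analytic(a0), df_fd)          # should agree to roughly 1e-9
    ```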

  1. How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Razavi, Saman

    2016-04-01

    Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding when required to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
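    The variogram idea underlying VARS can be pictured with a minimal one-parameter sketch; this is a simplified reading of the method, not the authors' implementation.

    ```python
    # Sketch: empirical variogram of a model response along one parameter,
    # gamma(h) = 0.5 * E[(y(x + h) - y(x))^2], evaluated at several scales h.
    import numpy as np

    def response(x):
        return np.sin(6 * x) + 0.3 * x     # placeholder model response

    x = np.linspace(0.0, 1.0, 501)
    y = response(x)
    for lag in [1, 5, 25, 100]:            # lags in grid steps, i.e. scales h
        h = lag * (x[1] - x[0])
        gamma = 0.5 * np.mean((y[lag:] - y[:-lag]) ** 2)
        print(f"h = {h:.3f}: gamma = {gamma:.4f}")
    ```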

  2. Advanced In-Situ Detection and Chemical Analysis of Interstellar Dust Particles

    NASA Astrophysics Data System (ADS)

    Sternovsky, Z.; Gemer, A.; Gruen, E.; Horanyi, M.; Kempf, S.; Maute, K.; Postberg, F.; Srama, R.; Williams, E.; O'brien, L.; Rocha, J. R. R.

    2015-12-01

    The Ulysses dust detector discovered that interstellar dust particles pass through the solar system. The Hyperdust instrument is being developed for the in-situ detection and analysis of these particles to determine their elemental, chemical and isotopic compositions. Hyperdust builds on the heritage of previous successful instruments, e.g. the Cosmic Dust Analyzer (CDA) on Cassini, and combines a highly sensitive Dust Trajectory Sensor (DTS) with a high-mass-resolution Chemical Analyzer (CA). The DTS will detect dust particles as small as 0.3 μm in radius, and the velocity vector information is used to confirm the interstellar origin and/or reveal the dynamics arising from interactions within the solar system. The CA has an effective target area of > 600 cm² and achieves a mass resolution in excess of 200, considerably higher than that of CDA, by means of an advanced ion optics design. The Hyperdust instrument is in the final phases of development to TRL 6.

  3. Depletion GPT-free sensitivity analysis for reactor eigenvalue problems

    SciTech Connect

    Kennedy, C.; Abdel-Khalik, H.

    2013-07-01

    This manuscript introduces a novel approach to solving depletion perturbation theory problems without the need to set up or solve the generalized perturbation theory (GPT) equations. The approach, hereinafter denoted generalized perturbation theory free (GPT-Free), constructs a reduced order model (ROM) using methods based on perturbation theory and computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error from using the ROM is quantified in the GPT-Free approach by means of a Wilks' order statistics error metric denoted the K-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally intractable. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT capabilities unless envisioned during code development. The GPT-Free approach addresses this limitation by requiring only the ability to compute the fundamental adjoint. This manuscript demonstrates the GPT-Free approach for depletion reactor calculations performed in SCALE6 using the 7x7 UAM assembly model. A ROM is developed for the assembly over a time horizon of 990 days. The approach both calculates the reduction error over the lifetime of the simulation using the K-metric and benchmarks the obtained sensitivities using sample calculations. (authors)
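    The abstract does not spell out the K-metric, but Wilks' order statistics are standard; the sketch below shows the familiar first-order one-sided 95/95 construction as a stand-in for the general idea, with invented reduction errors.

    ```python
    # Sketch: one-sided Wilks 95/95 tolerance bound on a reduction error.
    # With N >= ln(1 - confidence) / ln(coverage) samples, the sample maximum
    # bounds 95% of the error population with 95% confidence (first-order Wilks).
    import math
    import numpy as np

    coverage, confidence = 0.95, 0.95
    n = math.ceil(math.log(1.0 - confidence) / math.log(coverage))  # -> 59

    rng = np.random.default_rng(1)
    errors = np.abs(rng.normal(0.0, 1e-4, size=n))  # stand-in ROM reduction errors
    print(f"N = {n}, 95/95 error bound = {errors.max():.2e}")
    ```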

  4. Advanced Diagnostic and Prognostic Testbed (ADAPT) Testability Analysis Report

    NASA Technical Reports Server (NTRS)

    Ossenfort, John

    2008-01-01

    As system designs become more complex, determining the best locations to add sensors and test points for the purpose of testing and monitoring these designs becomes more difficult. Not only must the designer take into consideration all real and potential faults of the system, he or she must also find efficient ways of detecting and isolating those faults. Because sensors and cabling take up valuable space and weight on a system, and given constraints on bandwidth and power, it is even more difficult to add sensors into these complex designs after the design has been completed. As a result, a number of software tools have been developed to assist the system designer in the proper placement of these sensors during the system design phase of a project. One of the key functions provided by many of these software programs is a testability analysis of the system: essentially an evaluation of how observable the system behavior is using the available tests. During the design phase, testability metrics can help guide the designer in improving the inherent testability of the design. This may include adding, removing, or modifying tests; breaking up feedback loops; or changing the system to reduce fault propagation. Given a set of test requirements, the analysis can also help to verify that the system will meet those requirements. Of course, a testability analysis requires that a software model of the physical system is available. For the analysis to be most effective in guiding system design, this model should ideally be constructed in parallel with these efforts. The purpose of this paper is to present the final testability results of the Advanced Diagnostic and Prognostic Testbed (ADAPT) after the system model was completed. The tool chosen to build the model and to perform the testability analysis is the Testability Engineering and Maintenance System Designer (TEAMS-Designer). The TEAMS toolset is intended to be a solution to span all phases of the system, from design and

  5. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1993-01-01

    The SSME has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, thereby reducing launch turn-around time. The basic objectives of this contract are threefold: (1) Develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system. (2) Develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert tremendous amounts of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow minimal storage requirements while providing fast signature retrieval, pattern comparison, and identification capabilities. (3) Integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate

  6. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Space Shuttle Main Engine (SSME) has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, thereby reducing launch turn-around time. The basic objectives of this contract are threefold: (1) develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system; (2) develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert tremendous amounts of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow minimal storage requirements while providing fast signature retrieval, pattern comparison, and identification capabilities; and (3) integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of

  7. Advanced uncertainty modelling for container port risk analysis.

    PubMed

    Alyami, Hani; Yang, Zaili; Riahi, Ramin; Bonsall, Stephen; Wang, Jin

    2016-08-13

    Globalization has led to a rapid increase of container movements in seaports. Risks in seaports need to be appropriately addressed to ensure economic wealth, operational efficiency, and personnel safety. As a result, the safety performance of a Container Terminal Operational System (CTOS) plays a growing role in improving the efficiency of international trade. This paper proposes a novel method to facilitate the application of Failure Mode and Effects Analysis (FMEA) in assessing the safety performance of CTOS. The new approach is developed by incorporating a Fuzzy Rule-Based Bayesian Network (FRBN) with Evidential Reasoning (ER) in a complementary manner. The former provides a realistic and flexible method to describe input failure information for risk estimates of individual hazardous events (HEs) at the bottom level of a risk analysis hierarchy. The latter is used to aggregate HE safety estimates collectively, allowing dynamic risk-based decision support in CTOS from a systematic perspective. The novel feature of the proposed method, compared to traditional port risk analysis methods, lies in a dynamic model capable of dealing with continually changing operational conditions in ports. More importantly, a new sensitivity analysis method is developed and carried out to rank the HEs by taking into account their specific risk estimations (locally) and their Risk Influence (RI) on a port's safety system (globally). Due to its generality, the new approach can be tailored for a wide range of applications in different safety and reliability engineering and management systems, particularly when real-time risk ranking is required to measure, predict, and improve the associated system safety performance.

  8. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2016-11-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods for multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
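    The relative-sensitivity step can be made concrete: differentiate a fitted response surface and normalize at the nominal point. In the sketch below the surrogate polynomial and its coefficients are invented placeholders, not the fitted interlaminar-shear-strength model.

    ```python
    # Sketch: relative (normalized) sensitivity from a polynomial response surface,
    # S_i = (dy/dx_i) * (x_i / y), evaluated at a nominal operating point.
    import numpy as np

    def ilss(T, F, P, v):
        # Invented quadratic surrogate for interlaminar shear strength.
        return 50 + 0.2 * T - 4e-4 * T**2 + 0.05 * F + 0.01 * P - 30 * v**2

    x0 = dict(T=125.0, F=330.0, P=1150.0, v=0.3)   # nominal point
    y0 = ilss(**x0)
    for name in x0:
        h = 1e-4 * x0[name]
        hi, lo = dict(x0), dict(x0)
        hi[name] += h
        lo[name] -= h
        dy = (ilss(**hi) - ilss(**lo)) / (2 * h)   # central difference in x_i
        print(f"S_{name} = {dy * x0[name] / y0:+.3f}")
    ```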

  9. Large-scale transient sensitivity analysis of a radiation damaged bipolar junction transistor.

    SciTech Connect

    Hoekstra, Robert John; Gay, David M.; Bartlett, Roscoe Ainsworth; Phipps, Eric Todd

    2007-11-01

    Automatic differentiation (AD) is useful in transient sensitivity analysis of a computational simulation of a bipolar junction transistor subject to radiation damage. We used forward-mode AD, implemented in a new Trilinos package called Sacado, to compute analytic derivatives for implicit time integration and forward sensitivity analysis. Sacado addresses element-based simulation codes written in C++ and works well with forward sensitivity analysis as implemented in the Trilinos time-integration package Rythmos. The forward sensitivity calculation is significantly more efficient and robust than finite differencing.
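    Sacado's forward mode is C++ template-based; the underlying principle, propagating derivative components alongside values, can be sketched with a tiny dual-number class (conceptual only, not the Sacado or Rythmos API).

    ```python
    # Sketch: forward-mode automatic differentiation with dual numbers,
    # the same principle Sacado's forward mode applies inside C++ element code.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)  # product rule
        __rmul__ = __mul__

    def current(v):
        # Stand-in device equation; derivatives flow through automatically.
        return 3.0 * v * v + 2.0 * v + 1.0

    v = Dual(0.5, 1.0)        # seed dv/dv = 1
    i = current(v)
    print(i.val, i.der)       # value 2.75 and analytic sensitivity di/dv = 5.0
    ```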

  10. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
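    Among the listed procedures, rank-transformation-based correlation analysis is simple to demonstrate; the sketch below uses synthetic inputs and an invented model, not the report's examples.

    ```python
    # Sketch: sampling-based sensitivity via rank (Spearman) correlations.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n = 500
    X = rng.uniform(size=(n, 3))                   # sampled uncertain inputs
    y = 5 * X[:, 0] ** 3 + 0.5 * X[:, 1] + rng.normal(0, 0.05, n)  # model output

    for i, name in enumerate(["x1", "x2", "x3"]):
        rho, p = spearmanr(X[:, i], y)
        print(f"{name}: rank correlation = {rho:+.2f} (p = {p:.1e})")
    ```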

  11. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).

  12. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
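    In both abstracts the MPP search is a constrained optimization in standard normal space: minimize ||u|| subject to the limit state g(u) = 0. A sketch with a textbook placeholder limit state:

    ```python
    # Sketch: most probable point (MPP) search and FORM probability estimate.
    # Failure is g(u) <= 0; u lives in standard normal space.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def g(u):
        # Placeholder limit-state function in standard normal coordinates.
        return 3.0 - u[0] - 0.5 * u[1]

    res = minimize(lambda u: np.dot(u, u),          # minimize ||u||^2
                   x0=np.zeros(2),
                   constraints={"type": "eq", "fun": g})
    beta = np.linalg.norm(res.x)                    # reliability index
    print(f"MPP = {res.x}, beta = {beta:.3f}, Pf ~= {norm.cdf(-beta):.2e}")
    ```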

  13. Thermal Analysis and Design of an Advanced Space Suit

    NASA Technical Reports Server (NTRS)

    Lin, Chin H.; Campbell, Anthony B.; French, Jonathan D.; French, D.; Nair, Satish S.; Miles, John B.

    2000-01-01

    The thermal dynamics and design of an Advanced Space Suit are considered. A transient model of the Advanced Space Suit has been developed and implemented using MATLAB/Simulink to help with sizing, with design evaluation, and with the development of an automatic thermal comfort control strategy. The model is described, and the thermal characteristics of the Advanced Space Suit are investigated, including various parametric design studies. The steady-state performance envelope for the Advanced Space Suit is defined in terms of the thermal environment and human metabolic rate, and the transient response of the human-suit-MPLSS system is analyzed.
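    The transient behavior described above can be pictured with a single-node lumped-capacitance sketch; this is a drastic simplification of the MATLAB/Simulink suit model, and all parameter values are invented.

    ```python
    # Sketch: one-node transient thermal balance for a suited crew member,
    # C dT/dt = Q_metabolic - UA * (T - T_env). All parameters are invented.
    import numpy as np
    from scipy.integrate import solve_ivp

    C = 4.0e5        # lumped heat capacity [J/K]
    UA = 25.0        # suit heat-loss conductance [W/K]
    T_env = 250.0    # effective environment temperature [K]

    def q_met(t):
        return 300.0 if t < 1800 else 700.0   # step change in metabolic rate [W]

    sol = solve_ivp(lambda t, T: (q_met(t) - UA * (T - T_env)) / C,
                    t_span=(0, 7200), y0=[305.0], max_step=10.0)
    print(f"T(end) = {sol.y[0, -1]:.1f} K")
    ```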

  14. Continuous adjoint sensitivity analysis for aerodynamic and acoustic optimization

    NASA Astrophysics Data System (ADS)

    Ghayour, Kaveh

    1999-11-01

    A gradient-based shape optimization methodology based on continuous adjoint sensitivities has been developed for the two-dimensional steady Euler equations on unstructured meshes and for the unsteady transonic small disturbance equation. The continuous adjoint sensitivities of the Helmholtz equation for acoustic applications have also been derived and discussed. The highlights of the developments for the steady two-dimensional Euler equations are the generalization of the airfoil surface boundary condition of the adjoint system to allow a proper closure of the Lagrangian functional associated with a general cost functional, and the results for an inverse problem with density as the prescribed target. Furthermore, it has been demonstrated that a transformation to the natural coordinate system, in conjunction with the reduction of the governing state equations to the control surface, results in sensitivity integrals that are only a function of the tangential derivatives of the state variables. This approach alleviates the need for directional derivative computations with components along the normal to the control surface, which can render erroneous results. With regard to the unsteady transonic small disturbance (UTSD) equation, the continuous adjoint methodology has been successfully extended to unsteady flows. It has been demonstrated that for periodic airfoil oscillations leading to limit-cycle behavior, the Lagrangian functional can only be closed if the time interval of interest spans one or more periods of the flow oscillations after the limit-cycle has been attained. The steady-state and limit-cycle sensitivities are then validated by comparison with brute-force derivatives. The importance of accounting for the flow circulation sensitivity, appearing in the form of a Dirac delta in the wall boundary condition at the trailing edge, has been stressed and demonstrated. Remarkably, the cost of an unsteady adjoint solution is about 0.2 times that of a UTSD solution.
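    A compact way to see the adjoint economy noted at the end (one adjoint solve serving all design parameters) is the discrete linear case below; this is a generic sketch, not the paper's continuous Euler/UTSD adjoint.

    ```python
    # Sketch: discrete adjoint sensitivity for A(p) u = b with objective J = g.u.
    # One adjoint solve gives dJ/dp for every parameter: dJ/dp = -lambda.(dA/dp).u
    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    g = np.array([1.0, 0.0])
    dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # dependence of A on one parameter

    u = np.linalg.solve(A, b)                    # state solve
    lam = np.linalg.solve(A.T, g)                # single adjoint solve
    dJ_dp = -lam @ dA_dp @ u
    print(f"dJ/dp = {dJ_dp:.5f}")
    ```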

  15. Advanced predoctoral implant program at UIC: description and qualitative analysis.

    PubMed

    Afshari, Fatemeh S; Yuan, Judy Chia-Chun; Quimby, Anastasiya; Harlow, Rand; Campbell, Stephen D; Sukotjo, Cortino

    2014-05-01

    Dental implant education has increasingly become an integral part of predoctoral dental curricula. However, the majority of implant education emphasizes the restorative aspect as opposed to the surgical. The University of Illinois at Chicago College of Dentistry has developed an Advanced Predoctoral Implant Program (APIP) that provides a select group of students the opportunity to place implants for single-tooth restorations and mandibular overdentures. This article describes the rationale, logistics, experiences, and perspectives of an innovative approach to provide additional learning experiences in the care of patients with partial and complete edentulism using implant-supported therapies. Student and faculty perspectives on the APIP were ascertained via focus group discussions and a student survey. The qualitative analysis of this study suggests that the select predoctoral dental students highly benefited from this experience and intend to increase their knowledge and skills in implant dentistry through formal education following graduation. Furthermore, the survey indicates that the APIP has had a positive influence on the students' interest in surgically placing implants in their future dental practice and their confidence level in restoring and surgically placing implants.

  16. XII Advanced Computing and Analysis Techniques in Physics Research

    NASA Astrophysics Data System (ADS)

    Speer, Thomas; Carminati, Federico; Werlen, Monique

    November 2008 will be a few months after the official start of LHC, when the highest quantum energy ever produced by mankind will be observed by the most complex piece of scientific equipment ever built. LHC will open a new era in physics research and push further the frontier of knowledge. This achievement has been made possible by new technological developments in many fields, but computing is certainly the technology that has made this whole enterprise possible. Accelerator and detector design, construction management, data acquisition, detector monitoring, data analysis, event simulation and theoretical interpretation are all computing-based HEP activities, but they also occur in many other research fields. Computing is everywhere and forms the common link between all involved scientists and engineers. The ACAT workshop series, created back in 1990 as AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), has been covering the tremendous evolution of computing in its most advanced topics, trying to set up bridges between computer science and experimental and theoretical physics. Conference web-site: http://acat2008.cern.ch/ Programme and presentations: http://indico.cern.ch/conferenceDisplay.py?confId=34666

  17. Crashworthiness analysis using advanced material models in DYNA3D

    SciTech Connect

    Logan, R.W.; Burger, M.J.; McMichael, L.D.; Parkinson, R.D.

    1993-10-22

    As part of an electric vehicle consortium, LLNL and Kaiser Aluminum are conducting experimental and numerical studies on crashworthy aluminum spaceframe designs. They have jointly explored the effect of heat treat on crush behavior and duplicated the experimental behavior with finite-element simulations. The major technical contributions to the state of the art in numerical simulation arise from the development and use of advanced material model descriptions for LLNL's DYNA3D code. Constitutive model enhancements in both flow and failure have been employed for conventional materials such as low-carbon steels, and also for lighter weight materials such as aluminum and fiber composites being considered for future vehicles. The constitutive model enhancements are developed as extensions of LLNL's work in anisotropic flow and multiaxial failure modeling. Analysis quality as a function of the level of simplification of material behavior and mesh is explored, as well as the penalty in computation cost that must be paid for using more complex models and meshes. The lightweight material modeling technology is being used at the vehicle component level to explore the safety implications of small neighborhood electric vehicles manufactured almost exclusively from these materials.

  18. Advances in protein complex analysis using mass spectrometry

    PubMed Central

    Gingras, Anne-Claude; Aebersold, Ruedi; Raught, Brian

    2005-01-01

    Proteins often function as components of larger complexes to perform a specific function, and formation of these complexes may be regulated. For example, intracellular signalling events often require transient and/or regulated protein–protein interactions for propagation, and protein binding to a specific DNA sequence, RNA molecule or metabolite is often regulated to modulate a particular cellular function. Thus, characterizing protein complexes can offer important insights into protein function. This review describes recent important advances in mass spectrometry (MS)-based techniques for the analysis of protein complexes. Following brief descriptions of how proteins are identified using MS, and general protein complex purification approaches, we address two of the most important issues in these types of studies: specificity and background protein contaminants. Two basic strategies for increasing specificity and decreasing background are presented: whereas (1) tandem affinity purification (TAP) of tagged proteins of interest can dramatically improve the signal-to-noise ratio via the generation of cleaner samples, (2) stable isotopic labelling of proteins may be used to discriminate between contaminants and bona fide binding partners using quantitative MS techniques. Examples, as well as advantages and disadvantages of each approach, are presented. PMID:15611014

  19. Safety Analysis of Soybean Processing for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Hentges, Dawn L.

    1999-01-01

    Soybean (cv. Hoyt) is one of the crops planned for food production within the Advanced Life Support System Integration Testbed (ALSSIT), a proposed habitat simulation for long duration lunar/Mars missions. Soybeans may be processed into a variety of food products, including soymilk, tofu, and tempeh. Due to the closed environmental system and the importance of crew health maintenance, food safety is a primary concern on long duration space missions. Identification of the food safety hazards and critical control points associated with the closed ALSSIT system is essential for the development of safe food processing techniques and equipment. A Hazard Analysis Critical Control Point (HACCP) model was developed to reflect proposed production and processing protocols for ALSSIT soybeans. Soybean processing was placed in the type III risk category. During the processing of ALSSIT-grown soybeans, critical control points were identified to control microbiological hazards, particularly mycotoxins, and chemical hazards from antinutrients. Critical limits were suggested at each CCP. Food safety recommendations regarding the hazards and risks associated with growing, harvesting, and processing soybeans; biomass management; and the use of multifunctional equipment were made in consideration of the limitations and restraints of the closed ALSSIT.

  20. Steady-State Analysis Model for Advanced Fuel Cycle Schemes.

    SciTech Connect

    SARTORI, ENRICO

    2008-03-17

    Version 00 SMAFS was developed as a part of the study, "Advanced Fuel Cycles and Waste Management", which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee in the OECD/NEA. The model was designed for the efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent and offers users the capability to track down cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study, including mass flows, waste generation, cost data, and other data such as activities, decay heat and neutron sources of spent fuel and high-level waste over time, are included in the model and can be displayed. The user can easily modify values of mass flows and/or cost parameters and see corresponding changes in the results. The model calculates: front-end fuel cycle mass flows such as requirements of enrichment and conversion services and natural uranium; mass of waste based on the waste generation parameters and the mass flow; and all costs.

  1. Steady-state Analysis Model for Advanced Fuelcycle Schemes

    SciTech Connect

    2006-05-12

    The model was developed as a part of the study, "Advanced Fuel Cycles and Waste Management", which was performed during 2003-2005 by an ad-hoc expert group under the Nuclear Development Committee in the OECD/NEA. The model was designed for the efficient conduct of nuclear fuel cycle scheme cost analyses. It is simple, transparent and offers users the capability to track down the cost analysis results. All the fuel cycle schemes considered in the model are represented in a graphic format and all values related to a fuel cycle step are shown in the graphic interface, i.e., there are no hidden values embedded in the calculations. All data on the fuel cycle schemes considered in the study, including mass flows, waste generation, cost data, and other data such as activities, decay heat and neutron sources of spent fuel and high-level waste over time, are included in the model and can be displayed. The user can easily modify the values of mass flows and/or cost parameters and see the corresponding changes in the results. The model calculates: front-end fuel cycle mass flows such as requirements of enrichment and conversion services and natural uranium; mass of waste based on the waste generation parameters and the mass flow; and all costs. It performs Monte Carlo simulations by varying the values of all unit costs within their respective ranges (from lower to upper bounds).
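    The Monte Carlo step mentioned at the end can be sketched simply: draw each unit cost uniformly from its bounds and roll up a total fuel-cycle cost. The cost categories, mass flows, and bounds below are invented placeholders, not SMAFS data.

    ```python
    # Sketch: Monte Carlo rollup of fuel-cycle cost with unit costs drawn
    # uniformly from lower/upper bounds. Categories and numbers are invented.
    import numpy as np

    rng = np.random.default_rng(42)
    # (mass flow [kg/GWh], lower and upper unit cost [$/kg]) per step
    steps = {
        "natural uranium": (20.0, 80.0, 260.0),
        "conversion":      (20.0, 5.0, 15.0),
        "enrichment":      (12.0, 80.0, 120.0),
        "fabrication":     (3.0, 200.0, 400.0),
    }

    n = 10_000
    total = np.zeros(n)
    for mass, lo, hi in steps.values():
        total += mass * rng.uniform(lo, hi, size=n)

    print(f"fuel-cycle cost: mean = {total.mean():.0f} $/GWh, "
          f"5-95% = [{np.percentile(total, 5):.0f}, {np.percentile(total, 95):.0f}]")
    ```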

  2. Sensitivity analysis of the GNSS derived Victoria plate motion

    NASA Astrophysics Data System (ADS)

    Apolinário, João; Fernandes, Rui; Bos, Machiel

    2014-05-01

    Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region, and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were/are affected by missing data and offsets. In addition, some time-series were close to the minimal threshold considered necessary to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data by extending the data-span used: Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations have become available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al., 2013), which can use different noise models and estimate offsets and seasonal signals simultaneously. The estimation of seasonal signals is another important issue, since some time-series are rather short, which implies that the seasonal signals can still have some effect on the estimated trends, as shown by Blewitt and Lavallee (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they, both detected and undetected, can influence the estimated plate motion. The times of the offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the

  3. Advanced Fuel Cycle Economic Analysis of Symbiotic Light-Water Reactor and Fast Burner Reactor Systems

    SciTech Connect

    D. E. Shropshire

    2009-01-01

    The Advanced Fuel Cycle Economic Analysis of Symbiotic Light-Water Reactor and Fast Burner Reactor Systems, prepared to support the U.S. Advanced Fuel Cycle Initiative (AFCI) systems analysis, provides a technology-oriented baseline system cost comparison between the open fuel cycle and closed fuel cycle systems. The intent is to understand their overall cost trends, cost sensitivities, and trade-offs. This analysis also improves the AFCI Program's understanding of the cost drivers that will determine nuclear power's cost competitiveness vis-a-vis other baseload generation systems. The common reactor-related costs consist of capital, operating, and decontamination and decommissioning costs. Fuel cycle costs include front-end (pre-irradiation) and back-end (post-irradiation) costs, as well as costs specifically associated with fuel recycling. This analysis reveals that there are large cost uncertainties associated with all the fuel cycle strategies, and that overall systems (reactor plus fuel cycle) using a closed fuel cycle are about 10% more expensive in terms of electricity generation cost than open cycle systems. The study concludes that further U.S. and joint international-based design studies are needed to reduce the cost uncertainties with respect to fast reactor, fuel separation and fabrication, and waste disposition. The results of this work can help provide insight to the cost-related factors and conditions needed to keep nuclear energy (including closed fuel cycles) economically competitive in the U.S. and worldwide. These results may be updated over time based on new cost information, revised assumptions, and feedback received from additional reviews.

  4. Flow blockage analysis for the advanced neutron source reactor

    SciTech Connect

    Stovall, T.K.; Crabtree, J.A.; Felde, D.K.; Park, J.E.

    1996-01-01

    The Advanced Neutron Source (ANS) reactor was designed to provide a research tool with capabilities beyond those of any existing reactors. One portion of its state-of-the-art design required high-speed fluid flow through narrow channels between the fuel plates in the core. Experience with previous reactors has shown that fuel plate damage can occur when debris becomes lodged at the entrance to these channels. Such debris disrupts the fluid flow to the plate surfaces and can prevent adequate cooling of the fuel. Preliminary ANS designs addressed this issue by providing an unheated entrance length for each fuel plate so that any flow disruption would recover, thus providing adequate heat removal from the downstream, heated portions of the fuel plates. As part of the safety analysis, the adequacy of this unheated entrance length was assessed using both analytical models and experimental measurements. The Flow Blockage Test Facility (FBTF) was designed and built to conduct experiments in an environment closely matching the ANS channel geometry. The FBTF permitted careful measurements of both heat transfer and hydraulic parameters. In addition to these experimental efforts, a thin, rectangular channel was modeled using the Fluent computational fluid dynamics computer code. The numerical results were compared with the experimental data to benchmark the hydrodynamics of the model. After this comparison, the model was extended to include those elements of the safety analysis that were difficult to measure experimentally. These elements included the high wall heat flux pattern and variable fluid properties. The results were used to determine the relationship between potential blockage sizes and the unheated entrance length required.

  5. Meta-Analysis and Advancement of Brucellosis Vaccinology

    PubMed Central

    Carvalho, Tatiane F.; Haddad, João Paulo A.; Paixão, Tatiane A.

    2016-01-01

    Background/Objectives In spite of all the research effort devoted to developing new vaccines against brucellosis, it remains unclear whether these new vaccine technologies will in fact become widely used. The goal of this study was to perform a meta-analysis to identify parameters that influence vaccine efficacy, as well as a descriptive analysis of how the field of Brucella vaccinology is advancing concerning the type of vaccine, the improvement of protection in animal models over time, and the factors that may affect protection in the mouse model. Methods A total of 117 publications that met the criteria were selected for inclusion in this study, with a total of 782 individual experiments analyzed. Results Attenuated (n = 221), inactivated (n = 66) and mutant (n = 102) vaccines provided median protection indexes above 2, whereas subunit (n = 287), DNA (n = 68), and vectored (n = 38) vaccines provided protection indexes lower than 2. When all categories of experimental vaccines are analyzed together, the trend line clearly demonstrates that there was no improvement in the protection indexes over the past 30 years, with a low negative and non-significant linear coefficient. A meta-regression model was developed including all vaccine categories (attenuated, DNA, inactivated, mutant, subunit, and vectored), considering the protection index as a dependent variable and the other parameters (mouse strain, route of vaccination, number of vaccinations, use of adjuvant, challenge Brucella species) as independent variables. Some of these variables influenced the expected protection index of experimental vaccines against Brucella spp. in the mouse model. Conclusion In spite of the large number of publications over the past 30 years, our results indicate that there is no clear trend of improvement in the protective potential of these experimental vaccines. PMID:27846274

  6. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
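
    The distinction between the two sensitivities can be illustrated with a toy aeroelastic balance in which the load depends on the deformation. The sketch below is a minimal stand-in, assuming a one-degree-of-freedom model rather than the paper's finite element and CFD codes: the uncoupled derivative freezes the converged load vector, while the coupled derivative finite-differences the full fixed point.

        # Toy comparison: "uncoupled" stress sensitivity (frozen load vector)
        # versus a coupled aeroelastic sensitivity from finite differences.
        import numpy as np

        q, a0, a1, c = 50.0, 1.0, 0.02, 3.0   # assumed load law and stress factor

        def coupled_displacement(k):
            # Fixed-point aeroelastic balance: k*u = q*(a0 + a1*u)
            return q * a0 / (k - q * a1)

        def stress(k):
            return c * coupled_displacement(k)

        k0 = 10.0
        u0 = coupled_displacement(k0)
        load0 = q * (a0 + a1 * u0)            # converged load, then frozen

        # Uncoupled: stress = c*load0/k with the load held constant
        ds_uncoupled = -c * load0 / k0**2

        # Coupled: central finite difference through the full fixed point
        h = 1e-4
        ds_coupled = (stress(k0 + h) - stress(k0 - h)) / (2 * h)

        print(f"uncoupled d(stress)/dk = {ds_uncoupled:.4f}")
        print(f"coupled   d(stress)/dk = {ds_coupled:.4f}")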

  7. Skin sensitization risk assessment model using artificial neural network analysis of data from multiple in vitro assays.

    PubMed

    Tsujita-Inoue, Kyoko; Hirota, Morihiko; Ashikaga, Takao; Atobe, Tomomi; Kouzuki, Hirokazu; Aiba, Setsuya

    2014-06-01

    The sensitizing potential of chemicals is usually identified and characterized using in vivo methods such as the murine local lymph node assay (LLNA). Due to regulatory constraints and ethical concerns, alternatives to animal testing are needed to predict the skin sensitization potential of chemicals. For this purpose, combined evaluation using multiple in vitro and in silico parameters that reflect different aspects of the sensitization process seems promising. We previously reported that LLNA thresholds could be well predicted by using an artificial neural network (ANN) model, designated iSENS ver.1 (integrating in vitro sensitization tests version 1), to analyze data obtained from two in vitro tests: the human Cell Line Activation Test (h-CLAT) and the SH test. Here, we present a more advanced ANN model, iSENS ver.2, which additionally utilizes the results of the antioxidant response element (ARE) assay and the octanol-water partition coefficient (LogP, reflecting lipid solubility and skin absorption). We found a good correlation between the predicted LLNA thresholds calculated by iSENS ver.2 and reported values. The predictive performance of iSENS ver.2 was superior to that of iSENS ver.1. We conclude that ANN analysis of data from multiple in vitro assays is a useful approach for the risk assessment of chemicals for skin sensitization.
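
    A minimal sketch of the approach, assuming a generic feed-forward network from scikit-learn rather than the authors' iSENS architecture: the four input descriptors mirror those named above (h-CLAT, SH test, ARE assay, LogP), but the training data here are synthetic placeholders, not the published assay results.

        # Hedged ANN sketch: regress an LLNA-threshold-like target on four
        # in vitro / in silico descriptors (all data synthetic).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        # Columns: h-CLAT response, SH test response, ARE assay response, LogP
        X = rng.uniform(0, 1, size=(60, 4))
        # Invented target relationship, for illustration only
        y = 2.0 - 1.2 * X[:, 0] - 0.6 * X[:, 2] + 0.3 * X[:, 3] \
            + rng.normal(0, 0.05, 60)

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
        )
        model.fit(X[:50], y[:50])
        print("held-out R^2:", model.score(X[50:], y[50:]))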

  8. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  9. Male biological clock: a critical analysis of advanced paternal age

    PubMed Central

    Ramasamy, Ranjith; Chiba, Koji; Butler, Peter; Lamb, Dolores J.

    2016-01-01

    Extensive research defines the impact of advanced maternal age on couples’ fecundity and reproductive outcomes, but significantly less research has been focused on understanding the impact of advanced paternal age. Yet it is increasingly common for couples at advanced ages to conceive children. Limited research suggests that the importance of paternal age is significantly less than that of maternal age, but advanced age of the father is implicated in a variety of conditions affecting the offspring. This review examines three aspects of advanced paternal age: the potential problems with conception and pregnancy that couples with advanced paternal age may encounter, the concept of discussing a limit to paternal age in a clinical setting, and the risks of diseases associated with advanced paternal age. As paternal age increases, it presents no absolute barrier to conception, but it does present greater risks and complications. The current body of knowledge does not justify dissuading older men from trying to initiate a pregnancy, but the medical community must do a better job of communicating to couples the current understanding of the risks of conception with advanced paternal age. PMID:25881878

  10. Long vs. short-term energy storage: sensitivity analysis.

    SciTech Connect

    Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (Advanced Energy Analysis, Piedmont, CA)

    2007-07-01

    This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology, asymmetric lead-carbon capacitors, has also been added. Energy storage technologies are examined for three application categories (bulk energy storage, distributed generation, and power quality) with significant variations in discharge time and storage capacity. Sensitivity analyses include the cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is the hours of storage available for discharge.
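
    The annual-cost figure of merit lends itself to a compact worked example. The sketch below is a rough stand-in with invented numbers, not the report's cost data: it annualizes capital with a capital recovery factor, adds O&M and the cost of charging-energy losses, and shows how hours of storage drives the result.

        # Hedged sketch of an annual cost in $/kW-yr for an energy storage system.

        def capital_recovery_factor(rate, years):
            """Annualize capital over the system life at the given interest rate."""
            return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

        def annual_cost_per_kw(power_cost, energy_cost, hours_storage,
                               rate=0.08, life_yr=15, om_per_kw_yr=10.0,
                               elec_price=0.05, cycles_per_yr=250, efficiency=0.75):
            capital = power_cost + energy_cost * hours_storage        # $/kW
            carrying = capital * capital_recovery_factor(rate, life_yr)
            # Cost of the electricity lost to round-trip inefficiency
            losses = elec_price * hours_storage * cycles_per_yr * (1 / efficiency - 1)
            return carrying + om_per_kw_yr + losses                    # $/kW-yr

        # Hours of storage is the major variable: compare 2 h vs 8 h discharge
        for hours in (2, 8):
            print(hours, "h:", round(annual_cost_per_kw(300.0, 150.0, hours), 1),
                  "$/kW-yr")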

  11. Sensitive Detection of Deliquescent Bacterial Capsules through Nanomechanical Analysis.

    PubMed

    Nguyen, Song Ha; Webb, Hayden K

    2015-10-20

    Encapsulated bacteria usually exhibit strong resistance to a wide range of sterilization methods, and are often virulent. Early detection of encapsulation can be crucial in microbial pathology. This work demonstrates a fast and sensitive method for the detection of encapsulated bacterial cells. Nanoindentation force measurements were used to confirm the presence of deliquescent bacterial capsules surrounding bacterial cells. Force/distance approach curves contained characteristic linear-nonlinear-linear domains, indicating co-compression of the capsular layer and cell, indentation of the capsule, and compression of the cell alone. This is a sensitive method for the detection and verification of the encapsulation status of bacterial cells. Given that this method was successful in detecting the nanomechanical properties of two different layers of cell material, i.e., distinguishing between the capsule and the remainder of the cell, further development may potentially lead to the ability to analyze even thinner cellular layers, e.g., lipid bilayers.

  12. Sensitivity analysis of random shell-model interactions

    NASA Astrophysics Data System (ADS)

    Krastev, Plamen; Johnson, Calvin

    2010-02-01

    The input to the configuration-interaction shell model includes many dozens or even hundreds of independent two-body matrix elements. Previous studies have shown that when fitting to experimental low-lying spectra, the greatest sensitivity is to only a few linear combinations of matrix elements. Following Brown and Richter [1], here we consider general two-body interactions in the 1s-0d shell and find that the low-lying spectra are also only sensitive to a few linear combinations of two-body matrix elements. We find, in particular, that the ground-state energies for both the random and non-random (here given by the USDB) interactions are dominated by similar matrix elements, which we try to interpret in terms of monopole and contact interactions, while the excitation energies have a completely different character. [1] B. Alex Brown and W. A. Richter, Phys. Rev. C 74, 034315 (2006)

  13. Sensitivity analysis of eigenvalues for an electro-hydraulic servomechanism

    NASA Astrophysics Data System (ADS)

    Stoia-Djeska, M.; Safta, C. A.; Halanay, A.; Petrescu, C.

    2012-11-01

    Electro-hydraulic servomechanisms (EHSM) are important components of flight control systems, and their role is to control the movement of the flying control surfaces in response to the movement of the cockpit controls. As flight-control systems, EHSMs have a fast dynamic response, a high power-to-inertia ratio and high control accuracy. The paper is devoted to the study of the sensitivity of an electro-hydraulic servomechanism used to actuate an aircraft aileron. The mathematical model of the EHSM used in this paper includes a large number of parameters whose actual values may vary within some ranges of uncertainty. It consists of a nonlinear system of ordinary differential equations composed of the mass and energy conservation equations, the actuator movement equations and the controller equation. In this work the focus is on the sensitivities of the eigenvalues of the linearized homogeneous system, which are the partial derivatives of the eigenvalues of the state-space system with respect to the parameters. These are obtained using a modal approach based on the eigenvectors of the direct and adjoint state-space systems. To calculate the eigenvalues and their sensitivities, the system's Jacobian and its partial derivatives with respect to the parameters are determined. The calculation of the derivative of the Jacobian matrix with respect to the parameters is not a simple task and in many situations it must be done numerically. The system stability is studied in relation to three parameters: m, the equivalent inertial load of the primary control surface reduced to the actuator rod; B, the bulk modulus of the oil; and p, a pressure supply proportionality coefficient. All the sensitivities calculated in this work are in good agreement with those obtained through recalculation.
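
    The modal formula at the heart of this approach is easy to demonstrate numerically. In the sketch below, a toy 3x3 state matrix stands in for the EHSM Jacobian, and the parameter derivative of the Jacobian is taken by finite differences, as the abstract suggests is often necessary; each eigenvalue sensitivity is then dλ/dp = yᵀ(∂A/∂p)x / (yᵀx), with x a right eigenvector and y the matching eigenvector of the adjoint (transposed) system.

        # Eigenvalue sensitivities from direct and adjoint eigenvectors
        # (illustrative 3x3 system, not the EHSM model).
        import numpy as np

        def jacobian(p):
            return np.array([[-1.0,  p,  0.0],
                             [ 0.5, -2.0, 1.0],
                             [ 0.0,  p, -3.0]])

        p0, h = 1.2, 1e-6
        A = jacobian(p0)
        dA = (jacobian(p0 + h) - jacobian(p0 - h)) / (2 * h)   # numerical dA/dp

        w, vr = np.linalg.eig(A)       # right eigenvectors (direct system)
        wl, vl = np.linalg.eig(A.T)    # eigenvectors of the adjoint system
        for k in range(3):
            j = np.argmin(abs(wl - w[k]))      # pair left/right by eigenvalue
            x, y = vr[:, k], vl[:, j]
            sens = (y @ dA @ x) / (y @ x)      # dlambda/dp
            print(f"lambda = {w[k]: .4f}, dlambda/dp = {sens: .4f}")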

  14. Thermal analysis of microlens formation on a sensitized gelatin layer

    SciTech Connect

    Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko; Panic, Bratimir; Jelenkovic, Branislav

    2009-07-01

    We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.

  15. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel geometrical parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has a wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software was developed that dynamically changes the surface of the airplane configuration with changes in the input design variables. The software is user friendly and is targeted toward the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the automatic differentiation precompiler ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.

  16. Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening

    SciTech Connect

    Rodnizki, J; Ben Aliz, Y; Grin, A; Horvitz, Z; Perry, A; Weissman, L; Davis, G Kirk; Delayen, Jean R.

    2014-12-01

    The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV, 5 mA light-ion superconducting RF linac. Phase I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5-4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main limiting factor in reaching higher ion energy and beam power is the HWR sensitivity to liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities had been designed, a decade ago, to be soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.

  17. Simulation of the global contrail radiative forcing: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.

    2012-12-01

    The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The up-to-date Community Atmosphere Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m⁻². Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.

  18. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
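
    The first two techniques are easy to prototype. The sketch below is a single-degree-of-freedom stand-in for the five-span beam, with an assumed pulse loading and peak-response measure: it differences a transient response with the forward and central operators, and comparing the two step sizes gives a feel for the truncation errors discussed above.

        # Forward vs central difference sensitivity of a peak transient response
        # (toy damped oscillator, not the paper's five-span beam).
        import numpy as np
        from scipy.integrate import solve_ivp

        def peak_response(k, m=1.0, c=0.4):
            # Pulse-loaded oscillator: m*u'' + c*u' + k*u = f(t)
            f = lambda t: 1.0 if t < 0.5 else 0.0
            rhs = lambda t, y: [y[1], (f(t) - c * y[1] - k * y[0]) / m]
            sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
            return np.max(np.abs(sol.y[0]))   # critical-point style measure

        k0 = 4.0
        base = peak_response(k0)
        for h in (1e-2, 1e-4):
            fwd = (peak_response(k0 + h) - base) / h
            ctr = (peak_response(k0 + h) - peak_response(k0 - h)) / (2 * h)
            print(f"h={h:g}: forward {fwd:.5f}, central {ctr:.5f}")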

  19. Generic Repository Concepts and Thermal Analysis for Advanced Fuel Cycles

    SciTech Connect

    Hardin, Ernest; Blink, James; Carter, Joe; Massimiliano, Fratoni; Greenberg, Harris; Howard, Rob L

    2011-01-01

    The current posture of the used nuclear fuel management program in the U.S., following termination of the Yucca Mountain Project, is to pursue research and development (R&D) of generic (i.e., non-site-specific) technologies for storage, transportation and disposal. Disposal R&D is directed toward understanding and demonstrating the performance of reference geologic disposal concepts selected to represent the current state of the art in geologic disposal. One of the principal constraints on waste packaging and emplacement in a geologic repository is management of the waste-generated heat. This paper describes the selection of reference disposal concepts and thermal management strategies for waste from advanced fuel cycles. A geologic disposal concept for spent nuclear fuel (SNF) or high-level waste (HLW) consists of three components: waste inventory, geologic setting, and concept of operations. A set of reference geologic disposal concepts has been developed by the U.S. Department of Energy (DOE) Used Fuel Disposition Campaign for crystalline rock, clay/shale, bedded salt, and deep borehole (crystalline basement) geologic settings. We performed thermal analysis of these concepts using waste inventory cases representing a range of advanced fuel cycles. Concepts of operation, consisting of emplacement mode, repository layout, and engineered barrier descriptions, were selected based on international progress and previous experience in the U.S. repository program. All of the disposal concepts selected for this study use enclosed emplacement modes, whereby waste packages are in direct contact with encapsulating engineered or natural materials. The encapsulating materials (typically clay-based or rock salt) have low intrinsic permeability and a plastic rheology that closes voids so that low permeability is maintained. Uniformly low permeability also contributes to the chemically reducing conditions common in soft clay, shale, and salt formations. Enclosed modes are associated

  20. Advances in the Forensic Analysis of Glass Fragments with a Focus on Refractive Index and Elemental Analysis.

    PubMed

    Almirall, J R; Trejos, T

    2006-07-01

    Advances in technology provide forensic scientists with better tools to detect, identify, and individualize small amounts of trace evidence left at a crime scene. The analysis of glass fragments can be useful in solving cases such as hit-and-runs, burglaries, kidnappings, and bombings. The value of glass as "evidentiary material" lies in its inherent characteristics: (a) it is a fragile material that is often broken and hence commonly found in various types of crime scenes, (b) it can be easily transferred from the broken source to the scene, suspect, and/or victim, (c) it is relatively persistent, (d) it is chemically stable, and (e) it has measurable physical and chemical properties that can provide significant evidence of an association between recovered glass fragments and the source of the broken glass. Forensic scientists have dedicated considerable effort to studying and improving the detection and discrimination capabilities of analytical techniques in order to enhance the quality of information obtained from glass fragments. This article serves as a review of developments in the application of both traditional and novel methods of glass analysis. The greatest progress has been made with respect to the incorporation of automated refractive index measurements and elemental analysis into the analytical scheme. Glass examiners have applied state-of-the-art technology, including elemental analysis by sensitive methods such as ICP-MS and LA-ICP-MS. A review of the literature regarding the transfer, persistence, and interpretation of glass evidence is also presented.

  1. Advanced microgrid design and analysis for forward operating bases

    NASA Astrophysics Data System (ADS)

    Reasoner, Jonathan

    This thesis takes a holistic approach to creating an improved electric power generation system for a future forward operating base (FOB) through the design of an isolated microgrid. After an extensive literature search, this thesis found a need for drastic improvement of the FOB power system. A thorough design process analyzed FOB demand, researched demand-side management improvements, evaluated various generation sources and energy storage options, and performed a HOMER discrete optimization to determine the best microgrid design. Further sensitivity analysis was performed to see how changing parameters would affect the outcome. Lastly, this research also looks at some of the challenges associated with incorporating a design which relies heavily on inverter-based generation sources, and gives possible solutions to help make a renewable-energy-powered microgrid a reality. While this thesis uses a FOB as the case study, the process and discussion can be adapted to aid in the design of an off-grid, small-scale power grid which utilizes high penetration levels of renewable energy.

  2. Sensitivity analysis and dynamic modification of modal parameter in mechanical transmission system

    NASA Astrophysics Data System (ADS)

    Xie, Shao-Wang; Chen, Qi-Lian; Chen, Chang-Zheng; Li, Qing-Fen

    2005-12-01

    Sensitivity analysis is one of the most effective methods for dynamic modification. The sensitivity of modal parameters such as the natural frequencies and mode shapes in undamped free vibration of a mechanical transmission system is analyzed in this paper. In particular, the sensitivities of the modal parameters to physical parameters of the shaft system, such as the inertia and stiffness, are given. A calculation formula for dynamic modification is presented based on the analysis of modal parameters. With a mechanical transmission system as an example, the sensitivities of natural frequencies and mode shapes are calculated and analyzed. Furthermore, the dynamic modification is also carried out and a good result is obtained.
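
    For an undamped system with mass-normalized mode shapes, the sensitivities described above have a closed form: dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ. The sketch below demonstrates this on an assumed two-inertia stand-in for a transmission shaft, not the paper's system.

        # Modal sensitivity of an undamped 2-DOF system to a stiffness parameter.
        import numpy as np
        from scipy.linalg import eigh

        M = np.diag([2.0, 1.0])                    # inertias (dM/dk2 = 0)
        def K(k2):                                 # stiffness with one design parameter
            return np.array([[3.0 + k2, -k2],
                             [-k2,       k2]])

        k2 = 5.0
        lam, phi = eigh(K(k2), M)                  # eigh returns mass-normalized modes
        dK = np.array([[1.0, -1.0], [-1.0, 1.0]])  # exact dK/dk2

        for i in range(2):
            dlam = phi[:, i] @ dK @ phi[:, i]      # dlambda/dk2
            fn = np.sqrt(lam[i]) / (2 * np.pi)     # natural frequency, Hz
            print(f"mode {i+1}: f = {fn:.3f} Hz, dlambda/dk2 = {dlam:.4f}")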

  3. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with the selection of a finite difference perturbation.

  4. Implementation of terbium-sensitized luminescence in sequential-injection analysis for automatic analysis of orbifloxacin.

    PubMed

    Llorent-Martínez, E J; Ortega-Barrales, P; Molina-Díaz, A; Ruiz-Medina, A

    2008-12-01

    Orbifloxacin (ORBI) is a third-generation fluoroquinolone developed exclusively for use in veterinary medicine, mainly in companion animals. This antimicrobial agent has bactericidal activity against numerous gram-negative and gram-positive bacteria. A few chromatographic methods for its analysis have been described in the scientific literature. Here, coupling of sequential-injection analysis and solid-phase spectroscopy is described in order to develop, for the first time, a terbium-sensitized luminescent optosensor for analysis of ORBI. The cationic resin Sephadex-CM C-25 was used as solid support and measurements were made at 275/545 nm. The system had a linear dynamic range of 10-150 ng mL(-1), with a detection limit of 3.3 ng mL(-1) and an R.S.D. below 3% (n = 10). The analyte was satisfactorily determined in veterinary drugs and dog and horse urine.

  5. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach

    NASA Astrophysics Data System (ADS)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex

    2016-06-01

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected model parameters using a modified volatility basis-set (VBS) approach: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semivolatile and intermediate volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the model parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether or not SOA that starts as semivolatile is rapidly transformed to nonvolatile SOA by particle-phase processes such as oligomerization and/or accretion, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to nonvolatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. However
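
    The sampling-and-regression workflow can be miniaturized as below. This is a schematic stand-in: a Sobol' quasi-Monte Carlo design over four of the seven parameters, a toy response in place of the chemical transport model, and a linear-model variance decomposition in place of the paper's generalized linear model method.

        # Quasi-Monte Carlo ensemble + variance-based attribution (toy model).
        import numpy as np
        from scipy.stats import qmc

        names = ["bio_VOC", "anth_SIVOC", "NOx", "vol_transform"]
        sampler = qmc.Sobol(d=4, seed=1)
        X = qmc.scale(sampler.random_base2(m=8),       # 256-member ensemble
                      [0.5, 0.5, 0.5, 0.0], [2.0, 2.0, 2.0, 1.0])

        # Toy SOA response: the transformation switch interacts with emissions
        soa = (1.5 * X[:, 0] + 0.8 * X[:, 1]) * (1.0 + 2.0 * X[:, 3]) \
              + 0.1 * X[:, 2] + np.random.default_rng(1).normal(0, 0.05, len(X))

        # Linear-model variance decomposition (share of explained variance)
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, soa, rcond=None)
        contrib = coef[1:] ** 2 * X.var(axis=0)
        for n, c in zip(names, contrib / contrib.sum()):
            print(f"{n:>13s}: {100 * c:5.1f}% of explained variance")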

  6. Engineering design and analysis of advanced physical fine coal cleaning technologies

    SciTech Connect

    Not Available

    1992-01-20

    This project is sponsored by the United States Department of Energy (DOE) for the Engineering Design and Analysis of Advanced Physical Fine Coal Cleaning Technologies. The major goal is to provide the simulation tools for modeling both conventional and advanced coal cleaning technologies. This DOE project is part of a major research initiative by the Pittsburgh Energy Technology Center (PETC) aimed at advancing three advanced coal cleaning technologies (heavy-liquid cycloning, selective agglomeration, and advanced froth flotation) through the proof-of-concept (POC) level.

  7. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

  8. Presence of novel compound BCR-ABL mutations in late chronic and advanced phase imatinib sensitive CML patients indicates their possible role in CML progression.

    PubMed

    Akram, Afia Muhammad; Iqbal, Zafar; Akhtar, Tanveer; Khalid, Ahmed Mukhtar; Sabar, Muhammad Farooq; Qazi, Mahmood Hussain; Aziz, Zeba; Sajid, Nadia; Aleem, Aamer; Rasool, Mahmood; Asif, Muhammad; Aloraibi, Saleh; Aljamaan, Khaled; Iqbal, Mudassar

    2017-02-21

    BCR-ABL kinase domain (KD) mutations are well known for causing resistance against tyrosine kinase inhibitors (TKIs) and disease progression in chronic myeloid leukemia (CML). In recent years, compound BCR-ABL mutations have emerged as a new threat to CML patients by causing higher degrees of resistance involving multiple TKIs, including ponatinib. However, there are limited reports about the association of compound BCR-ABL mutations with disease progression in imatinib (IM)-sensitive CML patients. Therefore, we investigated the presence of ABL-KD mutations in chronic phase (n = 41), late chronic phase (n = 33) and accelerated phase (n = 16) imatinib responders. Direct sequencing analysis was employed for this purpose. Eleven patients (12.22%) in late-CP CML were found to harbor a total of 24 point mutations, of whom eight (72.72%) harbored compound mutated sites. SH2 contact site mutations were dominant in our study cohort, with E355G (3.33%) being the most prevalent. Five patients (45%), all having compound mutated sites, progressed to advanced phases of disease during follow-up. Two novel silent mutations, G208G and E292E/E, were detected in combination with other mutants, indicating limited tolerance of the BCR-ABL1 kinase domain for missense mutations. However, no patient in the early CP of disease manifested a mutated ABL-KD. The occurrence of mutations was found to be associated with an elevated platelet count (p = 0.037) and male sex (p = 0.049). The median overall survival and event-free survival of the CML patients (n = 90) were 6.98 and 5.8 years, respectively. Compound missense mutations in the BCR-ABL kinase domain, responsible for eliciting disease progression, drug resistance or disease relapse in CML, can be present in patients who are still imatinib-sensitive. The disease progression observed here emphasizes the need for ABL-KD mutation screening in late chronic phase CML patients for improved clinical management of the disease.

  9. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM (Lithium Blanket Module) experiments at LOTUS

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1988-01-01

    In a recent joint Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference S_N-transport code ONEDANT, the two-dimensional finite element S_N-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. Within the framework of the present work, a complete set of forward and adjoint two-dimensional TRISM calculations was performed, both for the bare and for the Pb- and Be-preceded LBM, using MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed. The goal of this analysis was the determination of the uncertainties of the calculated tritium production per source neutron from lithium along the central Li₂O rod in the LBM. Considered were the contributions from ¹H, ⁶Li, ⁷Li, ⁹Be, natC, ¹⁴N, ¹⁶O, ²³Na, ²⁷Al, natSi, natCr, natFe, natNi, and natPb. 22 refs., 1 fig., 3 tabs.

  10. Advancing Risk Analysis for Nanoscale Materials: Report from an International Workshop on the Role of Alternative Testing Strategies for Advancement

    SciTech Connect

    Shatkin, J. A.; Ong, Kimberly J.; Beaudrie, Christian; Clippinger, Amy J.; Hendren, Christine Ogilvie; Haber, Lynne T.; Hill, Myriam; Holden, Patricia; Kennedy, Alan J.; Kim, Baram; MacDonell, Margaret; Powers, Christina M.; Sharma, Monita; Sheremeta, Lorraine; Stone, Vicki; Sultan, Yasir; Turley, Audrey; White, Ronald H.

    2016-08-01

    The Society for Risk Analysis (SRA) has a history of bringing thought leadership to topics of emerging risk. In September 2014, the SRA Emerging Nanoscale Materials Specialty Group convened an international workshop to examine the use of alternative testing strategies (ATS) for manufactured nanomaterials (NM) from a risk analysis perspective. Experts in NM environmental health and safety, human health, ecotoxicology, regulatory compliance, risk analysis, and ATS evaluated and discussed the state of the science for in vitro and other alternatives to traditional toxicology testing for NM. Based on this review, experts recommended immediate and near-term actions that would advance ATS use in NM risk assessment. Three focal areas (human health, ecological health, and exposure considerations) shaped deliberations about information needs, priorities, and the next steps required to increase confidence in and use of ATS in NM risk assessment. The deliberations revealed that ATS are now being used for screening, and that, in the near term, ATS could be developed for use in read-across or categorization decision making within certain regulatory frameworks. Participants recognized that leadership is required from within the scientific community to address basic challenges, including standardizing materials, protocols, techniques and reporting, and designing experiments relevant to real-world conditions, as well as coordination and sharing of large-scale collaborations and data. Experts agreed that it will be critical to include experimental parameters that can support the development of adverse outcome pathways. Numerous other insightful ideas for investment in ATS emerged throughout the discussions and are further highlighted in this article.

  11. Developing optical traps for ultra-sensitive analysis

    SciTech Connect

    Zhao, X.; Vieira, D.J.; Guckert, R. |; Crane, S.

    1998-09-01

    The authors describe the coupling of a magneto-optical trap to a mass separator for the ultra-sensitive detection of selected radioactive species. As a proof-of-principle test, they have demonstrated the trapping of approximately 6 million ⁸²Rb (t1/2 = 75 s) atoms using an ion implantation and heated foil release method for introducing the sample into a trapping cell with minimal gas loading. Gamma-ray counting techniques were used to determine the efficiencies of each step in the process. By far the weakest step in the process is the efficiency of the optical trap itself (0.3%). Further improvements in the quality of the nonstick dry-film coating on the inside of the trapping cell and the possible use of larger-diameter laser beams are indicated. In the presence of a large background of scattered light, this initial work achieved a detection sensitivity of approximately 4,000 trapped atoms. Improved detection schemes using a pulsed trap and a gated photon detection method are outlined. Application of this technology to the areas of environmental monitoring and nuclear proliferation is foreseen.

  12. Analysis of the stability and sensitivity of jets in crossflow

    NASA Astrophysics Data System (ADS)

    Regan, Marc; Mahesh, Krishnan

    2016-11-01

    Jets in crossflow (transverse jets) are a canonical fluid flow in which a jet of fluid is injected normal to a crossflow. A high-fidelity, unstructured, incompressible DNS solver has been shown (Iyer & Mahesh 2016) to reproduce the complex shear layer instability seen in low-speed jets in crossflow experiments. Vertical velocity spectra taken along the shear layer show good agreement between simulation and experiment. An analogy to countercurrent mixing layers has been proposed to explain the transition from absolute to convective stability with increasing jet-to-crossflow ratios. Global linear stability and adjoint sensitivity techniques are developed within the unstructured DNS solver in an effort to further understand the stability and sensitivity of jets in crossflow. An Arnoldi iterative approach is used to solve for the most unstable eigenvalues and their associated eigenmodes for the direct and adjoint formulations. Frequencies from the direct and adjoint modal analyses show good agreement with simulation and experiment. Development, validation, and results for the transverse jet will be presented. Supported by AFOSR.
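
    The Arnoldi step can be illustrated at toy scale. The sketch below assumes a small one-dimensional linear convection-diffusion operator in place of the linearized Navier-Stokes Jacobian and uses ARPACK's implicitly restarted Arnoldi method to pull out the least-stable part of the spectrum.

        # Arnoldi extraction of the least-stable eigenvalues of a linear operator
        # (1-D convection-diffusion toy, not the DNS Jacobian).
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigs

        n = 200
        dx, U, nu = 1.0 / n, 1.0, 1e-3
        main = -2 * nu / dx**2 * np.ones(n)
        upper = (nu / dx**2 - U / (2 * dx)) * np.ones(n - 1)
        lower = (nu / dx**2 + U / (2 * dx)) * np.ones(n - 1)
        A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")

        # Arnoldi (ARPACK) for the eigenvalues of largest real part
        vals, modes = eigs(A, k=6, which="LR")
        print("least-stable eigenvalues:", np.sort_complex(vals)[::-1][:3])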

  13. Recent Advances in Liquid Chromatography/Mass Spectrometry Techniques for Environmental Analysis in Japan

    PubMed Central

    Suzuki, Shigeru

    2014-01-01

    The techniques and measurement methods developed in the Environmental Survey and Monitoring of Chemicals by Japan's Ministry of the Environment, as well as the large amount of knowledge archived in the survey, have led to the advancement of environmental analysis. Recently, technologies such as non-target liquid chromatography/high-resolution mass spectrometry and liquid chromatography with micro-bore columns have further developed the field. Here, the general strategy of a method developed for liquid chromatography/mass spectrometry (LC/MS) analysis of environmental chemicals is presented with a brief description. Also presented is a non-target analysis for the identification of environmental pollutants using a provisional fragment database and "MsMsFilter," an elemental composition elucidation tool. This analytical method is shown to be highly effective in the identification of a model chemical, the pesticide Bendiocarb. Our improved micro-liquid chromatography injection system showed substantially enhanced sensitivity to perfluoroalkyl substances, with peak areas 32–71 times larger than those observed in conventional LC/MS. PMID:26819891

  14. FTIR gas analysis with improved sensitivity and selectivity for CWA and TIC detection

    NASA Astrophysics Data System (ADS)

    Phillips, Charles M.; Tan, Huwei

    2010-04-01

    This presentation describes the use of an FTIR (Fourier Transform Infrared)-based spectrometer designed to continuously monitor ambient air for the presence of chemical warfare agents (CWAs) and toxic industrial chemicals (TICs). The necessity of a reliable system capable of quickly and accurately detecting very low levels of CWAs and TICs while simultaneously retaining a negligible false alarm rate will be explored. Technological advancements in FTIR sensing have reduced noise while increasing selectivity and speed of detection. These novel analyzer design characteristics are discussed in detail and descriptions are provided which show how optical throughput, gas cell form factor, and detector response are optimized. The hardware and algorithms described here will explain why this FTIR system is very effective for the simultaneous detection and speciation of a wide variety of toxic compounds at ppb concentrations. Analytical test data will be reviewed demonstrating the system's sensitivity to and selectivity for specific CWAs and TICs; this will include recent data acquired as part of the DHS ARFCAM (Autonomous Rapid Facility Chemical Agent Monitor) project. These results include analyses of the data from live agent testing for the determination of CWA detection limits, immunity to interferences, detection times, residual noise analysis and false alarm rates. Sensing systems such as this are critical for effective chemical hazard identification which is directly relevant to the CBRNE community.

  15. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.

  16. Analysis on sensitivity and landscape ecological spatial structure of site resources.

    PubMed

    Li, Zhen; He, Fang; Wu, Qiao-jun; Tao, Wei

    2003-03-01

    This article establishes a set of indicators and standards for landscape ecological sensitivity analysis of site resources by using the theories and approaches of landscape ecology. It uses the landscape diversity index (H), evenness (E), natural degree (N), and contrast degree (C) to study the spatial structure and landscape heterogeneity of site resources, and thus provides a qualitative-quantitative evaluation method for land planning and management of small and medium scale areas. The analysis of Yantian District, Shenzhen, China showed that Wutong Mountain belonged to the high landscape ecological sensitivity area, while Sanzhoutian Reservoir and Shangping Reservoir were areas of medium landscape sensitivity and high ecological sensitivity; Dameisha and Xiaomeisha belonged to the medium sensitivity area owing to the decline of natural ecological areas. Shatoujiao and Yantian Pier belonged to the low sensitivity area, but urban landscape ecological development had reshaped and influenced their landscape ecological roles to a great extent. Suggestions on planning, protection goals and development intensity of each site or district were raised.

  17. Upper limb strength estimation of physically impaired persons using a musculoskeletal model: A sensitivity analysis.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2015-01-01

    Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.

  18. Preconditioned domain decomposition scheme for three-dimensional aerodynamic sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Eleshaky, Mohammed E.; Baysal, Oktay

    1993-01-01

    A preconditioned domain decomposition scheme is introduced for the solution of the 3D aerodynamic sensitivity equation. This scheme uses the iterative GMRES procedure to solve the effective sensitivity equation of the boundary-interface cells in the sensitivity analysis domain-decomposition scheme. Excluding the dense matrices and the effect of cross terms between boundary-interfaces is found to produce an efficient preconditioning matrix.

  19. Inference of Climate Sensitivity from Analysis of Earth's Energy Budget

    NASA Astrophysics Data System (ADS)

    Forster, Piers M.

    2016-06-01

    Recent attempts to diagnose equilibrium climate sensitivity (ECS) from changes in Earth's energy budget point toward values at the low end of the Intergovernmental Panel on Climate Change Fifth Assessment Report (AR5)'s likely range (1.5-4.5 K). These studies employ observations but still require an element of modeling to infer ECS. Their diagnosed effective ECS over the historical period of around 2 K holds up to scrutiny, but there is tentative evidence that this underestimates the true ECS from a doubling of carbon dioxide. Different choices of energy imbalance data explain most of the difference between published best estimates, and effective radiative forcing dominates the overall uncertainty. For decadal analyses the largest source of uncertainty comes from a poor understanding of the relationship between ECS and decadal feedback. Considerable progress could be made by diagnosing effective radiative forcing in models.
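
    The energy-budget inference itself is a one-line calculation. The sketch below applies it with round illustrative numbers, not the paper's data: effective ECS = F_2x ΔT / (ΔF − ΔQ), where ΔT is the observed warming, ΔF the change in effective radiative forcing, ΔQ the change in Earth's energy imbalance, and F_2x the forcing from doubled CO2.

        # Energy-budget estimate of effective climate sensitivity.

        def effective_ecs(dT, dF, dQ, F2x=3.7):
            """dT: warming (K); dF: forcing change (W m-2); dQ: imbalance (W m-2)."""
            return F2x * dT / (dF - dQ)

        # Illustrative inputs give an effective ECS near the 2 K noted above
        print(f"ECS ~ {effective_ecs(dT=0.9, dF=2.3, dQ=0.6):.2f} K")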

  20. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    SciTech Connect

    Dryer, F.L.; Yetter, R.A.

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10⁻² to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but also a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), nondispersive infrared (NDIR), and FTIR methods, which are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and for studying the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well-defined and validated mechanisms for the CO/H₂/oxidant systems.

  1. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next-generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
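
    The validation step described above (recompute the optimal law for a perturbed parameter and compare) can be mocked up for the deterministic LQR half of the problem, as in the sketch below. The two-state plant, the weights, and the central-difference baseline are assumptions for illustration; the paper's analytical sensitivity equations are what such finite-difference results would be checked against.

        # Finite-difference sensitivity of an optimal (LQR) gain to a plant
        # parameter; illustrative two-state plant, not the aircraft model.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])

        def lqr_gain(k_spring):
            A = np.array([[0.0, 1.0], [-k_spring, -0.2]])
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)     # K = R^-1 B^T P

        k0, h = 4.0, 1e-5
        dK = (lqr_gain(k0 + h) - lqr_gain(k0 - h)) / (2 * h)
        print("optimal gain:", lqr_gain(k0).ravel())
        print("gain sensitivity dK/dk:", dK.ravel())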

  2. Are treelines advancing? A global meta-analysis of treeline response to climate warming.

    PubMed

    Harsch, Melanie A; Hulme, Philip E; McGlone, Matt S; Duncan, Richard P

    2009-10-01

    Treelines are temperature sensitive transition zones that are expected to respond to climate warming by advancing beyond their current position. Response to climate warming over the last century, however, has been mixed, with some treelines showing evidence of recruitment at higher altitudes and/or latitudes (advance) whereas others reveal no marked change in the upper limit of tree establishment. To explore this variation, we analysed a global dataset of 166 sites for which treeline dynamics had been recorded since 1900 AD. Advance was recorded at 52% of sites with only 1% reporting treeline recession. Treelines that experienced strong winter warming were more likely to have advanced, and treelines with a diffuse form were more likely to have advanced than those with an abrupt or krummholz form. Diffuse treelines may be more responsive to warming because they are more strongly growth limited, whereas other treeline forms may be subject to additional constraints.

  3. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao

    2015-01-01

    There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit considerable variation when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide useful guidance for entrepreneurs when they plan plants.
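
    A scaled-down version of the uncertainty step looks like the sketch below: propagate sampled feedstock prices and interest rates through a stand-in cost model, then rank the inputs by rank correlation with the unit cost. The cost model, distributions, and plant numbers are invented for illustration, and rank correlation is only a crude proxy for the paper's three GSA methods.

        # Monte Carlo uncertainty propagation for a TEA-style unit cost (toy model).
        import numpy as np

        rng = np.random.default_rng(7)
        n = 10_000
        interest = rng.uniform(0.04, 0.10, n)        # uncertain interest rate
        feedstock = rng.uniform(400, 800, n)         # uncertain feedstock price, $/t
        capital, life, throughput = 5e6, 20, 10_000  # $, years, t biodiesel/yr

        crf = interest * (1 + interest) ** life / ((1 + interest) ** life - 1)
        lcc_annual = capital * crf + feedstock * throughput * 1.05
        unit_cost = lcc_annual / throughput          # $/t biodiesel

        print(f"UC mean = {unit_cost.mean():.0f} $/t, 90% range = "
              f"[{np.percentile(unit_cost, 5):.0f}, "
              f"{np.percentile(unit_cost, 95):.0f}]")
        # Crude importance ranking via Spearman rank correlation with each input
        for name, x in [("interest", interest), ("feedstock", feedstock)]:
            r = np.corrcoef(x.argsort().argsort(),
                            unit_cost.argsort().argsort())[0, 1]
            print(f"rank corr with UC, {name}: {r:+.2f}")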

  4. Analysis of Sensitivity and Uncertainty in an Individual-Based Model of a Threatened Wildlife Species

    EPA Science Inventory

    We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...

  5. Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister

    SciTech Connect

    Wittman, Richard S.

    2013-09-20

    This report fulfills the M3 milestone (M3FT-13PN0810027) by reporting on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside the storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect, with the addition of coupled kinetics for 111 reactions among 40 gas species to account for radiolytically induced chemistry, including water recombination and reactions with air.

  6. Avogadro: an advanced semantic chemical editor, visualization, and analysis platform

    PubMed Central

    2012-01-01

    Background The Avogadro project has developed an advanced molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas. It offers flexible, high quality rendering, and a powerful plugin architecture. Typical uses include building molecular structures, formatting input files, and analyzing output of a wide variety of computational chemistry packages. By using the CML file format as its native document type, Avogadro seeks to enhance the semantic accessibility of chemical data types. Results The work presented here details the Avogadro library, which is a framework providing a code library and application programming interface (API) with three-dimensional visualization capabilities; and has direct applications to research and education in the fields of chemistry, physics, materials science, and biology. The Avogadro application provides a rich graphical interface using dynamically loaded plugins through the library itself. The application and library can each be extended by implementing a plugin module in C++ or Python to explore different visualization techniques, build/manipulate molecular structures, and interact with other programs. We describe some example extensions, one which uses a genetic algorithm to find stable crystal structures, and one which interfaces with the PackMol program to create packed, solvated structures for molecular dynamics simulations. The 1.0 release series of Avogadro is the main focus of the results discussed here. Conclusions Avogadro offers a semantic chemical builder and platform for visualization and analysis. For users, it offers an easy-to-use builder, integrated support for downloading from common databases such as PubChem and the Protein Data Bank, extracting chemical data from a wide variety of formats, including computational chemistry output, and native, semantic support for the CML file format. For developers, it can be

  7. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  8. An Introduction to Sensitivity Analysis for Unobserved Confounding in Non-Experimental Prevention Research

    PubMed Central

    Kuramoto, S. Janet; Stuart, Elizabeth A.

    2013-01-01

    Although randomization is the gold standard for estimating causal relationships, many questions in prevention science must be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods are categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk of suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscore sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282
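
    One concrete example of such a technique is the E-value of VanderWeele and Ding (2017), which asks how strong an unobserved confounder would have to be, on the risk-ratio scale, to fully explain away an observed association. It is shown below purely as an illustration; it is not necessarily one of the seven methods in this paper, and the risk ratio used is hypothetical.

        # E-value sketch for an observed risk ratio rr > 1 (hypothetical input).
        import math

        def e_value(rr):
            """Minimum strength of confounding (risk-ratio scale, for both the
            confounder-exposure and confounder-outcome associations) needed to
            fully explain away an observed risk ratio rr > 1."""
            return rr + math.sqrt(rr * (rr - 1))

        print(f"E-value for RR = 2.0: {e_value(2.0):.2f}")   # ~3.41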

  9. Advanced Satellite Research Project: SCAR Research Database. Bibliographic analysis

    NASA Technical Reports Server (NTRS)

    Pelton, Joseph N.

    1991-01-01

    A literature search was performed to locate and analyze the most recent literature relevant to the research. This was done by cross-relating books, articles, monographs, and journals on the following topics: (1) experimental systems - Advanced Communications Technology Satellite (ACTS); and (2) Integrated Services Digital Network (ISDN) and advanced communication techniques (ISDN and satellites, ISDN standards, broadband ISDN, frame relay and switching, computer networks and satellites, satellite orbits and technology, satellite transmission quality, and network configuration). A bibliographic essay on the literature citations and articles reviewed during the literature search task is provided.

  10. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying the relative influence of parameters, and in estimating errors in model behavior due to uncertainty in input data is presented. Such analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is also presented for reducing highly complex, nonlinear models to simple linear algebraic models that can be useful for making rapid, first-order calculations of system behavior.
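
    The core computation in a parameter sensitivity study of this kind is straightforward. Below is a minimal sketch, assuming a logistic population model (not necessarily the one used in the report), that estimates normalized sensitivities S_p = (p/y)(dy/dp) by central finite differences:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def logistic(t, n, r, k):
        # simple population model: dn/dt = r * n * (1 - n/k)
        return r * n[0] * (1.0 - n[0] / k)

    def population_at(t_end, r, k, n0=10.0):
        sol = solve_ivp(logistic, (0.0, t_end), [n0], args=(r, k), rtol=1e-8)
        return sol.y[0, -1]

    # Normalized sensitivity S_p = (p / y) * dy/dp by central differences.
    base = {"r": 0.5, "k": 1000.0}
    y0 = population_at(5.0, **base)
    for p, val in base.items():
        h = 1e-4 * val
        hi = population_at(5.0, **{**base, p: val + h})
        lo = population_at(5.0, **{**base, p: val - h})
        print(f"S_{p} = {(val / y0) * (hi - lo) / (2.0 * h):+.3f}")
    ```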

  11. On 3-D modeling and automatic regridding in shape design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Yao, Tse-Min

    1987-01-01

    The material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis are used to obtain a computable expression for the effect of shape variations on measures of structural performance of three-dimensional elastic solids.

  12. Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval

    NASA Technical Reports Server (NTRS)

    Gat, Ilana

    2012-01-01

    The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields of view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometer region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and the number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method and the new channel ranking system on CO2 retrieval.
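
    The abstract does not give the ranking formalism, but a common way to score and select channels in a linear retrieval y = Kx + noise is greedy selection by how much each channel reduces the posterior (retrieval error) covariance. A hedged sketch of that general idea, with all names, shapes, and inputs assumed:

    ```python
    import numpy as np

    def rank_channels(K, s_a, s_e, n_select):
        """Greedy channel ranking for a linear retrieval y = K x + noise.
        K: (n_channels, n_state) Jacobian; s_a: prior state covariance;
        s_e: per-channel noise variances. Channels are picked in the order
        that most reduces the trace of the posterior error covariance."""
        selected, s_post = [], s_a.copy()
        remaining = list(range(K.shape[0]))
        for _ in range(n_select):
            best, best_gain = None, -np.inf
            for i in remaining:
                k = K[i]                       # row Jacobian of channel i
                denom = k @ s_post @ k + s_e[i]
                gain = (s_post @ k) @ (s_post @ k) / denom  # trace reduction
                if gain > best_gain:
                    best, best_gain = i, gain
            # sequential rank-one Bayesian update of the posterior covariance
            k = K[best]
            g = s_post @ k / (k @ s_post @ k + s_e[best])
            s_post = s_post - np.outer(g, k @ s_post)
            selected.append(best)
            remaining.remove(best)
        return selected, s_post
    ```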

  13. Design tradeoff studies and sensitivity analysis, appendix B

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Further work was performed on the Near Term Hybrid Passenger Vehicle Development Program. Fuel economy on the order of 2 to 3 times that of a conventional vehicle, with a comparable life cycle cost, is possible. The two most significant factors in keeping the life cycle cost down are the retail price increment and the ratio of battery replacement cost to battery life. Both factors can be reduced by reducing the power rating of the electric drive portion of the system relative to the system power requirements. The type of battery most suitable for the hybrid, from the point of view of minimizing life cycle cost, is nickel-iron. In terms of the reduction in total fuel consumption and the resultant decrease in operating expense, the hybrid is much less sensitive than a conventional vehicle to reductions in vehicle weight, tire rolling resistance, etc., and to propulsion system and drivetrain improvements designed to improve the brake specific fuel consumption of the engine under low road load conditions. It is concluded that modifications to package the propulsion system and battery pack can be easily accommodated within the confines of a modified carryover body such as the Ford LTD.

  14. GCR Environmental Models I: Sensitivity Analysis for GCR Environments

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.

    2014-01-01

    Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate model-free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most of the GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to very accurately reproduce the ACE/CRIS data while inducing very different effective dose values behind shielding.

  15. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for integrated model construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region in the Taihu Lake basin, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally to less sensitive to the sediment output but insensitive to the remaining results. For the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all the results except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration. The runoff simulation results for the study area also showed that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
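
    In its simplest one-at-a-time form, the perturbation method used here reduces to a relative sensitivity index S = (ΔY/Y)/(ΔX/X). A minimal sketch, with the model interface and the ±10% perturbation size assumed for illustration:

    ```python
    def perturbation_sensitivity(model, params, name, output, delta=0.10):
        """One-at-a-time relative sensitivity index S = (dY/Y) / (dX/X),
        here from a symmetric +/-10% perturbation of one parameter.
        `model` takes keyword parameters and returns a dict of outputs."""
        base = model(**params)[output]
        up = model(**{**params, name: params[name] * (1 + delta)})[output]
        down = model(**{**params, name: params[name] * (1 - delta)})[output]
        return ((up - down) / base) / (2 * delta)
    ```

    Parameters whose |S| stays well below 1 for every output would be classed as insensitive, mirroring the groupings reported above.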

  16. Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations

    NASA Astrophysics Data System (ADS)

    Stripling, Hayes Franklin

    Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, but are also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
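
    The economy of the adjoint approach (one extra solve yields sensitivities to arbitrarily many inputs) can be shown on a toy linear ODE system standing in for the depletion equations. A sketch under that simplification, with forward Euler stepping and a single QoI J = cᵀx(T); none of this is the PDT implementation:

    ```python
    import numpy as np

    def adjoint_sensitivity(A, dA_dp, x0, c, t_end, n_steps=2000):
        """Adjoint sensitivity dJ/dp for dx/dt = A x with J = c . x(T).
        Forward pass stores x(t); backward pass integrates the adjoint
        dl/dt = -A^T l with l(T) = c; dJ/dp = integral of l^T (dA/dp) x dt.
        The cost is one extra (adjoint) solve no matter how many parameters
        enter through dA/dp -- the key economy of the adjoint approach."""
        dt = t_end / n_steps
        xs = [np.asarray(x0, float)]
        for _ in range(n_steps):                 # forward Euler (sketch)
            xs.append(xs[-1] + dt * (A @ xs[-1]))
        lam, dJ = np.asarray(c, float), 0.0
        for k in range(n_steps, 0, -1):          # backward adjoint sweep
            dJ += dt * lam @ (dA_dp @ xs[k])
            lam = lam + dt * (A.T @ lam)
        return dJ

    A = np.array([[-1.0, 0.2], [0.0, -0.5]])     # toy system matrix
    dA_dp = np.array([[0.0, 1.0], [0.0, 0.0]])   # how A varies with p
    print(adjoint_sensitivity(A, dA_dp, x0=[1.0, 1.0], c=[1.0, 0.0], t_end=2.0))
    ```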

  17. Case Study: Sensitivity Analysis of the Barataria Basin Barrier Shoreline Wetland Value Assessment Model

    DTIC Science & Technology

    2014-07-01

    Case Study: Sensitivity Analysis of the Barataria Basin Barrier Shoreline Wetland Value Assessment Model, by S. Kyle McKay and J. Craig Fischenich. OVERVIEW: Sensitivity analysis is a technique for...scale restoration projects to reduce marsh loss and maintain these wetlands as healthy functioning ecosystems. The Barataria Basin Barrier Shoreline...

  18. Raman analyzer for sensitive natural gas composition analysis

    NASA Astrophysics Data System (ADS)

    Sharma, Rachit; Poonacha, Samhitha; Bekal, Anish; Vartak, Sameer; Weling, Aniruddha; Tilak, Vinayak; Mitra, Chayan

    2016-10-01

    Raman spectroscopy is of significant importance in industrial gas analysis due to its unique capability of quantitative multigas measurement, especially diatomics (N2 and H2), with a single laser. This paper presents the development of a gas analyzer system based on high pressure Raman scattering in a multipass Raman cell and demonstrates its feasibility for real-time natural gas analysis. A 64-pass Raman cell operated at elevated pressure (5 bar) is used to create multiplicative enhancement (proportional to number of passes times pressure) of the natural gas Raman signal. A relatively low power 532-nm continuous wave laser beam (200 mW) is used as the source and the signals are measured through a cooled charge-coupled device grating spectrometer (30-s exposure). A hybrid algorithm based on background-correction and least-squares error minimization is used to estimate gas concentrations. Individual gas component concentration repeatability of the order of 0.1% is demonstrated. Further, the applicability of the technique for natural gas analysis is demonstrated through measurements on calibrated gas mixtures. Experimental details, analyzer characterization, and key measurements are presented to demonstrate the performance of the technique.
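
    The concentration-estimation step described above, fitting a measured spectrum as a combination of component spectra by least-squares, can be sketched as follows; the reference-spectrum matrix and the use of a non-negativity constraint are assumptions for illustration, not details from the paper:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fit_concentrations(R, y):
        """Estimate component concentrations from a background-corrected
        Raman spectrum y by non-negative least squares, where the columns
        of R are reference spectra of the pure components recorded under
        the same pressure and exposure (hypothetical calibration data)."""
        conc, _residual = nnls(R, y)
        return conc / conc.sum()   # normalize to mole fractions
    ```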

  19. Design, analysis and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Garcia, A., III

    1983-01-01

    An analytical methodology for advanced encapsulation designs in the development of photovoltaic modules is presented. Analytical models are developed to test the optical, thermal, electrical, and structural properties of the various encapsulation systems. Model data are compared to relevant test data to improve model accuracy and to develop general principles for the design of photovoltaic modules.

  20. A Simultaneous Analysis Problem for Advanced General Chemistry Laboratories.

    ERIC Educational Resources Information Center

    Leary, J. J.; Gallaher, T. N.

    1983-01-01

    Oxidation of magnesium metal in air has been used as an introductory experiment for determining the formula of a compound. The experiment described employs essentially the same laboratory procedure but is significantly more advanced in terms of information sought. Procedures and sample calculations/results are provided. (JN)

  1. Design sensitivity analysis of dynamic responses for a BLDC motor with mechanical and electromagnetic interactions

    NASA Astrophysics Data System (ADS)

    Im, Hyungbin; Bae, Dae Sung; Chung, Jintai

    2012-04-01

    This paper presents a design sensitivity analysis of the dynamic responses of a BLDC motor with mechanical and electromagnetic interactions. Based on the equations of motion, which consider the mechanical and electromagnetic interactions of the motor, the sensitivity equations for the dynamic responses were derived by applying the direct differentiation method. From the sensitivity equations along with the equations of motion, the time responses for the sensitivity analysis were obtained using the Newmark time integration method. The sensitivities of motor performances such as the electromagnetic torque, rotating speed, and vibration level were analyzed for six design parameters: rotor mass, shaft/bearing stiffness, rotor eccentricity, winding resistance, coil turn number, and residual magnetic flux density. Furthermore, to achieve a higher torque, higher speed, and lower vibration level, a new BLDC motor was designed by applying the multi-objective function method. It was found that all three performances are sensitive to the design parameters in the order of coil turn number, magnetic flux density, rotor mass, winding resistance, rotor eccentricity, and stiffness, and that the torque and vibration level are more sensitive to the parameters than the rotating speed. Finally, by applying the sensitivity analysis results, a new optimized design of the motor achieved improved torque, rotating speed, and vibration level.
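
    Newmark's method, used above to obtain the time responses, advances M x'' + C x' + K x = F(t) one step at a time. A generic sketch (average-acceleration variant; the matrices and forcing are placeholders, not the motor model):

    ```python
    import numpy as np

    def newmark(M, C, K, F, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
        """Newmark time integration for M x'' + C x' + K x = F(t)
        (average-acceleration variant: beta = 1/4, gamma = 1/2)."""
        x, v = np.array(x0, float), np.array(v0, float)
        a = np.linalg.solve(M, F(0.0) - C @ v - K @ x)
        S = M + gamma * dt * C + beta * dt**2 * K   # effective matrix
        out = [x.copy()]
        for n in range(1, n_steps + 1):
            # predictors from the previous step's state and acceleration
            xp = x + dt * v + dt**2 * (0.5 - beta) * a
            vp = v + dt * (1.0 - gamma) * a
            # solve for the new acceleration, then correct x and v
            a = np.linalg.solve(S, F(n * dt) - C @ vp - K @ xp)
            x = xp + beta * dt**2 * a
            v = vp + gamma * dt * a
            out.append(x.copy())
        return np.array(out)
    ```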

  2. Volumes to learn: advancing therapeutics with innovative computed tomography image data analysis.

    PubMed

    Maitland, Michael L

    2010-09-15

    Semi-automated methods for calculating tumor volumes from computed tomography images are a new tool for advancing the development of cancer therapeutics. Volumetric measurements, relying on already widely available standard clinical imaging techniques, could shorten the observation intervals needed to identify cohorts of patients sensitive or resistant to treatment.

  3. Parameter sensitivity analysis of a simplified electrochemical and thermal model for Li-ion batteries aging

    NASA Astrophysics Data System (ADS)

    Edouard, C.; Petit, M.; Forgez, C.; Bernard, J.; Revel, R.

    2016-09-01

    In this work, a simplified electrochemical and thermal model that can predict both the physicochemical and aging behavior of Li-ion batteries is studied. A sensitivity analysis of all its physical parameters is performed in order to determine their influence on the model output, based on simulations under various conditions. The results give hints on whether a parameter needs particular attention when measured or identified, and on the conditions (e.g. temperature, discharge rate) under which it is most sensitive. A specific simulation profile is designed for the parameters involved in the aging equations in order to determine their sensitivity. Finally, a step-wise method is followed to limit the influence of parameter values when identifying some of them, according to their relative sensitivity from the study. This sensitivity analysis and the subsequent step-wise identification method show very good results, such as a better fit of the simulated cell voltage to experimental data.
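
    The step-wise identification idea, re-fitting the most sensitive parameters first while freezing the rest, can be sketched as below; the model interface, data, and ordering are placeholders for whatever a prior sensitivity study would produce:

    ```python
    from scipy.optimize import least_squares

    def stepwise_identify(model, t, v_meas, params, order):
        """Step-wise identification sketch: parameters are re-fitted one at
        a time, from most to least sensitive ('order'), with all the others
        frozen, limiting the influence of poorly known values.
        `model(t, params)` returns the simulated cell voltage at times t."""
        p = dict(params)
        for name in order:                  # e.g. ranked by an OAT study
            def resid(x):
                return model(t, {**p, name: x[0]}) - v_meas
            fit = least_squares(resid, x0=[p[name]])
            p[name] = fit.x[0]
        return p
    ```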

  4. Applying Bayesian Compressed Sensing (BCS) for sensitivity analysis of climate model outputs that depend on a high-dimensional input space

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Guo, Z.; Wang, M.; Lucas, D. D.; Debusschere, B.

    2014-12-01

    High-dimensional parametric uncertainty exists in many parts of atmospheric climate models. It is computationally intractable to fully understand their impact on the climate without a significant reduction in the number of dimensions. We employ Bayesian Compressed Sensing (BCS) to perform adaptive sensitivity analysis in order to determine which parameters affect the Quantity of Interest (QoI) the most and the least. In short, BCS fits a polynomial to the QoI via a Bayesian framework with an L1 (Laplace) prior. Thus, BCS tries to find the sparsest polynomial representation of the QoI, i.e., the fewest terms, while still trying to retain high accuracy. This procedure is adaptive in the sense that higher order polynomial terms can be added to the polynomial model when it is likely that particular parameters have a significant effect on the QoI. This helps avoid overfitting and is much more computationally efficient. We apply the BCS algorithm to two sets of single column CAM (Community Atmosphere Model) simulations. In the first application, we analyze liquid cloud fraction as modeled by CLUBB (Cloud Layers Unified By Binormals), an atmospheric cloud and turbulence model. This liquid cloud fraction QoI depends on 29 different input parameters. We compare main Sobol sensitivity indices obtained with the BCS algorithm for the liquid cloud fraction in 6 cases with a previous approach to sensitivity analysis using deviance. We show BCS can provide almost identical sensitivity analysis results. Additionally, BCS can provide an improved, lower-dimensional, higher order model for prediction. In the second application, we study the time averaged ozone concentration, at varying altitudes, as a function of 95 photochemical parameters, in order to study the sensitivity to these parameters. To further improve model prediction, we also explore k-fold cross validation to obtain a better model for both liquid cloud fraction in CLUBB and ozone concentration in CAM. This material is based upon work
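
    The pipeline described, a sparse polynomial surrogate followed by main-effect Sobol indices computed from that surrogate, can be sketched with an L1-penalized (Lasso) fit standing in for the Bayesian Laplace-prior regression of BCS. The synthetic response below is a stand-in for the expensive climate-model runs:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    d, n = 29, 400                         # parameters, training runs
    X = rng.uniform(-1, 1, (n, d))         # normalized input samples
    y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2]  # stand-in for the model output

    # Sparse (L1-penalized) degree-2 polynomial fit: a frequentist
    # stand-in for the Laplace-prior Bayesian regression used by BCS.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    surrogate = Lasso(alpha=1e-3, max_iter=50_000).fit(poly.fit_transform(X), y)

    def main_sobol(i, n_outer=200, n_inner=500):
        """First-order Sobol index of parameter i from the cheap surrogate."""
        cond = np.empty(n_outer)
        for k, v in enumerate(rng.uniform(-1, 1, n_outer)):
            Z = rng.uniform(-1, 1, (n_inner, d))
            Z[:, i] = v                    # condition on x_i = v
            cond[k] = surrogate.predict(poly.transform(Z)).mean()
        Zall = rng.uniform(-1, 1, (5000, d))
        return cond.var() / surrogate.predict(poly.transform(Zall)).var()

    print(main_sobol(0), main_sobol(3))    # large for x0, near zero for x3
    ```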

  5. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that considers an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem involves both randomness and subjectivity represented by fuzzy numbers, it is formulated as a random fuzzy programming problem and is not well-defined. Therefore, by introducing the Sharpe ratio, an important performance measure of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using sensitivity analysis for the fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
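
    For reference, the Sharpe ratio that drives the transformation is simply excess return per unit of risk. A minimal sketch (the inputs are illustrative, not from the paper):

    ```python
    import numpy as np

    def sharpe_ratio(weights, mu, cov, r_f=0.0):
        """Sharpe ratio: portfolio excess return (over the risk-free rate
        r_f) divided by the standard deviation of portfolio return."""
        w = np.asarray(weights, float)
        return (w @ mu - r_f) / np.sqrt(w @ cov @ w)
    ```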

  6. An efficient finite-difference strategy for sensitivity analysis of stochastic models of biochemical systems.

    PubMed

    Morshed, Monjur; Ingalls, Brian; Ilie, Silvana

    2017-01-01

    Sensitivity analysis characterizes the dependence of a model's behaviour on system parameters. It is a critical tool in the formulation, characterization, and verification of models of biochemical reaction networks, for which confident estimates of parameter values are often lacking. In this paper, we propose a novel method for sensitivity analysis of discrete stochastic models of biochemical reaction systems whose dynamics occur over a range of timescales. This method combines finite-difference approximations and adaptive tau-leaping strategies to efficiently estimate parametric sensitivities for stiff stochastic biochemical kinetics models, with negligible loss in accuracy compared with previously published approaches. We analyze several models of interest to illustrate the advantages of our method.
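
    A minimal illustration of the finite-difference/tau-leaping combination on a birth-death process follows; the common-random-numbers pairing shown here is one standard way to keep the estimator variance manageable, and the model and step size are illustrative, not the paper's:

    ```python
    import numpy as np

    def tau_leap(k_prod, k_deg, x0=0, t_end=10.0, tau=0.01, seed=0):
        """Explicit tau-leaping for a birth-death process:
        0 -> X at rate k_prod, and X -> 0 at rate k_deg * x."""
        rng = np.random.default_rng(seed)
        x, t = x0, 0.0
        while t < t_end:
            births = rng.poisson(k_prod * tau)
            deaths = rng.poisson(k_deg * x * tau)
            x = max(x + births - deaths, 0)
            t += tau
        return x

    # Central finite-difference estimate of d E[X(T)] / d k_prod, pairing
    # the perturbed runs by seed (common random numbers) so that much of
    # the Monte Carlo noise cancels in the difference.
    h, runs = 0.1, 2000
    est = np.mean([(tau_leap(10.0 + h, 1.0, seed=s)
                    - tau_leap(10.0 - h, 1.0, seed=s)) / (2 * h)
                   for s in range(runs)])
    print(est)  # near steady state E[X] = k_prod/k_deg, so expect ~1.0
    ```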

  7. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  8. An analysis of rate-sensitive skin in gas wells

    SciTech Connect

    Meehan, D.N.; Schell, E.J.

    1983-10-01

    This paper documents the analysis of rate-dependent skin in a gas well. Three build-up tests and an isochronal test are analyzed in some detail. The results indicate the rate-dependent skin is due to non-Darcy flow near the wellbore. Evidence is presented that suggests the non-Darcy flow results from calcium carbonate scale partially plugging the perforations. Also, a summary of a pressure build-up study is included on the wells recently drilled in Champlin's Stratton-Agua Dulce Field.

  9. Analysis of rate-sensitive skin in gas wells

    SciTech Connect

    Meehan, D.N.; Schell, E.J.

    1983-01-01

    This study documents the analysis of rate-dependent skin in a gas well. Three build-up tests and an isochronal test are analyzed in some detail. The results indicate the rate-dependent skin is due to non-Darcy flow near the well bore. Evidence is presented that suggests the non-Darcy flow results from calcium carbonate scale partially plugging the perforations. Also, a summary of a pressure build-up study is included on the wells recently drilled in Champlin's Stratton-Agua Dulce field.

  10. Dive Angle Sensitivity Analysis for Flight Test Safety and Efficiency

    DTIC Science & Technology

    2010-03-01

    These points develop into high-speed dives and require an accurate predictive model to prevent possible testing accidents. As a flight test is... Looking back at this concept and approach, Equations 2.1 and 2.4 are combined to obtain Equation 2.5: dh/dt + (V/g)(dV/dt) = (T - D)V/W. ...number of attempts at each test point as well as prevent possible accidents and crashes from data that is misrepresented. The analysis took a Dive

  11. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for approximate eigenvalue and eigenvector analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
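
    For context, the simple (non-repeated) case that this paper generalizes has a one-line answer: for a symmetric matrix with a simple eigenvalue, dλ/dp = vᵀ(∂A/∂p)v with v the unit eigenvector. A small numpy sketch (the matrices are illustrative); the repeated-eigenvalue case treated in the paper is precisely where this formula breaks down:

    ```python
    import numpy as np

    def eigenvalue_sensitivity(A, dA_dp):
        """First-order sensitivities d(lambda)/dp = v^T (dA/dp) v for the
        simple (non-repeated) eigenvalues of a symmetric matrix A(p)."""
        w, V = np.linalg.eigh(A)
        dw = np.array([V[:, i] @ dA_dp @ V[:, i] for i in range(len(w))])
        return w, dw

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # dA/dp for some parameter p
    w, dw = eigenvalue_sensitivity(A, dA_dp)
    print(w, dw)
    ```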

  12. Interglacial climate dynamics and advanced time series analysis

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel; Köhler, Peter; Lohmann, Gerrit

    2013-04-01


  13. Kinetic modeling and sensitivity analysis of plasma-assisted combustion

    NASA Astrophysics Data System (ADS)

    Togai, Kuninori

    Plasma-assisted combustion (PAC) is a promising combustion enhancement technique that shows great potential for application to a number of different practical combustion systems. In this dissertation, the chemical kinetics associated with PAC are investigated numerically with a newly developed model that describes the chemical processes induced by plasma. To support the model development, experiments were performed using a plasma flow reactor in which the fuel oxidation proceeds with the aid of plasma discharges below and above the self-ignition thermal limit of the reactive mixtures. The mixtures used were heavily diluted with Ar in order to study the reactions in temperature-controlled environments by suppressing the temperature changes due to chemical reactions. The temperature of the reactor was varied from 420 K to 1250 K and the pressure was fixed at 1 atm. Simulations were performed for the conditions corresponding to the experiments and the results were compared against each other. Important reaction paths were identified through path flux and sensitivity analyses. The reaction systems studied in this work are the oxidation of hydrogen, ethylene, and methane, as well as the kinetics of NOx in plasma. In the fuel oxidation studies, the reaction schemes that control the fuel oxidation are analyzed and discussed. With all the fuels studied, the oxidation reactions were extended to lower temperatures with plasma discharges compared to the cases without plasma. The analyses showed that radicals produced by dissociation of the reactants in plasma play an important role in initiating the reaction sequence. At low temperatures, where the system exhibits a chain-terminating nature, reactions of HO2 were found to play important roles in overall fuel oxidation. The effectiveness of HO2 as a chain terminator was weakened in the ethylene oxidation system, because the reactions of C2H4 + O, which have low activation energies, deflect the flux of O atoms away from HO2. For the

  14. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Lu, Lu; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-04-01

    Background: Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping complex trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of

  15. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
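
    The 'delta' form is generic and easy to state in code: iterate on a correction dx driven by the residual of the standard form. A sketch, with M standing in for the approximate-factorization operator (here a simple Jacobi-style approximation for illustration):

    ```python
    import numpy as np

    def delta_form_solve(A, b, M, x0, n_iter=50):
        """Incremental ('delta' or 'correction') iterative solution of
        A x = b: each pass solves M dx = r for the correction dx, where
        r = b - A x is the current residual and M approximates A (e.g.,
        an approximate factorization). Driving r to zero drives x to the
        solution of the exact discrete system, regardless of M."""
        x = np.array(x0, float)
        for _ in range(n_iter):
            r = b - A @ x               # residual of the standard form
            dx = np.linalg.solve(M, r)  # cheap approximate solve
            x += dx
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    M = np.diag(np.diag(A))             # Jacobi-style approximation to A
    print(delta_form_solve(A, np.array([1.0, 2.0]), M, x0=[0.0, 0.0]))
    ```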

  16. A Guide for Analysis Using Advanced Distributed Simulation (ADS)

    DTIC Science & Technology

    1997-01-01

    within a broader analysis strategy, experimental design, exercise preparation and management, and post-exercise analysis. Because it is impossible to... Decisionmakers, such as program managers, who need to determine how ADS might support their analysis needs and how to interpret ADS analysis products... Management and System Acquisition.

  17. An Analysis of Energy Savings Possible Through Advances in Automotive Tooling Technology

    SciTech Connect

    Rick Schmoyer, RLS

    2004-12-03

    The use of lightweight and highly formable advanced materials in automobile and truck manufacturing has the potential to save fuel. Advances in tooling technology would promote the use of these materials. This report describes an energy savings analysis performed to approximate the potential fuel savings and consequent carbon-emission reductions that would be possible because of advances in tooling in the manufacturing of, in particular, non-powertrain components of passenger cars and heavy trucks. Separate energy analyses are performed for cars and heavy trucks. Heavy trucks are considered to be Class 7 and 8 trucks (trucks rated over 26,000 lbs gross vehicle weight). A critical input to the analysis is a set of estimates of the percentage reductions in weight and drag that could be achieved by the implementation of advanced materials, as a consequence of improved tooling technology; these were obtained by surveying tooling industry experts who attended a DOE Workshop, Tooling Technology for Low-Volume Vehicle Production, held in Seattle and Detroit in October and November 2003. The analysis is also based on 2001 fuel consumption totals and on energy-audit component proportions of fuel use due to drag, rolling resistance, and braking. The consumption proportions are assumed constant over time, but an allowance is made for fleet growth. The savings for a particular component are then the product of total fuel consumption, the percentage reduction for the component, and the energy-audit component proportion. Fuel savings estimates for trucks also account for weight-limited versus volume-limited operations. Energy savings are assumed to be of two types: (1) direct energy savings incurred through reduced forces that must be overcome to move the vehicle or to slow it down in braking, and (2) indirect energy savings through reductions in the required engine power, the production and transmission of which incur thermodynamic losses, internal friction, and other
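
    The savings formula stated above is a three-factor product. A worked sketch with hypothetical numbers (none of them taken from the report):

    ```python
    # Illustrative only -- all three inputs are hypothetical placeholders.
    total_fuel = 25e9          # gal/yr consumed by the vehicle class
    component_share = 0.20     # energy-audit share attributed to drag
    component_reduction = 0.15 # fractional drag reduction from new tooling

    # savings = total consumption x audit share x component reduction
    savings = total_fuel * component_share * component_reduction
    print(f"{savings:.2e} gal/yr")   # -> 7.50e+08 gal/yr
    ```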

  18. Sensitivity Evaluation of the Daily Thermal Predictions of the AGR-1 Experiment in the Advanced Test Reactor

    SciTech Connect

    Grant Hawkes; James Sterbentz; John Maki

    2011-05-01

    A temperature sensitivity evaluation has been performed for an individual capsule of the AGR-1 fuel experiment. A series of cases was compared to a base case by varying different input parameters to the ABAQUS finite element thermal model. These input parameters were varied by ±10% to show the temperature sensitivity to each parameter. The most sensitive parameters are the outer control gap distance, the heat rate in the fuel compacts, and the neon gas fraction. The thermal conductivities of the compacts and graphite holder were in the middle of the list for sensitivity. The smallest effects were for the emissivities of the stainless steel, graphite, and thru tubes. Sensitivity calculations were also performed as a function of fluence. These calculations showed a general temperature rise with increasing fluence, a result of the thermal conductivity of the fuel compacts and graphite holder decreasing with fluence.

  19. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem-formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods that do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for computing the equivalent sensitivity information.
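
    The baseline the paper improves on, recomputing the control law for each perturbed parameter, looks like the finite-difference sketch below (a toy plant; scipy's Riccati solver stands in for the design process, and the deterministic LQR gain stands in for the full LQG law):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Optimal LQR feedback gain K = R^{-1} B^T P from the Riccati equation."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    def gain_sensitivity(A, B, Q, R, dA_dp, h=1e-6):
        """Finite-difference dK/dp for a plant parameter p entering through A.
        The analytical sensitivity equations in the paper replace exactly
        this kind of repeated redesign."""
        Kp = lqr_gain(A + h * dA_dp, B, Q, R)
        Km = lqr_gain(A - h * dA_dp, B, Q, R)
        return (Kp - Km) / (2.0 * h)

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # toy second-order plant
    B = np.array([[0.0], [1.0]])
    dA_dp = np.array([[0.0, 0.0], [-1.0, 0.0]])  # p scales the stiffness term
    print(gain_sensitivity(A, B, np.eye(2), np.eye(1), dA_dp))
    ```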

  20. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-01-01

    Abstract Background Advanced intercross lines (AIL) are segregating populations created using a multigeneration breeding protocol for fine mapping complex traits in mice and other organisms. Applying quantitative trait locus (QTL) mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of family structure in AIL populations in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with a naïve mapping approach in such AIL populations is that the individual is not an exchangeable unit given the family structure. Methodology/Principal Findings The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels in our AIL population, which are corrected by use of GRAIP. We also show that GRAIP detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance GRAIP determines appropriate genome-wide significance thresholds